Sample records for method quelques remarques

  1. Decrire et enseigner une competence de communication: remarques sur quelques solutions de continuite. L'Enseignement de la competence de communication en langues secondes. (Describing and Teaching Communicative Competence: Some Remarks on Solutions of Continuity. The Teaching of Communicative Competence in Second Languages.) Acts of the Colloquium of the Swiss Interuniversity Commission for Applied Linguistics. CILA Bulletin.

    ERIC Educational Resources Information Center

    Coste, Daniel

    Two projects of the Ecole Normale Superieure de Saint-Cloud (CREDIF) are described and critically analyzed in this paper: the definition of a threshold level, "Niveau-seuil," in French and a learning module, "Looking for Work," intended to teach necessary written French to migrant workers. The threshold level section is a…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluquet, Alain

    This thesis studies electron identification techniques in the D0 experiment at the Fermi laboratory near Chicago. The first chapter recalls some of the physical motivations of the experiment: jet physics, electroweak physics, and top quark physics. The D0 detector is described in detail in the second chapter. The third chapter studies the electron identification algorithms (trigger, reconstruction, filters) and their performance. The fourth chapter is devoted to the transition radiation detector (TRD) built by the Département d'Astrophysique, Physique des Particules, Physique Nucléaire et d'Instrumentation Associée at Saclay; it presents its principle, calibration, and performance. Finally, the last chapter describes the method developed for data analysis with the TRD and illustrates its use on a few examples: jets simulating electrons, and the top quark search.

  3. Remarques sur le Passif (suite) (Remarks on the Passive, Continued)

    ERIC Educational Resources Information Center

    Pinchon, Jacqueline

    1977-01-01

    The continuation of articles on the passive voice appearing in the "Grammaire vivante" section of the periodical. The production of the passive sentence is considered under two headings: the simple verb and the complex verbal group. (Text is in French.) (AMH)

  4. "All Quiet on the Western Front."

    ERIC Educational Resources Information Center

    Soderquist, Alisa

    Based on Erich Maria Remarque's novel "All Quiet on the Western Front" and other war literature, this lesson plan presents activities designed to help students understand that works of art about war can call up strong emotions in readers; and that the writing process can be applied to writing poems. The main activity of the lesson involves…

  5. A propos de quelques experiences de francais fonctionnel en milieu hispanophone (A Commentary on Experiments in Functional French in a Spanish-Speaking Milieu)

    ERIC Educational Resources Information Center

    Baltzer, Francois

    1978-01-01

    A discussion of the adaptation of audiovisual methods to respond to various specific needs in Mexico City. Some of the topics discussed are: meeting needs of people involved in special fields, particularly science, technology and economics; and the use of television for functional French instruction. (AMH)

  6. Effects of Collaborative Mentoring on the Articulation of Training and Classroom Situations: A Case Study in the French School System

    ERIC Educational Resources Information Center

    Chalies, Sebastien; Bertone, Stefano; Flavier, Eric; Durand, Marc

    2008-01-01

    This study assessed the effects of a collaborative mentoring sequence on the professional development of a preservice teacher (PT). The analysis of data from observation and self-confrontation interviews identified work rules [Wittgenstein, L. (1996). In G. E. M. Anscombe & G. H. Von Wright (Eds.), "Remarques philosophiques"…

  7. Remarques critiques a propos de l'enquete internationale sur la litteratie (Critical Remarks Regarding the International Adult Literacy Survey).

    ERIC Educational Resources Information Center

    Manesse, Daniele

    2000-01-01

    States that French authorities refused to make International Adult Literacy Survey results public, citing methodological flaws, the need for better procedural precautions, and a more adequate notion of literacy. Presents a synthesis of the counter investigations demanded by French authorities that justify their doubts on the IALS definitions and…

  8. On the Implementation of Iterative Detection in Real-World MIMO Wireless Systems

    DTIC Science & Technology

    2003-12-01

    Multiple-input, multiple-output (MIMO) systems permit a remarkable exploitation of the spectrum compared with traditional single-antenna systems...known pilot symbol vectors causes a negligible throughput loss compared with the hypothetical case of perfect channel knowledge...useful design guidelines for iterative systems, it does not provide any fundamental understanding as to how the design of the detector can improve the

  9. Astéroides et satellites de Saturne: quelques résultats récents et contribution éventuelle des données Hipparcos.

    NASA Astrophysics Data System (ADS)

    Viateau, B.; Rapaport, M.

    48 asteroids and 2 satellites of Saturn were included in the programme of the Hipparcos mission, and various proposals have been made for the use of these data. The authors present some recent results concerning these objects, which may (1) add further interest to the astrometric data provided by Hipparcos, and (2) help sharpen the objectives set out in various proposals.

  10. Tranel, Numero Special. Actes du 2eme colloque regional de linguistique, Neuchatel 2-3 Oct. 1986 (Tranel, Special Number. Proceedings of the Second Regional Linguistics Colloquium, Neuchatel, Switzerland, October 2-3, 1986).

    ERIC Educational Resources Information Center

    Neuchatel Univ. (Switzerland).

    This publication presents 20 conference papers in French and German on aspects of linguistic research. Papers include: "Objectivite et subjectivite dans la connaissance du langage" (Mahmoudian); "Focalisation et antifocus" (Bearth); "Remarques sur la notion d''etymologie populaire'" (Chambon); "Sur la construction serielle en malgache" (Fugier)…

  11. Sexual Orientation and U.S. Military Personnel Policy: Options and Assessment

    DTIC Science & Technology

    1993-01-01

    include smaller actions, such as allocation of time to the new policy and keeping the change before members through video or other messages such as...were also taken. A condensed video and still picture record has been provided separately, and the complete videotape and all photography have been...touching, leering, lascivious remarks and the display of porno-

  12. Le "Futur Anterieur" comme temps du passe: Remarques sur un emploi particulier frequent du "futur anterieur" en francais. (The "Futur Anterieur" as Past Tense: Remarks on a Particular Frequent Use of the "Futur Anterieur" in French).

    ERIC Educational Resources Information Center

    Steinmeyer, Georg

    1987-01-01

    Explains how the "futur anterieur" is often used to indicate past time in French grammar. Using authentic evidence from a news magazine, some hypotheses on the conditions of use of the "futur anterieur" are suggested. Criteria for distinguishing past tense functions from modal functions are also presented. (TR)

  13. Opticien Célèbre. André Maréchal

    NASA Astrophysics Data System (ADS)

    Haidar, Riad

    2017-12-01

    A French physicist specializing in optics and professor at the Faculty of Sciences of the University of Paris, André Maréchal also served as délégué général à la recherche scientifique et technique and as director general of the Institut d'Optique. He is credited in particular with remarkable results on the theory of aberrations and diffraction.

  14. The Impact of New Guidance and Control Systems on Military Aircraft Cockpit Design.

    DTIC Science & Technology

    1981-08-01

    reduction of instrument-panel area and of the complexity of man/machine interfaces in high-performance combat aircraft...it should be noted that, in the current state of the art, a speech recognition machine has no intrinsic performance. Its performance...the main dialogue device being a cathode-ray-tube console with keyboard. The vocabulary comprised 119 words, extracted from

  15. The Coast Artillery Journal. Volume 59, Number 6, December 1923

    DTIC Science & Technology

    1923-12-01

    Remarques sur le tir Fusant.-F-12, January, 1923. Regla de Calculo Para Muelles Cilindricos.-Spa-2, June, 1923. Terrain Reduit pour Exercices de tir Fictif au...I to V, on a front Brussels-Metz, pivoting on the fortified area Metz-Thionville. The inner flank of the wheel was to be covered by the VI and VII...rapidly on the front Beauraing-Gedinne-Paliseul-Fay-des-Veneurs-Cuignon (5th Army) and Tetaigne-Margut-Quincy (4th Army)." Fearing the German advance

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Several speakers pay tribute to the great physicist and scientist Vladimir Jurko Glaser (1924-1984), who worked at the Ruder Boscovic Institute in Zagreb before coming to CERN in 1957, where he obtained a permanent post in the theoretical physics department. Walter Thirring, Harry Lehmann, Henry Epstein, Jacques Bros and André Martin give biographical sketches of their colleague and friend, honouring his great human qualities and his remarkable scientific achievements.

  17. Moral and Ethical Dilemmas in Canadian Forces Military Operation: Qualitative and Descriptive Analyses of Commanders’ Operational Experiences

    DTIC Science & Technology

    2008-10-01

    also noted a slightly higher proportion of unresolved accounts that mentioned personal honour and pride, compared with...military commanders during overseas missions. Note that the original interview protocol was semi-structured and designed to allow the interviewee to...should be selected so that no security classification is required. Identifiers, such as equipment model designation, trade name, military project code

  18. Quelques considerations sur la traduction medicale et pharmaceutique (Some Considerations in Medical and Pharmaceutical Translation)

    ERIC Educational Resources Information Center

    Sliosberg, A.

    1971-01-01

    Paper presented during the meeting of the section "Presse et Documentation" of the 29th International Congress of Pharmaceutical Science of the International Pharmaceutical Federation, London, September 10, 1969. (VM)

  19. Memorial V.J.Glaser

    ScienceCinema

    None

    2017-12-09

    Plusieurs orateurs rendent hommage au grand physicien et scientifique Vladimir Jurko Glaser (1924 - 1984) qui travaillait au Ruder Boscovic Institut à Zagreb avant de venir au Cern en 1957 où il trouvait un poste permanent au département de physique théorique. Walter Tearing, Harry Lehmann,Henry Epstein, Jacques Bros et André Martin font des résumés biographiques de leurs collègue et ami en honorant ses grands qualités d'homme et ses remarquables conquêtes de la science et leurs accomplissement.

  20. Quelques problemes poses a la grammaire casuelle (Some Problems Regarding Case Grammar)

    ERIC Educational Resources Information Center

    Fillmore, Charles J.

    1975-01-01

    Discusses problems related to case grammar theory, including: the organizations of a case grammar; determination of semantic roles; definition and hierarchy of cases; cause-effect relations; and formalization and notation. (Text is in French.) (AM)

  1. Lévitation magnétique par association d'aimants permanents et de supraconducteurs à haute température critique

    NASA Astrophysics Data System (ADS)

    Hiebel, P.; Tixador, P.; Chaud, X.

    1995-06-01

    Since their discovery in 1986/87, high-critical-temperature superconductors have reached performances interesting enough to allow the design of passive magnetic bearings and suspensions combining permanent magnets and naturally stable superconducting pellets. After underlining the principal factors that affect the superconductor-magnet interaction, various experimental results are given for the vertical and transverse forces, with some stiffness values. The magnetization curve of a superconductor helps explain the hysteretic behavior of the force as a function of the distance between superconductor and magnet. So-called simple and hybrid structures of superconducting magnetic suspension are presented. Finally, simple numerical simulations allow some interesting conclusions to be drawn about the respective geometries and the best-suited permanent magnet structures.
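
    A minimal sketch of why the force is hysteretic, in the usual point-dipole approximation (the notation m, M(H) and H(z) is assumed here for illustration, not taken from the record): the pellet's moment follows the magnetization curve, so the force at a given gap depends on the field history,

        F_z(z) = \mu_0 \, V \, M\big(H(z)\big) \, \frac{dH}{dz},

    where V is the pellet volume and H(z) is the magnet's field at the pellet. Because M(H) runs along different branches of the hysteresis loop on approach and on retreat, F_z(z) traces different curves for descent and ascent, which is the behavior the magnetization curve is said to explain above.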

  2. Study of the Retention of Fission Products by a Few Common Minerals. Application to the Treatment of Medium Activity Effluents; ETUDE DE LA RETENTION DES PRODUITS DE FISSION PAR QUELQUES MINERAUX USUELS. (APPLICATION AUX TRAITEMENTS D'EFFLUENTS DE MOYENNE ACTIVITE SPECIFIQUE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auchapt, J.M.

    1962-01-01

    The conditions in which Sr is fixed on calcite (the object of Geneva report P/395-USA--1958) are more closely studied, and the work is extended to five fission products in the effluents and to 17 common rocks and minerals. Although this fixation is not suitable as a method of treating STE effluents (i.e., those from the effluent treatment plant at Marcoule), the study shows that all the crystals considered are strongly contaminated by simple contact. (auth)

  3. A temps nouveaux, solutions nouvelles: quelques propositions (New Times, New Solutions: Some Proposals).

    ERIC Educational Resources Information Center

    Capelle, Guy

    1983-01-01

    Serious problems in education in Latin America arising from political, economic, and social change periodically put in question the status, objectives, and manner of French second-language instruction. A number of solutions to general and specific pedagogical problems are proposed. (MSE)

  4. Comprendre. La diffusion Raman exaltée de surface

    NASA Astrophysics Data System (ADS)

    Boubekeur-Lecaque, Leïla; Felidj, Nordin; Lamy de la Chapelle, Marc

    2018-02-01

    Raman spectroscopy is a vibrational spectroscopy of very low sensitivity, which limits the analysis of chemical species to high concentrations. However, when molecules are placed in the vicinity of a nanostructured metal surface, their Raman signature can be considerably enhanced. This is known as surface-enhanced Raman scattering (SERS). The remarkable potential of this technique has fed numerous fields of study, both in the design of so-called SERS-active substrates and in the exploration of applications in medicine, pharmacology, defence and the art world.

  5. Traduction et langues de specialite: Approches theoriques et considerations pedagogiques (Translation and Specialty Languages: Theoretical Approaches and Pedagogic Considerations).

    ERIC Educational Resources Information Center

    Guevel, Zelie, Ed.; Valentine, Egan, Ed.

    Essays on the teaching of translation and on specialized translation, all in French, include: "Perspectives d'optimisation de la formation du traducteur: quelques reflexions" ("Perspectives on Optimization of Training of Translation Teachers: Some Reflections") (Egan Valentine); "L'enseignement de la revision…

  6. Psychologie des discours et didactique des textes (Psychology of Discourse and the Teaching of Texts).

    ERIC Educational Resources Information Center

    Bronckart, Jean-Paul, Ed.

    1995-01-01

    This collection of articles on the nature of discourse and writing instruction includes: "Une demarche de psychologie de discours; quelques aspects introductifs" ("An Application of Discourse Psychology; Introductory Thoughts") (Jean-Paul Bronckart); "Les procedes de prise en charge enonciative dans trois genres de textes expositifs" ("The Processes…

  7. [Not Available].

    PubMed

    Ettalbi, S; Ibnouzahir, M; Droussi, H; Wahbi, S; Bahaichar, N; Boukind, E H

    2009-06-30

    Burns remain a very frequent accident in Morocco, which makes them a public health problem. When severe or deep, burns almost inevitably lead to functional and aesthetic sequelae. Through two observations of two children with severe burn sequelae that adversely affected their schooling, we have tried to highlight some of the factors implicated in this tragedy (fire, small gas bottles, and the lack of infrastructure, of medical and paramedical staff, of equipment and of prevention) as a major cause of the occurrence of these sequelae. The aim of our work is to enumerate these various intertwined factors and to propose some solutions, while insisting on prevention.

  8. L'acquisition d'une langue seconde: Quelques developpements theoriques recents (Second Language Acquisition: Some Recent Theoretical Developments).

    ERIC Educational Resources Information Center

    Py, Bernard, Ed.

    1994-01-01

    This collection of articles on second language learning includes: "Action, langage et discours. Les fondements d'une psychologie du langage" ("Action, Language, and Discourse. Foundations of a Psychology of Language") (Jean-Paul Bronckart); "Contextes socio-culturels et appropriation des langues secondes: l'apprentissage en milieu social et la…

  9. Canadian Association for the Study of Adult Education. Proceedings of the Annual Conference (4th, Montreal, Quebec, Canada, May 28-30, 1985).

    ERIC Educational Resources Information Center

    Canadian Association for the Study of Adult Education, Guelph (Ontario).

    These proceedings contain 28 papers (20 in English and 8 in French), including the following: "Beyond Ideology: The Case of the Corporate Classroom" (Zinman); "De quelques dimensions paradoxales de l'education interculturelle" (Ollivier); "Ideology, Indoctrination and the Language of Physics" (Winchester);…

  10. Spectroscopie pompe-sonde pour la détection de bioaérosols

    NASA Astrophysics Data System (ADS)

    Guyon, L.; Courvoisier, F.; Wood, V.; Boutou, V.; Bartelt, A.; Roth, M.; Rabitz, H.; Wolf, J. P.

    2006-10-01

    The fluorescence of tryptophan excited by an ultrashort pulse at 270 nm can be reduced by a factor of two by a second pulse at 800 nm in a pump-probe setup. This decrease is also observed for living bacteria, in which tryptophan is one of the fluorophores, whereas no decrease is observed for other organic molecules such as naphthalene or diesel fuel, despite similar absorption and fluorescence spectra. This remarkable difference is very promising for discriminating biological from organic aerosols.

  11. Aspect Epidemiologique des Sequelles de Brulures a Marrakech, Maroc, a Travers Deux Observations

    PubMed Central

    Ettalbi, S.; Ibnouzahir, M.; Droussi, H.; Wahbi, S.; Bahaichar, N.; Boukind, E.H.

    2009-01-01

    Summary: Burns remain a very frequent accident in Morocco, which makes them a public health problem. When severe or deep, burns almost inevitably lead to functional and aesthetic sequelae. Through two observations of two children with severe burn sequelae that adversely affected their schooling, we have tried to highlight some of the factors implicated in this tragedy (fire, small gas bottles, and the lack of infrastructure, of medical and paramedical staff, of equipment and of prevention) as a major cause of the occurrence of these sequelae. The aim of our work is to enumerate these various intertwined factors and to propose some solutions, while insisting on prevention. PMID:21991156

  12. Quelques Facteurs Sociaux Agissant sur la Formation Permanente et l'Education Informelle en Algerie (Social Factors Acting upon Lifelong Learning and Informal Education in Algeria).

    ERIC Educational Resources Information Center

    Haddab, Mustapha

    1994-01-01

    Analyzes conditions that have led to an increase in private and collective educational initiatives in Algeria, highlighting political and socioeconomic changes since 1988. Indicates that after a long period of a public education monopoly, social factors have led to the development of alternative educational opportunities that are more responsive…

  13. 25th Birthday Cern- Restaurant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2006-05-05

    Ceremony for CERN's 25th anniversary, with several speakers and in the presence of numerous Geneva cantonal and communal authorities and personalities, directors-general, ministers, researchers.... The Federal Councillor and head of the Federal Department of Foreign Affairs, Mr Pierre Aubert, takes the floor to celebrate both the very remarkable results of international cooperation in science and the political will of the European states to pool their resources to build for the future. A great tribute is also paid to the two departed directors, Professors Bakker and Gregory.

  14. Support de la famille dans l'education: Quelques aspects de la realite grecque (Support of the Family for Education: Aspects of the Situation in Greece).

    ERIC Educational Resources Information Center

    Varnava-Skoura, Gella

    1992-01-01

    Describes extended family structure in Greece and offers a profile of the family backgrounds of university students. Finds that the cultural capital and sociolinguistic codes of families are not determining factors for university entry in Greece. University students come from clerical and mixed families, who are willing to make necessary financial…

  15. Simulation '80 Symposium.

    DTIC Science & Technology

    1980-11-21

    defensive, and both the question and the answer seemed to generate supporting reactions from the audience. Discrete Event Simulation The session on...R. Toscano / A. Maceri / F. Maceri (Italy) Analyse numerique de quelques problemes de contact en theorie des membranes 3:40 - 4:00 p.m. COFFEE BREAK...Switzerland Stockage de chaleur faible profondeur: Simulation par elements finis 3:40 - 4:00 p.m. A. Rizk Abu El-Wafa / M. Tawfik / M.S. Mansour (Egypt) Digital

  16. Quelques applications de la decomposition de l'entropie en psychologie et pedagogie (Some Applications of the Resolution of Entropy in Psychology and Pedagogy)

    ERIC Educational Resources Information Center

    Gillet, Louis

    1971-01-01

    Psychological and educational measurement is carried out according to the type of model used and data collected. The H entropy which shows the dispersion of the data can be divided into intragroup and intergroup entropy. Choice of colors, sociometrical choice, and the communications are three situations where this resolution can be applied. (MF)
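
    As a worked statement of the decomposition invoked above (the standard chain rule for Shannon entropy; the notation is assumed for illustration, not quoted from the record): for data split into G groups with weights p_g and within-group entropies H_g,

        H_{total} = H(p_1, \dots, p_G) + \sum_{g=1}^{G} p_g \, H_g,

    where the first term is the intergroup entropy and the sum is the intragroup entropy. For example, two equally weighted groups contribute 1 bit of intergroup entropy plus the average of the two within-group entropies.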

  17. Le grand séisme de Huaxian (1556) : quelques documents chinois

    NASA Astrophysics Data System (ADS)

    Poirier, Jean-Paul

    2017-03-01

    The strong earthquake that struck Shaanxi, Shanxi and several other Chinese provinces in 1556 is generally considered as the deadliest of all earthquakes. It is said that the Chinese annals reported 830,000 casualties. We give here a translation into French of the relevant passage of the annals, as well as of a testimony of a survivor Qin Keda, and of a text engraved on a stela.

  18. Quelques aspects du folklore de la region Roannaise autour de 1950 (Some Aspects of Folklore of the Roanne Region about 1950).

    ERIC Educational Resources Information Center

    Long, Jacqueline

    1971-01-01

    This article examines several aspects of folklore characteristic of the region of Roanne, France, during the 1950's. The town of Roanne, located between Clermont-Ferrand and Lyon on the Loire River, is described in terms of its festive activities during several key holidays. The erosion of various customs and traditions, an inevitable result of…

  19. Estimating criteria pollutant emissions using the California Regional Multisector Air Quality Emissions (CA-REMARQUE) model v1.0

    NASA Astrophysics Data System (ADS)

    Zapata, Christina B.; Yang, Chris; Yeh, Sonia; Ogden, Joan; Kleeman, Michael J.

    2018-04-01

    The California Regional Multisector Air Quality Emissions (CA-REMARQUE) model is developed to predict changes to criteria pollutant emissions inventories in California in response to sophisticated emissions control programs implemented to achieve deep greenhouse gas (GHG) emissions reductions. Two scenarios for the year 2050 act as the starting point for calculations: a business-as-usual (BAU) scenario and an 80 % GHG reduction (GHG-Step) scenario. Each of these scenarios was developed with an energy economic model to optimize costs across the entire California economy and so they include changes in activity, fuels, and technology across economic sectors. Separate algorithms are developed to estimate emissions of criteria pollutants (or their precursors) that are consistent with the future GHG scenarios for the following economic sectors: (i) on-road, (ii) rail and off-road, (iii) marine and aviation, (iv) residential and commercial, (v) electricity generation, and (vi) biorefineries. Properly accounting for new technologies involving electrification, biofuels, and hydrogen plays a central role in these calculations. Critically, criteria pollutant emissions do not decrease uniformly across all sectors of the economy. Emissions of certain criteria pollutants (or their precursors) increase in some sectors as part of the overall optimization within each of the scenarios. This produces nonuniform changes to criteria pollutant emissions in close proximity to heavily populated regions when viewed at 4 km spatial resolution with implications for exposure to air pollution for those populations. As a further complication, changing fuels and technology also modify the composition of reactive organic gas emissions and the size and composition of particulate matter emissions. This is most notably apparent through a comparison of emissions reductions for different size fractions of primary particulate matter. Primary PM2.5 emissions decrease by 4 % in the GHG-Step scenario vs. the BAU scenario while corresponding primary PM0.1 emissions decrease by 36 %. Ultrafine particles (PM0.1) are an emerging pollutant of concern expected to impact public health in future scenarios. The complexity of this situation illustrates the need for realistic treatment of criteria pollutant emissions inventories linked to GHG emissions policies designed for fully developed countries and states with strict existing environmental regulations.

  20. [Hippocrates and Schweitzer - comparison of their concepts of medical ethics].

    PubMed

    Romankow, J

    1999-01-01

    The Greek physician Hippocrates (c. 460-377 BC) is traditionally regarded as the founder of medicine as a scientific discipline and of medical ethics. Hippocrates sought to rely on facts, observation and experiment in the diagnosis and treatment of illness. His work Corpus Hippocraticum also included remarks on aspects of environmental medicine. Albert Schweitzer (1875-1965), recipient of the 1952 Nobel Peace Prize, attained fame as a theologian and musician (his activity included a modern interpretation of J.S. Bach) before turning to missionary work in Africa. Having trained as a physician in Strasbourg, he founded (1913) a hospital at Lambarene, Gabon, to which he dedicated the rest of his life. Early in his life he felt a deep "reverence for life". His philosophy culminated in a universal affirmative ethics of active charity.

  1. The Role of Information and Research in Educational Decision-Making: Some Questions. Le Role De L'Information Et De La Recherche Dans La Prise De Decisions En Matiere D'Education: Quelques Questions.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    This paper, one of a series of Unesco technical information reports, looks at the educational decision makers in developing nations and examines their access to and use of information and research results. Written in English and in French, the paper consists of five parts. Part one discusses problems encountered by educational policy-makers and…

  2. "Bonjour History": Materials and Ideas for Teaching Louisiana's Cajun History = "Bonjour l'Histoire": Quelques Idees, Deux ou Trois Activites, et Plusieurs Materiaux pour l'Enseignement de l'Histoire des Cadiens en Louisiane.

    ERIC Educational Resources Information Center

    Mounier, Brenda

    This teacher's guidebook and videotape are designed to incorporate Acadian (Cajun) history into the 4th grade social studies curriculum and the 4th and 5th grade Louisiana 30-minute daily French programs and French immersion programs. Another goal is to create an awareness, appreciation, and understanding of Acadian history in…

  3. Flight Control Design - Best Practices

    DTIC Science & Technology

    2000-12-01

    was not universally available at the time. The first part of the report gives some examples of flight control problems. They...pitch axis. We can infer a lesson learned in the form of design guidance for control allocation or priority. Rigorous analysis is required to define...flight excitation and data gathering manoeuvres are safe and are sufficient to produce the required information. BP9.5 Time must be allocated in the

  4. 25th Birthday Cern- Restaurant

    ScienceCinema

    None

    2017-12-09

    Ceremony for CERN's 25th anniversary, with several speakers and in the presence of numerous Geneva cantonal and communal authorities and personalities, directors-general, ministers, researchers.... The Federal Councillor and head of the Federal Department of Foreign Affairs, Mr Pierre Aubert, takes the floor to celebrate both the very remarkable results of international cooperation in science and the political will of the European states to pool their resources to build for the future. A great tribute is also paid to the two departed directors, Professors Bakker and Gregory.

  5. Sidération myocardique au cours d'une intoxication au monoxyde de carbone (CO) chez une femme enceinte

    PubMed Central

    Coulibaly, Mahamadoun; Berdai, Mohamed Adnane; Labib, Smael; Harandou, Mustapha

    2015-01-01

    Carbon monoxide (CO) poisoning is the leading cause of death by poisoning in France. The literature on it is old and little known. The most frequent signs of poisoning are the triad of headaches, asthenia, and muscle weakness, especially of the lower limbs. Its consequences are potentially serious for the fetus when it occurs in a pregnant woman, as the fetus is particularly exposed to the risk of hypoxia because of the high affinity of its hemoglobin for CO, which easily crosses the placenta. Cardiovascular events are not rare and can be responsible for considerable morbidity and mortality; their onset may be rapid or delayed, but they usually regress within a few days. Acute coronary syndromes can occur during CO poisoning, up to and including myocardial infarction with ST-segment elevation. It seems legitimate to propose for all such patients: removal of the mother from the CO source; 100% oxygen therapy by face mask, administered by the emergency services and during transfer; and hyperbaric oxygen therapy for all pregnant women, as quickly as possible and whatever the gestational age. PMID:26405502

  6. C3I for Crisis, Emergency and Consequence Management (C3I pour la gestion des crises, des urgences et de leurs consequences)

    DTIC Science & Technology

    2009-05-01

    Whatever the context, decision support requires an in-depth analysis of three (3) important, interdependent aspects, namely the...information, including suggestions for reducing this burden to Department of Defense, Washington Headquarters Services, Directorate for Information...type of threat, indeed requires adopting a collective approach to security, extended to cooperation with multiple civilian organizations

  7. Instabilités et chaos dans les oscillateurs paramétriques optiques

    NASA Astrophysics Data System (ADS)

    Amon, A.; Suret, P.; Bielawski, S.; Derozier, D.; Zemmouri, J.; Lefranc, M.; Nizette, M.; Erneux, T.

    2004-11-01

    We discuss some instability mechanisms recently observed in an optical parametric oscillator (OPO): on the one hand, opto-thermal instabilities in which the system oscillates around the resonance curves of one or more modes; on the other, fast oscillations resulting from the interaction of several transverse modes. The first experimental observation of deterministic chaos in an OPO is also presented.

  8. Problemes theoriques et methodologiques dans l'etude des langues/dialectes en contact aux niveaux macrologique et micrologique = Theoretical and Methodological Issues in the Study of Languages/Dialects in Contact at Macro- and Micro-Logical Levels of Analysis. Proceedings of the International Conference DALE (University of London)/ICRB (Laval University, Quebec)/ICSBT (Vrije Universiteit te Brussel) (London, England, May 23-26, 1985).

    ERIC Educational Resources Information Center

    Blanc, Michel, Ed.; Hamers, Josiane F., Ed.

    Papers from an international conference on the interaction of languages and dialects in contact are presented in this volume. Papers include: "Quelques reflexions sur la variation linguistique"; "The Investigation of 'Language Continuum' and 'Diglossia': A Macrological Case Study and Theoretical Model"; "A Survey of…

  9. Study of Some Mineral Exchangers for Use in Water at High Temperature; ETUDE DE QUELQUES ECHANGEURS MINERAUX UTILISABLES DANS L'EAU A HAUTE TEMPERATURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hure, J.; Platzer, R.; Bittel, R.

    1959-10-31

    The study of the use of ion exchangers at high temperatures was made with a view to the purification of water in reactors. Natural ion exchangers with mineral structures (clay of the montmorillonite type), natural mineral compounds so treated as to give them the properties of ion exchangers (activated graphite), and synthetic mineral compounds (zirconium phosphates and hydroxides and thorium hydroxide) were investigated. The preparation of the minerals is described, and the results obtained with them are discussed in detail. (J.S.R.)

  10. Silent Ship Research Applications and Operation. Volume 2. Unclassified Papers. Proceedings of a Conference Held at SACLANTCEN on 2-4 October 1984

    DTIC Science & Technology

    1985-01-15

    averages calculated over 62 boats are presented in the following table...with wooden, steel or polyester hulls. Dividing the variables into classes makes it possible to build two matrices, a "complete disjunctive table" and a...present some acoustic assessments drawn from two studies carried out by the G.E.R.B.A.M.: the first, on 95 tuna line-fishing boats

  11. Modélisation par éléments finis 3D du champ magnétostatique dans les enroulements des réactances cuirassées de grande puissance. Comparaison avec le calcul en 2D

    NASA Astrophysics Data System (ADS)

    Ngnegueu, Triomphant; Terme, Claude; Mailhot, Michel

    1993-03-01

    In this paper, the finite element method is applied to the computation of the magnetostatic field in the windings of a shell-form reactor. The modeling is carried out in 3D using FLUX3D, a software package developed at the Laboratoire d'Electrotechnique de Grenoble. The results are compared with those obtained in 2D, and also with some test results.

  12. Un cosmologiste oublié: Jean Henri Lambert

    NASA Astrophysics Data System (ADS)

    Débarbat, Suzanne; Lévy, Jacques

    While Kepler's work broadly influenced the progress of astronomy during the 17th century, the Age of Enlightenment saw new conceptions emerge. The short life of J.H. Lambert belongs to the 18th century. His is a name well known in various fields (photometry, map projections, applied mathematics, etc.), yet he is hardly mentioned in cosmology, even though Lambert made an original contribution to it, offering some surprising anticipations...

  13. A Selection of Test Cases for the Validation of Large-Eddy Simulations of Turbulent Flows (Quelques cas d’essai pour la validation de la simulation des gros tourbillons dans les ecoulements turbulents)

    DTIC Science & Technology

    1998-04-01

    they approach the more useful (higher) Reynolds numbers. 8.6 SUMMARY OF COMPLEX FLOWS SQUARE DUCT CMP00 UDOv 6.5 x 10^4 E Yokosawa et al. [64] pg...Sheets for: Chapter 8. Complex Flows 184 185 CMP00: Flow in a square duct - Experiments Yokosawa, Fujita, Hirota, & Iwata 1. Description of the flow...These are the experiments of Yokosawa et al. (1989). Air was blown through a flow meter and a settling chamber into a square duct. Measurements were

  14. Reconstruction du Flux d'Energie et Recherche de Squarks et Gluinos dans l'Experience D0 (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridel, Melissa

    2002-01-01

    The Standard Model describes matter and the fundamental interactions that govern it (electromagnetic, weak and strong). The analysis of the data accumulated so far confirms its predictions, notably the precision measurements performed at LEP. Nevertheless, it faces some theoretical difficulties which suggest that the Standard Model is only the effective theory of another theory at higher energy....

  15. Study of Behavior of Some Varieties of Belgian Potatoes Subjected to Gamma Irradiation; ETUDE DU COMPORTEMENT DE QUELQUES VARIETES BELGES DE POMMES DE TERRE SOUMISES A L'IRRADIATION GAMMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchmann, R.; De Proost, M.; Demalsy, P.

    1962-07-01

    Different varieties of potatoes were irradiated with doses between 5000 and 20000 rads and stored at two different temperatures. Irradiation has a great influence on the weight loss of the potatoes during storage; the degree of sprout inhibition depends on the variety of the potatoes. The glutathione content and the oxygen consumption of potatoes are influenced by irradiation. The greatest effect of irradiation on the chemical composition concerns the starch; an increase in sugar content is observed. The culinary properties of potatoes are not changed by irradiation. (auth)

  16. Métier de sociologue, approche inductive et objet d'analyse. Brèves remarques à partir de Bourdieu.

    PubMed

    Hamel, Jacques

    2015-05-01

    This article seeks to reveal the role played by the inductive approach in sociology. Grounded Theory assumes its full importance in formulating sociological explanations. However, the theory does pose a problem, in that the "method" is not based on clearly defined operations, which remain implicit. This article attempts to show that the object of analysis-what is being analyzed-makes perceptible the operations implicitly conceived by the analyst on the basis of Grounded Theory. With qualitative analysis software, such as Atlas.ti, it is possible to shed light on these operations. The article is illustrated by the theory of Pierre Bourdieu and the epistemological considerations he developed as a result of his qualitative inquiry, La Misère du monde. © 2015 Canadian Sociological Association/La Société canadienne de sociologie.

  17. Détermination des densités de charge d'espace dans les isolants solides par la méthode de l'onde thermique

    NASA Astrophysics Data System (ADS)

    Toureille, A.; Reboul, J.-P.; Merle, P.

    1991-01-01

    A non-destructive method for the measurement of space charge densities in solid insulating materials is described. This method, called "the thermal step technique", relies on the diffusion of a step of heat applied to one side of the sample and on the resulting non-uniform thermal expansion. From the solution of the heat equation, we establish the relations between the measured current and the space charge densities. The deconvolution procedure leading to these charge densities is presented. Some results obtained with this method on XLPE and polypropylene slabs are given.
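
    A hedged sketch of the relation described above (the symbols here are assumptions for illustration, not quoted from the paper): the heat step ΔT(x,t) diffuses through the sample, and the measured short-circuit current weights the internal field E(x) by the local rate of temperature change,

        I(t) = -\alpha \, C \int_0^d E(x) \, \frac{\partial \Delta T(x,t)}{\partial t} \, dx,

    where C is the sample capacitance, d its thickness, and α lumps the thermal expansion and permittivity temperature coefficients. Deconvolving I(t) with the solution of the heat equation yields E(x), and the space charge density then follows from Poisson's equation, ρ(x) = ε dE/dx.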

  18. Optimisation thermique de moules d'injection construits par des processus génératifs

    NASA Astrophysics Data System (ADS)

    Boillat, E.; Glardon, R.; Paraschivescu, D.

    2002-12-01

    One of the most remarkable capabilities of generative production processes, such as selective laser sintering, is their ability to manufacture plastic injection molds directly equipped with conformal cooling channels perfectly adapted to the cavities. For the injection industry to take full advantage of this new opportunity, mold makers need simulation software capable of evaluating the gains in productivity and quality achievable with better-adapted cooling systems. Such software should also be able, where appropriate, to design the optimal cooling system in situations where the injection cavity is complex. Given the lack of tools available in this field, the aim of this article is to propose a simple model of injection molds. This model makes it possible to compare different cooling strategies and can be coupled with an optimization algorithm.

  19. Absorption of Some Mineral Salts by Root System of Different Woody Species and Accumulation over a Whole Vegetative Cycle; ABSORPTION DE QUELQUES SELS PAR L'APPAREIL RADICULAIRE DE DIFFERENTES ESPECES LIGNEUSES ET ACCUMULATION AU COURS D'UN CYCLE VEGETATIF COMPLET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagnaire, J.

    1963-01-01

    The concentration power of plant tissues and the translocation speed of mineral salts vary considerably with the absorbed salt, the botanical species, the tissue considered, and the stage of the vegetative cycle. In Grenoble, with Picea excelsa, true dormancy is short and is accompanied by a pre-dormancy period and a post-dormancy period. In the vegetative period, Picea excelsa leaves concentrate less mineral salt than Acer campestris leaves (coefficient 2 for Ca, 3 for phosphates) and Populus nigra leaves (coefficient 3 for Ca, coefficient 5 for phosphates). Results of tracer studies are tabulated. (C.H.)

  20. Andrei Sakharov

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2006-07-07

    This lecture, given by Mr Jonauch, who was born in Czechoslovakia and studied in Leningrad, Moscow and Prague, is organized by the Yuri Orlov committee. The speaker talks about Andrei Sakharov, the Soviet physicist and public figure who studied in Moscow, carried out research on thermonuclear weapons, and entered the USSR Academy of Sciences in 1953. He took part in the development of the hydrogen bomb, but a few years later opposed the continuation of nuclear tests. In 1970 he created the committee for the defence of human rights, which earned him the Nobel Peace Prize in 1975.

  1. Dislocations et propriétés mécaniques des matériaux céramiques: Quelques problèmes

    NASA Astrophysics Data System (ADS)

    Castaing, J.; Dominguez Rodriguez, A.

    1995-11-01

    The study of the plastic deformation of ceramic single crystals has raised new problems concerning low-temperature dislocation glide and high-temperature dislocation climb, through which the mechanical behaviour can be explained. In this paper we review some examples related to oxides that are linked to the activity of J. Philibert.

  2. X-ray scattering by edge-dislocations in the S_A phase of mesomorphic side chain polyacrylates

    NASA Astrophysics Data System (ADS)

    Davidson, P.; Pansu, B.; Levelut, A. M.; Strzelecki, L.

    1991-01-01

    The X-ray diffraction patterns of mesomorphic side chain polymers in the S_A phase present diffuse streaks in the shape of "butterfly wings". We show that this diffuse scattering may be due to the presence of edge dislocations. On the basis of a previous description of edge dislocations within the framework of the elastic continuum theory of the S_A phase given by De Gennes, we have calculated the Fourier transform of the deformation field. Optical diffraction experiments on sketches of defects have also been carried out to reproduce the X-ray scattering patterns. Both methods show that this diffuse scattering may indeed be due to the presence of edge dislocations. Their density may be roughly estimated at some 10^8/cm^2, and the size of their cores should be only a few Ångströms. From the decay of their elastic deformation field, a typical length λ = (K/B)^{1/2} ≈ 1.5 Å can be obtained, which shows that the elastic constant B of compression of the layers should be about two orders of magnitude larger in the "polymeric" S_A phase than in the "conventional" one.
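
    As a worked step for the estimate quoted above (the splay constant K ≈ 10^{-6} dyn is an assumed typical order of magnitude, not a value from the record): inverting λ = (K/B)^{1/2} gives

        B = K / \lambda^2 \approx \frac{10^{-6}\,\text{dyn}}{(1.5 \times 10^{-8}\,\text{cm})^2} \approx 4 \times 10^{9}\,\text{dyn/cm}^2,

    roughly two orders of magnitude above the 10^7-10^8 dyn/cm^2 typical of conventional smectics, consistent with the conclusion drawn from the decay of the deformation field.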

  3. Paralysie flasque en début de grossesse: penser à l'hypokaliémie due aux vomissements gravidiques, à propos de deux observations dans un pays en voie de développement

    PubMed Central

    Fall, Maouly

    2015-01-01

    Hyperemesis gravidarum can be responsible for rare but serious neuromuscular complications, notably hypokalemic paralysis. We report the cases of two young women of Guinean origin, with no family history or notable medical antecedents, admitted for flaccid paralysis of all four limbs in a context of intractable vomiting in early pregnancy. The work-up found major hypokalemia associated with some electrocardiographic abnormalities. Parenteral potassium supplementation allowed complete motor recovery. Hypokalemic paralysis is a rare complication of hyperemesis gravidarum. Cautious potassium supplementation with electrocardiographic and laboratory monitoring brings a spectacular resolution of the neuromuscular disorders. PMID:26327956

  4. Étude de la Capture Électronique dans la Désintégration du Nuclide 22Na

    NASA Astrophysics Data System (ADS)

    Charpak, G.

    The study of 22Na is carried out with a 4π Geiger counter. We demonstrate the emission of a very low energy radiation, independent of the β+ rays, completely absorbed in a film of a few micrograms per square centimetre of aluminium or of LC 600 plastic, and attributed to the Auger electrons of maximum energy 0.85 keV that follow electron capture. Because of the very short range of these electrons, we are led to discuss in particular a simple method of preparing thin, uniform radioactive sources. We obtain the value of the ratio capture/β+ emission = (6.5 ± 0.9) per 100.

  5. Magnetic levitation and MHD propulsion

    NASA Astrophysics Data System (ADS)

    Tixador, P.

    1994-04-01

    Magnetic levitation and MHD propulsion are now attracting attention in several countries. Different superconducting MagLev and MHD systems are described, concentrating above all on the electromagnetic aspect, and some programmes under way throughout the world are reviewed. Magnetically levitated trains could be the new high-speed (500 km/h) ground transportation system for the 21st century. Intensive studies of MagLev trains using superconductivity have been carried out in Japan since 1970; the construction of a 43 km long track is to be the next step. In 1991 a six-year programme was launched in the United States to evaluate the performance of MagLev systems for transportation. MHD (magnetohydrodynamic) propulsion offers some interesting advantages (efficiency, stealth characteristics, ...) for naval applications and is attracting increasing attention. Japan is also at the forefront here, with the first sea trials, in the harbour of Kobe, of Yamato I, a 260-ton MHD-propelled ship.

  6. Les cooperatives et l'electrification rurale du Quebec, 1945--1964

    NASA Astrophysics Data System (ADS)

    Dorion, Marie-Josee

    This thesis is devoted to the history of rural electrification in Quebec and, more particularly, to the history of the electricity cooperatives. Founded in successive waves from 1945 onward, the rural electricity cooperatives were active in several regions of Quebec and electrified a significant part of the rural areas. To set the context for the creation of the electricity cooperatives, the thesis begins (part one) with an analysis of the socio-political climate of the years preceding the birth of the cooperative rural electrification system. We see how, from the end of the 1920s, rural electrification progressively became a topical question to which successive governments tried to find a solution without committing state funds, or committing very few. In this sense, the first nationalization and the creation of Hydro-Québec in 1944 marked a break with the mode of action favoured until then. The new state corporation was, however, stripped of its mandate to electrify the countryside one year after its foundation, because the Duplessis government, back in power, preferred to put in place its own model of rural electrification. That system rested on electricity cooperatives supported by a public body, the Office de l'électrification rurale (OER). The OER raised great expectations among rural people, who came forward by the hundreds. This enthusiasm for the cooperatives complicated the task of the OER, which had to supervise new societies while organizing itself. Despite hesitations and some delays caused by a lack of technical knowledge and qualified personnel, the OER commissioners proved perceptive and managed to set up a cooperative rural electrification system that produced rapid results. They nevertheless had to rely on the help of the other actors engaged in electrification: the public bodies and the private electricity companies. This start-up and organization period, treated in the second part of the thesis, ended in 1947-48, when the OER and the cooperatives consolidated their control of the cooperative rural electrification system. The years 1948 to 1955 (third part of the thesis) correspond to a period of growth for the cooperative movement. This part therefore examines the development of the cooperatives, the vast construction programmes, and the injection of millions of dollars into rural electrification. It also takes note of the first signs that something was not going so well in the cooperative world, and we see rural people at work: first as members, but also as volunteers and then as employees of the cooperatives. The fourth and last part, covering the years 1956 to 1964, addresses the major changes then under way in the cooperative universe: a new and difficult era for the cooperative movement, whose networks seemed ill-suited to the changing profile of users' electricity consumption. The OER then felt the need to tighten its control over the cooperatives, anticipating the problems and challenges they would have to face. Our study ends with the acquisition of the cooperatives by Hydro-Québec in 1963-64.
    Based on rich and varied sources, our approach sheds new light on an important dimension of the history of electricity in Quebec. In so doing, it makes it possible to grasp the workings and the action of the state from a particular angle, before its profound transformation began in the 1960s. It also provides some new keys to a better understanding of the dynamics of rural communities in this period.

  7. Etude spectroscopique des collisions moleculaires (hydrogene-azote et hydrogene-oxygene) a des energies de quelques MeV

    NASA Astrophysics Data System (ADS)

    Plante, Jacinthe

    1998-09-01

    The results presented here come from a systematic study of constant-velocity collisions between hydrogen projectiles (H+, H2+ and H3+ at 1 MeV/nucleon) and two gaseous targets (N2 and O2) held at different pressures. The collisions are analyzed using emission spectra (from 400 Å to 6650 Å) and intensity-versus-pressure plots. The spectra revealed lines of atomic nitrogen, molecular nitrogen, atomic oxygen, molecular oxygen and atomic hydrogen. The hydrogen lines are observed only with the H2+ and H3+ projectiles; the processes responsible for these lines are therefore projectile-fragmentation mechanisms. In conclusion, there is a notable difference between the projectiles and between the various pressures: the nitrogen and oxygen lines grow with pressure, while the atomic hydrogen lines show a nonlinear relation to pressure.
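
    The intensity/pressure analysis described above lends itself to a simple fit comparison. The following minimal Python sketch (our illustration only, with made-up data values; the thesis data are not reproduced here) contrasts a linear model, appropriate for the nitrogen and oxygen lines, with a power-law model for the atomic hydrogen lines:

        # Illustrative only: fit line intensity versus target-gas pressure with a
        # linear model (N and O lines) and a power-law model (H lines).
        import numpy as np
        from scipy.optimize import curve_fit

        pressure = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # hypothetical units
        i_nitrogen = np.array([2.1, 4.0, 8.2, 15.9, 32.5])    # ~linear growth
        i_hydrogen = np.array([1.0, 3.8, 14.5, 61.0, 240.0])  # nonlinear growth

        def linear(p, a):
            return a * p

        def power_law(p, a, n):
            return a * p ** n

        a_lin, _ = curve_fit(linear, pressure, i_nitrogen)
        (a_pow, n_pow), _ = curve_fit(power_law, pressure, i_hydrogen, p0=(1.0, 2.0))
        print(f"N line: I = {a_lin[0]:.2f} * p")
        print(f"H line: I = {a_pow:.2f} * p^{n_pow:.2f}")

    A fitted exponent n significantly above 1 is one simple way to quantify the nonlinear pressure dependence reported for the hydrogen lines.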

  8. Diffraction des neutrons : principe, dispositifs expérimentaux et applications

    NASA Astrophysics Data System (ADS)

    Muller, C.

    2003-02-01

    Neutron diffraction, on a single crystal or on a polycrystalline (powder) sample, is a very widely used technique, in materials science as in biology, whenever one wishes to determine the crystal structure of a compound or a molecule. However, the precision of the structural determination is strongly correlated with the choice of instrument. The question "how should one choose the instrument best suited to the compound and to the problem at hand?" is therefore fundamental. The aim of this course is to attempt to answer that question by briefly describing the instrumental characteristics of various diffractometers, by setting out the specific advantages of neutron diffraction experiments, and by giving a few examples of applications.

  9. Andrei Sakharov

    ScienceCinema

    None

    2017-12-09

    This talk, given by Mr. Jonauch, who was born in Czechoslovakia and studied in Leningrad, Moscow and Prague, is organized by the Youri Orlov committee. The speaker discusses Andrei Sakharov, the Soviet physicist and public figure who studied in Moscow, carried out research on thermonuclear weapons and entered the USSR Academy of Sciences in 1953. He took part in the development of the hydrogen bomb, but a few years later opposed the continuation of nuclear tests. In 1970 he created the committee for the defence of human rights, which earned him the Nobel Peace Prize in 1975.

  10. Quelques facteurs sociaux agissant sur la formation permanente et l'education informelle en Algerie

    NASA Astrophysics Data System (ADS)

    Haddab, Mustapha

    1994-05-01

    This article attempts an analysis of the conditions under which a certain degree of educational pluralism has begun, tentatively, to be seen in Algeria in association with the political and socio-economic changes that have taken place since 1988. After a long period of centralism codified in the National Charter of 1976, during which the public education system had become all but the only provider of education, in demand largely on account of the diplomas and certificates which it awarded, various social factors (including growth in unemployment among young people and those with qualifications, development of voluntary associations, inflexibility of public schools, various effects of the "language conflict" on the educational system, etc.) have since led to the appearance of varying educational activities. Some of these make up for the inadequacy of the public schools; others, less well established, respond to the emergence of the need for "lifelong education" or provide complementary training for social groups which may have a political or religious motivation. These tendencies are limited to the development of voluntary associations in Algeria.

  11. Lasers solides pompés par diode émettant des impulsions picosecondes à haute cadence dans l'ultraviolet

    NASA Astrophysics Data System (ADS)

    Balembois, F.; Forget, S.; Papadopoulos, D.; Druon, F.; Georges, P.; Devilder, P.-J.; Lefort, L.

    2005-06-01

    Many applications require pulsed ultraviolet laser sources with specific pulse durations and repetition rates. Using original oscillator and amplifier architectures, such sources can be built from diode-pumped solid-state lasers, thereby benefiting from the compactness, efficiency and robustness of these sources. We present here a mode-locked laser and a Q-switched microlaser delivering picosecond ultraviolet pulses at repetition rates of a few MHz, intended for time-resolved fluorescence microscopy, as well as an oscillator-amplifier system for materials processing that produces more than 600 mW of UV radiation at 266 or 355 nm with sub-nanosecond pulses.

  12. La genèse du concept de champ quantique

    NASA Astrophysics Data System (ADS)

    Darrigol, O.

    This is a historical study of the roots of a concept which has proved essential in modern particle physics: the concept of the quantum field. The first steps were accomplished by two young theoreticians: Pascual Jordan quantized the free electromagnetic field in 1925 by means of the formal rules of the just-discovered matrix mechanics, and Paul Dirac quantized the whole system of charges + field in 1927. Using Dirac's equation for electrons (1928) and Jordan's idea of quantized matter waves (second quantization), Werner Heisenberg and Wolfgang Pauli provided in 1929-1930 an extension of Dirac's radiation theory and the proof of its relativistic invariance. Meanwhile, Enrico Fermi independently discovered a more elegant and pedagogical formulation. To appreciate the degree of historical necessity of the quantization of fields, and the value of contemporaneous criticism of this approach, it was necessary to investigate some of the history of the old radiation theory. We present the various arguments, however provisional, naïve or wrong they may appear in retrospect. We thus hope to contribute to a more vivid picture of notions which, once deprived of their historical setting, might seem abstruse to the modern user.

  13. Conductivite dans le modele de Hubbard bi-dimensionnel a faible couplage

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic

    The two-dimensional (2D) Hubbard model is often considered the minimal model for the copper-oxide high-critical-temperature superconductors (cuprates). On a square lattice, this model exhibits the phases common to all cuprates: the antiferromagnetic phase, the superconducting phase and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized in the cuprates and are therefore good candidates for validating a theoretical model and for better understanding the physics of these materials. This thesis concerns the calculation of those properties for the 2D Hubbard model at weak to intermediate coupling. The calculation method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the conductivity expression in the TPSC approach is presented. This expression contains the so-called vertex corrections, which account for correlations between quasiparticles. To make the numerical evaluation of these corrections feasible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. The calculations are done for the square lattice with nearest-neighbour hopping around the antiferromagnetic quantum critical point. At dopings below the critical point, the optical conductivity displays a mid-infrared bump at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, insulating behaviour is found in the pseudogap when vertex corrections are neglected, and metallic behaviour when they are included. Near the critical point, the resistivity is linear in T at low temperature and becomes progressively proportional to T^2 at high doping. Some results with hopping to more distant neighbours are also presented. Keywords: Hubbard, quantum critical point, conductivity, vertex corrections
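
    The abstract names two numerical workhorses, fast Fourier transforms and cubic splines. The sketch below (a generic illustration in Python, not the thesis code; the sample function is hypothetical) shows how the two are typically combined: a response function known on a coarse grid is spline-interpolated onto a fine grid, and convolution-type sums of the kind appearing in vertex corrections are evaluated in O(N log N) via the FFT:

        # Illustrative only: cubic-spline refinement of a coarsely sampled
        # response function, then a circular convolution computed via FFT.
        import numpy as np
        from scipy.interpolate import CubicSpline

        w_coarse = np.linspace(0.0, 10.0, 32)
        chi_coarse = 1.0 / (1.0 + w_coarse ** 2)   # stand-in response function

        chi = CubicSpline(w_coarse, chi_coarse)    # cheap evaluation anywhere
        w_fine = np.linspace(0.0, 10.0, 1024)

        f = chi(w_fine)                            # spline-evaluated fine grid
        g = np.exp(-w_fine)                        # hypothetical kernel
        # Product of DFTs = circular convolution, at O(N log N) cost:
        conv = np.fft.irfft(np.fft.rfft(f) * np.fft.rfft(g), n=f.size)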

  14. Acides gras oméga-3 et dyslexie

    PubMed Central

    Zelcer, Michal; Goldman, Ran D.

    2015-01-01

    Abstract. Question: In light of the rise in the number of school-aged children diagnosed with dyslexia, what is the role of omega-3 fatty acid supplements in managing this condition? Answer: Dyslexia is the most common learning disorder and is known to have multifactorial causes. Recent evidence points to a correlation between defective metabolism of polyunsaturated fatty acids and neurodevelopmental disorders such as dyslexia. Although omega-3 fatty acid supplementation in dyslexic children has been studied, the evidence is limited. Homogeneous diagnostic criteria for dyslexia, objective measures of fatty acid deficiency and close monitoring of dietary intake are only some of the factors that could improve the quality of research in this field.

  15. Cancer métaplasique du sein: à propos d'un cas

    PubMed Central

    Babahabib, Moulay Abdellah; Chennana, Adil; Hachi, Aymen; Kouach, Jaoud; Moussaoui, Driss; Dhayni, Mohammed

    2014-01-01

    Metaplastic carcinomas of the breast are rare tumours. They form a heterogeneous group of tumours defined by the World Health Organization as infiltrating ductal carcinomas containing areas of metaplastic change (squamous, spindle-cell, chondroid, osseous or mixed), ranging from a few microscopic foci to complete glandular replacement. The clinical and radiological features are not specific. Treatment combines surgery, radiotherapy and chemotherapy; hormone therapy has no role. The prognosis is poor. Histopathology combined with immunohistochemistry allows a reliable diagnosis. Given that therapeutic management is limited, a new molecular approach could change the weak and poorly defined contribution of conventional systemic treatments. Patients with metaplastic breast carcinoma might benefit from targeted therapies, which remains to be confirmed by clinical trials. PMID:25870723

  16. Potentialités des lasers à fibre dans la génération de rayonnement cohérent UV

    NASA Astrophysics Data System (ADS)

    Martel, G.; Hideur, A.; Ortaç, B.; Lecourt, J.-B.; Chédot, C.; Brunel, M.; Chéron, B.; Limpert, J.; Tunnermann, T.; Grelu, Ph.; Gicquel-Guézo, M.; Labbé, C.; Loualiche, S.; Roussignol, Ph.; Sanchez, F.; Leblond, H.

    2006-12-01

    The first rare-earth-doped fiber laser operated at the very beginning of the 1960s. It delivered a few milliwatts around 1 μm. The following decades saw very little improvement, either in laboratories or in industry. The last decade (1995/2005) saw the second revolution of fiber lasers take shape. Already at the kilowatt level in continuous-wave operation, they now reach 10^13 W/cm² with pulses of around a hundred femtoseconds. In this presentation we review the potential of fiber lasers. We describe the technological obstacles that have been overcome over the past ten years for the CW as well as the femtosecond regimes. We also show how the next generation of optical fibers, currently under development, will provide stable, very-high-power sources for the near future.

  17. L'astronomie au féminin

    NASA Astrophysics Data System (ADS)

    Nazé, Yaël

    2006-03-01

    Who holds the record for comet discoveries? A woman. Who made it possible to understand how the population of stars is organized? A woman. Who discovered the law that lets us survey the Universe, found lighthouses in space, understood how the stellar forges work and overturned our vision of the Universe? Again and always a woman... Yet when asked to name a historical astronomer at random, we most often think of men: Ptolemy, Galileo, Copernicus or, closer to us, Hubble. Admittedly, over the centuries women had little access to science in general and to astronomy in particular, but that is no reason to believe in a total absence of contributions from women! That is what the author reveals here. Far from any form of militant polemic, we follow the careers of a few important scientists who happen to share one characteristic: their sex.

  18. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that demands prohibitive computing power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that will relax the dependency on rigid, computer-intensive, stochastic PDE, and will shift the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions to real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  19. Extraction of water and solutes from argillaceous rocks for geochemical characterisation: Methods, processes and current understanding

    NASA Astrophysics Data System (ADS)

    Sacchi, Elisa; Michelot, Jean-Luc; Pitsch, Helmut; Lalieux, Philippe; Aranyossy, Jean-François

    2001-01-01

    This paper summarises the results of a comprehensive critical review, initiated by the OECD/NEA "Clay Club," of the extraction techniques available to obtain water and solutes from argillaceous rocks. The paper focuses on the mechanisms involved in the extraction processes, the consequences on the isotopic and chemical composition of the extracted pore water and the attempts made to reconstruct its original composition. Finally, it provides some examples of reliable techniques and information, as a function of the purpose of the geochemical study.

  20. Developpement d'une methode calorimetrique de mesure des pertes ac pour des rubans supraconducteurs a haute temperature critique

    NASA Astrophysics Data System (ADS)

    Dolez, Patricia

    The research carried out for this doctoral project led to the development of an ac-loss measurement method intended for the study of high-critical-temperature superconductors. In choosing the principles of the method we drew on earlier work on conventional superconductors, in order to offer an alternative to the electrical technique, which at the start of this thesis suffered from the dependence of the measured result on the position of the voltage contacts on the sample surface, and in order to measure ac losses under conditions simulating the reality of future industrial applications of superconducting tapes. In particular, the method is calorimetric, with simultaneous in-situ calibration. The validity of the method was verified theoretically and experimentally. On the one hand, measurements were made on Bi-2223 samples sheathed in silver or silver-gold alloy and compared with the theoretical predictions of Norris, indicating the mostly hysteretic nature of the ac losses in our samples; on the other hand, an electrical measurement was performed in situ whose results agree perfectly with those given by our calorimetric method. We also compared the current and frequency dependence of the ac losses of a sample before and after it was damaged. These measurements seem to indicate a relation between the exponent of the power law modelling the dependence of the losses on current and the longitudinal critical-current inhomogeneities induced by the damage. Moreover, the frequency dependence shows that at the large transverse fractures created by the damage in the superconducting core, the current divides locally in roughly equal parts between the few grains of superconducting material that remain attached at the core-sheath interface and the silver-alloy coating. The advantage of a calorimetric method over the electrical technique, which is faster, more sensitive and now reliable, lies in the possibility of measuring ac losses in complex environments reproducing the situation found, for example, in a power transmission cable or a transformer. In particular, superimposing a dc current on the usual ac current allowed us to observe experimentally, for the first time to our knowledge, a particular behaviour of the ac losses as a function of the dc current described theoretically by LeBlanc. From this we could infer the presence of a 16 A Meissner screening current, which lets us determine the conditions under which a reduction of the ac-loss level could be obtained by applying a dc current, a phenomenon known as the "Clem valley".
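
    For reference, the Norris predictions invoked above are standard analytic results for the hysteretic self-field loss per cycle and per unit length of a conductor carrying a peak transport current I_0 = F I_c (quoted here from the general literature for the two idealized cross-sections Norris treated; they are not derived in the thesis):

        Q_{ellipse} = \frac{\mu_0 I_c^2}{\pi}\left[(1-F)\ln(1-F) + \frac{(2-F)F}{2}\right]
        Q_{strip}   = \frac{\mu_0 I_c^2}{\pi}\left[(1-F)\ln(1-F) + (1+F)\ln(1+F) - F^2\right]

    Since these losses are per cycle, the dissipated power grows linearly with frequency, which is one signature commonly used to identify hysteretic behaviour.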

  1. Antisuperconductors: Properties of Layered Compounds with Coupling

    NASA Astrophysics Data System (ADS)

    Carton, J.-P.; Lammert, P. E.; Prost, J.

    1995-11-01

    In this note, we consider properties of a hypothetical superconductor composed of Josephson-coupled microscopic layers with tunneling energy minimized at a phase difference of π. The non-zero phase offset in the ground state engenders an intriguing interplay between the superconductive ordering and structural lattice defects. Unusual magnetic properties are expected in the case of highly disordered crystals, which are consistent with observations of a “paramagnetic Meissner” or “Wohlleben” effect in high-T_c cuprate superconductors.
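
    In standard Josephson notation (our one-line paraphrase of the premise, assuming the usual sinusoidal current-phase relation), an ordinary junction has its coupling energy minimized at zero phase difference, whereas the layers considered here are coupled with the opposite sign:

        E_{ordinary}(\Delta\varphi) = -E_J \cos\Delta\varphi   (minimum at \Delta\varphi = 0)
        E_{\pi}(\Delta\varphi)      = +E_J \cos\Delta\varphi   (E_J > 0, minimum at \Delta\varphi = \pi)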

  2. Production du baryon Sigma+ dans les collisions e+e- au LEP

    NASA Astrophysics Data System (ADS)

    Joly, Andre

    The mechanisms of baryon production in e+e- interactions are the subject of numerous studies. Moreover, the production modes of strange baryons seem to involve specific processes that are still poorly understood. Our study of Σ+ baryon production in e+e- interactions allows us to formulate some remarks on the current state of knowledge on the subject. An original reconstruction method and specific selection criteria were developed to identify high-energy Σ+ baryons (E > 5 GeV) through their decay channel into a proton and a π0 (Σ+ → p π0). Three main measurements are made with our sample of reconstructed baryons. The measured mean number of Σ+ baryons produced per e+e- event at 91 GeV is 0.102 ± 0.006 (stat.) ± 0.008 (syst.) ± 0.003 (extrap.), where the errors are statistical, systematic and from the extrapolation procedure. This result agrees with previous ones, but with smaller errors. The differential cross-section as a function of energy is measured and compared with the predictions of the main Monte Carlo generators (JETSET7.4(MOPS), JETSET7.4 and HERWIG5.9). At high energy, HERWIG does not seem to reproduce the measurements as well as the two versions of JETSET. Finally, the position of the maximum of the differential production cross-section of Σ+ baryons as a function of momentum is measured; we find a peak position of 2.32 ± 0.47. A specific study of the JETSET7.4(MOPS) generator is carried out in order to better understand the mechanisms of strangeness and spin production in baryon production. No generator seems able to describe spin and strangeness production simultaneously.
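
    For readers who want a single figure of merit, adding the three quoted uncertainties in quadrature (an illustrative convention; the analysis quotes them separately) gives

        \sigma_{tot} = \sqrt{0.006^2 + 0.008^2 + 0.003^2} \approx 0.010,

    i.e. roughly a 10% relative uncertainty on the mean Σ+ rate per event.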

  3. Peut-on améliorer la motivation des étudiants en médecine pour un cours fondamental de physiologie en intégrant à l’exposé magistral quelques méthodes pédagogiques actives?

    PubMed Central

    Bentata, Yassamine; Delfosse, Catherine

    2017-01-01

    Student motivation is an essential condition for learning, and the value a student attaches to an activity is one of the three major components of that motivation. How can students be brought to perceive the usefulness and interest of their university courses while stimulating their motivation? The aim of this study is to determine the value students attribute to the fundamental physiology course and to assess the impact of integrating some active-learning methods into lectures on the motivation of undergraduate medical students in a young faculty. This prospective study concerned second-year medical students (PCEM2). We first assessed students' motivation for their university courses via a first questionnaire; we then integrated two teaching activities, a case study and concept-map construction, into the physiology lectures; finally, via a second questionnaire, we evaluated the impact of these two activities on student motivation. Of the 249 students enrolled in PCEM2, 131 completed and returned the first questionnaire and 109 the second. Our students' motivation toward their university courses was globally very favourable, even if motivation for the physiology course (70.8%) was slightly lower than for the courses as a whole (80%). The students appreciated the two proposed activities; only 13% (case study) and 16.8% (concept map) were not satisfied. 40.9% of the students produced a concept map, and the quality of these productions, judged on the identification of concepts and of links between concepts, was globally satisfactory for a first attempt. Influenced by multiple internal and external factors, student motivation remains a major issue in the university setting. In this context, rigorous planning of diversified, active teaching activities is one of the main avenues open to teachers for stimulating that motivation. PMID:29721145

  4. L’ostéotomie de Chiari dans la prise en charge de la dysplasie de la hanche chez l’adulte: à propos de 9 cas

    PubMed Central

    Shimi, Mohammed; Mahdane, Hicham; Mechchat, Atif; El Ibrahimi, Abedelhalim; El Mrini, Abedelmajid

    2013-01-01

    Acetabular dysplasia in young adults leads, in more than 50% of cases, to secondary hip osteoarthritis before the age of 50. The Chiari osteotomy was initially described for the treatment of acetabular dysplasia in children and adolescents; its indications have since been extended to acetabular dysplasia in adults. We performed 9 Chiari osteotomies from 2009 to 2012. The 9 hips were evaluated clinically and radiologically before and after surgery, with a mean follow-up of 18.4 months. The osteotomy was performed on painful dysplastic hips without osteoarthritis (45%), with early osteoarthritis (stage 2: 11%) or with advanced osteoarthritis (stages 3 and 4: 44%). Functional results were very satisfactory at the last follow-up: the mean PMA score was 17.4, with a remarkable analgesic effect in particular. Radiologically, the osteotomy normalized the hip measurements in practically all cases, thanks to substantial medialization, usually greater than 20 mm (87.5%). The Chiari osteotomy is a safe procedure. When the indication is correctly set, it provides remarkable relief and halts the osteoarthritis. It therefore retains a privileged place in the treatment of hip osteoarthritis, even advanced, on pure or mixed acetabular dysplasia. PMID:23565312

  5. Efficacité de la gestion de vaccins et qualité de vaccination à l'antenne PEV Kisangani en République Démocratique du Congo

    PubMed Central

    Labama, Matthieu Betofe; Longembe, Eugène Basandja; Likwela, Joris Losimba

    2017-01-01

    Introduction: Vaccine quality determines the expected results of vaccination. It depends on the effectiveness of the technical and logistical management system in place. This study was conducted to assess the effectiveness of vaccine management and to draw lessons from it. Methods: A retrospective study covering the period 2010 to 2014 was conducted on vaccine logistics management at the Kisangani EPI branch. A document review, supplemented by semi-structured interviews with managers and providers of vaccination services, made it possible to evaluate vaccine management using the WHO Effective Vaccine Management (EVM) model and to identify gaps. Results: Providers showed poor knowledge of which vaccines must not be frozen, of freeze tests and of other forms of vaccine damage. Computerized data management at the branch level is correctly carried out. None of the criteria evaluated reached the 80% target. Compliance with the storage temperature was 70% at the branch level; the vaccine-management criterion stood at 65% and 67% at the BCZ and CS levels respectively. The maintenance criterion was nil at all levels. Conclusion: Dysfunction of the logistics system is notable at all levels of the health pyramid and could interfere with the quality and the expected impact of vaccination. Particular attention must be paid to equipment maintenance. PMID:28748006

  6. Pharmacogénétique: qu'en est-il au Maroc?

    PubMed Central

    Idrissi, Meryem Janati; Ouldim, Karim; Amarti, Afaf; El Hassouni, Mohamed; Khabbal, Youssef

    2013-01-01

    Pharmacogenetics is the study of the influence of interindividual genetic variation on drug response, with the aim of improving patient care through personalized medicine. The genomes of two people differ by only 0.1% of the 3.2 billion base pairs, yet this variation underlies adverse drug reactions, which have a major impact both clinically and economically. Over the last decade some of these adverse effects have been avoided thanks to pharmacogenetic tests. In Morocco, pharmacogenetic research is beginning to attract researchers' interest, with a few studies: a first study in 1986 on isoniazid acetylation in the Moroccan population, followed by two others in 2011 focusing on the metabolism of tacrolimus and of vitamin K antagonists. The hope now is to identify the major genetic polymorphisms affecting Moroccan patients, in order to provide them with suitably adapted care. PMID:23785548

  7. La neurobrucellose: une cause curable de surdité neurosensorielle à ne pas méconnaitre

    PubMed Central

    Malhi, Alae Bezzari; Ridal, Mohamed; Bouchal, Siham; Belahsen, Mohammed Faouzi; El Alami, Mohamed-Nourdine

    2015-01-01

    Brucellosis is a ubiquitous zoonosis affecting in particular the Mediterranean countries and the Middle East. Its neurological manifestations are quite diverse. We report the case of a 45-year-old farmer who presented with profound bilateral deafness of 2 months' duration, associated with headaches, episodes of alternating hemiparesis and speech disturbance resolving spontaneously within a few minutes, and intermittent vertigo for 9 months. Lumbar puncture, magnetic resonance imaging and above all serology led to a diagnosis of neurobrucellosis. The course under dual antibiotic therapy was favourable, with regression of the neurological signs, normalization of the CSF and improvement of the deafness. Neurobrucellosis is a serious condition whose prognosis depends on early diagnosis and treatment. We believe that a clinical picture combining sensorineural deafness with progressive neurological symptoms should first suggest neurobrucellosis, especially in an at-risk patient and in countries where the disease is endemic. PMID:26889303

  8. Excitations Élémentaires au Voisinage de la Surface de Séparation d'un Métal Normal et d'un Métal Supraconducteur

    NASA Astrophysics Data System (ADS)

    Saint-James, Par D.

    The excitation spectrum of a layer of normal metal (N) deposited on a superconducting substrate (S) is discussed. It is shown that if the electron-electron attractive interaction is negligibly small in (N) there is no energy gap in the excitation spectrum even if the thickness of the layer (N) is small. A similar study, with equivalent conclusions, has been carried out for two adjoining superconductors and for normal metal spheres embedded in a superconductor. The effect may possibly explain some peculiar results of tunnelling experiments on hard superconductors.

  9. Étude de la structure des alliages vitreux Ag-As2S3 par diffraction de rayons X

    NASA Astrophysics Data System (ADS)

    Popescu, M.; Sava, F.; Cornet, A.; Broll, N.

    2002-07-01

    The structure of several silver-alloyed arsenic chalcogenide glasses has been determined by X-ray diffraction. For low silver doping the disordered layer structure characteristic of glassy As2S3 is retained, as demonstrated by the well-developed first sharp diffraction peak in the X-ray diffraction pattern. For a high amount of silver introduced into the As2S3 matrix, the disordered layer configurations disappear, as shown by the diminishing and even disappearance of the first sharp diffraction peak in the X-ray patterns. A three-dimensional structure based on Ag2S-type configurations is formed.

  10. Les effets des interfaces sur les proprietes magnetiques et de transport des multicouches nickel/iron et cobalt/silver

    NASA Astrophysics Data System (ADS)

    Veres, Teodor

    This thesis is devoted to the study of the structural evolution of the magnetic and transport properties of Ni/Fe multilayers and of Co- and Ag-based nanostructures. In a first, essentially bibliographic part, we introduce some basic concepts related to the magnetic and transport properties of metallic multilayers, followed by a brief description of the methods used to analyze the results. The second part is devoted to the magnetic and transport properties of ferromagnetic/ferromagnetic Ni/Fe multilayers. We show that a coherent interpretation of these properties requires taking interface effects into account. We set out to identify, evaluate and study these interface effects and their evolution under thermal treatments such as deposition at elevated temperature and ion irradiation. Correlated analyses of the structure and the magnetoresistance allow us to draw conclusions on the influence of buffer layers, between the interface and the substrate as well as between the layers themselves, on the magnetic behaviour of F/F layers. The third part is devoted to giant magnetoresistance (GMR) systems based on Co and Ag. We study the evolution of the microstructure under irradiation with 1 MeV Si+ ions, and the effects of these changes on the magnetic behaviour. This part begins with the analysis of the properties of a hybrid multilayer, intermediate between multilayers and granular materials. Using diffraction, superparamagnetic-relaxation and magnetoresistance measurements, we analyze the structural evolution produced by ion irradiation. We establish models that help us interpret the results for a series of multilayers covering a wide range of magnetic behaviours as a function of the thickness of the magnetic Co layer. We find that in these systems the effects of ion irradiation are strongly influenced by the surface energy as well as by the enthalpy of formation, which is strongly positive for the Co/Ag system.

  11. Validation materielle d'une architecture generique de reseaux avioniques basee sur une gestion modulaire de la redondance

    NASA Astrophysics Data System (ADS)

    Tremblay, Jose-Philippe

    Avionics systems have evolved continuously since digital technologies appeared around the turn of the 1960s. After passing through several development paradigms, these systems have followed the Integrated Modular Avionics (IMA) approach since the early 2000s. Unlike earlier methods, this approach is based on modular design, the sharing of generic resources among several systems, and greater use of multiplexed buses. Most of the concepts used by the IMA architecture, although already known in distributed computing, represent a marked change from earlier models in the avionics world. They come on top of the stringent constraints of classical avionics, such as determinism, real-time operation, certification and high reliability targets. The adoption of the IMA approach triggered a revision of several aspects of the design, certification and implementation of an IMA system in order to take full advantage of it. This revision, slowed by avionics constraints, is still under way and still offers opportunities to develop new tools, methods and models at all levels of the implementation process of an IMA system. In the context of proposing and validating a new IMA architecture for a generic network of sensors on board an aircraft, we identified some aspects of the traditional approaches to building this type of architecture that could be improved. To remedy some of the identified shortcomings, we proposed a validation approach based on a reconfigurable hardware platform, together with a new redundancy-management approach for reaching reliability targets. Unlike the more limited static tools that meet the needs of federated-architecture design, our validation approach is specifically developed to facilitate the design of an IMA architecture. Within this thesis, three main axes of original contributions emerged from the work carried out according to the research objectives stated above. The first axis is the proposal of a hierarchical sensor-network architecture based on the model of the IEEE 1451 standard. This standard facilitates the integration of smart sensors and actuators into any control system through standardized, generic interfaces.
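
    The IEEE 1451 premise of generic, self-describing transducer interfaces can be illustrated with a short sketch (in Python, under our own simplifications; the field names are hypothetical and do not follow the standard's actual binary TEDS encoding):

        # Minimal sketch of an IEEE 1451-style self-describing transducer.
        from dataclasses import dataclass

        @dataclass
        class TransducerTEDS:
            manufacturer_id: int
            model_number: int
            serial_number: int
            physical_unit: str     # e.g. "kPa" for a pressure sensor
            min_value: float
            max_value: float

        class SmartSensor:
            """Exposes its datasheet through a generic, device-independent interface."""
            def __init__(self, teds, read_fn):
                self.teds = teds
                self._read_fn = read_fn

            def describe(self):
                return self.teds        # network layer discovers capabilities here

            def read(self):
                return self._read_fn()  # actual acquisition is hardware-specific

        sensor = SmartSensor(
            TransducerTEDS(0x1A2B, 42, 1001, "kPa", 0.0, 500.0),
            read_fn=lambda: 101.3,      # stand-in for a real hardware read
        )
        print(sensor.describe().physical_unit, sensor.read())

    A control system coded against describe()/read() can then integrate any compliant sensor without per-device glue code, which is the integration benefit the standard targets.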

  12. Inelastic neutron scattering study of icosahedral AlFeCu quasicrystal

    NASA Astrophysics Data System (ADS)

    Quilichini, M.; Hennion, B.; Heger, G.; Lefebvre, S.; Quivy, A.

    1992-02-01

    Dynamical properties of quasiperiodic structures are rather tricky and far from being understood. For quasicrystals, little information is available either theoretically or experimentally. In this paper we present new experimental results obtained by inelastic neutron scattering on a monodomain quasicrystal of Al63Cu25Fe12 already investigated in a previous study [1]. In section 1 we recall the basic features of quasiperiodic structures and briefly review theoretical work on the dynamics of quasicrystals, which helps in appreciating the experimental data presented in section 2 and discussed in section 3.

  13. La dysplasie fibreuse: état des lieux

    PubMed Central

    Akasbi, Nessrine; Abourazzak, Fatima Ezzahra; Talbi, Sofia; Tahiri, Latifa; Harzy, Taoufik

    2015-01-01

    Fibrous dysplasia of bone is a benign congenital but non-hereditary bone condition in which normal bone is replaced by fibrous tissue containing immature osteogenesis. It is caused by a mutation of the GNAS1 gene on chromosome 20q13, an activating mutation of the α-subunit of the G protein. The disease is most often silent, discovered incidentally on a plain radiograph or revealed by bone pain or a pathological fracture. Imaging, and histology when necessary, establish the diagnosis. Although it is not a tumour, it is often classified among benign bone tumours for reasons of radiographic and pathological differential diagnosis. It may be monostotic or polyostotic. The therapeutic approach is essentially symptomatic. Some recent publications have suggested the major benefit of a bisphosphonate, in particular pamidronate, which appears to reduce pain and stimulate progressive remineralization of osteolytic areas in treated patients. Other treatments, such as targeted therapy, are under evaluation. PMID:26401215

  14. Avant Propos

    NASA Astrophysics Data System (ADS)

    Cornet, Alain; Broll, Norbert; Denier, Philippe

    2002-07-01

    The fourth Rayons X et Matière colloquium (RX 2001) was held in Strasbourg from 4 to 7 December 2001. As at the previous colloquia (1995, 1997 and 1999), we were able to bring together many researchers, industrialists and instrument makers concerned with materials characterization. This event, held every two years, aims to report periodically on advances in X-ray techniques and more particularly in surface engineering. This year we added two new themes: the study of nanomaterials and microbeam techniques. André GUINIER was with us at the first colloquium (1995, devoted to commemorating the centenary of the discovery of X-rays). We are saddened by his death and dedicate the RX 2001 colloquium to him; the first articles of this volume recall his scientific life and his main works. As we write these lines, we are preparing to launch the RX 2003 colloquium. These colloquia take place thanks to the constant participation of researchers and the commitment of equipment manufacturers and distributors; we thank them warmly and look forward to seeing them in Strasbourg in December 2003.

  15. Influence des conditions climatiques saisonnières sur quelques paramètres physiologiques dès boucs Créoles alimentés avec de l'ensilage de banane

    NASA Astrophysics Data System (ADS)

    Fauconneau, B.; Xande, A.

    1986-06-01

    The response of three groups of 12 male Creole goats (weighing about 10 kg) to environmental variations was tested in Guadeloupe (French West Indies) at three times of the year: the end of the humid season (October-November), the dry season (February-March) and the beginning of the humid season (July-August). Voluntary intake of banana silage (a mixed silage of green banana, bagasse, wheat bran and urea, supplemented with molasses) was not significantly affected by climatic variations. Three physiological parameters were measured: rectal temperature, respiratory frequency and cardiac frequency. These parameters were correlated with heat-production-dependent factors such as metabolic body weight, body weight gain and voluntary intake. Rectal temperature increased throughout the day until sunset and then decreased during the night. Both the minimal rectal temperature and the daily increase in rectal temperature were correlated with ambient temperature. Cardiac frequency increased during feeding and generally seemed to be correlated with the activity of the animals, and thus with their behavioural response to environmental variations. Respiratory frequency was the most sensitive index of the goats' response to climate. The daily increase in respiratory frequency was large at the end of the humid season but was not observed in the dry season. This increase depended on the rise in ambient temperature but also on air humidity and air velocity. These points are discussed with regard to the integration of these physiological parameters in thermoregulation.

  16. Détection et prise en charge efficace à l’urgence d’une luxation de Lisfranc à faible vélocité

    PubMed Central

    Mayich, D. Joshua; Mayich, Michael S.; Daniels, Timothy R.

    2012-01-01

    Abstract. Objective: To improve primary care physicians' ability to recognize the mechanisms and common presentations of low-velocity Lisfranc injuries (LLF) and to foster a better understanding of the role of imaging and the principles of primary care for low-velocity LLF. Data sources: A review of the specialist literature in MEDLINE was carried out and the results summarized, covering in particular anatomy and mechanisms, clinical and imaging diagnosis, and the principles of management in the primary care setting. Main message: Low-velocity LLF result from various mechanisms, and on clinical examination and imaging the telltale signs are very subtle. A high degree of suspicion and caution is therefore needed in managing this type of injury. Conclusion: By applying a few therapeutic principles to the management of low-velocity LLF, which are potentially devastating if not recognized at presentation, the outcome of such injuries can be optimized.

  17. Maladie de Takayasu et polyarthrite rhumatoïde: une association rare - à propos d'une observation

    PubMed Central

    Frikha, Faten; Maazoun, Fatma; Snoussi, Mouna; Abid, Leila; Abid, Hanen; Bouassida, Walid; Kaddour, Neila; Bahloul, Zouhir

    2012-01-01

    Takayasu arteritis, or Takayasu disease (TD), and rheumatoid arthritis (RA) are two chronic inflammatory diseases, and their association has been reported in the literature through a few sporadic case reports. We report a new case of this association. A 44-year-old woman diagnosed with rheumatoid-factor-positive rheumatoid arthritis developed headaches with persistent vertigo. Examination revealed abolished right radial and humeral pulses, a bilateral carotid bruit and an unobtainable blood pressure on the right. Arteriography confirmed involvement of the aortic arch of the Takayasu type. The diagnosis of Takayasu disease associated with rheumatoid arthritis was retained. The patient was treated with corticosteroids (prednisone at a dose of 0.5 mg/kg per day) and disease-modifying therapy with methotrexate, with a good initial response. Through our case and a review of the literature, the epidemiological, etiopathogenic, clinical, therapeutic and outcome characteristics of this association are discussed. PMID:22937201

  18. Multiferroic RMnO3 thin films

    NASA Astrophysics Data System (ADS)

    Fontcuberta, Josep

    2015-03-01

    Multiferroic materials have received an astonishing attention in the last decades due to expectations that potential coupling between distinct ferroic orders could inspire new applications and new device concepts. As a result, a new knowledge on coupling mechanisms and materials science has dramatically emerged. Multiferroic RMnO3 perovskites are central to this progress, providing a suitable platform to tailor spin-spin and spin-lattice interactions. With views towards applications, the development of thin films of multiferroic materials have also progressed enormously and nowadays thin-film manganites are available, with properties mimicking those of bulk compounds. Here we review achievements on the growth of hexagonal and orthorhombic RMnO3 epitaxial thin films and the characterization of their magnetic and ferroelectric properties, we discuss some challenging issues, and we suggest some guidelines for future research and developments.

  19. Electromagnetic properties of a modular MHD thruster

    NASA Astrophysics Data System (ADS)

    Kom, C. H.; Brunet, Y.

    1999-04-01

    The magnetic field of an annular MHD thruster made of independent superconducting modules has been studied with analytical and numerical methods. This configuration allows large magnetized volumes and high induction levels to be obtained with rapidly decreasing stray fields. When some inductors are out of order, the thruster remains operational, but the stray fields increase in the vicinity of the failure. For given structural materials and superconductors, it is possible to size the conductor so as to reduce the electromagnetic forces and the peak field supported by the conductors. For an active field of 10 T in an annular active channel of 6 m radius in a thruster with 24 modules, the peak field is 15.6 T in the Nb3Sn conductors and the structure has to sustain forces of 10^8 N/m. The need for magnetic or superconducting shielding is discussed, particularly when the thruster is in a degraded regime (one or more inductor modules out of service).
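
    As an order-of-magnitude consistency check (our illustration, not a computation from the paper), the magnetic pressure associated with a 10 T field is

        P = \frac{B^2}{2\mu_0} = \frac{(10\,\mathrm{T})^2}{2 \times 4\pi\times 10^{-7}\,\mathrm{H/m}} \approx 4\times 10^{7}\,\mathrm{Pa},

    so structural loads of order 10^8 N/m on metre-scale conductors are of a plausible magnitude.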

  20. [Flavonoids of Artemisia campestris, ssp. glutinosa].

    PubMed

    Hurabielle, M; Eberle, J; Paris, M

    1982-10-01

    Four flavanones (pinostrobin, pinocembrin, sakuranetin and naringenin), one dihydroflavonol (7-methylaromadendrin) and one flavone (hispidulin) have been isolated from Artemisia campestris L. ssp. glutinosa Gay and identified by spectroscopic methods. Artemisia campestris L. ssp. glutinosa Gay is a composite of the tribe Anthemideae, widespread on Mediterranean coastal sands and abundant in some regions of Spain and Italy. As part of a chemotaxonomic study of the genus Artemisia Tourn., we analysed its flavonoids, compounds which, to our knowledge, had never been described in this Artemisia species. The dried, powdered flowering tops of Artemisia campestris ssp. glutinosa were defatted with petroleum ether and exhaustively extracted with chloroform. Fractionation of the chloroform extract by silica-gel column chromatography, followed by purification of selected fractions, yielded six pure flavonoid aglycones. Their UV, mass and NMR spectra [1,2], together with comparison with authentic samples, support the structures of pinostrobin [3], pinocembrin [4], sakuranetin and naringenin [5] (flavanones), 7-methylaromadendrin [6,7] (dihydroflavonol) and hispidulin [8,9] (flavone); four of these aglycones are methylated. Among these flavonoids, pinostrobin has, to our knowledge, never been reported in the family Compositae; pinocembrin, sakuranetin and naringenin have already been reported in a few Asteraceae and Eupatorieae [10], and hispidulin in the tribe Anthemideae (Santolina chamaecyparissus L.) [8]. Only 7-methylaromadendrin appears to have been previously described in the genus Artemisia Tourn. [7].

  1. Quelle place pour l’anesthésie locorégionale chez les brûlés? (What place for regional anesthesia in burn patients?)

    PubMed Central

    Chaibdraa, A.; Medjelekh, M.S.; Saouli, A.; Bentakouk, M.C.

    2015-01-01

    Summary The practice of regional anesthesia in burn patients is limited by many factors, and it is considered marginal within the multimodal approach to treating nociceptive pain. This retrospective study, covering a 3-year period, examines the regional anesthesia (ALR) procedures performed. In view of the scarcity of data in the literature, the results allow us to formulate some suggestions on the place of this technique. A total of 634 ALR were recorded, 96% of them in adults. The lower limbs were most often involved (76%). Spinal anesthesia was performed in 32 patients, including 4 children. Incidents were infrequent (3%) and minor. ALR can be a useful option in the multimodal strategy for pain management, early passive rehabilitation and skin-graft coverage surgery. It deserves to be explored in the outpatient setting, given that 95% of burn patients are not hospitalized. The place of regional anesthesia in burn patients should attract more interest, so that protocols based on multidisciplinary reflection can be established. PMID:27279806

  2. Propriétés électriques d'hétérostructures a-GaAs/c-GaAs(n) et de structures de type MIS a-GaAsN/c-GaAs(n) (Electrical properties of a-GaAs/c-GaAs(n) heterostructures and of MIS-type a-GaAsN/c-GaAs(n) structures)

    NASA Astrophysics Data System (ADS)

    Aguir, K.; Fennouh, A.; Carchano, H.; Lollman, D.

    1995-10-01

    Heterojunctions were fabricated by depositing amorphous GaAs and GaAsN thin films on crystalline GaAs substrates by reactive RF sputtering. I(V) and C(V) measurements were performed to determine the electrical properties of these structures. The a-GaAs/c-GaAs(n) heterojunctions behave like p-n junctions. In the a-GaAsN/c-GaAs(n) structures this rectifying effect gives way to a symmetric characteristic with strongly attenuated current, so that they behave like imperfect MIS structures. A fixed positive charge in the a-GaAsN was detected, and the density of interface states at midgap was evaluated at about 10^{11} eV^{-1}cm^{-2}.
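    Where a structure shows the p-n-like rectifying behaviour reported here, a common first-pass analysis of the forward I(V) branch is a fit of the Shockley diode law; the sketch below illustrates the idea on synthetic data (my illustration under that standard model, not the authors' extraction procedure).

```python
# Minimal sketch: fit the Shockley diode law I = I0*(exp(V/(n*kT/q)) - 1)
# to a forward I(V) characteristic to quantify rectifying behaviour.
# Synthetic data stand in for measured points; all values hypothetical.
import numpy as np
from scipy.optimize import curve_fit

KT_Q = 0.02585  # thermal voltage kT/q at 300 K, volts

def shockley(v, i0, n):
    """Ideal diode law with saturation current i0 and ideality factor n."""
    return i0 * (np.exp(v / (n * KT_Q)) - 1.0)

v = np.linspace(0.05, 0.45, 9)  # bias points, V
rng = np.random.default_rng(0)
i = shockley(v, 2e-9, 1.8) * (1 + 0.03 * rng.standard_normal(v.size))

(i0_fit, n_fit), _ = curve_fit(shockley, v, i, p0=(1e-9, 2.0))
print(f"I0 ~ {i0_fit:.2e} A, ideality factor n ~ {n_fit:.2f}")
```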

  3. Photoémission de CsI induite par une impulsion laser intense femtoseconde (Photoemission of CsI induced by an intense femtosecond laser pulse)

    NASA Astrophysics Data System (ADS)

    Belsky, A.; Vasil'Ev, A.; Yatsenko, B.; Bachau, H.; Martin, P.; Geoffroy, G.; Guizard, S.

    2003-06-01

    We have measured, for the first time with a dynamic range of 10^6 counts/s, the photoelectron spectra emitted by a wide-band-gap insulating crystal, CsI, excited by the high-repetition-rate laser source of C.E.L.I.A (800 nm, 40 fs, 1 kHz, 1 TW). Electron emission up to energies of a few tens of electron-volts was observed for pulse intensities between 0.5 and 3 TW/cm^2, which is relatively weak compared with the intensities required to accelerate the electrons of an atom to the same energies. All these spectra contain, in particular, two bands in the low-electron-energy region (<5 eV) that were also observed in previous studies. The most energetic electrons form an intense, slightly structured plateau terminated by an exponential cut-off; for 3 TW/cm^2 pulses this cut-off lies at 27 eV. The inadequacy of the electron-photon-phonon mechanism, considered until now as the main electron-heating process in solids interacting non-destructively with a laser field, led us to propose an alternative mechanism. This model highlights direct multiphoton transitions within the conduction band of the solid, which are unavoidable owing to its multi-branch electronic structure.
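    For orientation (my back-of-envelope numbers, not the paper's): each 800 nm photon carries about 1.55 eV, so the 27 eV cut-off corresponds to the absorption of roughly 18 photons, while the ponderomotive (quiver) energy at these intensities is far below 1 eV and cannot by itself explain such hot electrons.

```python
# Back-of-envelope numbers behind the multiphoton-heating argument
# (intensities and cut-off from the abstract; formulas are standard).
import math

H_EV = 4.135667e-15   # Planck constant, eV*s
C = 2.99792458e8      # speed of light, m/s

photon_ev = H_EV * C / 800e-9
print(f"photon energy at 800 nm : {photon_ev:.2f} eV")
print(f"photons needed for 27 eV: {math.ceil(27 / photon_ev)}")

# Ponderomotive energy Up ~ 9.33e-14 * I[W/cm^2] * (lambda[um])^2, in eV
for intensity in (0.5e12, 3e12):
    up = 9.33e-14 * intensity * 0.8**2
    print(f"I = {intensity:.1e} W/cm^2 -> Up ~ {up:.3f} eV")
# Up stays below ~0.2 eV, so simple quiver motion cannot produce tens-of-eV
# electrons, which is what motivates the multiphoton intraband mechanism.
```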

  4. Microscopie par rayons X dans la fenêtre de l'eau : faisabilité et intérêt pour la biologie d'un instrument de laboratoire (X-ray microscopy in the water window: feasibility and biological interest of a laboratory instrument)

    NASA Astrophysics Data System (ADS)

    Adam, J. F.; Moy, J. P.

    2005-06-01

    Biology studies sub-cellular structures and phenomena, for which microscopy is the observation technique of choice. The spatial resolution of optical microscopy often proves insufficient for such observations, while higher-resolution techniques, such as transmission electron microscopy, are often destructive and too complex for biologists' needs. X-ray microscopy in the water window allows fast imaging of cells in their natural medium, requires little preparation, and offers resolutions of a few tens of nanometres. Moreover, there is good natural contrast between carbon-rich structures (proteins, lipids) and water. At present this technique is confined to synchrotron radiation facilities, which imposes scheduling and travel constraints incompatible with the needs of biology; such a microscope operating with a laboratory source would therefore be very useful. This paper reviews the state of the art of X-ray microscopy in the water window. A detailed specification for a laboratory instrument with the optical performance required by biologists is presented and compared with existing laboratory X-ray microscopes. Possible solutions for the source and the optics are also discussed.
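    For context (textbook values, not from this abstract): the water window lies between the carbon K-edge near 284 eV and the oxygen K-edge near 543 eV, where water is comparatively transparent while carbon-rich structures absorb strongly; converting those edges to wavelengths brackets the window.

```python
# Bounds of the water window from the carbon and oxygen K-edges
# (textbook values; lambda[nm] ~ 1239.84 / E[eV]).
EDGES_EV = {"carbon K-edge": 284.0, "oxygen K-edge": 543.0}

for name, e_ev in EDGES_EV.items():
    print(f"{name}: {e_ev:3.0f} eV -> {1239.84 / e_ev:.2f} nm")
# Between ~4.4 nm and ~2.3 nm water absorbs weakly and carbon strongly,
# which is the natural-contrast argument made in the abstract.
```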

  5. Tensions entre rationalité technique et intérêts politiques : l’exemple de la mise en œuvre de la Loi sur les agences de développement de réseaux locaux de services de santé et de services sociaux au Québec (Tensions between technical rationality and political interests: the implementation of the Act respecting local health and social services network development agencies in Québec)

    PubMed Central

    Contandriopoulos, D.; Hudon, Raymond; Martin, Elisabeth; Thompson, Daniel

    2013-01-01

    Summary This article examines the decision-making processes surrounding the implementation of the Act respecting local health and social services network development agencies (Loi 25). We highlight the strategies of groups and institutions of various kinds that asserted their preferences and attempted, with uneven success, to influence decisions relating to this major structural reform of the Québec health system. Theoretically, we draw mainly on models of lobbying which, since the founding work of Milbrath (1960, 1963), present this practice as a fundamental process of information exchange. According to the data collected in interview transcripts, the strategies observed do correspond to the constitutive features of lobbying and, in a few situations, to those of patronage. Taken together, these elements show that the implementation of Loi 25 was above all a genuinely political process, with the technical arguments that initially made up the objectives of the Act relegated to the background. PMID:23509412

  6. Le changement comme tradition dans la recherche et la formation à la recherche en biotechnologie et en périphérie. Étude de cas en sciences de la santé, sciences naturelles et génie (Change as tradition in research and research training in biotechnology and related fields: a case study in the health sciences, natural sciences and engineering)

    NASA Astrophysics Data System (ADS)

    Bourque, Claude Julie

    For several decades, the field of scientific research and research training has been traversed by various currents and discourses associated with change, but few empirical studies make it possible to understand what is actually changing. That is the original contribution of this thesis to the field of education, and more specifically to the sociological study of higher education, where the activities of the doctoral program's thematic triad are concentrated: research, training and practice. The field survey was carried out in 2009 and 2010 among 808 respondents affiliated with 60 institutions in Québec and produced a large body of mixed material (quantitative and qualitative data). A portrait was drawn of the biotechnology nebula spanning the health sciences, natural sciences and engineering. The domain involves dozens of disciplines and proves to be transdisciplinary in nature, but its practices are no more marked by change than those of related domains. Social dynamics were compared in four contexts: the choice of programs, objects and methods; funding; dissemination; and career planning. The results indicate that exchanges among the agents traditionally at the heart of research activities dominate these dynamics in all the contexts studied. The study of the representations underlying these practices revealed three schools of thought coexisting in the scientific field: academic, pragmatic and economistic. These schools make it possible to categorize agents according to the fault lines that mark their oppositions while identifying what they have in common. Representations and practices related to training reflect a rather homogeneous habitus, whereas the contradictions seem more often rooted in university struggles than in scientific ones, concentrated on the negotiation of the scientific, symbolic and economic capital at stake in doctoral training, in the careers to which it leads, and in the qualities of the Ph.D. title. Ultimately, the confusion between opposing logics can be reduced by reinterpreting change as a tradition of the scientific field. Keywords: sociology, education, higher education, science and technology, biotechnology, doctoral training, scientific field, social networks.

  7. Effets des électrons secondaires sur l'ADN (Effects of secondary electrons on DNA)

    NASA Astrophysics Data System (ADS)

    Boudaiffa, Badia

    The interactions of low-energy electrons (LEE) are an important element of radiation science, particularly in the sequence of events occurring immediately after ionizing radiation interacts with biological matter. It is well known that when such radiation deposits its energy in the cell, it produces a large number of secondary electrons (4 x 10^4 per MeV), created along the track with initial kinetic energies well below 20 eV. However, there had never been direct measurements demonstrating the interaction of these very low energy electrons with DNA, mainly because of the experimental difficulties imposed by the complexity of the biological medium. In our laboratory, recent years have been devoted to the study of fundamental phenomena induced by LEE impact on various simple molecules (e.g., N2, CO, O2, H2O, NO, C2H4, C6H6, C6H12) and a few complex molecules in the solid phase. Further work on DNA bases and oligonucleotides showed that LEE produce molecular breaks in biomolecules. This work allowed us to develop techniques to reveal and understand the fundamental interactions of LEE with molecules of biological interest, in pursuit of our main objective of studying the direct effect of these particles on the DNA molecule. The surface-science techniques developed and used in the studies cited above can be extended and combined with classical biology methods to study DNA damage induced by LEE impact. Our experiments demonstrated the efficiency of 3-20 eV electrons in inducing single- and double-strand breaks in DNA. For energies below 15 eV, these breaks are induced by the temporary localization of an electron on a molecular unit of DNA, which generates a transient negative ion in a dissociative electronic state, followed by fragmentation. At higher energy, dipolar dissociation (i.e., the simultaneous formation of a positive and a negative ion) and ionization play an important role in DNA damage. Taken together, our results explain the mechanisms of DNA degradation by LEE and yield effective cross sections for the various types of damage.

  8. Karst groundwater: a challenge for new resources

    NASA Astrophysics Data System (ADS)

    Bakalowicz, Michel

    2005-03-01

    Karst aquifers have complex and original characteristics which make them very different from other aquifers: high heterogeneity created and organised by groundwater flow; large voids; high flow velocities up to several hundreds of m/h; high-flow-rate springs up to some tens of m3/s. Different conceptual models, known from the literature, attempt to take all these particularities into account. The study methods used in classical hydrogeology (borehole, pumping test and distributed models) are generally invalid and unsuccessful in karst aquifers, because the results cannot be extended to the whole aquifer, nor to some of its parts, as is done in non-karst aquifers. Presently, karst hydrogeologists use a specific investigation methodology (described here), which is comparable to that used in surface hydrology. Important points remain unsolved. Some of them are related to fundamental aspects such as the void structure (only a conduit network, or a conduit network plus a porous matrix), the functioning (threshold effects and non-linearities), the modelling of the functioning (double or triple porosity, or viscous flow in conduits) and karst genesis. Other points deal with practical aspects, such as the assessment of aquifer storage capacity or vulnerability, or the prediction of the location of highly productive zones.
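    A concrete example of this surface-hydrology-like toolkit is spring hydrograph recession analysis; the sketch below fits the classical Maillet exponential recession Q(t) = Q0·exp(-αt) to synthetic discharge data (a textbook karst method offered for illustration, not a procedure detailed in this paper).

```python
# Maillet recession analysis of a karst spring hydrograph: fit
# Q(t) = Q0 * exp(-alpha * t) by linear regression on ln(Q).
# Textbook technique; the discharge series below is synthetic.
import numpy as np

t_days = np.arange(0, 60.0)  # days since the flood peak
rng = np.random.default_rng(1)
q = 8.0 * np.exp(-0.05 * t_days) * (1 + 0.02 * rng.standard_normal(t_days.size))

slope, intercept = np.polyfit(t_days, np.log(q), 1)
alpha, q0 = -slope, np.exp(intercept)
print(f"Q0 ~ {q0:.2f} m^3/s, recession coefficient alpha ~ {alpha:.3f} 1/day")
# A large alpha indicates fast drainage through conduits; a small alpha,
# slow release from fissures or matrix storage - one practical probe of
# the conduit-plus-matrix structure discussed above.
```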

  9. Hémophilie: état des lieux dans un service de pédiatrie dans la région de l'oriental du Maroc (Haemophilia: the state of affairs in a paediatric department in the Oriental region of Morocco)

    PubMed Central

    Benajiba, Noufissa; Boussaadni, Yousra EL; Aljabri, Mohammed; Bentata, Yassamine; Amrani, Rim; Rkain, Maria

    2014-01-01

    In developing countries, haemophilia remains a disease with disastrous medical and social consequences. The aim of this work was to analyse the follow-up of a cohort of haemophilic patients. Patients and methods: a prospective study over two years, conducted at the haemophilia referral centre of the Oriental region of Morocco. All patients under 18 years of age with confirmed haemophilia were included. Results: of 16 haemophiliacs, fifteen had haemophilia A; the mean age was 6.18 years; the severe form represented 20.7%, the moderate form 33.3% and the mild form 40%. The circumstances of discovery were post-circumcision in 53.3% of patients, post-traumatic in 20.7%, and at walking age in 20%; the duration of progression ranged from 2 months to 10 years. Haemarthrosis was described in the knees, elbows and ankles, on average 2 to 5 times per year; arthropathy was noted in 33.3%. Immunological work-up revealed circulating inhibitors in two patients. Treatment was based on analgesics and fresh frozen plasma. Recombinant factor VIII was given to 40.6% of patients (more than 90% of the moderate and severe forms), thanks to the national haemophilia care programme. Death occurred in a single case, due to cerebral haemorrhage. Conclusion: we stress the value of the recently established national haemophilia care programme, which could improve these children's living conditions. PMID:25404986

  10. Role of DNA repair enzymes in the cellular resistance to oxidative stress.

    PubMed

    Laval, J

    1996-01-01

    Oxidative stress occurs in cells when the equilibrium between prooxidant and antioxidant species is broken in favor of the prooxidant state. It is due to reactive oxygen species (ROS) generated either by cellular metabolism (phagocytosis, mitochondrial respiration, xenobiotic detoxification) or by exogenous factors such as ionizing radiation or chemical compounds performing redox reactions. Some ROS are extremely reactive and interact with all classes of macromolecules, including lipids, nucleic acids and proteins. Cells have numerous defence systems to counteract the deleterious effects of ROS. Proteins and small molecules specifically eliminate ROS as they are formed. Three species of superoxide dismutase transform the superoxide anion O2- into hydrogen peroxide H2O2, which in turn is destroyed by peroxisomal catalase or by various peroxidases. Numerous small molecules in the cell, such as glutathione, alpha-tocopherol, vitamins A and C and melanin, are antioxidants. ROS escaping destruction generate various lesions in DNA, such as base modifications, deoxyribose degradation products and chain breaks. These lesions have been characterized, and it is possible to quantify them in the DNA of cells that have been irradiated or treated with free-radical-generating systems. The biological properties of the bases modified by ROS have been established. For example, C8-hydroxyguanine (8-oxoG) is promutagenic since, if present in DNA during replication, it leads to incorporation of dAMP residues and hence to GC-->TA transversion mutations. Purines whose imidazole ring is opened (Fapy residues) block DNA polymerase during replication and are therefore potentially lethal lesions for the cell. Oxidized pyrimidines have comparable coding properties. Efficient DNA repair mechanisms remove these oxidized bases. In Escherichia coli cells, endonuclease III (NTH protein) and endonuclease VIII (NEI protein) excise many oxidized pyrimidines, whereas the FPG protein (formamidopyrimidine-DNA glycosylase) eliminates 8-oxoG and Fapy lesions. Besides its DNA glycosylase activity, the FPG protein has a beta-lyase activity that incises DNA at abasic sites by a beta-delta elimination mechanism, and a dRPase activity. The FPG protein has a zinc-finger motif that is required for the recognition of its substrate. Mammalian cells have similar DNA repair proteins, and it should be emphasized that the different functions are conserved, in most cases with remarkable homology of the amino acid sequences from E. coli to man.

  11. Statistical Physics on the Eve of the 21st Century: in Honour of J B McGuire on the Occasion of His 65th Birthday

    NASA Astrophysics Data System (ADS)

    Batchelor, Murray T.; Wille, Luc T.

    The Table of Contents for the book is as follows: * Preface * Modelling the Immune System - An Example of the Simulation of Complex Biological Systems * Brief Overview of Quantum Computation * Quantal Information in Statistical Physics * Modeling Economic Randomness: Statistical Mechanics of Market Phenomena * Essentially Singular Solutions of Feigenbaum-Type Functional Equations * Spatiotemporal Chaotic Dynamics in Coupled Map Lattices * Approach to Equilibrium of Chaotic Systems * From Level to Level in Brain and Behavior * Linear and Entropic Transformations of the Hydrophobic Free Energy Sequence Help Characterize a Novel Brain Polyprotein: CART's Protein * Dynamical Systems Response to Pulsed High-Frequency Fields * Bose-Einstein Condensates in the Light of Nonlinear Physics * Markov Superposition Expansion for the Entropy and Correlation Functions in Two and Three Dimensions * Calculation of Wave Center Deflection and Multifractal Analysis of Directed Waves Through the Study of su(1,1) Ferromagnets * Spectral Properties and Phases in Hierarchical Master Equations * Universality of the Distribution Functions of Random Matrix Theory * The Universal Chiral Partition Function for Exclusion Statistics * Continuous Space-Time Symmetries in a Lattice Field Theory * Quelques Cas Limites du Problème à N Corps Unidimensionnel (Some Limiting Cases of the One-Dimensional N-Body Problem) * Integrable Models of Correlated Electrons * On the Riemann Surface of the Three-State Chiral Potts Model * Two Exactly Soluble Lattice Models in Three Dimensions * Competition of Ferromagnetic and Antiferromagnetic Order in the Spin-1/2 XXZ Chain at Finite Temperature * Extended Vertex Operator Algebras and Monomial Bases * Parity and Charge Conjugation Symmetries and S Matrix of the XXZ Chain * An Exactly Solvable Constrained XXZ Chain * Integrable Mixed Vertex Models From the Braid-Monoid Algebra * From Yang-Baxter Equations to Dynamical Zeta Functions for Birational Transformations * Hexagonal Lattice Directed Site Animals * Direction in the Star-Triangle Relations * A Self-Avoiding Walk Through Exactly Solved Lattice Models in Statistical Mechanics

  12. Dynamique des ressources naturelles dans le Parc national de Manda: Cartographie et analyse pour le Développement durable (Natural resource dynamics in Manda National Park: mapping and analysis for sustainable development)

    NASA Astrophysics Data System (ADS)

    Ballah Solkam, Rosalie; Médard, Ndoutorlengar

    2018-05-01

    In Chad, the protected-area network covers nearly 10.2% of the country's surface and remains broadly representative of the full diversity of the region's ecosystems. However, this network is not made up of intact ecosystems: many alterations have occurred, especially in the national parks, where some species are already at the critical threshold of extinction (addax, dama gazelle, manatee) or have disappeared altogether (black and white rhinoceros, oryx). This leads us to ask: what are the dynamics of natural resources, and what is the degree of conservation of Manda National Park? An assessment of biological diversity and water resources from 1951 to 1999, based on the existing literature, the 1956 topographic map and Landsat 5 and 7 TM and ETM+ satellite images from two periods (1986, 1999), supplemented by semi-structured interviews and field transects, provides a better grasp of resource dynamics and of the biodiversity conservation actions carried out. The results show progressive growth of the fauna from 1951 to 1970, then a regressive trend from 1970 to 1989. After this tumultuous period, the park was restocked from 1989 to 2002. The flora, by contrast, is relatively well preserved, with some watercourses, ponds, fields and plantations, thanks to numerous biodiversity conservation projects. Promoting ecotourism could be an avenue for the sustainable development of this park.
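    The two-date Landsat comparison described above is typically implemented as band-ratio change detection; the sketch below shows the idea with NDVI differencing on synthetic arrays (a standard remote-sensing recipe, not a processing detail given in the abstract).

```python
# Two-date NDVI change detection, the usual way to compare Landsat TM/ETM+
# scenes such as the 1986 and 1999 images cited above. Synthetic
# reflectance arrays stand in for the real red and near-infrared bands.
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

rng = np.random.default_rng(42)
shape = (100, 100)
red_1986, nir_1986 = rng.uniform(0.05, 0.15, shape), rng.uniform(0.30, 0.50, shape)
red_1999, nir_1999 = rng.uniform(0.05, 0.20, shape), rng.uniform(0.20, 0.50, shape)

delta = ndvi(red_1999, nir_1999) - ndvi(red_1986, nir_1986)
loss_fraction = (delta < -0.1).mean()  # crude vegetation-loss threshold
print(f"pixels flagged as vegetation loss: {100 * loss_fraction:.1f}%")
```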

  13. États excités en couche interne de haut spin de néon hautement ionisé (High-spin inner-shell excited states of highly ionized neon)

    NASA Astrophysics Data System (ADS)

    Lapierre, Alain

    Besides being observed in several multi-electron and multi-atom interaction phenomena, the description of inner-shell excited states is a sensitive test of the description of electron correlation. Following earlier beam-foil spectroscopy of the ultraviolet and visible spectral regions (1800-5300 Å) of neon at 10 MeV, satellite emission lines accompanying the hydrogen-like (l = n - 1) and l < n - 1 transitions n = 6 - n' = 7, n = 7 - n' = 8 and n = 8 - n' = 9 of lithium-like neon (Ne VIII) are assigned, with the help of Hartree-Fock calculations, to transitions with the same principal quantum numbers between quadruplet states whose core is excited in 1s2s 3S. A few lines are assigned to transitions between n = 3 levels of Ne VI, VII and IX. Subsequently, the quadruplet, quintuplet and sextuplet transitions n = 2 - n' = 3 and n = 2 - n' = 4 of lithium-like, beryllium-like (Ne VII) and boron-like (Ne VI) neon, respectively, were investigated by beam-foil spectroscopy in the XUV spectral region (60-125 Å). These investigations are supported by Hartree-Fock calculations and by linear regressions along the isoelectronic sequences, performed in parallel. Mean-lifetime measurements of n = 3 terms were carried out, and several lines are newly identified as Ne VI to IX transitions.
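    As a plausibility check on the quoted spectral window (my estimate, using a screened hydrogenic approximation rather than the paper's Hartree-Fock calculations): the outer electron of Li-like Ne VIII sees an effective charge of roughly Z_eff = 10 - 2 = 8, and the Rydberg formula then places the n = 6-7, 7-8 and 8-9 transitions inside the stated 1800-5300 Å range.

```python
# Screened hydrogenic estimate of Ne VIII (Li-like) transition wavelengths:
# 1/lambda = R * Z_eff^2 * (1/n_low^2 - 1/n_high^2), with Z_eff ~ 8
# (nuclear charge 10 screened by the two 1s electrons). Rough check only;
# the paper itself relies on Hartree-Fock calculations.
RYDBERG = 1.0973731568e7  # m^-1
Z_EFF = 8

for n_low, n_high in ((6, 7), (7, 8), (8, 9)):
    inv_lambda = RYDBERG * Z_EFF**2 * (1 / n_low**2 - 1 / n_high**2)
    print(f"n = {n_high} -> {n_low}: ~{1e10 / inv_lambda:.0f} Å")
# Gives ~1930, ~2980 and ~4340 Å, all inside the 1800-5300 Å
# beam-foil window quoted in the abstract.
```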

  14. Le recours aux modèles dans l'enseignement de la biologie au secondaire : conceptions d'enseignantes et d'enseignants et modes d'utilisation (The use of models in secondary-school biology teaching: teachers' conceptions and modes of use)

    NASA Astrophysics Data System (ADS)

    Varlet, Madeleine

    The use of models and modelling is mentioned in the scientific literature as a way to foster constructivist teaching-learning practices and thereby mitigate learning difficulties in science. Studying teachers' relationship to models and modelling is therefore relevant to understanding their teaching practices and to identifying elements which, if taken into account in initial and disciplinary training, can contribute to the development of constructivist science teaching. Several studies have examined these conceptions without distinguishing between the subjects taught, such as physics, chemistry or biology, even though models are not necessarily used or understood in the same way in these different disciplines. Our research examined the conceptions of secondary-school biology teachers regarding scientific models, some forms of representation of these models, and their modes of use in the classroom. The results, obtained through a series of semi-structured interviews, indicate that overall their conceptions of models are compatible with the scientifically accepted one, but vary as to the forms of representation of models. Examination of these conceptions reveals a limited knowledge of models that varies with the subject taught. Level of education, prior training, teaching experience and a possible compartmentalization of subjects could explain the different conceptions identified. In addition, temporal, conceptual and technical difficulties can hamper their attempts at modelling with students. However, our results support the hypothesis that teachers' own conceptions of models, of their forms of representation and of a constructivist approach to teaching are the greatest obstacles to model building in the classroom. Keywords: models and modelling, biology, conceptions, modes of use, constructivism, teaching, secondary school.

  15. Vibrations et relaxations dans les molécules biologiques. Apports de la diffusion incohérente inélastique de neutrons (Vibrations and relaxations in biological molecules: the contribution of inelastic incoherent neutron scattering)

    NASA Astrophysics Data System (ADS)

    Zanotti, J.-M.

    2005-11-01

    This paper is not intended as a review article but rather as an introduction to a technique still marginal in biology. The reader is assumed to be a non-specialist in neutron scattering pursuing a biological or biophysical line of research involving dynamical phenomena. Because of the large incoherent scattering cross-section of the hydrogen atom and the abundance of this element in proteins, inelastic incoherent neutron scattering is an irreplaceable technique for probing the internal dynamics of biological macromolecules. After a brief reminder of the basic theoretical elements, we describe the operation of different types of inelastic time-of-flight spectrometers on continuous or pulsed sources and discuss their respective merits. The two alternative frameworks used to describe protein dynamics are addressed: (i) one in terms of statistical physics, derived from the physics of glasses; (ii) the other a mechanistic interpretation. In the latter case, we show how to exploit the complementary scattering-vector and energy-resolution ranges of different inelastic neutron spectrometers (time-of-flight, backscattering and spin-echo) to access, with a simple physical model, protein dynamics on time scales from a fraction of a picosecond to a few nanoseconds.
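    A common first analysis in this field, not detailed in the abstract, extracts a mean-square displacement from the Q-dependence of the elastic incoherent intensity through the Gaussian approximation S_el(Q) ≈ exp(-Q²⟨u²⟩/3); a minimal sketch on synthetic data:

```python
# Gaussian approximation for the elastic incoherent intensity:
# S_el(Q) ~ exp(-Q^2 * <u^2> / 3); regressing ln(S_el) on Q^2 yields the
# mean-square displacement <u^2>. Standard analysis; data are synthetic.
import numpy as np

q = np.linspace(0.5, 2.0, 12)  # scattering vector, 1/Angstrom
msd_true = 0.9                 # assumed <u^2>, Angstrom^2
rng = np.random.default_rng(7)
s_el = np.exp(-q**2 * msd_true / 3) * (1 + 0.01 * rng.standard_normal(q.size))

slope, _ = np.polyfit(q**2, np.log(s_el), 1)
print(f"fitted <u^2> ~ {-3 * slope:.2f} Angstrom^2")
```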

  16. Étude analytique du fonctionnement des moteurs à réluctance alimentés à fréquence variable (Analytical study of the operation of reluctance motors fed at variable frequency)

    NASA Astrophysics Data System (ADS)

    Sargos, F. M.; Gudefin, E. J.; Zaskalicky, P.

    1995-03-01

    In switched reluctance motors fed by a constant-voltage source (such as a battery) at high frequencies, the current becomes hard to predict and often cannot reach a given reference value, because of the variation of the inductances with rotor position; the "motional" e.m.f. generates commutation troubles which worsen as the frequency increases, to the point of preventing operation. Both optimal control and approximate motor design require a quick and simple calculation of currents, powers and losses; yet, in principle, the non-linear electrical equation calls for a numerical solution whose results cannot be extrapolated. By linearizing this equation by intervals, the method proposed here expresses analytically, in all cases, the phase currents, the torque and the copper losses, provided the feeding voltage itself is constant by intervals. The model neglects saturation, but saturation can be accounted for by simple, easily computed adjustments of the inductance curve, whatever its shape. The calculation is immediate and perfectly accurate as long as the machine parameters themselves are well defined. Some results are given as examples for two usual feeding modes.
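    To make the interval-by-interval linearization concrete, here is a toy version (invented parameters and a numeric stepper, not the authors' closed-form solution): with L(θ) piecewise linear and the applied voltage piecewise constant, the phase equation v = Ri + d(L(θ)i)/dt can be integrated interval by interval.

```python
# Toy integration of one phase of a switched reluctance machine with a
# piecewise-linear inductance L(theta) and piecewise-constant voltage,
# the setting the paper treats analytically. Invented parameters; an
# explicit Euler stepper replaces the authors' closed-form expressions.
import math

R = 0.5                     # phase resistance, ohm
OMEGA = 200.0               # rotor speed, rad/s
L_MIN, L_MAX = 2e-3, 10e-3  # inductance extremes, H
SPAN = math.pi / 6          # angular span of each linear interval, rad

def inductance_and_slope(theta):
    """Triangular L(theta): linear rise then linear fall."""
    th = theta % (2 * SPAN)
    k = (L_MAX - L_MIN) / SPAN
    return (L_MIN + k * th, k) if th < SPAN else (L_MAX - k * (th - SPAN), -k)

i = i_peak = 0.0
dt = 1e-6
for n in range(5000):                                 # 5 ms of rotation
    theta = OMEGA * n * dt
    v = 24.0 if theta % (2 * SPAN) < SPAN else -24.0  # on/off-style feeding
    L, dL = inductance_and_slope(theta)
    # v = R*i + d(L*i)/dt  =>  L*di/dt = v - (R + OMEGA*dL/dtheta)*i
    i = max(0.0, i + dt * (v - (R + OMEGA * dL) * i) / L)  # unidirectional current
    i_peak = max(i_peak, i)
print(f"peak phase current ~ {i_peak:.2f} A")
```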

  17. Préface

    NASA Astrophysics Data System (ADS)

    Hamieh, Tayssir

    2005-05-01

    Born simultaneously in Mulhouse and Beirut in 1996, within a Franco-Lebanese collaboration launched on the personal initiative of Tayssir HAMIEH, the Franco-Lebanese Colloquium on Materials Science (CSM), part of the close relations between France and Lebanon, quickly became a very important meeting point for high-level scientists, not only from the Mediterranean rim but also from European, American and Arab countries. The fourth edition, CSM4, was a real success thanks to the participation of established researchers from all areas of materials science, coming from many countries including France, Algeria, Lebanon, Syria, Morocco, Tunisia, Italy, Spain, Portugal, the United Kingdom, the United States, Russia, Germany, Japan and India, who presented more than 350 oral and poster communications covering almost every discipline of materials systems. The choice of the colloquium's themes was dictated by the capital importance of this discipline in our modern civilization. Indeed, the materials used for the artisanal or industrial manufacture of objects, products and systems, as well as for constructions and equipment, have always defined the level of our technical civilization. Achieving the common objectives of our rapidly developing, indeed rapidly changing, world depends largely on the development of new materials and of new processing and assembly methods offering improved performance and quality. The colloquium remarkably illustrated the excellent collaboration between Lebanese and French researchers; the partnership is exemplary in the quality of the laboratories involved and the scientific level of the results. We hope that this Franco-Lebanese colloquium on materials science will continue its success and foster fruitful collaboration between the researchers of the two countries and other researchers from Arab and French-speaking countries. Colloquium Coordinator and Editor, Tayssir HAMIEH

  18. Colicin Killing: Foiled Cell Defense and Hijacked Cell Functions

    NASA Astrophysics Data System (ADS)

    de Zamaroczy, Miklos; Chauleau, Mathieu

    The study of bacteriocins, notably those produced by E. coli (and named colicins), was initiated in 1925 by Gratia, who first discovered "un remarquable exemple d'antagonisme entre deux souches de colibacilles" ("a remarkable example of antagonism between two strains of colibacilli"). Since this pioneering observation, the production of toxic exoproteins has been widely reported in all major lineages of Eubacteria and in Archaebacteria. Bacteriocins belong to the most abundant and most diverse group of these bacterial defense systems. Paradoxically, these antimicrobial cytotoxins are powerful weapons in the intense battle for bacterial survival. They are also biotechnologically useful, since several bacteriocins serve as preservatives in the food industry, as antibiotics, or as potential antitumor agents in human health care. Most colicins kill bacteria in one of two ways. The first type forms pores in the phospholipid bilayer of the inner membrane; these colicins are active immediately after their translocation across the outer membrane. The translocation pathway generally requires either the BtuB receptor and the Tol (OmpF/TolABQR) complex, or the FepA, FhuA, or Cir receptor and the Ton (TonB/ExbBD) system. The second type encodes specific endonuclease activities that target DNA, rRNA, or tRNAs in the cytoplasm; to be active, these colicins require translocation across both the outer and inner membranes. The molecular mechanisms implicated in the complex cascade of interactions required to transfer colicin molecules from the extracellular medium through the different "cellular compartments" (outer membrane, periplasm, inner membrane, and cytoplasm) are still incompletely understood. It is clear, however, that colicins "hijack" specific cellular functions to gain access to their target. In this chapter, following a general presentation of colicin biology, we describe, compare, and update several of the concepts related to colicin toxicity and discuss recent, often unexpected findings that help to advance our understanding of the molecular events governing colicin import. In particular, our review includes the following: (1) structural data on the tripartite interaction of a colicin with the outer membrane receptor and the translocation machinery; (2) a comparison of the normal cellular functions of the Tol and Ton systems of the inner membrane with their "hijacked" roles during colicin import; (3) an analysis of the interaction of a nuclease-type colicin with its cognate immunity protein, in the context of the immunity of producer cells, and of the dissociation of this complex in the context of the colicin's attack on target cells; (4) information on the endoproteolytic cleavage that presumably accompanies the penetration of nuclease-type colicins into the cytoplasm. The new data presented here provide further insight into the cellular functions "hijacked" or "borrowed" by colicins to permit their entry into target cells.

  19. La Station Laser ultra mobile : de l'obtention d'une exactitude centimétrique des mesures à des applications en océanographie et géodésie spatiales (The ultra-mobile laser station: from centimetre-accuracy measurements to applications in space oceanography and geodesy)

    NASA Astrophysics Data System (ADS)

    Nicolas, Joëlle

    2000-12-01

    The Ultra-Mobile Laser Station is the smallest satellite laser ranging station in the world, weighing only 300 kg, dedicated to tracking satellites equipped with laser retroreflectors. It uses a small telescope 13 cm in diameter on a motorized mount derived from a precision theodolite, a very compact laser, and an avalanche photodiode allowing detection at the single-photoelectron level. The first experiments (Corsica, late 1996) revealed numerous instabilities in measurement quality. This work concerns the study and implementation of many technical modifications intended to reach centimetre accuracy and to allow participation in the orbit validation and altimeter calibration campaign of the oceanographic satellite JASON-1 (2001). The desired instrumental precision was successfully verified in the laboratory. Beyond this instrumental and metrological aspect, an analysis was developed to estimate the accuracy and stability of the mobile station's observations after integration of the modifications. Based on a co-location experiment between the two fixed laser stations of the Calern plateau, the analysis rests on adjusting, for each station, coordinates and a mean instrumental bias with respect to a reference orbit of the LAGEOS satellites. Seasonal variations were brought out in the time series of the various components. Local comparison of crustal deformations, expressed as height variations derived from the laser data, showed remarkable consistency with measurements from the transportable FG5 absolute gravimeter, and signals of the same amplitude were also observed by GPS. These variations are also evidenced on a global scale, and their geophysical interpretation is given (a combination of solid-Earth and polar tide effects and atmospheric loading effects).
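    To give a sense of the metrological requirement (my arithmetic, not the thesis's): in two-way laser ranging the range is d = c·Δt/2, so centimetre accuracy amounts to timing the round trip to a few tens of picoseconds.

```python
# Two-way satellite laser ranging: range accuracy vs. timing accuracy.
# d = c * dt / 2, so 1 cm corresponds to ~67 ps of round-trip timing.
C = 2.99792458e8  # speed of light, m/s

for accuracy_m in (0.10, 0.01, 0.001):
    dt_ps = 2 * accuracy_m / C * 1e12
    print(f"{100 * accuracy_m:5.1f} cm accuracy -> {dt_ps:7.1f} ps round-trip timing")
```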

  20. Aspects cliniques, électrocardiographiques et échocardiographiques de l’hypertendu âgé au Sénégal (Clinical, electrocardiographic and echocardiographic features of elderly hypertensive patients in Senegal)

    PubMed Central

    Sarr, Simon Antoine; Babaka, Kana; Mboup, Mouhamadou Cherif; Fall, Pape Diadie; Dia, Khadidiatou; Bodian, Malick; Ndiaye, Mouhamadou Bamba; Kane, Adama; Diao, Maboury; Ba, Serigne Abdou

    2016-01-01

    Introduction: Hypertension in the elderly is an independent risk factor for cardiovascular disease. Our objectives were to describe the clinical, electrocardiographic and echocardiographic features of hypertension in elderly subjects. Methods: We conducted a descriptive, cross-sectional study from January to September 2013, including hypertensive outpatients aged at least 60 years followed in the cardiology department of the Hôpital Principal de Dakar. Statistical data were analysed with Epi Info 7, and p < 0.05 was considered significant. Results: In total, 208 patients were included. Mean age was 69.9 years, with a female predominance (sex ratio 0.85). Mean blood pressure was 162/90 mmHg. Hypertension was controlled in 13% of cases. Electrocardiography showed rhythm disturbances (17.78%), left atrial hypertrophy (45.19%), left ventricular hypertrophy (28.85%) and 2 cases of complete atrioventricular block. Holter ECG revealed 4 cases of non-sustained ventricular tachycardia (Lown grade IVb), 6 cases of paroxysmal atrial fibrillation and 1 case of paroxysmal atrial flutter. Echocardiography, performed in 140 patients, found predominantly concentric left ventricular hypertrophy in 25 patients, more frequent in men (p = 0.04), and left atrial dilatation in 56.42% of cases, more frequent in older patients (p = 0.01). Conclusion: The electrocardiographic and echocardiographic picture in the elderly hypertensive population is characterized by left ventricular hypertrophy, notably concentric, and by the frequency of arrhythmias, sometimes revealed only by long-duration ECG recording. PMID:28292040

  1. Bases anatomiques des lésions de l’artère pudendale externe lors de la chirurgie des varices du membre pelvien (Anatomical basis of external pudendal artery injury during lower-limb varicose vein surgery)

    PubMed Central

    Gaye, Magaye; Ndiaye, Assane; Dieng, Papa Adama; Ndiaye, Aynina; Ba, Papa Salmane; Diatta, Souleymane; Ciss, Amadou Gabriel; Ndoye, Jean Marc Ndiaga; Diop, Mamadou; Ndiaye, Abdoulaye; Ndiaye, Mouhamadou; Dia, Abdarahmane

    2016-01-01

    Introduction: The external pudendal artery is a collateral branch of the common femoral artery supplying the penis or the clitoris. Its relations with the arch of the great saphenous vein and its tributaries, in the femoral triangle, are very close, so it is often injured during crossectomy and stripping of the great saphenous vein; such injuries can cause sexual dysfunction. Methods: Dissection of 22 inguinal regions in 13 men and 9 women who underwent a surgical approach to the femoral triangle. The distribution of the external pudendal artery and its relations to the arch of the great saphenous vein were studied. Results: A single external pudendal artery was the most frequent pattern, and all external pudendal arteries arose from the common femoral artery. The most frequent relation was a single external pudendal artery crossing beneath the arch of the great saphenous vein; crossing in front, alternating crossings, and relations with the common femoral vein and with tributaries of the arch were also observed. Some surgical techniques carry a higher risk of injuring the external pudendal artery than others. Conclusion: This work confirms earlier findings but also reveals some particularities in the relations between the arch of the great saphenous vein and the external pudendal artery. PMID:27795794

  2. In Praise of Sociology.

    PubMed

    Connell, Raewyn

    2017-08-01

    This reflection on the relevance of sociology starts with the different forms of social knowledge, and some autobiographical reflection on my engagement with the discipline. A research-based social science is made urgent by the prevalence of distortion and pseudoscience in the public realm. However, the research-based knowledge formation is embedded in a global economy of knowledge that centers on a privileged group of institutions and produces major imbalances on a world scale. Sociological data collection has important uses in policy and public discussion. But data need to be embedded in a larger project of understanding the world; this is what gives excitement to the work. Sociology has a potential future of marginality or triviality in the neoliberal economy and its university system. There are better trajectories into the future, but they have to be fought for. © 2017 Canadian Sociological Association/La Société canadienne de sociologie.

  3. Traduire encore Bertin aujourd'hui : pourquoi faire ? Les nouvelles faces cachées de la Sémiologie Graphique (Why translate Bertin again today? The new hidden faces of the Sémiologie Graphique)

    NASA Astrophysics Data System (ADS)

    Dhieb, Mohsen

    2018-05-01

    Bertin's Sémiologie Graphique is a major work of contemporary cartography. Yet from its publication in 1967 the book aroused as much suspicion as interest: it did not fit the "mould" of the cartographic literature of the time. While several of its founding principles and concepts were adopted, several others were poorly received or never put into practice; there is something of a gap between some of the book's theoretical statements and their application. Some of Bertin's assertions may seem obsolete today, in the digital era, such as his disregard of the dynamic or interactive aspects of maps, and the recent advent of GIS, geomatics and computer-assisted cartography systems has pushed the old manual methods of graphic information processing into the background. Why, then, still translate the Sémiologie Graphique today, in the digital age? Precisely because some of its principles remain, in substance, little known; several aspects are still to be explored; some neglected manual methods and techniques can be revisited and revived through visualization; and some innovations have never been tried or implemented by digital means. In the Arab world, little research has drawn on graphic semiology. While translating the book into Arabic, points that had remained obscure were uncovered. These are the ideas and reflections that led us to relive this formidable adventure.

  4. Violences conjugales à Antananarivo (Madagascar): un enjeu de santé publique (Intimate partner violence in Antananarivo, Madagascar: a public health issue)

    PubMed Central

    Gastineau, Bénédicte; Gathier, Lucy

    2012-01-01

    Introduction: Intimate partner violence has been studied in many developed countries but little in sub-Saharan Africa, and Madagascar is a country where the phenomenon is poorly documented. Methods: In 2007, a survey on intimate partner violence against women (ELVICA) was carried out in the Malagasy capital. ELVICA interviewed 400 women in union, aged 15 to 59. Information was collected on the demographic and socioeconomic characteristics of the couples and on acts of physical violence by men against their wives. The objective of this article is to identify risk factors for severe partner violence, the kind that affects women's physical health. Results: Thirty-five percent of the women reported having suffered at least one form of physical violence during the 12 months preceding the survey. Almost half (46%) of the abused women reported having had bruises, and about a quarter (23%) bleeding wounds; twenty-two percent had had to consult a doctor. Among the many socioeconomic and demographic variables tested, a few were positively associated with the risk of severe partner violence: being in a consensual union and having a professional activity. There is also a link between the violence suffered and women's autonomy (the freedom granted by the husband to work, to move about, to see her family). Conclusion: In Madagascar, as elsewhere, the fight against intimate partner violence is a major element in improving women's status and health. PMID:22514757

  5. Traitement de surface par explosif du cuivre polycristallin : caractérisation microstructurale et comportement en fatigue plastique (Explosive surface treatment of polycrystalline copper: microstructural characterization and plastic fatigue behaviour)

    NASA Astrophysics Data System (ADS)

    Gerland, M.; Dufour, J. P.; Presles, H. N.; Violan, P.; Mendez, J.

    1991-10-01

    A new surface treatment technique using a primary explosive deposited as a thin layer was applied to polycrystalline pure copper. After treatment, the surface finish remains of high quality, especially compared with shot-peened surfaces. The treated zone extends several hundred microns in depth, and the microhardness profile shows a significant increase in hardness, with a maximum of up to 100% at the surface. Transmission electron microscopy reveals a microstructure that changes with depth: below the surface lies a thin recrystallized layer of very small grains, followed by a region of numerous mechanical twins whose density decreases with increasing depth. Tested in fatigue at constant plastic strain amplitude, the treated copper specimens exhibit strong hardening from the first cycles compared with untreated specimens; however, this initial hardening fades after 2% of the fatigue life, and the fatigue resistance is not modified by the treatment.

  6. Conséquences macroéconomiques du vieillissement de la population (Macroeconomic consequences of population ageing)

    PubMed Central

    Lee, Ronald; Mason, Andrew

    2017-01-01

    Summary Declining fertility and longer life expectancy lead to population ageing and to slower, or even negative, population growth. While slower population growth reduces the need for saving, population ageing imposes old-age dependency costs that weigh particularly heavily on the public sector. Is the fertility rate too low? The National Transfer Accounts (NTA) project provides data for quantifying dependency costs. Given the current age distribution of taxes and benefits in high-income and upper-middle-income countries, a rate of about three births per woman would maximize the public support ratio. Comprehensive age profiles of consumption and labour income, however, are more relevant to living standards, and the corresponding support ratios are maximized at a fertility rate slightly above two births per woman. Going a step further and taking reduced saving into account, a rate of 1.5 births per woman in upper-middle-income countries and of 1.8 births per woman in rich industrialized countries would maximize living standards. In open economies these results would differ slightly, and taking human capital into account would also alter them somewhat. PMID:28804226

  7. Séroprévalence du virus de l'herpès humain-8 chez des patients VIH positif à l'hôpital général de Yaoundé – Cameroun (Seroprevalence of Human Herpesvirus 8 among HIV-Positive Patients at the Yaoundé General Hospital, Cameroon)

    PubMed Central

    Jacky, Njiki Bikoï; Paul, Ndom; Lilian, Mupang; Sylvie, Agokeng Demanou

    2015-01-01

    The epidemiology of human herpesvirus 8 (HHV8) infection associated with HIV infection remains poorly known in Cameroon, even though the country is considered an endemic zone for both viruses. The objective of this work was to establish the HHV8 seroprevalence profile in our study population. 57 people were recruited at the Yaoundé General Hospital and followed for 12 months. Anti-HHV8 IgG antibodies were detected by ELISA. Other parameters, such as age, sex, disease stage (KS and HIV/AIDS), ARV protocol and CD4 counts, were used to identify variables associated with HHV8 seropositivity; this association was assessed with the chi-square test. HHV8 seroprevalence was 90% in our population at the start of the study and 74% twelve months later, and it remained high whatever the clinical profile, age group, sex or CD4+ count of the individual. No study variable was significantly associated with HHV8 seropositivity. The HHV8 virus appeared to be circulating in our study population. Twelve months later, however, no clinical manifestations of KS were observed in the HIV-positive patients, despite very high anti-HHV8 IgG titres. PMID:26090027
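
    The association test reported here is an ordinary chi-square test on a contingency table. A minimal sketch in Python, using an invented 2x2 table (sex by HHV8 serostatus) rather than the study's actual counts:

      from scipy.stats import chi2_contingency

      # Hypothetical counts: rows = sex (F, M), columns = HHV8 (positive, negative).
      table = [[26, 3],
               [25, 3]]

      chi2, p, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
      # A p-value above 0.05 indicates no significant association, as the
      # study reports for all of its variables.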

  8. Le bégaiement (Stuttering)

    PubMed Central

    Perez, Hector R.; Stoeckle, James H.

    2016-01-01

    Abstract: Objective: To provide an update on the epidemiology, heritability, pathophysiology, diagnosis and treatment of developmental stuttering. Quality of evidence: The MEDLINE and Cochrane databases were searched for past and recent studies on the epidemiology, heritability, pathophysiology, diagnosis and treatment of developmental stuttering. Most recommendations are based on small studies, evidence of limited quality, or consensus. Main message: Stuttering is a speech disorder, common in persons of all ages, that affects normal speech fluency and the flow of discourse. Stuttering has been linked to differences in brain anatomy, function and dopaminergic regulation that are thought to be genetic in origin. Careful diagnosis and appropriate referral of children are important, as there is growing consensus that early intervention with speech therapy is crucial for children who stutter. In adults, stuttering is associated with substantial psychosocial morbidity, including social anxiety and a poor quality of life. Pharmacological treatments have drawn interest in recent years, but clinical data are limited. Treatment of both children and adults rests on speech therapy. Conclusion: A growing body of research has sought to uncover the pathophysiology of stuttering. Referral to speech therapy remains the best option for children and adults who stutter.

  9. Prévalence et caractéristiques de l'automédication chez les étudiants de 18 à 35 ans résidant au Campus de la Kasapa de l'Université de Lubumbashi (Prevalence and Characteristics of Self-Medication among Students Aged 18 to 35 Living on the Kasapa Campus of the University of Lubumbashi)

    PubMed Central

    Chiribagula, Valentin Bashige; Mboni, Henry Manya; Amuri, Salvius Bakari; Kamulete, Grégoire Sangwa; Byanga, Joh Kahumba; Duez, Pierre; Simbi, Jean Baptiste Lumbu

    2015-01-01

    Introduction: Self-medication has become an emerging phenomenon that increasingly threatens public health. The aim of this study was to determine its prevalence and characteristics on the Kasapa university campus of the University of Lubumbashi. Methods: Data were collected by indirect interview and processed with GraphPad version 5. Results: Of 515 students surveyed, self-medication had a prevalence of 99%, with some subjects having begun the practice in adolescence (35%). Of the respondents, 78.8% acknowledged that self-medication can lead to treatment failure and that dosing errors, inappropriate treatment, side effects and diagnostic errors are plausible. The practice is accepted insofar as it allows presumed benign, familiar illnesses or symptoms to be managed, with discretion and savings of time and money as its perceived advantages. Malaria (82.4%), fever (65.5%) and headache (65.5%) were the three leading reasons. Amoxicillin (98.2%), paracetamol (97.5%), ascorbic acid (91.6%) and quinine (79.4%) were the four most consumed medicines. The most frequently used combination was paracetamol plus vitamin(s) (88.8%), and the most aberrant was amoxicillin plus erythromycin (25.5%). Tablets (37%) were the most frequently used dosage form. Most subjects (84.9%) also resorted to medicinal plants. Conclusion: In this setting there is a high prevalence of self-medication, largely for antimalarial purposes, with some misuse. PMID:26327945

  10. La pubalgie du sportif: mise au point à propos d'une étude rétrospective de 128 joueurs (Athletic Pubalgia: An Update Based on a Retrospective Study of 128 Players)

    PubMed Central

    Mahmoudi, Ammar; Frioui, Samia; Jemni, Sonia; Khachnaoui, Faycel; Dahmene, Younes

    2015-01-01

    Athletic pubalgia in high-level athletes is a nosological entity in its own right, both because of the mechanism underlying the pathology and because of the demonstrable lesions of the abdominal wall. It is a painful syndrome of the inguino-pubic region that particularly affects football players. Its etiology is attributed to repeated lower-limb and trunk movements combining forced rotation and adduction, and its incidence is markedly higher in men. Once organ pathology has been ruled out, the patient should undergo MRI at the very start of conservative treatment to obtain a complete lesion work-up which, depending on the situation, may shorten conservative treatment and allow an optimal surgical treatment to be proposed. Treatment should be resolutely conservative for 3 months, with rehabilitation most often the first-line treatment. Patients with persistent symptoms are candidates for surgery. The Nesovic procedure is the treatment of choice in high-level athletes and, in the great majority of cases, allows a return to previous sporting activity without any limitation. The Bassini technique appears less burdensome than the Nesovic procedure, since it is less invasive. Multidisciplinary, athlete-centred management before and after the operation allows a return to physical activity after a few months. We report our experience managing 128 players operated on with the Bassini technique and compare our results with those of the literature. PMID:26966484

  11. Troubles des conduites alimentaires et tempérament cyclothymique: étude transversale à propos de 107 étudiants Tunisiens (Eating Disorders and Cyclothymic Temperament: A Cross-Sectional Study of 107 Tunisian Students)

    PubMed Central

    Jaweher, Masmoudi; Sonda, Trabelsi; Uta, Ouali; Inès, Feki; Rim, Sallemi; Imene, Baati; Abdelaziz, Jaoua

    2014-01-01

    Introduction: The objectives of our study were to estimate the prevalence of eating disorders (ED) among young Tunisians and to examine the relationship between cyclothymic temperament and ED. Methods: We conducted a descriptive, analytical cross-sectional study of 107 students of the Institut de Presse et des Sciences de l'Information of Manouba, Tunisia. ED were assessed with the EAT-40 self-report questionnaire, in its version validated in Tunisia; it is the most widely used screening tool for ED worldwide. Cyclothymic temperament was assessed with the validated Arabic version of the TEMPS-A. An accompanying epidemiological form collected sociodemographic and dietary-hygiene factors. Results: The prevalence of eating disorders was 24.3%. The proportion of students with a cyclothymic temperament score of 14 or more was 37.4%. An association was found between eating disorders and cyclothymic affective temperament, whether under the dimensional approach (p = 0.005) or the categorical one (p = 0.046). Cyclothymic temperament doubled the risk of developing an ED among female students (p = 0.04). Conclusion: ED are frequent among our students, particularly females. Moreover, the presence of an associated cyclothymic temperament should doubly raise suspicion of membership in the bipolar spectrum and should prompt particular attention from the clinician in order to define the best therapeutic strategies. PMID:25404977

  12. Gestion des déchets ménagers dans l’aire de santé Bulaska à Mbuji-Mayi en République Démocratique du Congo (Household Waste Management in the Bulaska Health Area of Mbuji-Mayi, Democratic Republic of the Congo)

    PubMed Central

    Kangoy, Kasangye; Ngoyi, John; Mudimbiyi, Olive

    2016-01-01

    Introduction: Household waste in public thoroughfares affects environmental hygiene; it causes insalubrity and can be a factor in certain diseases, some of which can be epidemic. Over the past two decades, waste management has become an increasingly complex question for developed and underdeveloped countries alike. The objective of this study was to determine the types of waste generated by households and how that waste is managed. Methods: This is a descriptive cross-sectional study carried out in the Bulaska health area, Kasaï-Oriental, using a prospective approach supported by interviews and active observation. A questionnaire was administered to the head of household or a delegate, from 21 to 25 June 2010, in 170 households forming a convenience sample. Results: The study found the following: 94.7% of the respondents who answered our questionnaire were female; 47% had a primary-school education; 41.1% were housewives; the median household size was 7 persons; in 83.5% of cases the waste generated was solid; and 50% of the households in the health area used the public thoroughfare as a dump. Conclusion: In view of these results, further development of awareness programmes on environmental sanitation is necessary. PMID:27800105

  13. Structural study of CH4, CO2 and H2O clusters containing from several tens to several thousands of molecules

    NASA Astrophysics Data System (ADS)

    Torchet, G.; Farges, J.; de Feraudy, M. F.; Raoult, B.

    Clusters are produced during the free-jet expansion of gaseous CH4, CO2 or H2O. For a given stagnation temperature T0, the mean cluster size is easily increased by increasing the stagnation pressure p0. The cluster temperature, on the other hand, does not depend on stagnation conditions but mainly on properties of the condensed gas. An electron diffraction analysis provides information about the cluster structure. Depending on whether the diffraction patterns exhibit crystalline lines or not, the structure is worked out either by using crystallographic methods or by constructing cluster models. When they contain more than a few thousand molecules, clusters show a crystalline structure identical to that of one phase, namely the cubic phase, known in the bulk solid: the plastic phase (CH4), the unique solid phase (CO2) or the metastable cubic phase (H2O). With decreasing cluster size, the compounds studied behave quite differently: CO2 clusters keep the same crystalline structure, CH4 clusters show the multilayer icosahedral structure which has been found in rare-gas clusters, and H2O clusters adopt a disordered structure different from the amorphous structures of bulk ice.

  14. Integration des sciences et de la langue: Creation et experimentation d'un modele pedagogique pour ameliorer l'apprentissage des sciences en milieu francophone minoritaire (Integrating Science and Language: Creation and Testing of a Pedagogical Model to Improve Science Learning in a Francophone Minority Setting)

    NASA Astrophysics Data System (ADS)

    Cormier, Marianne

    Francophone minority students' weak science results on national and international assessments prompted a search for solutions. The purpose of this thesis was to create and test a pedagogical model for science teaching in a minority-language setting. Because students in this setting show varying degrees of proficiency in French, several language elements (writing, discussion and reading) were integrated into science learning. We recommended beginning the learning process with rather informal language elements (journal writing, discussions in pairs...) and progressing toward more formal language activities (writing reports or scientific explanations). As for science learning, the model advocated a socio-constructivist conceptual-change approach while relying heavily on experiential learning. In testing the model, we wanted to know whether it brought about conceptual change in the students and whether their scientific vocabulary was simultaneously enriched. We also sought to understand how the students experienced their learning within this pedagogical model. A fifth-grade class at the school of Grande-Digue, in southeastern New Brunswick, took part in the trial of the model by studying the local salt marshes. In initial interviews we noticed that the students' knowledge of salt marshes was limited: while they were aware that marshes are natural places, they could not necessarily describe them precisely. We also found that the students mostly used everyday words (plants, birds, insects) to describe the marsh. The results obtained indicate that the students' conceptions of the marsh progressed. Following the pedagogical intervention, they can describe the marsh in a way comparable to scientists, making use of scientific words (smooth cordgrass, detritus, yellowlegs). In our view, the students' learning is explained above all by the juxtaposition, in the pedagogical model, of language elements with an experiential conceptual-change process. During this process the students questioned themselves a great deal, wrote down their reflections, discussed their concerns and consulted documents. These language activities took place directly in the marsh as well as after visits to it, so the possibility of discovery was real for them. These different elements combined to create strong motivation, and the whole came together to enable conceptual and linguistic growth. The pedagogical model tested could thus prove very fruitful with students in minority-language settings.

  15. L'Abondance du Deutérium, de l'Ultraviolet au Visible (The Deuterium Abundance, from the Ultraviolet to the Visible)

    NASA Astrophysics Data System (ADS)

    Hébrard, Guillaume

    2000-12-01

    Within the standard Big Bang model, deuterium is the element whose primordial abundance is the most sensitive to the baryonic density of the Universe. This element is created only during primordial nucleosynthesis, a few minutes after the Big Bang; no standard theory currently predicts any other significant source. On the contrary, since it is burned in stars, its abundance D/H decreases over cosmic evolution. Measurements of D/H thus constrain models of the Big Bang and of the chemical evolution of galaxies. Three types of D/H measurements can be distinguished: the primordial, proto-solar and interstellar abundances, representative of the Universe about 15 billion years ago, 4.5 billion years ago and at the present epoch, respectively. Although the evolution of deuterium seems qualitatively clear, the results for these three types of abundance do not yet converge toward three well-defined values. The work undertaken during this thesis relates to the measurement of the interstellar abundance of deuterium. It is usually obtained by absorption spectroscopy of the hydrogen and deuterium Lyman series. These observations are made in the ultraviolet, with space observatories. The results presented here were obtained with the Hubble Space Telescope and then with the recently launched FUSE satellite. In addition, a new method of observing deuterium was proposed, in the visible, from ground-based telescopes. This work led to the first detections and to the identification of the deuterium Balmer series, observed in emission in HII regions with the Canada-France-Hawaii Telescope and the Very Large Telescope. On-line Thesis, Guillaume Hébrard
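
    The reason the deuterium Balmer and Lyman lines can be separated from those of hydrogen is the reduced-mass dependence of the Rydberg constant; this is a standard textbook result, not a formula taken from the thesis:

      % Rydberg constant for a nucleus of mass M (m_e = electron mass):
      R_M = \frac{R_\infty}{1 + m_e/M}
      % Since M_D is about 2 M_H, the isotopic wavelength shift of any line is
      \frac{\Delta\lambda}{\lambda} \approx \frac{m_e}{M_H} - \frac{m_e}{M_D}
        \approx \frac{m_e}{2 M_H} \approx 2.7 \times 10^{-4}

    i.e. about 1.8 Å at Halpha (6563 Å), with the deuterium line on the blue side, a separation easily resolved by the spectrographs mentioned above.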

  16. Encéphalopathie pancréatique: à propos de deux cas (Pancreatic Encephalopathy: Report of Two Cases)

    PubMed Central

    Doghmi, Nawfal; Benakrout, Aziz; Meskine, Amine; Bensghir, Mustaphja; Baite, Abdelouah; Haimeur, Charki

    2016-01-01

    Pancreatic encephalopathy (PE) is a rare complication of acute pancreatitis. Our study concerns two cases of pancreatic encephalopathy hospitalized and treated in the surgical intensive care unit of the Hôpital Militaire d'Instruction Mohamed V in Rabat; the patients, one woman and one man, were between 43 and 54 years old. The pathophysiological mechanism of PE has not yet been fully elucidated and many hypotheses have been advanced in the literature: some authors suggest that lipase and phospholipase A2 play a role in the pathological process, and other factors such as infections, fluid and electrolyte disturbances, hypoxemia and glycemic disturbances may act as triggers. The diagnosis of pancreatic encephalopathy is easy to establish: the clinical picture usually amounts to confusion, with stupor and psychomotor agitation, sometimes accompanied by neurological signs such as seizures, headache, transient hemiparesis, dysarthria, and finally difficulties of verbal expression and amnesia. Paraclinical examinations, notably brain MRI and electroencephalography, confirm the diagnosis. Treatment is first of all symptomatic; its objective is to combat the factors that favor the appearance of neurological signs, with the intensive-care measures that the gravity of the situation demands. The course of PE is most often favorable, with progressive disappearance of the symptoms, although the persistence of some sequelae has been described in the literature. The prognosis depends on the severity of the acute pancreatitis and the associated complications. In our study the data are broadly comparable to those currently published by the majority of authors. PMID:28292109

  17. Sérologie palustre: quel apport dans un pays d’endémie palustre comme la Côte d’Ivoire? (Malaria Serology: What Is Its Contribution in a Malaria-Endemic Country like Côte d'Ivoire?)

    PubMed Central

    Goran-Kouacou, Amah Patricia Victorine; Dou, Gonat Serge; Zika, Kalou Dibert; Adou, Adjoumanvoulé Honoré; Yéboah, Oppong Richard; Aka, Rita Ahou; Hien, Sansan; Siransy, Kouabla Liliane; N’Guessan, Koffi; Djibangar, Tariam Agnès; Dassé, Séry Romuald; Adoubryn, Koffi Daho

    2017-01-01

    Introduction: Malaria serology seems of little use in endemic countries such as Côte d'Ivoire. This test has nevertheless been performed routinely in the Parasitology laboratory of the Unité de Formation et de Recherche Sciences Médicales in Abidjan. The aim of our study was to assess the contribution of malaria serology in our endemic-country context. Methods: We carried out a retrospective study of malaria serology using the Biomérieux Falciparum Spot-IF kit to detect IgG antiplasmodial antibodies. It covered serologies performed from January 2007 to February 2011 whose results were available in the registry. Results: In total, 136 patients were selected. Mean age was 36.3 years (range 1 to 81 years), with a sex ratio of 0.97. The indications for malaria serology were varied, dominated by splenomegaly (49.3%), cytopenias (14.7%) and fever of unknown origin (13.2%). Almost all patients (98.5%) had antiplasmodial antibodies, with a high mean titre of 1057.35 IU/ml. There was no relationship between age and antibody titre, which was higher for cytopenias, prolonged fevers and splenomegaly. Conclusion: Malaria serology is of little use in our routine practice in an endemic zone because, whatever the reason for the prescription, titres were high. PMID:28690735

  18. Analyse des interactions energetiques entre un arena et son systeme de refrigeration (Analysis of the Energy Interactions between an Arena and Its Refrigeration System)

    NASA Astrophysics Data System (ADS)

    Seghouani, Lotfi

    This thesis is part of a strategic project on arenas funded by NSERC (the Natural Sciences and Engineering Research Council of Canada), whose main goal is the development of a numerical tool capable of estimating and optimizing energy consumption in arenas and curling rinks. Our work follows on from that of DAOUD et al. (2006, 2007), who developed a transient 3D model (AIM) of the Camilien Houde arena in Montreal that computes the heat fluxes through the building envelope as well as the temperature and humidity distributions over a typical meteorological year; in particular, it computes the heat fluxes through the ice sheet due to convection, radiation and condensation. We first developed a model of the structure under the ice (BIM) that accounts for its 3D geometry, the different layers, transient effects, heat gains from the ground below and around the arena under study, and the brine inlet temperature in the concrete slab. The BIM was then coupled to the AIM. In the second stage, we developed a quasi-steady-state model of the refrigeration system (REFSYS) for the arena under study, based on a combination of thermodynamic relations, heat transfer correlations and relations derived from data available in the manufacturer's catalogue. Finally, the AIM+BIM and the REFSYS were coupled under the TRNSYS software interface. Several parametric studies were undertaken to evaluate the effects of climate, brine temperature, ice thickness, etc. on the arena's energy consumption, and some strategies for reducing this consumption were studied. The considerable potential for heat recovery at the condensers, which can reduce the energy required by the arena's ventilation system, was highlighted. Keywords: arena, refrigeration system, energy consumption, energy efficiency, ground conduction, annual performance.
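
    The heat-recovery argument rests on a simple energy balance: everything absorbed at the evaporator plus the compressor work is rejected at the condensers and can in part be reused. A minimal sketch in Python with assumed numbers (the load, COP and recovery fraction below are illustrative placeholders, not values from the thesis):

      # Quasi-steady energy balance for a vapour-compression refrigeration plant.
      Q_evap_kW = 250.0   # assumed heat absorbed from the brine / ice sheet
      COP = 3.0           # assumed coefficient of performance (Q_evap / W)

      W_comp_kW = Q_evap_kW / COP          # compressor work
      Q_cond_kW = Q_evap_kW + W_comp_kW    # heat rejected at the condensers

      recovery_fraction = 0.5              # assumed usable share of condenser heat
      print(f"compressor work:     {W_comp_kW:.0f} kW")
      print(f"condenser rejection: {Q_cond_kW:.0f} kW")
      print(f"recoverable heat:    {recovery_fraction * Q_cond_kW:.0f} kW")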

  19. Modeles de Calogero et Sutherland, fonctions speciales et symetries (Calogero and Sutherland Models, Special Functions and Symmetries)

    NASA Astrophysics Data System (ADS)

    Lapointe, Luc

    The thesis comprises three distinct parts, although the use of algebraic methods is common to all three. The first part (articles 1, 2 and 3) explores the relation between quantum algebras and q-hypergeometric functions. A connection is first made, in this context, between a two-parameter extension of the harmonic oscillator algebra, the (p, q)-oscillator algebra, and bibasic hypergeometric functions; a generating formula for two-parameter deformations of the Laguerre polynomials can thus be obtained. Next, the connection between the algebra sl_q(n + 3) and the q-Lauricella functions is studied, and several identities and contiguity relations involving these functions are derived in this way. This first part ends with a short article showing that certain two-dimensional Schrödinger equations can be solved in terms of Appell functions, the two-variable case of the Lauricella functions. The second part (articles 4, 5, 6, 7 and 8) concerns the Calogero-Sutherland model and symmetric functions. The Calogero-Sutherland model is an integrable model describing N identical particles on a circle, whose solutions are essentially given by symmetric functions in N variables, the Jack polynomials. A formula for constructing these polynomials with creation operators is presented; it allows an important property of the Jack polynomials to be proved. These creation operators are then generalized to the case of the Macdonald polynomials. The operators obtained in this case have remarkable properties which, in particular, allow a weak form of a conjecture on the Macdonald polynomials to be proved. Finally, the last part (articles 9, 10 and 11) deals with the dynamical and symmetry algebras of integrable many-body models. A system of N bosonic oscillators on a line and the Calogero model with a harmonic term are studied. A general method which in principle yields the algebraic structure of these models is presented; however, only the cases N = 2 and N = 3 are analyzed in detail. The symmetry algebras obtained are then polynomial.
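
    For reference, the trigonometric Sutherland Hamiltonian whose eigenfunctions involve the Jack polynomials is usually written as follows (a standard form from the literature; conventions for the coupling beta vary):

      H = -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2}
          + \frac{2\pi^2}{L^2} \sum_{i<j} \frac{\beta(\beta - 1)}{\sin^2[\pi (x_i - x_j)/L]}

    Its eigenfunctions factor as psi_lambda = Delta(x)^beta P_lambda^(1/beta)(z_1, ..., z_N), where Delta is the ground-state Jastrow factor, z_j = e^{2 pi i x_j / L}, and P_lambda^(alpha) is a Jack polynomial with parameter alpha = 1/beta; the creation operators discussed in the abstract build the P_lambda directly.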

  20. In-Situ Survey System of Resistive and Thermoelectric Properties of Either Pure or Mixed Materials in Thin Films Evaporated Under Ultra High Vacuum

    NASA Astrophysics Data System (ADS)

    Lechevallier, L.; Le Huerou, J.-Y.; Richon, G.; Sarrau, J.-M.; Gouault, J.

    1995-04-01

    The study of the in situ thermoelectric and resistive behaviour, as a function of temperature, of thin films of either pure or composite materials obtained under ultra-high vacuum is of great interest, since such films can be used as strain gauges or surface resistances. However, the studies become particularly difficult when the measurements generate very low-level electrical signals, which are hard to detect because of perturbations from the experimental environment. The apparatus described below allows the measurement of resistance with a relative uncertainty of 2×10^{-4}, of resistance variations with an absolute uncertainty of 2 mΩ, and of thermoelectric e.m.f. of about 2 μV. The films studied in the laboratory generally exhibit resistances lower than 100 Ω and temperature-induced resistance variations of about a few ohms, so this device has sufficient technical characteristics for our studies. It can be connected to a PC, which allows easy data collection and processing.
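
    The quoted specifications fit the laboratory's films with simple arithmetic (the 2 Ω variation below is an assumed typical value, consistent with the "few ohms" mentioned in the abstract):

      % Worst-case film resistance and the quoted relative uncertainty:
      \Delta R = 2 \times 10^{-4} \times 100\ \Omega = 20\ \mathrm{m}\Omega
      % A temperature-induced variation of, say, 2 Ω is then resolved by the
      % 2 mΩ absolute uncertainty to about one part in a thousand:
      \frac{2\ \mathrm{m}\Omega}{2\ \Omega} = 10^{-3}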

  1. Evaluation d'un scenario d'apprentissage favorisant la mobilisation des habiletes reliees au processus d'enquete (Evaluation of a Learning Scenario Promoting the Mobilization of Inquiry-Process Skills)

    NASA Astrophysics Data System (ADS)

    Blanchard, Samuel F. J.

    Results from the Programme for International Student Assessment (PISA) show that young francophone New Brunswickers rank at a significantly lower level in scientific literacy than anglophone students in New Brunswick and students in the other Canadian provinces, and below the international average of all participating countries; this has prompted a search for solutions. The assessment of scientific literacy is based on a set of knowledge, skills and attitudes related to the school inquiry process. School inquiry is an approach to learning in which students search for information, discuss ideas and undertake investigations to increase their understanding of a problem or topic. Research shows that the inquiry process is rarely an important pedagogical component of the classroom, and research on its implementation recommends making it more accessible to teachers. To that end, our research evaluates the pedagogical value of a learning scenario (PhaRoboS) designed specifically to create an environment in which students have many opportunities to mobilize inquiry-related skills. The outcomes of this evaluation allow us to offer avenues of remediation to help more teachers create such an environment. The evaluation followed a methodology inspired by evaluation for the improvement of a pedagogical object. Analysis of qualitative data collected from students and their teacher at a francophone school in New Brunswick suggests that the learning scenario did create an environment in which students had many opportunities to mobilize inquiry-related skills: among other things, the students formulated and confronted hypotheses, chose problem-solving strategies, communicated their observations, and analyzed and interpreted data during their investigation. Following this analysis, however, a few small improvements will be made to a subsequent version of the PhaRoboS learning scenario to further promote the mobilization of inquiry-related skills.

  2. Translation into French of: “Changes to publication requirements made at the XVIII International Botanical Congress in Melbourne – what does e-publication mean for you?”. Translated by Christian Feuillet and Valéry Malécot. Changements des conditions requises pour la publication faits au XVIIIe Congrès International de Botanique à Melbourne – qu’est-ce que la publication électronique représente pour vous?

    PubMed Central

    Knapp, Sandra; McNeill, John; Turland, Nicholas J.

    2011-01-01

    Abstract: Changes to the International Code of Botanical Nomenclature are decided every six years at the Nomenclature Sections associated with the International Botanical Congresses (IBC). The XVIII IBC was held in Melbourne, Australia; the Nomenclature Section met on 18-22 July 2011 and its decisions were accepted by the Congress in plenary session on 30 July. Following this meeting, several important changes were made to the Code that will affect the publication of new names. Two of these changes take effect on 1 January 2012, a few months before the Melbourne Code is published: electronic material published online in Portable Document Format (PDF) with an International Standard Serial Number (ISSN) or an International Standard Book Number (ISBN) will constitute effective publication, and the requirement of a Latin description or diagnosis for names of new taxa will become a requirement for a description or diagnosis in either Latin or English. In addition, as of 1 January 2013, new names of organisms treated as fungi must, for valid publication, include in the protologue (everything associated with the name at its valid publication) the citation of an identifier issued by a recognized repository (such as MycoBank). A draft of the new articles on electronic publication is provided and best-practice advice is outlined. To encourage dissemination of the changes adopted to the International Code of Nomenclature for algae, fungi, and plants, this article will be published in BMC Evolutionary Biology, Botanical Journal of the Linnean Society, Brittonia, Cladistics, MycoKeys, Mycotaxon, New Phytologist, North American Fungi, Novon, Opuscula Philolichenum, PhytoKeys, Phytoneuron, Phytotaxa, Plant Diversity and Resources, Systematic Botany and Taxon. PMID:22287925

  3. A Paradigm shift to an Old Scheme for Outgoing Longwave Radiation

    NASA Astrophysics Data System (ADS)

    McDonald, Alastair B.

    2016-04-01

    There are many cases where the climate models do not agree with the empirical data. For instance, the data from radiosondes (and MSUs) do not show the amount of warming in the upper troposphere that is predicted by the models (Thorne et al. 2011). The current scheme for outgoing longwave radiation can be traced back to the great 19th-century French mathematician J-B Joseph Fourier. His anachronistic idea was that the radiation balance at the top of the atmosphere (TOA) is maintained by the conduction of heat from the surface (Fourier 1824). It was based on comparing the atmosphere to the hotbox of the 18th-century Swiss scientist H-B de Saussure, which he had invented to show that solar radiation is only slightly absorbed by the atmosphere. Saussure also showed that thermal radiation existed and argued that the warmth of the air near the surface of the Earth is due to absorption of that infrared radiation (Saussure 1786). Hence a paradigm shift to Saussure's scheme, where the thermal radiation is absorbed at the base of the atmosphere rather than throughout the atmosphere as in Fourier's scheme, may solve many climate-model problems. In this new paradigm the boundary layer continually exchanges radiation with the surface. Thus only at two instants during the day is there no net gain or loss of heat by the boundary layer from the surface, and so that layer is not in LTE. Moreover, since the absorption of outgoing longwave radiation is saturated within the boundary layer, it has little influence on the TOA balance; that balance is mostly maintained by changes in albedo, e.g. clouds and ice sheets. Use of this paradigm can explain why the excess warming in south-western Europe was caused by water vapour close to the surface (Philipona et al. 2005), and may also explain why there are difficulties in closing the surface radiation balance (Wild et al. 2013) and in modelling abrupt climate change (White et al. 2013). References: Fourier, Joseph. 1824. 'Remarques Générales Sur Les Températures Du Globe Terrestre Et Des Espaces Planétaires.' Annales de Chimie et de Physique 27: 136-67, translated by Raymond T. Pierrehumbert, http://www.nature.com/nature/journal/v432/n7018/extref/432677a-s1.pdf. Philipona, Rolf, Bruno Dürr, Atsumu Ohmura, and Christian Ruckstuhl. 2005. 'Anthropogenic Greenhouse Forcing and Strong Water Vapor Feedback Increase Temperature in Europe.' Geophysical Research Letters 32 (19): L19809. doi:10.1029/2005GL023624. Saussure, Horace-Benedict de. 1786. 'Chapter XXXV. Des Causes du Froid qui Regne sur les Montagnes.' In Voyages dans les Alpes, II:347-71. Neuchatel: Fauche-Borel. http://gallica.bnf.fr/ark:/12148/bpt6k1029499.r=.langFR, translated by Alastair B. McDonald, http://www.abmcdonald.freeserve.co.uk/saussure/CHAPTER%2035.pdf. Thorne, Peter W., Philip Brohan, Holly A. Titchner, et al. 2011. 'A Quantification of Uncertainties in Historical Tropical Tropospheric Temperature Trends from Radiosondes.' Journal of Geophysical Research: Atmospheres 116 (D12). doi:10.1029/2010JD015487. Wild, Martin, Doris Folini, Christoph Schär, et al. 2013. 'The Global Energy Balance from a Surface Perspective.' Climate Dynamics 40 (11-12): 3107-34. doi:10.1007/s00382-012-1569-8. White, James W.C., Alley, Richard B., Archer, David E., et al. 2013. Abrupt Impacts of Climate Change: Anticipating Surprises. Washington, D.C.: National Academies Press. http://www.nap.edu/catalog/18373.

  4. Raw materials exploitation in Prehistory of Georgia: sourcing, processing and distribution

    NASA Astrophysics Data System (ADS)

    Tushabramishvili, Nikoloz; Oqrostsvaridze, Avthandil

    2016-04-01

    The study of raw materials is of great importance for understanding the ecology, cognition, behavior, technology and culture of Paleolithic human populations. Unfortunately, the sourcing, processing and distribution of stone raw materials received little attention until recently. The reasons were the incomplete knowledge of archaeologists working on later periods (Bronze Age to Medieval), who are somewhat removed from Paleolithic technology and typology, and the neglect of stone artifacts made on raw materials other than flint and obsidian. Studies of the origin of stone raw materials are now becoming increasingly important. An interesting picture has emerged at different sites and in different regions of Georgia. In the earlier stages of the Middle Paleolithic in the Djruchula Basin caves, basalt, andesite, argillite and other raw materials are quite numerous; from about 130,000 years ago the share of flint raw material increases dramatically, and flint remained almost the only dominant raw material in Western Georgia for thousands of years. From approximately 50,000 years ago the first obsidians, brought from southern Georgia, appeared in Western Georgia. We have detected a similar situation in Eastern Georgia during our excavations of the Ziari and Pkhoveli open-air sites: the early Lower Paleolithic layers are extremely rich in limestone artifacts, while flint raw materials dominate the Middle Paleolithic layers. These issues can be studied across chronologies, raw-material sources, sites and regions. By merging archaeology with anthropology, geology and geography we are able to acquire outstanding insights about those populations. A new approach to Paleolithic stone materials and newly found Paleolithic quarries give us an opportunity to obtain results bearing on the behavior of Paleolithic populations and on the geology and geomorphology of different regions of Georgia. References: 1. Tushabramishvili N. 2015. Ziari. Online Archaeology 8. Tbilisi, Georgia. Pp. 41-43. 2. Le Bourdonnec F.-X., Nomade S., Poupeau G., Guillou H., Tushabramishvili N., Moncel M.-H., Pleurdeau D., Agapishvili T., Voinchet P., Mgeladze A., Lordkipanidze D. 2012. Multiple origins of Bondi Cave and Ortvale Klde (NW Georgia) obsidians and human mobility in Transcaucasia during the Middle and Upper Palaeolithic. Journal of Archaeological Science xxx (2012) 1-14. 3. Mercier N., Valladas H., Meignen L., Joron J.-L., Tushabramishvili N., Adler D.S., Bar-Yosef O. 2011. Dating the early Middle Palaeolithic laminar industry from Djruchula Cave, Republic of Georgia. Paléorient 36/2, pp. 163-173. 4. Meignen L. & Tushabramishvili N. 2010. Djruchula Cave, on the southern slopes of the Great Caucasus: an extension of the Near Eastern Middle Paleolithic blady phenomenon to the north. Journal of the Israel Prehistoric Society 40, 35-61. 5. Tushabramishvili N., Pleurdeau D., Moncel M.-H., Mgeladze A. 2007. Le complexe Djruchula-Koudaro au sud Caucase (Géorgie). Remarques sur les assemblages lithiques pléistocènes de Koudaro I, Tsona et Djruchula. Anthropologie 45/1, pp. 1-18. 6. Tushabramishvili D. 1984. Paleolit Gruzii (Palaeolithic of Georgia). Newsletter of the Georgian State Museum 37B, 5-27.

  5. Aquifer overexploitation: what does it mean?

    NASA Astrophysics Data System (ADS)

    Custodio, Emilio

    2002-02-01

    Groundwater overexploitation and aquifer overexploitation are terms that are becoming common in water-resources management. Hydrologists, managers and journalists use them when talking about stressed aquifers or some groundwater conflict. Overexploitation may be defined as the situation in which, for some years, the average aquifer extraction rate is greater than, or close to, the average recharge rate. But the rate and extent of recharge areas are often very uncertain. Besides, they may be modified by human activities and aquifer development. In practice, however, an aquifer is often considered overexploited when some persistent negative results of aquifer development are felt or perceived, such as a continuous water-level drawdown, progressive water-quality deterioration, an increase of extraction cost, or ecological damage. But negative results do not necessarily imply that extraction is greater than recharge. They may be simply due to well interference and the long transient period that follows changes in the aquifer water balance. Groundwater storage is depleted to some extent during the transient period after extraction is increased; the duration of this period depends on aquifer size, specific storage and permeability. Which level of "aquifer overexploitation" is advisable or bearable depends on the detailed and updated consideration of aquifer-development effects and the measures implemented for correction. This should not be the result of applying general rules based on some indirect data. Monitoring, sound aquifer knowledge, and calculation or modelling of behaviour are needed in the framework of a set of objectives and policies. They should be established by a management institution, with the involvement of groundwater stakeholders, and take into account the environmental and social constraints. Aquifer overexploitation, which is often perceived as something ethically bad, is not necessarily detrimental if it is not permanent; it may be a step towards sustainable development. Actually, the term aquifer overexploitation is mostly a qualifier that intends to point to a concern about the evolution of the aquifer-flow system from some specific, restricted points of view, but without a precise hydrodynamic meaning. Implementing groundwater management and protection measures needs a quantitative appraisal of aquifer evolution and effects based on detailed multidisciplinary studies, which have to be supported by reliable data.
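
    The "long transient period" invoked above is easy to see in a toy lumped water balance: after pumping increases, storage keeps declining for decades until natural discharge has adjusted to the new equilibrium. A minimal sketch in Python with invented parameters (a single linear-reservoir aquifer, not a real case):

      # Toy lumped aquifer: dS/dt = R - k*S - P
      R = 100.0   # assumed recharge (volume / year)
      k = 0.05    # assumed natural-discharge coefficient (1 / year)
      P = 60.0    # pumping, switched on at t = 0 (volume / year)
      S = R / k   # pre-pumping equilibrium storage (discharge = recharge)

      dt, steps = 0.1, 200   # 200 steps of 0.1 y = 20 y between printouts
      for year in range(0, 101, 20):
          print(f"t = {year:3d} y   storage = {S:7.1f}   natural discharge = {k*S:5.1f}")
          for _ in range(steps):
              S += dt * (R - k * S - P)
      # Pumping (60) never exceeds recharge (100), yet storage falls for
      # decades toward the new equilibrium S = (R - P)/k = 800.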

  6. Static electricity as a hazard in industry

    NASA Astrophysics Data System (ADS)

    van Laar, Ir. G. F. M.

    1991-08-01

    Looking at the German and Dutch statistics, the percentage of dust explosions, across all industries handling explosible dust-air mixtures, that were ignited by electrostatic discharges is about 8-10%. In the plastics industry, however, this value is much higher: 25%. In particular, in the last few years some rather large industrial incidents have probably been caused by electrostatic charges. To understand why accidents due to static electricity may happen, the various dangerous electrostatic discharges are briefly discussed in connection with dust explosion hazards; the propagating brush discharge and the cone (or "Maurer") discharge are particularly important. Ignitions are caused mainly, in the case of easily ignitable powders or hybrid mixtures (powders combined with flammable vapours), by isolated conductors and by non-conducting materials such as the product itself or process parts such as flexible hoses, internally coated silos and ducts. A few examples are briefly discussed to illustrate how electrostatic discharge hazards may develop in practice, on both small and large scales.
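
    A standard back-of-the-envelope check (textbook electrostatics with typical handbook values, not figures from this paper) shows why a charged isolated conductor, including a human body, can ignite a dust cloud: the stored spark energy is compared with the minimum ignition energy (MIE) of the dust,

      E = \tfrac{1}{2} C V^2
        = \tfrac{1}{2} (150 \times 10^{-12}\ \mathrm{F}) (10^4\ \mathrm{V})^2
        \approx 7.5\ \mathrm{mJ}

    which is of the order of, or above, the MIE of many fine organic and plastic powders (a few mJ to a few tens of mJ).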

  7. La plastie tricuspide: annuloplastie de Carpentier versus technique de De VEGA (Tricuspid Valve Repair: Carpentier Annuloplasty versus the De Vega Technique)

    PubMed Central

    Charfeddine, Salma; Hammami, Rania; Triki, Faten; Abid, Leila; Hentati, Mourad; Frikha, Imed; Kammoun, Samir

    2017-01-01

    Tricuspid valve disease was long neglected by cardiologists and surgeons alike, but in recent years tricuspid regurgitation has been shown to be a prognostic factor in the outcome of patients operated on for left-heart valve disease. Several tricuspid repair techniques have been developed, and published studies diverge on their results. We conducted this study to evaluate the results of tricuspid repair in a population with a high prevalence of rheumatic disease and to compare Carpentier ring annuloplasty with the De Vega repair. This retrospective study covered a 25-year period and included patients treated by tricuspid repair in the cardiology department of Sfax; we compared the results of group 1 (Carpentier annuloplasty) versus group 2 (De Vega repair). 91 patients were included, 45 in group 1 and 46 in group 2. Most patients (83%) had moderate or severe tricuspid regurgitation (TR) before surgery, and annular dilatation was observed in 90% of patients, with no significant difference between the two groups. The immediate results were comparable between the two techniques, but during follow-up recurrent regurgitation of at least moderate grade was significantly more frequent in the De Vega group. In multivariate analysis, the predictive factors of significant recurrent TR in the long term were the De Vega technique (OR = 3.26 (1.12-9.28)) and the preoperative systolic pulmonary artery pressure (OR = 1.06 (1.01-1.12)). Tricuspid repair with a Carpentier ring appears to guarantee better results than the De Vega repair, whereas a high preoperative systolic pulmonary artery pressure is predictive of recurrent regurgitation even after repair, hence the value of operating on patients at an early stage. PMID:28819539

  8. Toward a Sociology of Oceans.

    PubMed

    Hannigan, John

    2017-02-01

    Despite covering around 70 percent of the earth's surface, the ocean has long been ignored by sociology or treated as merely an extension of land-based systems. Increasingly, however, oceans are assuming a higher profile, emerging as a new resource frontier, a medium for geopolitical rivalry and conflict, and a unique and threatened ecological hot spot. In this article, I propose a new sociological specialty area, the "sociology of oceans", to be situated at the interface between environmental sociology and traditional maritime studies. After reviewing existing sociological research on maritime topics and the consideration (or lack of consideration) of the sea by classic sociological theorists, I briefly discuss several contemporary sociological approaches to the ocean that have attracted some notice. In the final section of the paper, I make the case for a distinct sociology of oceans and briefly sketch what this might look like. One possible trajectory for creating a shared vision or common paradigm, I argue, is to draw on Deleuze and Guattari's dialectical theory of the smooth and the striated.

  9. Traitements didactiques preventifs d'un type de conceptions erronees en sciences physiques chez des eleves du secondaire (Preventive Didactic Treatments of a Type of Misconception in Physical Science among Secondary School Students)

    NASA Astrophysics Data System (ADS)

    Blondin, Andre

    In a constructivist framework, an individual's prior knowledge is essential to the construction of new knowledge. Whatever its source (some of this knowledge was developed in class, some through the individual's personal interaction with his or her physical and social environment), this knowledge, once acquired, forms the raw material from which the individual's new conceptions are built. Generally, this influence is considered positive. However, in a school setting where learning certain conceptions embedded in a curriculum and endorsed by an entire community is compulsory, some prior knowledge can hinder the construction of the conceptions required by the community. The literature abounds with such examples. However, some prior knowledge, in itself entirely consistent with the accepted heritage of knowledge, can also hinder the construction of a required conception because it is used in an irrelevant way. Here the literature gives few examples of this type, but we provide some in the theoretical framework, and one of them serves as the basis for our discussion. Indeed, a large proportion of students enrolled in a fourth-year secondary physical science course, answering a problem already solved during the year and given again on a summative examination, "Why does the Moon always show us the same face?", attribute the cause of this phenomenon mainly to the rotation of the Earth on its axis. As the person responsible for teaching this curriculum, several questions came to mind, among others: how, in a constructivist context, is it possible to reduce the impact of this prior knowledge on a student's construction of the solution and thus prevent the formation of a misconception? We tested our hypotheses with the following cohort of students, for whom the same learning conditions were repeated. We used Campbell and Stanley's posttest-only research design. In May, after the point in the curriculum schedule at which the problem is given to the students, we proposed two different ways of reviewing its solution. The students in the first experimental group reviewed it without activation of the anticipated prior knowledge of the Earth's rotation. The students in the second experimental group were confronted, through questions and a simulation, with the fact that the Earth's rotation is not relevant knowledge for solving the problem. The control and experimental groups were chosen at random from the pool of secondary schools of the school board. (Abstract shortened by UMI.)

  10. Étude vibrationnelle du 3,4'-bitriazole et de quelques-uns de ses dérivés C-monosubstitués (Vibrational Study of 3,4'-Bitriazole and Some of Its C-Monosubstituted Derivatives)

    NASA Astrophysics Data System (ADS)

    Ouijja, N.; Guédira, F.; Zaydoun, S.; Aouial, M.; Saidi Idrissi, M.; Lautié, A.

    1999-05-01

    The vibrational study of 3,4'-bitriazole, 5-methyl-3,4'-bitriazole and 5-bromo-3,4'-bitriazole is reported, and an assignment of their fundamentals is proposed on the basis of the existence of only one form in the solid state. The substitution of the hydrogen in position 3 of the 1H triazolic ring by a 4-triazolyl group induces a decrease in the strength of the NH...N hydrogen bond compared with 1,2,4-triazole and its C-monosubstituted derivatives. The introduction of a substituent in position 5 of the biheterocycle increases the strength of the self-association, especially when the substituent is an electron-withdrawing group. Using the νN-H wavenumber values, an estimation of the N...N distances in these derivatives is possible. The distinction between the vibrations of the triazolic ring and those of the 4-triazolyl group seems impossible, probably because of the conjugation of the two rings. The higher frequency obtained for the intercyclic bond stretching mode in BrbTA is explained by stronger conjugation and a shorter C-N bond in this case. L'étude vibrationnelle des 3,4'-bitriazole, 5-méthyl-3,4'-bitriazole et 5-bromo-3,4'-bitriazole a été effectuée et une attribution de leurs vibrations fondamentales a été proposée sur la base de l'existence d'une seule forme à l'état solide. La substitution de l'hydrogène en position 3 du cycle triazolique 1H par un groupement 4-triazolyle entraîne une diminution de la force de la liaison hydrogène NH...N comparativement au 1,2,4-triazole et à ses dérivés C-monosubstitués. L'introduction d'un substituant en position 5 du bihétérocycle augmente la force de l'autoassociation surtout dans le cas où le substituant est un groupement attracteur d'électrons. A partir des fréquences νNH, l'estimation des distances N...N dans ces dérivés a été effectuée. La distinction entre les vibrations du cycle triazolique et celles du groupement 4-triazolyle semble impossible, probablement à cause de la conjugaison des deux cycles. La fréquence plus élevée obtenue pour la vibration de valence de la liaison intercyclique dans le cas du BrbTA est explicable par une conjugaison plus forte et une liaison C-N plus courte dans ce dernier.
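
    The abstract relies on an empirical correlation between νN-H wavenumbers and N...N distances. As a hedged illustration of that step only, the sketch below interpolates a correlation curve; the calibration points are placeholders, not the published correlation, and should be replaced by the calibration data actually used:

        # Hypothetical sketch: estimating d(N...N) from the N-H stretching
        # wavenumber by interpolating an assumed empirical correlation curve.
        import numpy as np

        nu_calib = np.array([2500.0, 2800.0, 3100.0, 3350.0])   # cm^-1, placeholder
        d_calib = np.array([2.70, 2.85, 3.00, 3.15])            # angstroms, placeholder

        def nn_distance(nu_nh):
            """Interpolate the correlation; valid only inside the calibration range."""
            return float(np.interp(nu_nh, nu_calib, d_calib))

        print(nn_distance(3000.0))   # ~2.95 A under these placeholder data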

  11. Prévalence et facteurs associés à l’anémie en grossesse à l’Hôpital Général de Douala (Prevalence of and Factors Associated with Anemia in Pregnancy at the Douala General Hospital)

    PubMed Central

    Tchente, Charlotte Nguefack; Tsakeu, Eveline Ngouadjeu Dongho; Nguea, Arlette Géraldine; Njamen, Théophile Nana; Ekane, Gregory Halle; Priso, Eugene Belley

    2016-01-01

    Introduction: Anemia is a public health problem, predominantly affecting children and women of childbearing age. The objective of the study was to determine the prevalence of and factors associated with anemia among pregnant women at the Douala General Hospital. Methods: This was a cross-sectional study conducted from July 2012 to July 2013. All consenting pregnant women presenting for antenatal consultation who had undergone a complete blood count (CBC) were included. Sociodemographic characteristics, obstetric history and CBC results were recorded on a pre-tested data sheet. Anemia was defined according to WHO criteria. After descriptive statistics, we performed a bivariate analysis using the chi-square test and Fisher's exact test to identify factors associated with anemia. A p-value < 0.05 was considered significant. Results: In total, 415 pregnant women were recruited. The prevalence of anemia was 39.8%. The mean age was 29.89 ± 4.835 years. The mean hemoglobin level was 10.93 ± 1.23. Normochromic normocytic anemia (53.3%) was predominant. Anemia was severe in 2.4% of cases. Anemia in pregnancy was significantly associated with a history of chronic disease (p = 0.02) and of previous anemia in pregnancy (p = 0.003). Anemia was more often observed in the third trimester (p = 0.04), and breastfeeding was protective (p = 0.02). Conclusion: The prevalence of anemia among pregnant women remains high. Emphasis must be placed on better management of chronic diseases in pregnant women and on their postnatal follow-up in order to correct anemia before a subsequent pregnancy. PMID:28292095
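
    For the bivariate step described in the Methods, a minimal sketch with scipy; the 2x2 counts are invented for illustration, not the study's data:

        # Chi-square and Fisher's exact test on a 2x2 table of anemia status
        # against a history of chronic disease (synthetic counts).
        from scipy.stats import chi2_contingency, fisher_exact

        #                  anemia  no anemia
        table = [[30, 20],         # history of chronic disease
                 [135, 230]]       # no such history
        chi2, p_chi2, dof, expected = chi2_contingency(table)
        odds_ratio, p_fisher = fisher_exact(table)
        print(f"chi2 p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}")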

  12. Modélisation de la dynamique de la chaîne peptidique des protéines en solution par RMN à travers les couplages dipolaires (Modeling the Dynamics of the Protein Peptide Chain in Solution by NMR through Dipolar Couplings)

    NASA Astrophysics Data System (ADS)

    Bouvignies, G.; Bernadó, P.; Blackledge, M.

    2005-11-01

    A protein's activity is linked not only to its structure but also to its dynamics, and it is important to know the nature of its motions in order to understand its biological function. Nuclear magnetic resonance is particularly useful for studying the dynamics of a molecule in solution over a very wide range of correlation times. In particular, 15N or 13C spin relaxation gives access to molecular motions with characteristic times between tens of picoseconds and the rotational correlation time of the molecule (around 10 ns for a 20 kDa monomeric protein at 300 K). Relaxation rates depend on the flexibility of each site and can be characterized in terms of amplitude and local characteristic time. The precision of these parameters and their interpretation in terms of function require that the overall reorientation of the molecule be properly taken into account. These experimental methods, briefly presented here, are now part of the standard set of experiments applied to the study of the structure-dynamics relationship of a protein and its partners. Nevertheless, slower motions, between the nanosecond and the millisecond, are more difficult to study, and very little information is available by NMR on the nature of peptide-chain dynamics in this time range. Very recently, new methodologies have been proposed, based on the preferential alignment of a protein with respect to the magnetic field, induced by dissolving the molecule in a very dilute liquid crystal. Under these conditions, conformational changes on slower characteristic times (up to the millisecond) can be studied. We present this technique, and some results, comparing fast (ps-ns) and slower dynamics along the peptide chain of four proteins.
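
    The amplitude/characteristic-time description of spin relaxation referred to above is conventionally cast in the Lipari-Szabo model-free spectral density. A minimal sketch; the formula is standard, but the parameter values are illustrative, not taken from the article:

        # Lipari-Szabo "model-free" spectral density, the usual route from 15N
        # relaxation rates to a per-site order parameter S^2 and local time tau_e.
        import numpy as np

        def j_modelfree(omega, s2, tau_c, tau_e):
            """J(w) = 2/5 [ S^2 tau_c/(1+(w tau_c)^2) + (1-S^2) tau/(1+(w tau)^2) ],
            with 1/tau = 1/tau_c + 1/tau_e."""
            tau = 1.0 / (1.0 / tau_c + 1.0 / tau_e)
            return 0.4 * (s2 * tau_c / (1 + (omega * tau_c) ** 2)
                          + (1 - s2) * tau / (1 + (omega * tau) ** 2))

        omega_N = 2 * np.pi * 60.8e6   # 15N Larmor frequency at 14.1 T (rad/s)
        print(j_modelfree(omega_N, s2=0.85, tau_c=10e-9, tau_e=50e-12))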

  13. Foreword

    NASA Astrophysics Data System (ADS)

    Giard, M.; Ristorcelli, I.

    Guy Serra died prematurely on August 15th, 2000, aged 52. He was one of the most active pioneers in the field of infrared and submillimeter space astronomy. After completing a PhD thesis on gamma-ray astrophysics in 1973, he was among the first to measure the far-infrared dust emission from our Galaxy with the AGLAE balloon-borne experiment. He then devoted his whole career to contributing in a decisive manner to the emergence and achievement of the infrared and submillimeter space program at the French and European levels, with the AROME and PRONAOS balloon-borne experiments and with the satellite missions ISO, ODIN, Planck, and FIRST (which became Herschel). This three-day conference dedicated to Guy Serra was held in Toulouse on June 11-13, 2001. We took time both to remember the legacy of Guy Serra and to discuss current advances and prospects in the field of infrared and submillimeter space astronomy. It was clear to all of us that in this first year of the XXIst century, with the construction of the SIRTF, Planck and Herschel satellites, we were close to entering the golden age of infrared astronomy, which would bring us fabulous new insights into our origins. A Great Humanist. Guy Serra was passionately interested in science and physics. He had such generosity and enthusiasm in sharing with others his very wide-ranging knowledge, his intellectual refinement, and his perceptive views of things that it was a real joy to work with him. His creativity and capacity for hard work were stunning, and extremely motivating. But above all, we deeply appreciated his exceptional human qualities. He showed a deep respect for the views of others and had a great capacity for listening. In particular, he was very concerned with the training of PhD students for, and through, research, and with their future after the defense of their thesis. Guy was also exceptional in his will to communicate with the general public, including very young pupils in primary schools. Beyond his own scientific work, and because he always considered the collective interest a priority, he was someone who thought deeply about astronomy as a science and about its evolution in France. He devoted a lot of energy to such reflections and played an active role in local and national committees. Among the ideas he defended was that the standing of astronomy as a science depends on a unity between modelling and observation. He particularly liked to point out that similarly advanced physics is needed both in the field of instrumentation and in astrophysical modelling. He considered instrumentation an essential component of astronomy, one that had to be continuously developed and to remain part of the astronomer's activity. He also liked to emphasize the importance of the collective aspect in the success of a project, which depends directly on researchers and engineers working together as a team. He was also extremely active in developing interfaces and cooperation with other communities: physicists, chemists, mathematicians, biologists. He considered that this was the best way to trigger great leaps forward for astronomy. Guy Serra was a real pillar for many of us who worked with him. He was a dazzlingly talented friend, passionate not only about astrophysics but also about history, philosophy, and music. Guy was a lover of nature and of life, remarkably altruistic, and always concerned with the collective interest. His sudden departure has left a tremendous empty space.
    The memory of Guy, smiling warmly, his sparkling eyes full of intelligence and sensitivity, shall always remain in our hearts. M. Giard, I. Ristorcelli

  14. Transitions de phase dans l'oxyde d'yttrium-vanadium (Phase Transitions in Yttrium Vanadium Oxide)

    NASA Astrophysics Data System (ADS)

    Roberge, Benoit

    In this thesis, the structural, magnetic, and orbital orders in YVO3 are studied with the help of X-ray diffraction, Raman spectroscopy, and the resonant microwave cavity technique. The primary objective is to observe the evolution of these orders as a function of temperature. The thesis then highlights the coupling between the different orders coexisting in YVO3. X-ray diffraction measurements probe the polycrystalline character of the YVO3 samples. A comparison of our measurements with X-ray diffraction measurements made on YVO3 powder indicates little twinning. Measurements made with the microwave resonance technique follow the evolution of the dielectric constant as a function of temperature. The changes involving the orbital order show up clearly in the dielectric constant at 200 K and at 77 K. The dielectric transition detected at 77 K is a first-order transition. A coupling between dielectric and magnetic properties is observable at the Néel temperature of 114 K. The effect of a static magnetic field on the transition temperature of the orbital order occurring at 77 K is also remarkable. This indicates a magnetodielectric coupling, demonstrating the multiferroic character of YVO3. Finally, a relaxation mechanism that can be modeled with the Havriliak-Negami model is observed below 77 K. Using the Arrhenius model together with the Havriliak-Negami model, the mechanism can be characterized by its activation energy and its relaxation time. Raman spectroscopy measurements follow the evolution of the YVO3 structure as a function of temperature. The two structural changes occurring at 200 K and 77 K are observed. The coupling between the lattice and the orbital order manifests itself as an increase in anharmonicity, reflected in an increase in the intensity of the second- and third-order processes. The various theories explaining how the orbital order interacts with the crystal lattice are discussed, with emphasis on the theory of Van den Brink ['], which best reflects the observations. A comparison of our measurements with other Raman spectroscopy work on YVO3 is also carried out. The coupling between the lattice and the magnetic order is seen in the presence of magnetic excitations in the Raman spectra and in a softening/hardening occurring at the Néel temperature. Granado's theory explaining the hardening/softening phenomenon is discussed.
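
    For readers unfamiliar with the two models named above, a short sketch of a Havriliak-Negami dielectric response combined with an Arrhenius relaxation time; all parameter values are illustrative, not fitted values from the thesis:

        # Havriliak-Negami permittivity and Arrhenius relaxation time.
        import numpy as np

        def havriliak_negami(omega, eps_inf, d_eps, tau, alpha, beta):
            """eps*(w) = eps_inf + d_eps / (1 + (i w tau)^alpha)^beta."""
            return eps_inf + d_eps / (1 + (1j * omega * tau) ** alpha) ** beta

        def tau_arrhenius(T, tau0, Ea_eV):
            """tau(T) = tau0 exp(Ea / kB T)."""
            kB = 8.617e-5                     # Boltzmann constant, eV/K
            return tau0 * np.exp(Ea_eV / (kB * T))

        T = 60.0                              # K, below the 77 K transition
        tau = tau_arrhenius(T, tau0=1e-12, Ea_eV=0.05)
        omega = 2 * np.pi * np.logspace(2, 8, 7)
        print(havriliak_negami(omega, eps_inf=10.0, d_eps=5.0, tau=tau,
                               alpha=0.8, beta=0.6))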

  15. Technique distribuee de gestion de la charge sur le reseau electrique et Ring-Tree: un nouveau systeme de communication P2P (A Distributed Technique for Load Management on the Electric Grid and Ring-Tree: A New P2P Communication System)

    NASA Astrophysics Data System (ADS)

    Ayoub, Simon

    The electricity distribution and transmission network is being modernized in several countries, including Canada. The new generation of this network, called the smart grid, allows, among other things, the automation of generation, distribution, and load management on the customer side. At the same time, smart household appliances equipped with a communication interface for smart grid applications are beginning to appear on the market. These smart appliances could form a virtual community to optimize their consumption in a distributed fashion. Distributed management of these smart loads requires communication among a large number of electrical devices. This is a significant challenge, especially if one does not want to increase the cost of infrastructure and maintenance. In this thesis, two distinct systems were designed: a peer-to-peer communication system, called Ring-Tree, enabling communication among a large number of nodes (up to the order of a million), such as communicating electrical appliances, and a distributed technique for managing load on the electric grid. The Ring-Tree communication system includes a new network topology that had never been defined or exploited before. It also includes algorithms for the creation, operation, and maintenance of this network. It is simple enough to be implemented on controllers attached to devices such as water heaters, storage heaters, electric charging stations, etc. It does not use a centralized server (or only minimally, when a node wants to join the network). It offers a distributed solution that can be deployed without any infrastructure other than the controllers on the target devices. Finally, a response time of a few seconds to reach the whole network can be achieved, which is sufficient for the needs of the target applications. The communication protocols rely on a transport protocol that can be one of those used on the Internet, such as TCP or UDP. To validate the operation of the distributed control technique and the Ring-Tree communication system, a simulator was developed; a water-heater model, as an example load, was integrated into the simulator. Simulation of a community of smart water heaters showed that the load-management technique, combined with energy storage in thermal form, makes it possible to obtain, without affecting user comfort, a variety of consumption profiles, including a uniform consumption profile corresponding to a load factor of 100%. Keywords: Distributed Algorithm, Demand Response, Electric Load Management, M2M (Machine-to-Machine), P2P (Peer-to-Peer), Smart Electric Grid, Ring-Tree, Smart Grid
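
    The abstract does not spell out the Ring-Tree algorithms, so the following is only a hypothetical sketch: a "ring of rings" topology in which each node heads a child ring, with a broadcast that advances one ring per hop. It merely illustrates why a network on the order of a million nodes can be reached in a few hops, and hence a few seconds:

        # Toy "ring of rings" broadcast: NOT the thesis's actual algorithms.
        from collections import deque

        def build_ring_tree(depth, ring_size):
            """Adjacency dict where each node of one level heads a child ring."""
            adj, counter, frontier = {0: []}, 1, [0]
            for _ in range(depth):
                nxt = []
                for head in frontier:
                    ring = list(range(counter, counter + ring_size))
                    counter += ring_size
                    adj[head] = ring              # head can reach its whole ring
                    for n in ring:
                        adj[n] = []
                    nxt.extend(ring)
                frontier = nxt
            return adj

        def broadcast_hops(adj, root=0):
            """BFS: nodes reached and the number of ring hops needed."""
            seen, q, hops = {root}, deque([(root, 0)]), 0
            while q:
                node, d = q.popleft()
                hops = max(hops, d)
                for nb in adj[node]:
                    if nb not in seen:
                        seen.add(nb)
                        q.append((nb, d + 1))
            return len(seen), hops

        adj = build_ring_tree(depth=3, ring_size=100)   # 1 + 100 + 10^4 + 10^6 nodes
        print(broadcast_hops(adj))                      # (1010101, 3)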

  16. Réalisation d'un appareillage à lame vibrante pour l'étude des lignes de flux dans les supraconducteurs à haute température critique (Construction of a Vibrating-Reed Apparatus for the Study of Flux Lines in High-Critical-Temperature Superconductors)

    NASA Astrophysics Data System (ADS)

    Woirgard, J.; Salmon, E.; Gaboriaud, R. J.; Rabier, J.

    1994-03-01

    A very sensitive apparatus using the vibrating-reed technique in a magnetic field is described. This new technique is an internal friction measurement which has been developed and applied to the study of vortex pinning in high-Tc type II superconductors. The vibrating reed is simply used as a sample holder for the superconductor, which can be oriented thin films, bulk samples or powders. The salient feature of this experimental set-up is the excitation mode of the reed, for which the imposed vibration frequency can be freely chosen in the range 10^{-4}-10 Hz. Furthermore, the measurement sensitivity improves on the performance obtained up to now by similar apparatus such as forced torsion pendulums. Damping values corresponding to phase lags between 10^{-5} and 10^{-4} radian can be readily obtained for vibration frequencies in the range 10^{-1}-10 Hz. Some preliminary results show damping peaks which might be due to the so-called melting of the vortex lattice, obtained with 1000 Å thick thin films and with textured bulk samples of YBaCuO. Une nouvelle technique basée sur la mesure du frottement intérieur en vibrations forcées est appliquée à l'étude de l'ancrage des vortex dans les oxydes supraconducteurs à haute température critique. Dans cette méthode la lame, excitée électrostatiquement, voit son rôle limité à celui de porte-échantillon sur lequel peuvent être disposés des couches minces, des échantillons massifs ou des poudres. L'originalité de cet appareillage réside dans la conception du mode d'excitation de la lame : la fréquence d'oscillation forcée peut être choisie dans une large gamme allant de 10^{-4} Hz à quelques dizaines de hertz. D'autre part, la sensibilité de la mesure améliore sensiblement les performances obtenues jusqu'à ce jour en vibrations forcées. Des amortissements correspondant à des déphasages compris entre 10^{-5} et 10^{-4} radian peuvent être facilement mesurés. Les premiers essais réalisés sur une couche mince épitaxiée de 1000 Å d'épaisseur d'YBaCuO ont permis de mettre en évidence un pic d'amortissement, de grande amplitude, qui pourrait être dû à la fusion du réseau de vortex. Prochainement, cet appareillage sera employé pour l'étude de l'ancrage des lignes de flux sur les défauts du réseau cristallin, défauts naturels ou artificiels créés par implantation ionique dans les films minces ou par déformation plastique dans les échantillons massifs.
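
    A sketch of how a phase lag of the quoted order (10^{-5} to 10^{-4} rad) between excitation and response can be extracted by digital lock-in demodulation; the signal parameters are illustrative and the acquisition chain is idealized:

        # Simulate a driven response with a tiny phase lag, then recover the lag
        # by multiplying with quadrature references and averaging.
        import numpy as np

        rng = np.random.default_rng(0)
        fs, f0, T = 10_000.0, 10.0, 100.0       # sample rate, drive freq, duration (s)
        t = np.arange(0, T, 1 / fs)             # 1000 full drive periods
        phase_lag = 5e-5                        # rad, simulated internal friction
        response = np.cos(2 * np.pi * f0 * t - phase_lag) \
                   + 1e-3 * rng.standard_normal(t.size)

        I = 2 * np.mean(response * np.cos(2 * np.pi * f0 * t))   # in-phase part
        Q = 2 * np.mean(response * np.sin(2 * np.pi * f0 * t))   # quadrature part
        print(np.arctan2(Q, I))                 # recovers ~5e-5 rad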

  17. Toxicité sérotoninergique résultant d’une interaction médicamenteuse entre le bleu de méthylène et les inhibiteurs de la recapture de la sérotonine (Serotonergic Toxicity Resulting from a Drug Interaction between Methylene Blue and Serotonin Reuptake Inhibitors)

    PubMed Central

    Charbonneau, Annie

    2013-01-01

    ABSTRACT Background: Methylene blue is used in practice for various medical purposes. Recent data have suggested a potential interaction with serotonin reuptake inhibitors that can lead to serotonin toxicity. Objective: To describe the risk of serotonin toxicity associated with the interaction between methylene blue and serotonin reuptake inhibitors. Data sources: Relevant publications were identified systematically through the MEDLINE (1946 to March 21, 2013) and Embase (1974 to 2013, week 11) search engines, using the following keywords: methylene blue, methylthioninium, monoamine oxidase inhibitors, serotonin reuptake inhibitors, and serotonin syndrome. No restriction on the indication for methylene blue or on language was applied. The references of the publications were also reviewed. Study selection and data extraction: Eighteen case reports and two systematic case series were selected. No randomized clinical trial has yet been published. Data synthesis: The first case report suspecting an interaction between methylene blue and serotonin reuptake inhibitors appeared in 2003. Seventeen other case reports describing the same type of interaction were subsequently published. The two case series pooled data from some 325 parathyroidectomies in which methylene blue had been used as a dye. All 17 patients who developed central nervous system toxicity were taking serotonin reuptake inhibitors preoperatively. Conclusion: When administered concomitantly with serotonin reuptake inhibitors, methylene blue can lead to serotonin toxicity at a dose as low as 0.7 mg/kg. Indeed, it appears to have monoamine oxidase A inhibiting properties. Precautions must be taken to avoid this interaction. PMID:23950608

  18. L'ethique de l'environnement comme dimension transversale de l'education en sciences et en technologies: Proposition d'un modele educationnel (Environmental Ethics as a Transversal Dimension of Science and Technology Education: Proposal of an Educational Model)

    NASA Astrophysics Data System (ADS)

    Chavez, Milagros

    This thesis presents the trajectory and results of a research project whose overall objective is to develop an educational model integrating environmental ethics as a transversal dimension of science and technology education. Faced with the positivist paradigm still dominant in science teaching, it seemed useful to open a space for reflection and to propose, in the form of a formal model, a pedagogical orientation more in resonance with some of the fundamental concerns of our time: in particular, the relationship of humans with their environment and, more specifically, the role of science in shaping that relationship through its contribution to the transformation of living conditions, to the point of compromising natural equilibria. Given this general problem, the objectives of the research are as follows: (1) to define the paradigmatic, theoretical, and axiological elements of the educational model to be constructed, and (2) to define its strategic components. Theoretical and speculative in character, this research adopted the anasynthesis approach, situated within the critical perspective of educational research. The theoretical framework of the thesis was built around four pivotal concepts: educational model, science and technology education, educational transversality, and environmental ethics. These concepts were clarified from a textual corpus; on this basis, theoretical choices were made, from which a prototype of the model was developed. This prototype was then submitted to a double validation (by experts and by a trial run), with the aim of improving it and, from there, constructing an optimal model. The latter has two dimensions: theoretical-axiological and strategic. The first rests on a conception of science and technology education as the appropriation of a cultural heritage, in a critical and emancipatory perspective. In this view, environmental ethics intervenes as a reflexive and existential process concerning our relationship to the environment, capable of being integrated as a transversal dimension of the educational dynamic. To this end, the strategic dimension of the model suggests a transversal approach of the existential type, a global strategy of the dialogical type, and specific pedagogical strategies, including learning strategies and evaluation strategies. The realization of this model highlighted some interesting perspectives, for example: (1) the need to cross the cognitive dimension of educational processes in science and technology with other dimensions of the human being (affective, ethical, existential, social, spiritual, etc.); (2) a vision of science and technology education as an act of fundamental freedom consisting in critically appropriating a certain cultural heritage; (3) a vision of environmental ethics as a process of reflection that confronts us with basic existential questions.

  19. Vecteurs Singuliers des Theories des Champs Conformes Minimales (Singular Vectors of the Minimal Conformal Field Theories)

    NASA Astrophysics Data System (ADS)

    Benoit, Louis

    In 1984, Belavin, Polyakov, and Zamolodchikov revolutionized field theory by exhibiting a new class of theories: two-dimensional quantum field theories invariant under conformal transformations. The algebra of conformal transformations of space-time has a remarkable feature: in two dimensions it possesses an infinite number of generators. This property imposes such strong conditions on the correlation functions that they can be evaluated without any approximation. The fields of conformal theories belong to highest-weight representations of the Virasoro algebra, a central extension of the conformal algebra of the plane. These representations are labeled by h, the conformal weight of their highest-weight vector, and by the central charge c, the factor of the central extension, common to all the representations of a given theory. Minimal conformal theories consist of a finite number of representations. Among them are unitary theories whose representations form the discrete series of the Virasoro algebra; their weight h has the form h_{p,q}(m) = [(p(m+1) - qm)^2 - 1] / (4m(m+1)), where p, q, and m are positive integers and p + q <= m + 1. The integer m parametrizes the central charge: c(m) = 1 - 6/(m(m+1)), with m >= 2. These representations possess an invariant subspace generated by two subrepresentations with h_1 = h_{p,q} + pq and h_2 = h_{p,q} + (m-p)(m+1-q), whose highest-weight vectors are called singular vectors and are denoted |Psi_{p,q}> and |Psi_{m-p,m+1-q}>, respectively. Superconformal theories are a supersymmetric version of conformal theories. Their fields belong to highest-weight representations of the Neveu-Schwarz algebra, one of the two supersymmetric extensions of the Virasoro algebra. Minimal superconformal theories have the same structure as minimal conformal theories. The representations belong to the series h_{p,q} = [(p(m+2) - qm)^2 - 4] / (8m(m+2)), where p, q, and m are positive integers, p and q of the same parity, and p + q <= m + 2. The central charge is given by c(m) = 3/2 - 12/(m(m+2)), with m >= 2. The singular vectors |Psi_{p,q}> and |Psi_{m-p,m+2-q}> have weights h_{p,q} + pq/2 and h_{p,q} + (m-p)(m+2-q)/2, respectively. Singular vectors have zero norm and must be removed from the representations for these to be unitary. This elimination generates (super-)differential equations which depend directly on the explicit form of the singular vectors and which the correlation functions of the theory must obey. Knowledge of these singular vectors is thus intimately tied to the computation of correlation functions. The equations defining the singular vectors form an overdetermined linear system whose number of equations is of the order of N(pq), the number of partitions of the integer pq. Since singular vectors play a central role in conformal theory, it is natural to look for explicit forms for these vectors (or for infinite families of them). We give here the explicit form for the infinite family of singular vectors having one index equal to 1, for the Virasoro and Neveu-Schwarz algebras.
    Since these discoveries, other techniques for constructing singular vectors have been developed, including that of Bauer, Di Francesco, Itzykson, and Zuber for the Virasoro algebra, which directly reproduces the explicit expression of the singular vectors |Psi_{1,q}> and |Psi_{p,1}>. They used the operator product algebra and the fusion of irreducible representations to generate recursion relations producing the singular vectors. In the last chapter of this thesis, we adapt this algorithm to the construction of the singular vectors of the Neveu-Schwarz algebra.
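
    A quick numerical check of the discrete-series formulas above, computing c(m) and the Kac weights h_{p,q}(m); for m = 3 this reproduces the Ising values c = 1/2 and h in {0, 1/16, 1/2}:

        # Central charge and Kac weights of the unitary Virasoro minimal models.
        from fractions import Fraction

        def c(m):
            return Fraction(1) - Fraction(6, m * (m + 1))

        def h(p, q, m):
            return Fraction((p * (m + 1) - q * m) ** 2 - 1, 4 * m * (m + 1))

        m = 3
        print(c(m))                                     # 1/2
        print({(p, q): h(p, q, m)                       # {0, 1/16, 1/2}
               for p in range(1, m) for q in range(1, m + 1) if p + q <= m + 1})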

  20. Complexation des acides aminés basiques arginine, histidine et lysine avec l'ADN plasmidique en solution aqueuse : participation à la capture de radicaux sous irradiation X à 1,5 keV (Complexation of the Basic Amino Acids Arginine, Histidine and Lysine with Plasmid DNA in Aqueous Solution: Contribution to Radical Scavenging under 1.5 keV X-Ray Irradiation)

    NASA Astrophysics Data System (ADS)

    Tariq Khalil, Talat; Taillefumier, Baptiste; Boulanouar, Omar; Mavon, Christophe; Fromm, Michel

    2016-09-01

    The chemical environment of DNA in a biological setting is complex, notably because of the presence of histones, nuclear proteins associated with DNA in approximately equal amounts to form chromatin. Histones carry many positively charged basic arginine and lysine residues, most of which lie on the protruding chains, while DNA carries negative charges on its phosphate groups located all along the double helix. In this study, the complexity of nuclear chromatin structure is first mimicked in aqueous solution by forming complexes between a probe plasmid DNA and the three basic amino acids, Arg, His, and Lys, which, apart from His, are protonated at physiological pH. Free in solution, these amino acids are known to be efficient scavengers of free radicals, notably the hydroxyl radical, thereby conferring protection against indirect effects on DNA under exposure to ionizing radiation. At a fixed concentration, the scavenging capacities σ of the free amino acids for the hydroxyl radical typically rank as σHis ≈ σArg > σLys (σLys ≈ 0.1 × σArg). We measured the rates of single-strand breaks per plasmid and per gray (χ) when aqueous solutions of [amino acid - plasmid DNA] complexes were exposed to ultrasoft X-rays (1.5 keV). At equal concentrations, the three complexed amino acids, present in large excess, do not protect DNA in proportion to their free-solution scavenging capacity; the break rates rank instead as χHis > χArg > χLys (χLys ≈ 0.01 χArg). After detailing the experimental procedure for these measurements, we analyze, on bibliographic grounds, the specific modes of interaction of the basic amino acids with DNA. The specificity of arginine binding to DNA, in particular its propensity to act as a bidentate ligand binding to the bases (mainly G) of DNA, allows us to explain the particularly high single-strand break rates observed with Arg. An intermolecular radical transfer mechanism is suggested for Arg. A broadly similar argument can be made for lysine. For histidine, we suggest some possible pathways that could explain the anomalously high break rates observed, but this will require further experiments.
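
    The scavenging capacity used above is the pseudo-first-order rate σ = k_OH × [S] at which a scavenger S removes hydroxyl radicals. A minimal sketch; the rate constants are order-of-magnitude values of the kind found in standard radiolysis compilations and should be treated as illustrative:

        # Ranking scavenging capacities sigma = k_OH * [S] for the three amino acids.
        k_OH = {"Arg": 3.5e9, "His": 4.8e9, "Lys": 3.5e8}   # M^-1 s^-1, illustrative

        conc = 1e-3                                          # M, same for all three
        sigma = {aa: k * conc for aa, k in k_OH.items()}     # s^-1
        for aa, s in sorted(sigma.items(), key=lambda kv: -kv[1]):
            print(f"{aa}: sigma = {s:.2e} s^-1")   # His ~ Arg >> Lys (~0.1 x Arg)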

  1. Effet Bauschinger lors de la plasticité cyclique de l'aluminium pur monocristallin (Bauschinger Effect in the Cyclic Plasticity of Pure Single-Crystal Aluminium)

    NASA Astrophysics Data System (ADS)

    Alhamany, A.; Chicois, J.; Fougères, R.; Hamel, A.

    1992-08-01

    This paper is concerned with the study of the microscopic mechanisms which control the cyclic deformation of pure aluminium, and especially with the analysis of the Bauschinger effect which appears in aluminium single crystals deformed by cyclic straining. Fatigue tests are performed at room temperature on Al single crystals with the crystal axis parallel to [1̄23], at plastic shear strain amplitudes in the range from 10^{-4} to 3×10^{-3}. Mechanical saturation is not obtained at any strain level. Instead, a hardening-softening-secondary hardening sequence is found. The magnitude of the Bauschinger effect, defined as the difference between the yield stresses in tension and in compression, changes all along the fatigue loop and during the fatigue test. The Bauschinger effect disappears at two points of the fatigue loop, one in the tension part, the other in the compression part. On either side of these points, the Bauschinger effect is inverted. Evolutions of the dislocation arrangement with fatigue conditions can explain the cyclic behaviour of Al single crystals. A heterogeneous dislocation distribution is observed in the cyclically strained metal: dislocation tangles, long dislocation walls and dislocation cell walls, separated by dislocation-poor channels, appear in the material as a function of the cycle number. The long-range internal stresses necessary to ensure the compatibility of deformation between the hard and soft regions control the observed Bauschinger effect. Ce travail s'inscrit dans le cadre de l'étude des mécanismes microscopiques intervenant lors de la déformation cyclique de l'aluminium pur et concerne en particulier l'analyse de l'effet Bauschinger apparaissant au cours de la sollicitation cyclique des monocristaux. L'étude a été menée à température ambiante sur des monocristaux d'aluminium pur orientés pour un glissement simple (axe [1̄23]), à des amplitudes de déformation plastique comprises entre 10^{-4} et quelques 10^{-3}. Nous n'avons pas obtenu de véritable saturation mécanique. Nous sommes en présence d'une séquence durcissement-adoucissement-durcissement secondaire. L'amplitude de l'effet Bauschinger, considéré comme la différence entre les limites élastiques en traction et en compression mesurées selon une procédure appropriée, évolue le long d'une boucle de fatigue et s'annule pour deux points particuliers, l'un en traction, l'autre en compression. De part et d'autre de ces points, le signe de l'effet Bauschinger est inversé. Les microstructures des états fatigués sont caractérisées par une répartition hétérogène des dislocations constituée d'amas, de murs ou de parois, suivant le degré de déformation cyclique, séparés par des zones à faible densité de dislocations. Les contraintes internes liées aux incompatibilités de déformation résultant de cette répartition hétérogène des dislocations sont à l'origine de l'effet Bauschinger observé dans les monocristaux. Ces contraintes et l'évolution de la quantité de cellules de dislocations avec la fatigue expliquent le durcissement secondaire.
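
    A minimal sketch of how a back stress X (the long-range internal stress invoked above) produces a Bauschinger effect in a 1D linear kinematic-hardening picture, where the forward and reverse yield stresses differ by 2X; the numbers are illustrative, not the paper's measurements:

        # Back stress from linear kinematic hardening and the resulting yield asymmetry.
        sigma_y = 10.0      # MPa, initial yield stress (soft channels), assumed
        C = 2000.0          # MPa, kinematic hardening modulus, assumed
        eps_p = 1e-3        # plastic strain accumulated on the forward leg

        X = C * eps_p                        # back stress built up by the hard walls
        yield_forward = sigma_y + X          # stress to keep yielding forward
        yield_reverse = X - sigma_y          # stress at which reverse yield starts
        bauschinger = yield_forward - abs(yield_reverse)   # = 2*X when X < sigma_y
        print(yield_forward, yield_reverse, bauschinger)   # 12.0 -8.0 4.0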

  2. Micromechanisms of intergranular brittle fracture in intermetallic compounds

    NASA Astrophysics Data System (ADS)

    Vitek, V.

    1991-06-01

    Grain boundaries in intermetallic compounds such as Ni3Al are inherently brittle. The reason is usually sought in grain boundary cohesion, but in metals even brittle fracture is accompanied by some local plasticity, and thus not only cohesion but also dislocation mobility in the boundary region needs to be studied. We first discuss here the role of an irreversible shear deformation at the crack tip during microcrack propagation, assuming that these two processes are concomitant. It is shown that a pre-existing crack cannot propagate in a brittle manner once dislocation emission occurs. However, if a microcrack nucleates during loading, it can propagate concurrently with the development of the irreversible shear deformation at the crack tip. The latter is then the major energy-dissipating process. In the second part of this paper, we present results of atomistic studies of grain boundaries in Ni3Al and Cu3Au which suggest that substantial structural differences exist between strongly and weakly ordered L12 alloys. We then discuss the consequence of these differences for intergranular brittleness in the framework of the above model for microcrack propagation. On this basis, we propose an explanation for the intrinsic intergranular brittleness of some L12 alloys and relate it directly to the strength of ordering. Les joints de grains dans les composés intermétalliques de type Ni3Al sont de nature fragile. L'origine de cette fragilité est habituellement recherchée dans la cohésion des joints de grains. Dans les métaux, cependant, même la rupture fragile est accompagnée d'une certaine déformation plastique locale, de telle sorte que non seulement la cohésion mais aussi la mobilité des dislocations près des joints doit être étudiée. Nous discutons d'abord le rôle d'une déformation en cisaillement irréversible en tête de fissure pendant la propagation de cette fissure, en supposant que les deux processus sont concomitants. Nous montrons qu'une fissure préexistante ne peut pas se propager de manière fragile, une fois que l'émission de dislocations se produit. Cependant, si une microfissure apparaît pendant le chargement, elle peut se développer concurremment avec le développement d'un cisaillement irréversible en tête de fissure. Ce dernier est alors le principal mécanisme dissipatif d'énergie. Dans la deuxième partie de cet article, nous présentons des résultats d'études atomistiques de joints de grains dans Ni3Al et Cu3Au, suggérant qu'il existe des différences de structure substantielles entre les alliages L12 fortement et faiblement ordonnés. Nous discutons ensuite la conséquence de ces différences pour la fragilité intergranulaire, à l'aide du modèle ci-dessus pour la propagation des microfissures. Sur cette base, nous proposons une explication pour la fragilité intergranulaire intrinsèque de quelques alliages L12, et nous la relions directement au degré d'ordre.

  3. Formulation, caracterisation, modelisation et prevision du comportement thermomecanique des pieces plastiques et composites de fibres de bois : Application aux engrenages (Formulation, Characterization, Modeling and Prediction of the Thermomechanical Behavior of Plastic and Wood-Fiber Composite Parts: Application to Gears)

    NASA Astrophysics Data System (ADS)

    Mijiyawa, Faycal

    This study adapts wood-fiber thermoplastic composite materials to gears, produces a new generation of gears, and predicts the thermal behavior of these gears. After a broad literature review on thermoplastics (polyethylene and polypropylene) reinforced with wood fibers (birch and aspen), on formulation, and on the thermomechanical behavior of plastic-composite gears, a connection was drawn with the present doctoral thesis. Indeed, many studies on the formulation and characterization of wood-fiber composites have already been carried out, but none has addressed gear manufacturing. The various formulation techniques drawn from the literature made it easier to obtain a composite material with nearly the same properties as the plastics (nylon, acetal, ...) used in gear design. The formulation of the wood-fiber-reinforced thermoplastics was carried out at the Centre de recherche en matériaux lignocellulosiques (CRML) of the Université du Québec à Trois-Rivières (UQTR), in collaboration with the Department of Mechanical Engineering, by compounding the composites with two rolls on a Thermotron-C.W. Brabender machine (model T-303, Germany); parts were then fabricated by compression molding. The thermoplastics used in this thesis are polypropylene (PP) and high-density polyethylene (HDPE), reinforced with birch and aspen fibers. Because of the incompatibility between wood fiber and thermoplastic, a chemical treatment with a coupling agent was applied to increase the mechanical properties of the composites. For the polypropylene/wood composites: (1) The elastic moduli and tensile strengths of the PP/birch and PP/aspen composites increase linearly with fiber content, with or without coupling agent (maleated polypropylene, MAPP). Moreover, adhesion between the wood fibers and the plastic is improved using only 3% MAPP, leading to an increase in maximum stress, although no significant effect is observed on the elastic modulus. (2) The results show that, in general, the tensile properties of the polypropylene/birch, polypropylene/aspen, and polypropylene/birch/aspen composites are very similar. Wood-plastic composites (WPCs), particularly those containing 30% and 40% fibers, have higher elastic moduli than some plastics used in gear applications (e.g. nylon). For the polyethylene/wood composites, with 3% maleated polyethylene (MAPE): (1) Tensile tests: the elastic modulus rises from 1.34 GPa to 4.19 GPa for the HDPE/birch composite and to 3.86 GPa for the HDPE/aspen composite. The maximum stress rises from 22 MPa to 42.65 MPa for HDPE/birch and to 43.48 MPa for HDPE/aspen. (2) Flexural tests: the elastic modulus rises from 1.04 GPa to 3.47 GPa for HDPE/birch and to 3.64 GPa for HDPE/aspen. The maximum stress rises from 23.90 MPa to 66.70 MPa for HDPE/birch and to 59.51 MPa for HDPE/aspen.
    (3) Poisson's ratio, determined by acoustic pulse, is around 0.35 for all HDPE/wood composites. (4) TGA thermal degradation testing shows that the composite materials exhibit a thermal stability intermediate between the wood fibers and the HDPE matrix. (5) Wettability (contact angle) testing shows that adding wood fibers does not significantly decrease water contact angles, because the wood fibers (birch or aspen) appear to be enveloped by the matrix at the composite surface, as shown by scanning electron microscope (SEM) images. (6) The Lavengood-Goettler model best predicts the elastic modulus of the thermoplastic/wood composites. (7) HDPE reinforced with 40% birch is best suited for gear manufacturing, because shrinkage during mold cooling is smaller. The numerical simulation seems to predict the equilibrium temperature well at 500 rpm, whereas at 1000 rpm the model diverges. (Abstract shortened by ProQuest.)
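
    The Lavengood-Goettler form credited above is not given in the abstract, so the sketch below uses the related Halpin-Tsai short-fiber equation as a stand-in to show the kind of modulus prediction involved; the fiber modulus, aspect ratio, and volume fraction are assumed values, not the thesis's inputs:

        # Halpin-Tsai estimate of a short-fiber composite's longitudinal modulus.
        def halpin_tsai(E_m, E_f, v_f, xi):
            eta = (E_f / E_m - 1) / (E_f / E_m + xi)
            return E_m * (1 + xi * eta * v_f) / (1 - eta * v_f)

        E_hdpe = 1.34      # GPa, neat HDPE modulus quoted above
        E_fiber = 12.0     # GPa, assumed effective birch-fiber modulus
        xi = 2 * 10        # shape factor ~ 2 x aspect ratio, assumed l/d = 10
        v_f = 0.33         # volume fraction roughly matching 40 wt% fiber (assumed)
        print(halpin_tsai(E_hdpe, E_fiber, v_f, xi))   # ~4.15 GPa vs 4.19 GPa measured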

  4. Minimum Propellant Low-Thrust Maneuvers near the Libration Points

    NASA Astrophysics Data System (ADS)

    Marinescu, A.; Dumitrache, M.

    The impulse technique can certainly bring a vehicle onto orbits around the libration points or close to them. The question that arises is: by what means can the vehicle then reach the libration points themselves? A first investigation carried out in this paper can give an answer: the use of low-thrust propulsion, which, in addition, can bring the vehicle from the libration points near to or into orbits around these points. This aspect is considered in the present paper, where for the applications we have considered the transfer to orbits about the equidistant point L4 and the collinear point L2 of the Earth-moon system. This transfer maneuver can be used to insert a satellite into libration-point orbits. In the Earth-moon system, the points L4 and L5 have potential interest for the establishment of transponder satellites for interplanetary tracking, because a vehicle at one of the equidistant points is quite stable and, if perturbed, remains in their vicinity. In contrast, a vehicle at one of the collinear points is quite unstable: it will oscillate along the Earth-moon axis with increasing amplitude and gradually escape from the libration point. Let us assume that a space vehicle equipped with low-thrust propulsion is near a libration point L. We consider planar motion in the restricted three-body frame, in the rotating system, where the Earth-moon distance D = 1. The unit of time T is the period of the moon's orbit divided by 2π and multiplied by the square root of one plus the moon/Earth mass ratio, and the unit of mass is the Earth's mass. With these conventions, the equations of motion of the vehicle equipped with a low-thrust propulsion installation have been established in the linear approximation near the libration point. The parameters of the motion at the beginning and the end of these maneuvers being known, the variational problem has been formulated as a Lagrange-type problem with fixed extremities. We established the differential equations of the extremals, and by integrating them we obtain the desired extremals, which characterize the minimum-propellant optimal transfer maneuvers from the libration points to their orbits. By means of the Legendre condition for a weak minimum and the Weierstrass condition for a strong minimum, it is demonstrated that the variational problem so formulated is well posed and is indeed a problem of minimum. The integration of the extremals' system of differential equations does not lead to easily obtained analytical solutions, and for this reason we turned to numerical integration. The problem is a two-point boundary-value problem, because the motion parameter values are prescribed at the beginning and end of the maneuver (the maneuver duration coincides with the burn duration), while the values of the Lagrange multipliers are not specified at the beginning and end of the maneuver. To determine the velocities at any point of the orbits about the libration points L4 and L2, a computer program was developed that integrates the equations of motion without thrust acceleration and requires the coordinates and velocities to repeat after one revolution period; with it, the velocities at the apoapses A and A' were calculated. With these specifications, the final conditions (at the end of the maneuver) could be established, and the optimal transfer parameters at the specified points could be determined.
    The calculations performed for the transfer from the libration points L4 and L2 to their orbits show that the velocities of evolution on the orbits are in general small, the velocities on the L2 orbits being greater than the velocities on L4 orbits having the same semimajor axis. This is explicable because the period of evolution on orbits about the libration point L4 is greater than the period of orbits about the libration point L2. For the transfer to the apoapsis of both orbits (the points A and A'), one can remark that the thrust accelerations are greater for orbits around the libration point L2 than for orbits of the same semimajor axis around the libration point L4 (maneuver duration = 10^6 s = 11.574 days for L4 and = 10^5 s = 1.157 days for L2). Considering orbits around the libration points L4 and L2 with semimajor axes between 150 and 15000 km, the components of the thrust acceleration have values between 10^-2 and 10^-5 m/s^2, which lies within the performance range of low-thrust propulsion installations (the D, T units have been converted to m, s).
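
    A minimal sketch of the dynamical setting described above: the planar circular restricted three-body equations in the rotating frame (nondimensional units, Earth-moon mass ratio), with an optional constant low-thrust acceleration. Integrating from a point slightly displaced from L4 illustrates the stability noted in the abstract; the parameter values are illustrative:

        # Planar CR3BP in the rotating frame, with optional thrust (ax, ay).
        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 0.01215          # moon/(Earth+moon) mass ratio
        ax, ay = 0.0, 0.0     # low-thrust acceleration components (set >0 to maneuver)

        def crtbp(t, s):
            x, y, vx, vy = s
            r1 = np.hypot(x + mu, y)          # distance to Earth
            r2 = np.hypot(x - 1 + mu, y)      # distance to moon
            xdd = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 \
                  - mu * (x - 1 + mu) / r2**3 + ax
            ydd = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3 + ay
            return [vx, vy, xdd, ydd]

        L4 = np.array([0.5 - mu, np.sqrt(3) / 2])
        s0 = [L4[0] + 1e-3, L4[1], 0.0, 0.0]  # small displacement from L4
        sol = solve_ivp(crtbp, (0, 50), s0, rtol=1e-9, atol=1e-12)
        # Excursion from L4 stays bounded (L4 is stable at this mass ratio):
        print(np.max(np.hypot(sol.y[0] - L4[0], sol.y[1] - L4[1])))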

  5. Erratic boulders in Switzerland, a geological and cultural heritage

    NASA Astrophysics Data System (ADS)

    Reynard, Emmanuel

    2015-04-01

    Erratic boulders are stones transported over quite long distances by glaciers, which differ from the type of rock on which they rest. They range from the size of pebbles to large boulders weighing several thousand tons. Erratic boulders are significant geosites (Reynard, 2004) for several reasons. (1) First, they are indicators of former glacier extensions, marking glaciers' paths, sizes and volumes. In Switzerland, they allowed mapping the extension of large Alpine glaciers (the Rhine and Rhone glaciers, in particular) and their retreat stages (e.g. the Monthey erratic boulders that mark an important lateglacial stage of the Rhone glacier). Crystalline erratic boulders along the Jura range (limestone mountains) were used to map the altitude reached by the Rhone glacier during the two last glaciations. Precise mapping of the distribution of crystalline and limestone boulders also enabled mapping local Jura glacier recurrences after the Rhone glacier retreat. (2) During the last decades, several erratic boulders were used for cosmogenic nuclide exposure dating, which allowed impressive advances in palaeoclimatic research. (3) Erratic blocks also have an ecological interest in that they "have transported" specific habitats into areas far from their origin (e.g. acid crystalline rocks and soils in limestone areas such as the Jura). For all these reasons, several erratic boulders were classified in the inventory of Swiss geosites. Erratic boulders also have a significant cultural value (Lugon et al., 2006). (1) The Glacier Garden in Lucerne was discovered in 1872. It comprises various surfaces of "roches moutonnées", potholes and large erratic blocks that document the presence of the Reuss glacier. Considered a natural monument, it is now one of the most famous tourist attractions of Lucerne and Central Switzerland. (2) The Pierre Bergère stone, situated in Salvan (Mont-Blanc massif, South-western Switzerland), is the place where the future Nobel laureate Guglielmo Marconi made his first wireless experiments in the late 19th century. An interpretive panel explaining the origin of the block was posted near the site along a cultural path created by the Marconi Foundation. (3) The Pierre des Marmettes, in Monthey, is one of the key sites where the nature conservation movement was initiated in the first decade of the 20th century. The block is the property of the Swiss Academy of Sciences and was chosen as an emblematic site for celebrating the 200 years of the Academy in 2015. Moreover, in several cantons the protection of erratic blocks was the first initiative for nature conservation. (4) Several blocks were dedicated or offered to famous scientists (De Charpentier, Agassiz, Studer, Venetz) involved in the development of glaciology during the 19th century. Their names (e.g. Agassiz Block, Studer Block, Venetz Block) recall this important period in the history of Swiss geosciences. In fact, several of these scientists, in particular Jean de Charpentier, not only demonstrated the glacial origin of these blocks but also used them as a proof of former glacial extensions. (5) Finally, several blocks have a symbolic (most of them have a name, several refer to legends), mythical, religious or archaeological value, with the presence of petroglyphs. This communication will focus on the cultural value of erratic boulders, in particular for the nature conservation movement and for the history of glaciology and geosciences, and will propose a strategy for their geotourist promotion.
References Lugon R., Pralong J.-P., Reynard E. (2006). Patrimoine culturel et géomorphologie: le cas valaisan de quelques blocs erratiques, d'une marmite glaciaire et d'une moraine. Bull. Murithienne, 124, 73-87. Reynard E. (2004). Protecting Stones: conservation of erratic blocks in Switzerland. In: Prikryl R. (ed.) Dimension Stone 2004. New perspectives for a traditional building material, Leiden, Balkema, 3-7.

  6. Vortex lines in layered superconductors. II. Pinning and critical currents in high temperature superconductors

    NASA Astrophysics Data System (ADS)

    Manuel, P.

    1994-02-01

    In this article, a qualitative survey is given of the various phenomena which influence the critical current of high temperature superconductors. The critical current is defined as a property related to a non-zero electric field criterion, the level of which is fixed by experimental considerations or by the efficiency requirements of applications. The presentation is restricted to the extrinsic intragranular critical current, which depends in a complex way on the interplay between the characteristics of pinning centres and the properties of the vortex lattice. The discussion is focussed on the configuration B // c, which contains the main elements of this problem. Differences of behaviour between Y(123) and BSCCO (Bi(2212) or Bi(2223)) are analysed in the context of their respective anisotropy factors. Possible regimes for pinning and creep are discussed in various temperature domains. From critical current results, a strong pinning regime is found to occur in BSCCO, whereas the pinning strength in Y(123) is still an open question. The thermal decrease of the critical current allows a collective creep regime to appear in both materials, but in different temperature ranges. The disappearance of correlation effects near the irreversibility line results in a fall of the effective pinning energy. We show that in BSCCO, the effective pinning energy deduced from experimental results is not in agreement with pinning by randomly dispersed oxygen vacancies. Finally, we briefly describe the microstructures which could allow a more efficient pinning in future materials. On effectue une présentation qualitative des divers phénomènes qui contrôlent la valeur du courant critique dans les supraconducteurs à haute température. La notion de courant critique qui est utilisée est reliée à un critère de champ électrique non nul, fixé par des considérations expérimentales ou des exigences de rendement pour les applications. On se restreint au problème des courants critiques intragranulaires d'origine extrinsèque, qui dépendent de façon complexe des caractéristiques d'ancrage des défauts présents dans le matériau et des propriétés du réseau de vortex. On privilégie la configuration de champ B // c qui est la plus révélatrice à cet égard. On analyse les différences de comportement entre les composés Y(123) et BSCCO (Bi(2212) ou Bi(2223)) en liaison avec leurs degrés d'anisotropie respectifs. Les différents régimes d'ancrage et de « creep » possibles pour ces composés sont examinés en fonction de la température. Les courants critiques obtenus pour BSCCO semblent correspondre à un régime d'ancrage fort, alors que la question reste ouverte pour Y(123). La décroissance en température du courant critique expérimental suscite l'apparition d'un régime de « creep » collectif pour ces deux composés, avec cependant des différences notables sur la position et l'étendue des domaines correspondants. Au voisinage de la ligne d'irréversibilité, la disparition progressive des corrélations entre vortex provoque une chute de l'énergie d'ancrage effective. Dans BSCCO, celle-ci ne semble pas compatible avec l'hypothèse d'un ancrage par des lacunes d'oxygène réparties de façon aléatoire. On donne en conclusion quelques indications concernant les microstructures susceptibles d'améliorer les propriétés d'ancrage des futurs matériaux.
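
    A sketch of the electric-field-criterion definition of the critical current used above, with the common empirical power law E = Ec (J/Jc)^n; all values are illustrative:

        # Jc as the current density at which E(J) crosses the chosen criterion Ec.
        import numpy as np

        Ec = 1e-4            # V/m, i.e. the common 1 uV/cm criterion
        n = 20               # power-law exponent (steepness of the transition)
        Jc = 1e9             # A/m^2, "true" critical current density of the sample

        def E_of_J(J):
            return Ec * (J / Jc) ** n

        # Recover Jc from a simulated E(J) curve by interpolating where E = Ec:
        J = np.logspace(8, 9.2, 200)
        Jc_est = np.interp(Ec, E_of_J(J), J)   # E(J) is increasing, so interp works
        print(f"{Jc_est:.3e} A/m^2")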

  7. Analyse numérique de la microplasticité aux joints de grains dans les polycristaux métalliques CFC (Numerical Analysis of Grain-Boundary Microplasticity in FCC Metallic Polycrystals)

    NASA Astrophysics Data System (ADS)

    Andriamisandratra, Mamiandrianina

    Fatigue failure still affects many metallic parts subjected in service to repetitive loading. At the scale of the microstructure, grain boundaries are known to play an important role in the fatigue resistance of the material through the hardening they confer. However, grain boundaries themselves, or the zone in their vicinity, have often been identified as fatigue crack initiation sites, particularly in face-centered cubic (FCC) metals. In order to characterize the micromechanical behavior near different types of grain boundary, the interface behavior under uniaxial monotonic tension was modeled by the finite element method using a crystal plasticity law. Several bicrystalline crystallographic configurations were then simulated and their behavior analyzed under monotonic axial tensile loading. The validity of the model was restricted to small strains (<5%). Four important criteria governing crystal mechanical behavior were identified: the elastic stiffness, the Schmid factors of the two most favorable slip systems, and the ratio between these two highest Schmid factors, which reflects the propensity for single or multiple slip. Tensile simulations on single crystals thus made it possible to understand the individual influence of each criterion on the macroscopic (stresses and strains) and microscopic (crystallographic slip) behavior. The bicrystal computations then revealed the particular activation, at the grain boundary, of certain slip systems that are a priori unfavorable. This phenomenon was associated with the need to ensure mechanical compatibility of deformation on either side of the interface. The strain profile along the longitudinal direction of the specimen showed a systematic drop in strain at the boundary, whose intensity increases with the angular misorientation between the two grains. The heterogeneity of strain within each cross-section of the specimen is mainly related to the strongly anisotropic character of plasticity and proves more pronounced when deformation is accommodated by single slip. Finally, a bicrystal case exhibiting microscopic compatibility of the slip traces in the grain boundary plane was studied; however, no correlation with the slip profile was found, a macroscopic cause being the more likely origin of the observed profile. Keywords: grain boundaries, localization, crystal plasticity, finite elements, bicrystal.
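
    As a minimal illustration of the Schmid-factor criteria discussed above (a sketch under standard assumptions, not code from the thesis), the following Python fragment enumerates the 12 octahedral slip systems of an FCC crystal and ranks their Schmid factors for a uniaxial tension axis expressed in the crystal frame; the ratio of the two largest factors indicates the propensity for single versus multiple slip:

    import numpy as np

    def fcc_slip_systems():
        """Return the 12 FCC octahedral slip systems as unit (plane normal, direction) pairs."""
        planes = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
        directions = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]
        systems = []
        for n in planes:
            n = np.array(n, float)
            for d in directions:
                d = np.array(d, float)
                if abs(n @ d) < 1e-12:  # the slip direction must lie in the slip plane
                    systems.append((n / np.linalg.norm(n), d / np.linalg.norm(d)))
        return systems  # 3 in-plane <110> directions per {111} plane -> 12 systems

    def schmid_factors(axis):
        """Schmid factors m = |cos(phi) cos(lambda)| for uniaxial tension along `axis`."""
        t = np.array(axis, float)
        t /= np.linalg.norm(t)
        return sorted((abs((n @ t) * (d @ t)) for n, d in fcc_slip_systems()), reverse=True)

    m = schmid_factors([1, 2, 3])   # example: tension along a [123]-type crystal direction
    print(f"m1 = {m[0]:.3f}, m2 = {m[1]:.3f}, m2/m1 = {m[1]/m[0]:.3f}")

    For the [123] axis this returns the classic maximum Schmid factor of 0.467; a ratio m2/m1 close to 1 signals a propensity for multiple slip, while a ratio well below 1 favors single slip.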

  8. Etude du comportement et de la modélisation viscoplastique du zircaloy 4recristallisé sous chargements monotones et cycliques uni et multiaxes

    NASA Astrophysics Data System (ADS)

    Delobelle, P.; Robinet, P.

    1994-08-01

    The results of experiments performed on a recrystallized zircaloy 4 alloy in the intermediate temperature domain 20 ≤ T ≤ 400 °C are presented. To characterize the anisotropy, especially at 350 °C, tests were made under both monotonic and cyclic uni- and bidirectional loadings, i.e. tension-compression, tension-torsion and tension-internal pressure tests. The different anisotropy coefficients, and especially R^p = ε^p_θθ / ε^p_zz, seem to be temperature independent. An important feature of the behavior of this alloy in the neighbourhood of 300 °C is attributed to dislocation-point defect interactions (dynamic strain aging), a phenomenon often observed in solid solutions. For 2D cyclic non-proportional loadings it is shown that a weak supplementary hardening appears, which is a function of the degree of the phase lag. We propose to particularize and apply to the considered alloy a unified viscoplastic model with internal variables, as the model has already been developed and identified elsewhere for other, isotropic materials. From a general point of view, the anisotropy is introduced into the model by four tensors of rank 4: [M] is assigned to the flow directions, [N] to the linear parts of the kinematic hardening variables, and [Q], [R] respectively to the dynamic and static recoveries of these tensorial variables. This phenomenological formulation leads to a correct representation of the set of experimental results presented at 350 °C, which provides an a posteriori confirmation of the formalism used.
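
    Schematically (an illustrative form for this family of unified models, not the authors' exact equations; C, γ and r(·) are generic material parameters and functions), the four fourth-rank tensors enter as follows: [M] defines the anisotropic norm driving the flow, [N] the linear part of the kinematic hardening, and [Q] and [R] its dynamic and static recovery terms:

    $$\dot{\boldsymbol{\varepsilon}}^{p} = \frac{3}{2}\,\dot{p}\,\frac{[M]:(\boldsymbol{\sigma}'-\mathbf{X}')}{J_{M}(\boldsymbol{\sigma}'-\mathbf{X}')},\qquad J_{M}(\mathbf{s}) = \sqrt{\tfrac{3}{2}\,\mathbf{s}:[M]:\mathbf{s}},$$

    $$\dot{\mathbf{X}} = \tfrac{2}{3}\,C\,[N]:\dot{\boldsymbol{\varepsilon}}^{p} \;-\; \gamma\,\dot{p}\,[Q]:\mathbf{X} \;-\; r\big(J(\mathbf{X})\big)\,[R]:\mathbf{X},$$

    where the three terms of the kinematic variable are, in order, linear hardening, dynamic recovery (proportional to the accumulated plastic strain rate ṗ) and static (time) recovery; the isotropic case is recovered when all four tensors reduce to the identity.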

  9. Évaluation d'une approche pédagogique respectant les façons d'apprendre des filles en sciences et en TIC en 9e année au Nouveau-Brunswick (Evaluation of a Pedagogical Approach Respecting Girls' Ways of Learning in Science and ICT in Grade 9 in New Brunswick)

    NASA Astrophysics Data System (ADS)

    Lirette-Pitre, Nicole T.

    2009-07-01

    Girls' academic success increasingly leads them to pursue postsecondary education and to enter professions demanding a high level of scientific knowledge and expertise. Nevertheless, very few girls still consider a career in science (chemistry and physics), engineering or ICT (information and communication technology), that is, a career tied to the new economy. For many girls, science and ICT are not school subjects they find interesting, even when they do very well in them. These girls admit that their learning experiences in science and ICT allowed them neither to develop an interest nor to feel confident in their ability to succeed in these subjects. Consequently, few girls choose to pursue postsecondary studies in these disciplines. Social cognitive career theory was chosen as the theoretical model for better understanding which variables come into play when girls choose their careers. Our study concerns the design and the evaluation of the effectiveness of instructional material conceived specifically to improve the learning experiences of Grade 9 girls in science and ICT in New Brunswick. The pedagogical approach adopted in our material implemented teaching strategies drawn from the best practices we identified, aimed particularly at increasing girls' sense of self-efficacy and their interest in these disciplines. This material, available on the Internet at http://www.umoncton.ca/lirettn/scientic, is directly linked to the Grade 9 natural science curriculum of New Brunswick. The evaluation of the effectiveness of our material proceeded in two main methodological stages: 1) evaluation of the usability and user-friendliness of the material, and 2) evaluation of the effect of the material on various variables related to girls' interest and sense of self-efficacy in science and ICT. This research was conducted within a pragmatic research paradigm; pragmatism guided our choices of research design and techniques, combining qualitative and quantitative methods, particularly for data collection and analysis. The data collected in the first stage, the evaluation of usability and user-friendliness by science teachers and by the girls, revealed that the material is highly usable and user-friendly; a few small improvements will nonetheless be made in a subsequent version to further ease navigation. As for the evaluation of the material's effects on the variables related to self-efficacy and interest during the quasi-experimental stage, our qualitative data indicated that the material had positive effects on the self-efficacy and interest of the girls who used it; our quantitative data, however, did not allow us to infer a direct causal link between use of the material and an increase in girls' self-efficacy and interest in science and ICT. In light of the results obtained, we concluded that the material had the intended effects. We therefore recommend creating and using material of this kind in all science classes from Grade 6 to Grade 12 in New Brunswick.

  10. L'étude de l'InP et du GaP suite à l'implantation ionique de Mn et à un recuit thermique (Study of InP and GaP Following Mn Ion Implantation and Thermal Annealing)

    NASA Astrophysics Data System (ADS)

    Bucsa, Ioan Gigel

    This thesis is devoted to the study of InMnP and GaMnP materials fabricated by ion implantation and thermal annealing. More precisely, we investigated the possibility of forming, by ion implantation, homogeneous materials (alloys) of InMnP and GaMnP containing 1 to 5 atomic % of Mn that would be in a ferromagnetic state, for possible applications in spintronics. In an introductory first chapter we give the motivations for this research and review the literature on the subject. The second chapter describes the principles of ion implantation, the technique used to fabricate the samples; the effects of the energy, fluence and direction of the ion beam on the implantation profile and on damage formation are highlighted, and information on the substrates used for implantation is also given there. The experimental techniques used for the structural, chemical and magnetic characterization of the samples, together with their limitations, are presented in the third chapter. Some theoretical principles of magnetism needed to understand the magnetic measurements are given in chapter 4. The fifth chapter is devoted to the morphology and magnetic properties of the substrates used for implantation, and the sixth chapter to the study of Mn-implanted samples that have not undergone thermal annealing. In particular, we show in that chapter that Mn implantation above 10^16 ions/cm^2 amorphizes the implanted part of the material, and that the implanted Mn is distributed in depth along a Gaussian profile. Magnetically, the implanted atoms are in a paramagnetic state between 5 and 300 K, with spin 5/2. In chapter 7 we present the properties of samples annealed at low temperatures: in these samples the implanted layer is polycrystalline and the Mn atoms remain paramagnetic. In chapters 8 and 9, the most extensive ones, we present the results of measurements on samples annealed at high temperatures: Mn-implanted InP and GaP in chapter 8, and InP co-implanted with Mn and P in chapter 9. In chapter 8 we show that high-temperature annealing leads to epitaxial recrystallization of InMnP and GaMnP; most of the Mn atoms also move toward the surface because of a segregation effect. In the Mn-rich surface regions, XRD and TEM measurements identify the formation of MnP and of crystalline In, and magnetic measurements also identify the presence of ferromagnetic MnP. In addition, these measurements show that about 60% of the implanted Mn is in a paramagnetic state, with a spin value reduced with respect to that found in the unannealed samples. In the InP samples co-implanted with Mn and P, recrystallization is only partial, but the surface segregation of Mn is much reduced; in this case more than 50% of the Mn forms MnP particles and the remainder is paramagnetic with spin 5/2, diluted in the InP matrix. Finally, in the last chapter, 10, we present the main conclusions reached and discuss the results and their implications.
Keywords: ion implantation, InP, GaP, amorphization, MnP, segregation, co-implantation, polycrystalline layer, paramagnetism, ferromagnetism.
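
    The Gaussian depth profile mentioned above is the standard first-order description of an implanted species (a textbook relation, not a result specific to this thesis): for a fluence Φ, a projected range R_p and a straggle ΔR_p,

    $$N(x) = \frac{\Phi}{\sqrt{2\pi}\,\Delta R_{p}}\,\exp\!\left[-\frac{(x-R_{p})^{2}}{2\,\Delta R_{p}^{2}}\right],\qquad N_{\max} = \frac{\Phi}{\sqrt{2\pi}\,\Delta R_{p}} \approx \frac{0.4\,\Phi}{\Delta R_{p}},$$

    so that, for instance, a fluence of 10^16 ions/cm^2 with ΔR_p ≈ 50 nm gives a peak concentration of order 10^21 cm^-3, i.e. a few atomic percent in InP or GaP, consistent with the 1-5% range targeted here.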

  11. Cartographie T1 (T1 Mapping)

    NASA Astrophysics Data System (ADS)

    Cote, Jean-Charles

    T1 mapping by stimulated-echo and Look-Locker sequences are the approaches most commonly used to measure T1 relaxation times in magnetic resonance imaging (MRI). Their performance is suitable for clinical use, taking only a few minutes to produce a map of T1 values. These sequences nevertheless remain very sensitive to the accuracy of the radiofrequency (RF) pulses that reorient the magnetization to produce the measured signals. The rectangular RF pulses routinely used in MRI produce a flip of the magnetization directly proportional to the intensity of the B1 field produced by the transmitting coil. Clinical coils have B1 field distributions that fluctuate enormously; for example, the coil used to image the head has a B1 field distributed over its useful volume across a range from 0.5 to 1.2 relative to its center. This spatial variation of B1 leads to systematic errors in the fitted T1 values exceeding 50%. The development of a new tangential-approach RF excitation concept with adiabatic properties, able to replace adiabatic half passages (AHP), and its use in the form of a BIR-4-S2 (B1-Insensitive Rotation-4 AHP-Sequentialized 2 steps) in T1 mapping sequences reduced the systematic errors to less than 10% in the compensated stimulated-echo case and to less than 5% for the Look-Locker. The BIR-4-S2 achieves a flip-angle imprecision of less than 5° over a relative range from 0.75 to 1.75 around a reference field B1ref, for a chosen 360° rotation. And, unlike adiabatic pulses, it remains a low-power three-dimensional (3D) RF pulse that can be used repeatedly in the clinic without risk of dangerous heating for patients. The compensated stimulated-echo sequence mentioned above uses another of our developments: compensation. The signal produced by a traditional stimulated-echo sequence tends to zero following T1 relaxation over time, which sometimes causes problems when the signal of the last echoes starts to be affected by noise. Compensation modifies the exponential decay of the signal; the change is made by taking T1 relaxation into account when computing the flip angles required by the sequence. Compensation transfers part of the demanded signal volume from the first echoes to the last: one can compensate fully, and thus program a sequence that produces equal signals across all echoes, or compensate partially to sustain the signals of the last echoes while preserving a maximum of total signal.
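
    For reference, the flip-angle sensitivity at the heart of this work can be seen in the standard Look-Locker relation (a textbook result, not specific to this thesis): repeated readout pulses of flip angle α applied every T_R shorten the apparent relaxation time T1* that is actually fitted,

    $$\frac{1}{T_{1}^{*}} = \frac{1}{T_{1}} - \frac{\ln(\cos\alpha)}{T_{R}},$$

    so any spatial error in α produced by an inhomogeneous B1 field propagates directly into the recovered T1, which is why a B1-insensitive pulse such as the BIR-4-S2 reduces the systematic error.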

  12. Groundwater socio-ecology and governance: a review of institutions and policies in selected countries

    NASA Astrophysics Data System (ADS)

    Mukherji, Aditi; Shah, Tushaar

    2005-03-01

    Groundwater is crucial for the livelihoods and food security of millions of people, and yet knowledge formation in the field of groundwater has remained asymmetrical. While scientific knowledge in the discipline (hydrology and hydrogeology) has advanced remarkably, relatively little is known about the socio-economic impacts of groundwater use and the institutions that govern it. This paper therefore has two objectives. The first is to provide a balanced view of the positive and the negative sides of groundwater use, especially in agriculture. In doing so, examples are drawn from countries such as India, Pakistan, Bangladesh, China, Spain and Mexico—all of which make very intensive use of groundwater. Second, institutions and policies that influence groundwater use are analyzed in order to understand how groundwater is governed in these countries and whether successful models of governance could be replicated elsewhere. Finally, the authors argue that there is a need for a paradigm shift in the way groundwater is presently perceived and managed—from management to governance mode. In this attempt, a number of instruments such as direct regulation, indirect policy levers, livelihood adaptation and people's participation will have to be deployed simultaneously in the quest for better governance.

  13. The origin of increased salinity in the Cambrian-Vendian aquifer system on the Kopli Peninsula, northern Estonia

    NASA Astrophysics Data System (ADS)

    Karro, Enn; Marandi, Andres; Vaikmäe, Rein

    Monitoring of the confined Cambrian-Vendian aquifer system utilised for industrial water supply on the Kopli Peninsula in Tallinn over 24 years reveals remarkable changes in the chemical composition of the groundwater. A relatively fast 1.5- to 3.0-fold increase in TDS and in the concentrations of major ions in the extracted groundwater is the consequence of heavy pumping. The main sources of the dissolved load in Cambrian-Vendian groundwater are the leaching of the host rock and the other geochemical processes that occur in the saturated zone. The underlying crystalline basement, which holds saline groundwater in its upper weathered and fissured portion and which is hydraulically connected with the overlying Cambrian-Vendian aquifer system, is the second important source of ions. The fractured basement and its clayey weathering crust host Ca-Cl type groundwater, which is characterised by high TDS values (2-20 g/L). Intensive water extraction accelerates the exchange of groundwaters and increases the area of influence of pumping. Chemical and isotopic studies of the groundwater indicate an increasing contribution of old brackish water from the crystalline basement and rule out a potential intrusion of seawater into the aquifer.
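
    The "increasing contribution of old brackish water" can be quantified with a standard two-end-member mixing balance on a conservative tracer such as Cl⁻ (an illustrative relation, not a computation reported in the paper; the end-member labels are ours):

    $$f_{\mathrm{basement}} = \frac{C_{\mathrm{sample}} - C_{\mathrm{CV}}}{C_{\mathrm{basement}} - C_{\mathrm{CV}}},$$

    where C_CV is the tracer concentration of the unaffected Cambrian-Vendian groundwater and C_basement that of the Ca-Cl type basement water (TDS 2-20 g/L); the observed 1.5- to 3-fold rise in TDS then translates directly into an admixed fraction of basement water.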

  14. La physique des bulles de champagne Une première approche des processus physico-chimiques liés à l'effervescence des vins de Champagne

    NASA Astrophysics Data System (ADS)

    Liger-Belair, G.

    2002-07-01

    People have long been fascinated by bubble and foam dynamics, and since the pioneering work of Leonardo da Vinci in the early 16th century this subject has generated a huge bibliography. Only very recently, however, has much interest been devoted to bubbles in Champagne wines. Small bubbles rising through the liquid, as well as a bubble ring (the so-called collar) at the periphery of a flute poured with champagne, are the hallmark of this traditionally festive wine, and even though there is no scientific evidence yet connecting the quality of a champagne with its effervescence, people nevertheless often make a connection between them. Over the last few years, therefore, a better understanding of the numerous parameters involved in the bubbling process has become an important stake in champagne research. Beyond these strictly enological reasons, we also feel that the area of bubble dynamics could benefit from the simple but close observation of a glass poured with champagne. In this study, our first results concerning the close observation of the three main steps of a champagne bubble's life are presented, that is, bubble nucleation on tiny particles stuck on the glass wall (Chap. 2), bubble ascent through the liquid (Chap. 3), and the bursting of bubbles at the free surface, which constitutes the most intriguing and visually appealing step (Chap. 4). Our results were obtained in real consuming conditions, that is, in a classical crystal flute poured with a standard commercial champagne wine. Champagne bubble nucleation proved to be a fantastic everyday example with which to illustrate the non-classical heterogeneous bubble nucleation process in a weakly supersaturated liquid. Contrary to a generally accepted idea, nucleation sites are not located on irregularities of the glass itself. Most nucleation sites are located on tiny hollow, roughly cylindrical exogenous fibres coming from the surrounding air or remaining from the wiping process. Because of their geometry and hydrophobic properties, such particles are able to entrap gas pockets during the filling of a flute and to start up the bubble production process. Such particles are responsible for the clockwork and repetitive production of bubbles that rise in line in the form of elegant bubble trains. This cycle of bubble production at a given nucleation site is characterised by its bubbling frequency. The time needed to reach the moment of bubble detachment depends on the kinetics of CO2 molecule transfer from the champagne to the gas pocket, but also on the geometrical properties of the given nucleation site. Since a collection of particle shapes and sizes exists on the glass wall, the bubbling frequency may also vary from one site to another. Three minutes after pouring, we measured bubbling frequencies ranging from less than 1 Hz up to almost 30 Hz, which means that the most active nucleation sites emit up to 30 bubbles per second. After their detachment from nucleation sites, champagne bubbles rise in line through the liquid in the form of elegant bubble trains. Since they collect dissolved carbon dioxide molecules, champagne bubbles expand during ascent and therefore constitute an original tool for investigating the dynamics of rising and expanding bubbles. Hydrodynamically speaking, champagne bubbles were found to reach a quasi-stationary stage intermediate between that of a rigid and that of a fluid sphere (but nevertheless closer to that of a fluid sphere).
    This result drastically differs from what is classically observed with bubbles of fixed radius rising in surfactant solutions. Since surfactants progressively adsorb at the bubble surface during the rise, the drag coefficient of a rising bubble of fixed radius progressively increases and finally reaches the rigid-sphere limit when the bubble interface becomes completely contaminated. In the case of champagne, since a bubble expands during its rise through the supersaturated liquid, the bubble interface continuously increases and therefore continuously offers newly created surface to the adsorbed surface-active materials (around 5 mg/L, mostly composed of proteins and glycoproteins). Champagne bubbles experience an interesting competition between two opposing effects: our results suggest that bubble growth during ascent approximately balances the adsorption rate of surface-active compounds on the rising bubble. We also compared the behaviour of champagne bubbles with that of beer bubbles. Beer bubbles were found to behave very much like rigid spheres. This is not a surprising result, since beer contains much higher amounts of surface-active molecules (of the order of several hundred mg/L) likely to be adsorbed at a bubble interface. Furthermore, since the gas content is lower in beer, the growth rates of beer bubbles are lower than those of champagne bubbles. As a result, the dilution effect due to the rate of dilatation of the bubble area may be too weak to prevent the rigidification of the beer bubble interface. In a third set of experiments, we used instantaneous high-speed photography techniques to freeze the dynamics of bubbles collapsing at the free surface of a glass poured with champagne. The process following bubble collapse and leading to the projection of a high-speed liquid jet above the free surface was captured. A structural analogy between the liquid jet following a bubble collapse and the liquid jet following a drop impact is presented. By drawing a parallel between the fizz in champagne wines and the "fizz of the ocean", we also suggest that droplets issued from bursting champagne bubbles contain much higher amounts of surface-active, and potentially aromatic, materials than the liquid bulk. The bursting of champagne bubbles is thus expected to play a major role in flavour release. Although the first photographic investigations were published about fifty years ago and numerous experiments have since been conducted with single bubbles collapsing at a free surface, to the best of our knowledge, and surprising as it may seem, no results concerning the collateral effects on adjoining bubbles of bubbles collapsing in a bubble monolayer have been reported up to now. Actually, effervescence in a glass of champagne lends itself ideally to a preliminary study of bubbles collapsing in a bubble monolayer. For a few seconds after pouring, the free surface is completely covered with a monolayer of quite monodisperse millimetric bubbles collapsing close to each other. We took high-speed photographs of the situation immediately following the rupture of a bubble cap in a bubble monolayer. Adjoining bubbles were found to be literally sucked and strongly stretched toward the lowest part of the cavity left by the bursting bubble, leading to unexpected and short-lived flower-shaped structures.
    Stresses in the distorted bubbles (the petals of the flower-shaped structure) were evaluated and found to be at least one order of magnitude higher than the stresses numerically calculated in the boundary layer around an isolated single millimetric collapsing bubble. This is a brand-new and slightly counter-intuitive result. While absorbing the energy released during collapse, as an air-bag would do, adjoining bubble caps store this energy in their thin liquid films, finally leading to stresses much higher than those observed in the boundary layer around single millimetric collapsing bubbles. Further investigation should now be conducted, especially numerically, in order to better understand the relative influence of each pertinent parameter (bubble size, liquid density and viscosity, effect of surfactants...) on bubble deformation.
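
    The statement that champagne bubbles behave as nearly fluid spheres can be made concrete with the classical low-Reynolds-number drag laws (standard results quoted for orientation, not derived in this work): a fully contaminated, rigid-like interface obeys Stokes' law, whereas a clean, mobile interface obeys the Hadamard-Rybczynski result,

    $$C_{D}^{\mathrm{rigid}} = \frac{24}{Re}, \qquad C_{D}^{\mathrm{fluid}} = \frac{16}{Re} \qquad (Re \ll 1),$$

    so a measured drag coefficient lying between these two bounds, and closer to 16/Re, signals a weakly contaminated, still-mobile bubble surface, while beer bubbles sit near the 24/Re rigid-sphere limit.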

  15. Remarks on Polyelectrolyte Conformation

    NASA Astrophysics Data System (ADS)

    de Gennes, P. G.; Pincus, P.; Velasco, R. M.; Brochard, F.

    We discuss the conformations of linear polyions assuming that a) the corresponding uncharged chain is flexible; b) electrostatic forces dominate the monomer-monomer interactions; c) no salt is added. 1) For the dilute case (non-overlapping chains), correcting a recent self-consistent calculation by Richmond [1a], we find an overall polyion size R = Nd which is a linear function of the polymerization index N, in agreement with the early work of Hermans and Overbeek [1b] and of Kuhn, Kunzle and Katchalsky [1c]. 2) There is a range of very low concentration c (c** < c < c*) where the chains do not overlap (c < c*) but where the electrostatic interactions between polyions are much larger than thermal energies (c > c**): here we expect the polyions to build up a 3-dimensional periodic lattice; however, the detection of such an extremely dilute lattice appears difficult. 3) Practically all experiments on salt-free polyelectrolytes have been performed at concentrations c > c*, where different chains overlap each other.
    To discuss this regime we restrict our attention to cases where the charge per unit length is near (or above) the condensation threshold: then a single length ξ(c) characterizes the correlations; in 3 dimensions ξ scales like the Debye radius associated with the counter-ions. We consider several possible conformations: a) a hexagonal lattice of rigid rods; b) a cubic lattice of rigid rods; c) an isotropic phase of partially flexible chains. The various rigid-rod structures appear to have very similar electrostatic energies. This suggests that the isotropic phase might possibly be the most favorable. We analyse this latter phase using the same scaling methods which have recently been helpful for neutral polymer solutions [2]. In the isotropic model each chain behaves like a succession of segments (blobs) of size ξ. Inside one segment electrostatic effects are important and similar to case (1) above. Between segments the interactions are screened out, and each chain is ideal on a large scale, with radius R(c) ~ c^{-1/4} N^{1/2}. If we (tentatively) assume that the dynamical effects of entanglements are weak, we are then led to a viscosity η_sp/c ~ N c^{-1/2}.
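
    The semidilute scaling quoted above follows in two lines from the blob picture (a sketch of the argument, with g the number of monomers per blob): inside a blob of size ξ the chain is essentially stretched by electrostatics, so ξ ∝ g, while beyond ξ the interactions are screened and the chain is ideal,

    $$R(c) \simeq \xi\left(\frac{N}{g}\right)^{1/2},\qquad \xi \sim c^{-1/2},\quad g \propto \xi \sim c^{-1/2}\ \Longrightarrow\ R(c) \sim N^{1/2}\,c^{-1/4},$$

    which is precisely the R(c) ~ c^{-1/4} N^{1/2} law stated in the abstract.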

  16. Préface (Preface)

    NASA Astrophysics Data System (ADS)

    Stevefelt, Jörgen; Bachau, Henri

    2003-06-01

    UVX 2002, the sixth edition of the "Colloquium on Coherent and Incoherent UV, VUV and X Sources: Applications and Recent Developments", was held from 11 to 14 June 2002 at the CNRS CAES centre "La Vieille Perrotine" in Saint-Pierre d'Oléron. The colloquium brought together about a hundred researchers and industrial participants and provided an overview of the production, characterization and use of radiation in a spectral range extending from the ultraviolet to X-rays. Participants attended thirty lectures and a round table on local pollution problems; some fifty posters were presented over two sessions, and a dozen companies exhibited their products during the poster sessions. As in previous editions, the areas covered by UVX 2002 were very varied, and it is impossible to summarize them in a few lines. Among the activities in rapid development are femtosecond lasers, whose applications are multiplying in laboratories (properties of molecules, clusters and solids), in industry (machining, ablation...) and in medicine. The absence of thermalization or thermal diffusion also opens prospects for producing thin films by laser ablation, a field where excimer lasers are traditionally used, with important applications in the telecommunications sector. In the extreme UV, significant progress has been made by several groups in the 5 to 20 nm wavelength range, opening the way to the industrial development of EUV lithography. Progress is also noted in the realization of UV and X-ray sources (X-ray lasers, harmonic generation, free-electron lasers) and in the attendant need to develop suitable optics. An interesting prospect opened by harmonic generation is the production of attosecond pulses, which will make it possible to explore matter on the atomic time scale; at the same time, characterizing these pulses requires the design of new signal-analysis techniques for ultrashort times. The particular interest of the UVX colloquium is that it brings together communities of scientists ranging from researchers interested in fundamental processes to those working in the most applied, even industrial, fields. A striking characteristic of our discipline is the speed with which progress made in research laboratories spreads to industrial applications; it is therefore important to maintain a good balance between fundamental and applied research in the laboratories. We wish to thank the members of the organizing committee, the scientific committee and the various institutional and industrial partners whose support made the UVX colloquium possible. The colloquium was sponsored by the departments of Physical and Mathematical Sciences (SPM) and Engineering Sciences (SPI) of the CNRS, the Délégation Générale de l'Armement (DGA), CEA DRECAM, CEA DAM, the Conseil Général de Charente-Maritime, the Université de Bordeaux I, the Société Française d'Optique, the CNRS Research Group "SAXO" and the company Air Liquide.

  17. Étude de la Morphologie et de la Cinématique de l'Émission des Raies interdites autour des Étoiles T Tauri (Study of the Morphology and Kinematics of Forbidden-Line Emission around T Tauri Stars)

    NASA Astrophysics Data System (ADS)

    Lavalley, Claudia

    2000-06-01

    Mass loss plays an essential role from the earliest stages of star formation and appears intimately linked to the accretion of matter onto the star, probably through magnetic fields that convert the accreted kinetic energy into ejection power. Classical T Tauri stars, a few million years old and showing low extinction, offer an excellent setting for studying the inner regions of stellar winds. In this work, I present the first studies of the morphology of the jets associated with the stars DG Tau, CW Tau and RW Aur at an angular resolution of 0.1'', and of the two-dimensional kinematics of the [O I] λ6300 Å, [N II] λ6583 Å and [S II] λλ6716,6731 Å line emission in the DG Tau jet. These data were obtained with two completely new observing techniques, which became available between 1994 and 1998 at the CFH telescope and are ideally suited to this problem: narrow-band imaging behind adaptive optics (PUEO), which provides data at very high angular resolution (~0.1''), and integral-field spectro-imaging (TIGRE/OASIS), which gives access to 2D spatial and spectral information at high angular resolution (here ~0.5''-0.75'') and medium spectral resolution (100-170 km/s). The three jets studied, resolved for the first time from 55 AU of the star outward, show a similar width (30-35 AU) out to 100 AU and a morphology dominated by emission knots. The jets of the low-infrared-excess stars CW Tau and RW Aur are very similar to the two other jets from weakly embedded sources observed so far at the same spatial scale. The DG Tau jet, more disturbed than the other two and coming from a source that still has a substantial envelope, is also very similar to the only other jet associated with a still-embedded source resolved at these distances from the star. This provides clues to the evolution of the interaction of jets with the circumstellar environment. The morphology and kinematics of the DG Tau jet strongly suggest variability in the ejection velocity, which could also explain some of the knots of the other two jets; the compatibility of one of the observed knots with the bow shocks expected in such a situation was clearly demonstrated. Line ratios at different distances along a jet (DG Tau) and over several velocity intervals were obtained here for the first time. Inversion routines, assuming ionization equilibrium for oxygen and nitrogen and treating the hydrogen ionization fraction as a free parameter, allowed an estimate of the variations of the excitation conditions (Te, xe and ne) all along the jet. A detailed comparison of the observed line ratios with the predictions of different excitation models, using discriminating ratio-ratio diagrams identified here for the first time, strongly favors the presence of shocks with velocities of 50-100 km/s beyond 0.2'' from the star.

  18. Migration of recharge waters downgradient from the Santa Catalina Mountains into the Tucson basin aquifer, Arizona, USA

    NASA Astrophysics Data System (ADS)

    Cunningham, Erin E. B.; Long, Austin; Eastoe, Chris; Bassett, R. L.

    Aquifers in the arid alluvial basins of the southwestern U.S. are recharged predominantly by infiltration from streams and playas within the basins and by water entering along the margins of the basins. The Tucson basin of southeastern Arizona is such a basin. The Santa Catalina Mountains form the northern boundary of this basin and receive more than twice as much precipitation (ca. 700 mm/year) as does the basin itself (ca. 300 mm/year). In this study environmental isotopes were employed to investigate the migration of precipitation basinward through shallow joints and fractures. Water samples were obtained from springs and runoff in the Santa Catalina Mountains and from wells in the foothills of the Santa Catalina Mountains. Stable isotopes (δD and δ18O) and thermonuclear-bomb-produced tritium enabled qualitative characterization of flow paths and flow velocities. Stable-isotope measurements show no direct altitude effect. Tritium values indicate that although a few springs and wells discharge pre-bomb water, most springs discharge waters from the 1960s or later.
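
    The qualitative dating logic rests on the radioactive decay of tritium (a standard relation, not specific to this study): with half-life t_1/2 ≈ 12.32 yr,

    $$A(t) = A_{0}\,e^{-\lambda t},\qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

    so water recharged before the early-1960s bomb peak has passed several half-lives and retains almost no tritium, while post-bomb water remains clearly tagged; this is what separates the "pre-bomb" springs and wells from those discharging water from the 1960s or later.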

  19. Effets de l'humidité sur la propagation du délaminage dans un composite carbone/époxy sollicité en mode mixte I/II (Effects of Moisture on Delamination Growth in a Carbon/Epoxy Composite under Mixed-Mode I/II Loading)

    NASA Astrophysics Data System (ADS)

    LeBlanc, Luc R.

    Composite materials are increasingly used in fields such as aerospace, high-performance cars, and sporting goods, to name a few. Studies have shown that exposure to moisture degrades the strength of composites by promoting the initiation and propagation of delamination. Of these studies, very few address the effect of moisture on delamination initiation under mixed-mode I/II loading, and none addresses the effect of moisture on the mixed-mode I/II delamination growth rate in a composite. The first part of this thesis determines the effects of moisture on delamination growth under mixed-mode I/II loading. Specimens of a unidirectional carbon/epoxy composite (G40-800/5276-1) were immersed in a distilled-water bath at 70°C until saturation. Quasi-static tests over a range of mode I/II mixities (0%, 25%, 50%, 75%, and 100%) were carried out to determine the effects of moisture on the delamination resistance of the composite. Fatigue tests were performed over the same range of mode mixities to determine the effect of moisture on delamination initiation and on the delamination growth rate. The quasi-static results showed that moisture reduces the delamination resistance of a carbon/epoxy composite over the whole range of mode I/II mixities, except in mode I, where the delamination resistance increases after moisture exposure. Under fatigue loading, moisture accelerates delamination initiation and increases the growth rate for all mode I/II mixities. The experimental data were used to determine which of the static delamination criteria and mixed-mode I/II fatigue growth-rate models proposed in the literature best represent delamination in the composite studied. A regression curve was used to find the best fit between the experimental data and the static delamination criteria considered, and a regression surface was used for the fatigue growth-rate models. Based on these fits, the best static delamination criterion is the B-K criterion and the best fatigue growth model is the Kenane-Benzeggagh model. To predict delamination when designing complex parts, numerical models can be used. Predicting the delamination length of a part under fatigue loading is essential to ensure that an interlaminar crack will not grow excessively and cause the part to fail before the end of its design life. Following the recent trend, such models are often based on a cohesive-zone approach within a finite-element formulation. In the work presented in this thesis, the fatigue delamination growth model of Landry & LaPlante (2012) was improved by adding the treatment of mixed-mode I/II loading and by modifying the algorithm that computes the maximum delamination driving force. The cohesive-zone parameters were calibrated from the quasi-static mode I and mode II experiments. Numerical simulations of the quasi-static mixed-mode I/II tests, with dry and wet specimens, were compared with the experiments. Fatigue simulations were also run and compared with the experimental delamination growth rates. The numerical results for both the quasi-static and fatigue tests showed good agreement with the experiments over the whole range of mode I/II mixities studied.
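    As a point of reference for the B-K criterion named above, the sketch below evaluates the standard Benzeggagh-Kenane expression for the critical energy release rate as a function of mode mixity. The property values are illustrative assumptions, not the thesis's measured G40-800/5276-1 data.

```python
# Minimal sketch of the Benzeggagh-Kenane (B-K) mixed-mode criterion.
# Toughness values and eta below are hypothetical, for illustration only.

def bk_toughness(G_I, G_II, G_Ic, G_IIc, eta):
    """Critical energy release rate G_c for a given mode mix.

    B-K criterion: G_c = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)**eta,
    with G_T = G_I + G_II the total energy release rate.
    """
    G_T = G_I + G_II
    mode_mix = G_II / G_T if G_T > 0.0 else 0.0
    return G_Ic + (G_IIc - G_Ic) * mode_mix**eta

# Example: 50% mode II with assumed toughnesses (J/m^2) and eta = 2.0.
# Delamination is predicted to grow when G_T >= G_c.
print(bk_toughness(G_I=100.0, G_II=100.0, G_Ic=200.0, G_IIc=800.0, eta=2.0))
```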

  20. Some current methods to represent the heterogeneity of natural media in hydrogeology

    NASA Astrophysics Data System (ADS)

    de Marsily, G.; Delay, F.; Teles, V.; Schafmeister, M. T.

    We have known for a long time that the material properties of the subsurface are highly variable in space. We have learned that this variability is due to the extreme complexity and variation with time of processes responsible for the formation of the earth's crust, from plate tectonics to erosion, sediment transport, and deposition, as well as to mechanical, climatic, and diagenetic effects. As geologists, we learned how to "read" this complex history in the rocks and how to try to extrapolate in space what we have understood. As physicists, we then learned that to study flow processes in such media we must apply the laws of continuum mechanics. As mathematicians using analytical methods, we learned that we must simplify by dividing this complex continuum into a small number of units, such as aquifers and aquitards, and describe their properties by (constant) equivalent values. In recent years, as numerical modelers, we learned that we now have the freedom to "discretize" this complex reality and describe it as an ensemble of small homogeneous boxes of continuous media, each of which can have different properties. How do we use this freedom? Is there a need for it? If the answer is "yes," how can we assign different rock-property values to thousands or even millions of such little boxes in our models, to best represent reality, and include confidence levels for each selected rock property? As a tribute to Professor Eugene S. Simpson, with whom the first author of this paper often discussed these questions, we present an overview of three techniques that focus on one property, the rock permeability. We explain the motivation for describing spatial variability and illustrate how to do so by the geostatistical method, the Boolean method, and the genetic method. We discuss their advantages and disadvantages and indicate their present state of development. This is an active field of research and space is limited, so the review is certain to be incomplete, but we hope that it will encourage the development of new ideas and approaches.
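    To make the geostatistical method mentioned above concrete, the sketch below generates one spatially correlated log-permeability field by Cholesky factorization of an assumed exponential covariance. The grid size, variance, and correlation length are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: one unconditional realization of a correlated ln(k)
# field on a small 1-D grid, the kind of input a "box" model might use.
import numpy as np

n, dx = 32, 1.0              # 1-D grid of n cells, spacing dx
sigma2, corr_len = 1.0, 5.0  # assumed variance and correlation length of ln(k)

x = np.arange(n) * dx
# Exponential covariance: C(h) = sigma^2 * exp(-|h| / corr_len)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # small jitter for stability

rng = np.random.default_rng(0)
ln_k = L @ rng.standard_normal(n)  # correlated Gaussian field
k = np.exp(ln_k)                   # permeability values for the model cells
print(k[:5])
```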

  1. Etude du processus de changement vecu par des familles ayant decide d'adopter volontairement des comportements d'attenuation des changements climatiques

    NASA Astrophysics Data System (ADS)

    Leger, Michel T.

    Energy-intensive human activities such as heavy automobile use, overconsumption of goods, and excessive electricity use contribute to climate change and other environmental problems. Although several studies report that people are increasingly aware of their impact on the planet's climate, the same studies indicate that, in general, people continue to behave in non-ecological ways. Whether at school or in the community, many researchers in environmental education believe that a well-intentioned person is capable of adopting behaviors that are more respectful of the environment. The goal of this thesis was to understand the process by which families integrate climate-change mitigation behaviors. To this end, we set two objectives: 1) to describe the competencies and processes that favor the adoption of climate-change mitigation behaviors in families, and 2) to describe the factors and family dynamics that facilitate and limit the adoption of such behaviors. Families were invited to try personal and collective climate-change mitigation behaviors so as to adopt more ecological lifestyles. Over a period of eight months, we followed their experience of change to better understand how the change process unfolds in families that voluntarily decide to adopt mitigation behaviors. After giving the families some basic knowledge about climate change, we observed their experience of change during eight months of trials using reflective journals, elicitation interviews, and the researcher's journal. The thesis comprises three scientific articles. In the first article, we present a literature review on environmental behavior change. We also explore the family as a functional system, so as to better understand this context of environmental action which, to our knowledge, has been little studied. In the second article, we present our research results concerning the influencing factors observed and the competencies displayed during the adoption of new environmental behaviors in three families. Finally, the third article presents the results from the case of a fourth family whose members have long lived ecological lifestyles. Within a grounded-theory analysis, the study of this model case allowed us to deepen the conceptual categories identified in the second article and to produce a model of the integration of environmental behaviors in the family context. The conclusions drawn from the literature review allowed us to identify the elements that could influence the adoption of environmental behaviors in families, provided a better understanding of the various factors that can affect such adoption, and helped delineate the phenomenon of behavior change in the context of the family considered as a system. Applying an inductive analysis to our qualitative data, the results of our multi-case study indicated that two conceptual constructs seem to influence the adoption of environmental behaviors in families: 1) biospheric values shared within the family and 2) competencies put to use collectively while trying new environmental behaviors. Our model of the change process in families also indicates that a collaborative family dynamic and the presence of an outside support group are two conceptual elements that tend to influence the two main constructs and thereby increase the chances of integrating new environmental behaviors in families. In conclusion, we present the limits of our research as well as avenues for future research. In particular, we recommend that schools welcome students' families in environmental-education activities in which students' brothers, sisters, and parents can learn together at school. For example, we recommend conducting action research in environmental education on the intergenerational learning of new behaviors in the family context. Keywords: environmental education, family environmental behavior, family behavior change, biospheric values, action competencies.

  2. La projection par plasma : une revue

    NASA Astrophysics Data System (ADS)

    Fauchais, P.; Grimaud, A.; Vardelle, A.; Vardelle, M.

    The quality of a plasma-sprayed coating depends on numerous parameters that are beginning to be understood thanks to recent progress in modelling and in measurement techniques for plasma jets; for momentum, heat, and mass transfer between plasma and particles; and for the way particles splat and cool upon impact on the substrate or the previously deposited layers. This paper first recalls the measurement techniques used and their limitations, both for plasma jets and for in-flight particles. It then underlines the importance of the different phenomena involved in plasma-particle transfer: steep gradients of temperature and chemical-species density around the particles, heat-propagation effects (especially for ceramic particles) and the associated evaporation, and rarefaction effects that occur even at atmospheric pressure. The problems related to the particle size and injection-velocity distributions, which determine the trajectory distributions and the heat treatment undergone by the particles, are then treated. The study of plasma generation shows, on the one hand, for d.c. arc plasma torches, the drastic influence on plasma-jet length and diameter of the gas-injection chamber design, the gas nature, the design of the arc chamber and nozzle, and the surrounding atmosphere (especially air entrainment, which cools the plasma very quickly) and, on the other hand, for RF plasmas, the importance of the particle-injector design in avoiding coupling between the RF discharge and the carrier gas conveying the particles. All these points are illustrated with examples of coatings of alumina, zirconia, carbide cermet, and nickel particles. The way the particles splat is then studied, together with in-flight chemical reactions, the fast quenching of the particles and the resulting crystalline structures, coating adhesion, and the residual stresses and their control through that of the temperature gradients in the coating during spraying. Finally, a few current and potential applications are presented in the fields of aeronautics and mechanics.
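    As a rough illustration of the plasma-particle heat transfer the review discusses, the sketch below applies a lumped-capacitance heating model to a small spherical particle, taking the convective coefficient from the small-particle conduction limit Nu = h d / k_gas = 2. All property values are illustrative assumptions, and the model deliberately ignores melting, evaporation, and the steep gradients the review emphasizes.

```python
# Hypothetical lumped-capacitance heating of an alumina-like particle in a
# plasma jet; a sketch, not a predictive plasma-spray model.
import numpy as np

d = 30e-6                 # particle diameter (m), assumed
rho, cp = 3900.0, 900.0   # density (kg/m^3) and heat capacity (J/kg/K), assumed
k_gas = 0.6               # assumed effective plasma thermal conductivity (W/m/K)
T_plasma, T0 = 10000.0, 300.0

h = 2.0 * k_gas / d                    # Nu = 2 conduction limit
A = np.pi * d**2                       # particle surface area
m = rho * np.pi * d**3 / 6.0           # particle mass
tau = m * cp / (h * A)                 # thermal time constant (s)

t = np.linspace(0.0, 5e-4, 6)          # flight times (s)
T = T_plasma + (T0 - T_plasma) * np.exp(-t / tau)
print(tau, T)  # temperature history, ignoring melting/evaporation
```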

  3. Groundwater evolution beneath Hat Yai, a rapidly developing city in Thailand

    NASA Astrophysics Data System (ADS)

    Lawrence, A. R.; Gooddy, D. C.; Kanatharana, P.; Meesilp, W.; Ramnarong, V.

    2000-09-01

    Many cities and towns in South and Southeast Asia are unsewered, and urban wastewaters are often discharged either directly to the ground or to surface-water canals and channels. This practice can result in widespread contamination of the shallow groundwater. In Hat Yai, southern Thailand, seepage of urban wastewaters has produced substantial deterioration in the quality of the shallow groundwater directly beneath the city. For this reason, the majority of the potable water supply is obtained from groundwater in deeper semi-confined aquifers 30-50 m below the surface. However, downward leakage of shallow groundwater from beneath the city is a significant component of recharge to the deeper aquifer, which has long-term implications for water quality. Results from cored boreholes and shallow nested piezometers are presented. The combination of high organic content of the urban recharge and the shallow depth to the water table has produced strongly reducing conditions in the upper layer and the mobilisation of arsenic. A simple analytical model shows that time scales for downward leakage, from the surface through the upper aquitard to the semi-confined aquifer, are of the order of several decades.
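    To illustrate the kind of simple analytical estimate mentioned above, the sketch below computes a vertical advective travel time through an aquitard under a downward head difference. The parameter values are illustrative assumptions chosen to land in the decades range, not the paper's calibrated values.

```python
# Hypothetical travel-time estimate through an aquitard via Darcy's law:
# seepage velocity v = K_v * (dh / b) / n_e, travel time t = b / v.
b = 20.0                         # aquitard thickness (m), assumed
K_v = 1e-8 * 3600 * 24 * 365     # vertical hydraulic conductivity (m/yr), assumed
n_e = 0.3                        # effective porosity (-), assumed
dh = 10.0                        # head drop across the aquitard (m), assumed

v = K_v * (dh / b) / n_e         # seepage (pore) velocity, m/yr
t = b / v                        # advective travel time, yr
print(f"travel time ~ {t:.0f} years")   # ~38 years: order of several decades
```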

  4. Pompage optique et violation de parité dans l'atome

    NASA Astrophysics Data System (ADS)

    Bouchiat, M.-A.

    Between Kastler's original work on the polarization of the fluorescence of mercury vapour (1936) and the parity-violation experiment in cesium [5], we notice a kinship in the methods of investigation and in the nature of the physical problems considered: an illustration of the richness of the research field pioneered by Alfred Kastler. This paper adopts a phenomenological description of the cesium parity-violation experiment, without reference to electroweak theory. This sheds light on the peculiar features of our experiment that seem to contradict the optical-pumping lore. When the forbidden 6S-7S transition in cesium is excited, the electronic spin orientation of the 7S state exhibits two anomalies: the first can be associated with a breakdown of a rotational invariance of the atom-radiation-field system by the external dc electric field; the second, much more fundamental, is a right-left asymmetry, i.e., a manifestation of a parity violation not accounted for by conventional QED. The interpretation of this new optical-pumping effect involves three different transition dipole moments: the strongly suppressed magnetic dipole, an electric dipole induced by the Stark field, and one directed along the spin angular momentum, indicating the presence of the parity-violating but T-conserving electron-nucleus interaction. When the transition is excited, the radiation field is absorbed coherently by these transition dipoles. The breakdown of the usual optical-pumping rules can be explained in terms of interference between the amplitudes associated with two different dipoles. In particular, the interference of the Stark dipole with the parity-violating dipole leads to a component of the 7S spin orientation which behaves under mirror symmetry as a polar vector, and not as an axial vector as one would normally expect for an angular-momentum-type quantity. Basically, the experiment consists in studying the effects on the properties of the fluorescence light, polarized and unpolarized, produced by mirror symmetries with respect to three orthogonal planes. In this way, it is possible to disentangle the different interference terms and to single out the one involving parity violation. In practice, this is done by reversing various parameters of the experiment. Controlling the quality of these reversals is obviously the crucial part of the whole experimental procedure. Moreover, consistency tests and various cross-checks have to be devised and carefully carried out; they contribute to the reliability of the result. In our experiment, no significant correction proved necessary. The uncertainty associated with possible systematic effects is estimated at 8%, and the rms statistical uncertainty, after combination of two independent measurements that cross-check one another satisfactorily, is 11%. The parity violation can be readily interpreted in terms of the short-range electron-nucleon interaction associated with the exchange of the neutral vector boson Z0, recently observed with the CERN proton-antiproton collider. This new type of interaction, also known under the generic name of "neutral currents", was one of the most important predictions of electroweak theory, which unifies, within the framework of gauge field theories, the electromagnetic and weak interactions. This experiment, originally designed as a test of electroweak theory, gives information on the structure of the neutral currents that complements what has been obtained in high-energy experiments. First, the energy range explored is obviously very different. Second, because the quarks act coherently in an atomic-physics experiment but incoherently in accelerator experiments, the basic electroweak parameters extracted from the two kinds of experiments are different.

  5. Profil pressionnel de l’adolescent en milieu scolaire à Lubumbashi, République Démocratique du Congo

    PubMed Central

    Kakoma, Placide Kambola; Muyumba, Emmanuel Kiyana; Mukeng, Clarence Kaut; Musung, Jaques Mbaz; Kakisingi, Christian Ngama; Mukuku, Olivier; Nkulu, Dophra Ngoy

    2018-01-01

    Introduction: The objective of this study was to describe the blood-pressure (BP) profile of adolescents aged 15 to 19 years in schools in Lubumbashi, Democratic Republic of the Congo. Methods: This was a cross-sectional study of adolescents aged 15 to 19 years based on random sampling of secondary schools in Lubumbashi during the 2013-2014, 2014-2015, and 2015-2016 school years. Three BP measurements were taken on the same day. Results: 1766 adolescents aged 15-19 years were included, of whom 995 were girls and 771 were boys. Boys had significantly higher systolic blood pressure than girls in the 17-, 18-, and 19-year age groups. Diastolic blood pressure did not differ statistically in any age group in either sex. In both sexes, systolic blood pressure was significantly correlated with weight, height, body-mass index, waist circumference, and heart rate. As for diastolic blood pressure, significant correlations were found with weight and body-mass index in girls, while heart rate was significantly correlated in both sexes. Discussion: The aim of our study was to determine mean BP values and their correlation with anthropometric parameters, heart rate (HR), and birth weight in adolescents aged 15 to 19. Our study revealed mean SBP values that were statistically significantly higher in boys than in girls in the 17-, 18-, and 19-year age groups, whereas mean DBP values showed no statistically significant difference in any age group in either sex. Harrabi et al. [16], in a study including 1569 subjects aged 13 to 19, found that boys aged 16, 17, and 18 had higher SBP without statistically significant differences, while statistically significant differences in DBP were observed in girls aged 13 and 14. In a study of children, Forrester et al. [17] reported a positive correlation between SBP and age in boys and a negative one in girls. This negative correlation between SBP and age in girls could be explained by the hormonal changes of puberty, which begins earlier in girls than in boys. According to the literature, BP increases with age more in boys, owing to the increase in muscle mass during puberty [18-20]. Our study showed that SBP was significantly correlated with weight, height, BMI, waist circumference, and HR in both sexes. This finding is similar to that of Harrabi et al. [16], who reported that SBP was positively correlated with height (boys: r = 0.33, p < 0.0001; girls: r = 0.08, p = 0.02), weight (boys: r = 0.47, p < 0.0001; girls: r = 0.35, p < 0.0001), and age (r = 0.12, p < 0.0001). As for DBP in our study, significant positive correlations were found with weight (r = 0.093, p = 0.003) and BMI (r = 0.079, p = 0.012) only in girls, whereas HR was significantly positively correlated in boys (r = 0.168, p < 0.0001) but not in girls (r = 0.12, p < 0.0001) [16]. In a similar study of adolescents by Sinaiko et al. [21], a correlation was found between weight and SBP in boys (r = 0.167, p < 0.0001) and girls (r = 0.112, p < 0.0001). The effect of height and weight on BP has already been demonstrated in several earlier cross-sectional studies of children, which concluded in a strong positive correlation [22,23]. Under-registration of births in the civil registry in several developing countries, and the consequent reliance on birth weights reported by parents or guardians, would be a source of bias when trying to produce statistically comparable results. Conclusion: Despite the potential weaknesses of this study, namely its cross-sectional design and BP measurements taken on a single day, the data could help health authorities adopt a national strategy for the prevention of hypertension in our population. PMID:29875975
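    For readers unfamiliar with the statistics reported above, the sketch below computes a Pearson correlation of the kind cited (r and p between SBP and weight). The arrays are made-up illustrative data, not the study's measurements.

```python
# Hypothetical Pearson correlation between weight and systolic BP.
from scipy import stats

weight = [52, 60, 58, 70, 65, 75, 48, 68]        # kg (made-up data)
sbp = [105, 112, 110, 121, 118, 126, 102, 119]   # mmHg (made-up data)

r, p = stats.pearsonr(weight, sbp)
print(f"r = {r:.3f}, p = {p:.4f}")  # same form as the r/p values cited in text
```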

  6. Patterns of evolution of research strands in the hydrologic sciences

    NASA Astrophysics Data System (ADS)

    Schwartz, F. W.; Fang, Y. C.; Parthasarathy, S.

    2005-03-01

    This paper examines issues of impact and innovation in groundwater research by using bibliometric data and citation analysis. The analysis is based on 3120 papers from the journal Water Resources Research, with full contents and their citation data from the ISI Web of Science. The research is designed to develop a better understanding of the way citation numbers can be interpreted by scientists. Not surprisingly, the most highly cited papers appear to be pioneers in the field, departing significantly from what has come before and proving effective in creating similar follow-on papers. Papers that are early contributions to a highly influential new research strand will on average be highly cited. However, the importance of a research strand as measured by citations seems to fall with time. The citation patterns of some classic papers show that the activity in the topical area and the impact of follow-on papers gradually decline with time, which has similarities with Kuhn's ideas of revolutionary and normal science. The results of this study reinforce the importance of being a pioneer in a research strand, of strategically shifting research strands, and of adopting strategies that can facilitate really major research breakthroughs.

  7. Note des Éditeurs scientifiques

    NASA Astrophysics Data System (ADS)

    Averbuch, P.

    This series of articles is a review of experimental results on various molecular "fluids" in which cohesion is due to Van der Waals forces and to hydrogen bonds, water being one of these fluids. The results are presented so as to justify experimentally an original, non-extensive model of the properties of these fluids, and the whole takes the form of three articles describing the model, each followed by an article comparing it with the experimental results published by numerous authors. The non-extensive character of the physical properties of fluids is shocking and contrary to many established ideas; it seems to have only one argument in its favor: comparison with a number of experimental results large enough that the effect of chance can hardly be suspected. In particular, the discrepancies between measurements made by different authors under different conditions are explained; the seriousness and competence of the various experimenters are no longer in doubt: only the interpretation of their results with an ill-suited extensive model is called into question. Since extensive models are used systematically, beyond physicists' experiments, in engineering calculations and in the modelling of working devices and of natural phenomena observed by everyone, it was necessary to explain why extensivity could be abandoned. The reasons for the practical success of extensive models are given, first for nematics and then for ordinary liquids, and this is what makes the whole consistent, both with fine physical measurements and with everyday observations. The fact remains that if the interpretation given in this series of articles can be generalized, a theoretical justification of the model used becomes necessary. As far as equilibrium properties are concerned, a separation of the free energy into a volume free energy and a surface free energy should give the same results; on the other hand, things become troubling as soon as one turns to transport coefficients, that is, to the macroscopic aspect of molecular dynamics. There is here a notable departure from current conceptions, which makes reading these articles very surprising. One may mention the list of theoretical problems posed by the phenomenological description adopted in this series: the generalization of scaling laws outside critical regions is not absolutely new, but the simplicity of the laws relating the exponent v to temperature is problematic; the meaning of the relaxation times used doubtless also needs to be clarified. Finally, the modes considered seem to enter the thermodynamic properties only through one factor per mode, as if only the potential energy mattered, the kinetic terms not really taking part in the phase transitions. All this raises questions, and one may wonder whether such a model can be compatible with everything otherwise known in statistical physics. But if it accounts well for many experimental results, it is the latter that would be in difficulty with statistical mechanics. It therefore seemed preferable to publish the model and its experimental justification, and to pose a few problems both to theorists, who might explain why such a model accounts for the observed results, and to experimenters, who might repeat certain measurements and delimit the more or less general character of the model.

  8. Dislocation structures and anomalous flow in L12 compounds

    NASA Astrophysics Data System (ADS)

    Dimiduk, D. M.

    1991-06-01

    The theory of the anomalous flow behavior of L12 compounds has developed over the last 30 years. It is founded on early estimates of the crystallographic anisotropy of the antiphase-boundary (APB) energy in these compounds. In spite of this critical aspect of the theory, it is only in the last five years that electron microscopy has been employed to quantify the APB energies and to determine the detailed nature of dislocation structures at each stage of deformation. Recent studies by several research groups have provided essentially consistent new details about the nature of dislocations in Ni3Al and a few other L12 compounds that exhibit anomalous flow behavior. These studies have introduced several new concepts for the controlling dislocation mechanisms. Additionally, they have shown that in Ni3Al the APB energy varies only slightly with the APB plane (it is nearly isotropic), is relatively insensitive to changes in solute content, and that the anisotropy ratio does not correlate with alloy strength. The present manuscript provides a critical review of the new transmission-electron-microscopy (TEM) results along with the new concepts for the mechanism of anomalous flow. Inconsistencies and deficiencies within these new concepts are identified and discussed. The collective set of electron-microscopy results is discussed in the context of both the mechanical behavior of L12 compounds and the Greenberg and the Paidar, Pope and Vitek (PPV) models for anomalous flow. Conceptual consistency with these models can be established only if the Kear-Wilsdorf (K-W) configurations are treated as an irreversible work-hardening or relaxation artifact, and the specific details of these two models cannot be verified by electron microscopy. Conversely, the structural features recently revealed by electron microscopy have not yet been assembled into a self-consistent model of yielding that fully addresses the phenomenology of the mechanical behavior.

  9. Dispositional optimism among American and Jordanian college students: are Westerners really more upbeat than Easterners?

    PubMed

    Khallad, Yacoub

    2010-02-01

    The present study assessed some previous research conclusions, based primarily on comparisons of North Americans and East Asians, that Westerners tend to be optimistic while Easterners tend to be pessimistic. Two samples of European American and Jordanian college students were administered a questionnaire consisting of items measuring dispositional optimism along with items pertaining to risk and self-protective behaviors (e.g., seatbelt use, vehicular speeding, smoking) and social and demographic factors (e.g., sex, socioeconomic status, religiosity). The findings showed dispositional optimism to be stronger for American than for Jordanian participants. Separate analyses of optimism versus pessimism revealed that the Jordanian participants were more pessimistic, but not less optimistic, than their American counterparts. No significant correlations were found between dispositional optimism and sex, socioeconomic status, or religiosity. The levels of optimism displayed by the Jordanians in this study are inconsistent with previous claims of an optimistic West and a pessimistic East, and suggest that self-enhancing processes may not be confined to Western or highly individualistic groups. The findings did not uncover an association between dispositional optimism and risk or self-protective behaviors. Multiple regression analyses showed cultural background and sex to be the best predictors of these behaviors. The implications of these findings are discussed.

  10. A thick lens of fresh groundwater in the southern Lihue Basin, Kauai, Hawaii, USA

    NASA Astrophysics Data System (ADS)

    Izuka, Scot; Gingerich, Stephen

    2002-11-01

    A thick lens of fresh groundwater exists in a large region of low permeability in the southern Lihue Basin, Kauai, Hawaii, USA. The conventional conceptual model for groundwater occurrence in Hawaii and other shield-volcano islands does not account for such a thick freshwater lens. In the conventional conceptual model, the lava-flow accumulations of which most shield volcanoes are built form large regions of relatively high permeability and thin freshwater lenses. In the southern Lihue Basin, basin-filling lavas and sediments form a large region of low regional hydraulic conductivity, which, in the moist climate of the basin, is saturated nearly to the land surface and water tables are hundreds of meters above sea level within a few kilometers from the coast. Such high water levels in shield-volcano islands were previously thought to exist only under perched or dike-impounded conditions, but in the southern Lihue Basin, high water levels exist in an apparently dike-free, fully saturated aquifer. A new conceptual model of groundwater occurrence in shield-volcano islands is needed to explain conditions in the southern Lihue Basin.
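    For context on the conventional conceptual model the abstract refers to, the sketch below evaluates the classical Ghyben-Herzberg relation, in which the freshwater lens extends below sea level roughly 40 times the water-table elevation. It is offered only as background; the paper's point is precisely that the southern Lihue Basin departs from this model.

```python
# Ghyben-Herzberg interface depth for a thin freshwater lens.
rho_f = 1000.0   # freshwater density (kg/m^3)
rho_s = 1025.0   # seawater density (kg/m^3)

def lens_depth_below_sea_level(head_m):
    """Depth (m) of the fresh/saltwater interface for a given water-table head."""
    return head_m * rho_f / (rho_s - rho_f)   # = 40 * head for these densities

print(lens_depth_below_sea_level(2.0))  # a 2 m head implies ~80 m of freshwater
```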

  11. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
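    In the spirit of the presmoothing the abstract describes, the sketch below smooths a raw score-frequency distribution with a cubic smoothing spline before it would be used for equipercentile equating. This is a generic scipy illustration under assumed data, not the authors' cubic B-spline presmoothing or direct presmoothing algorithms.

```python
# Hypothetical cubic-spline presmoothing of a score distribution.
import numpy as np
from scipy.interpolate import splrep, splev

scores = np.arange(0, 21)                       # raw score scale 0..20
freq = np.array([1, 2, 4, 7, 12, 18, 25, 30, 34, 35,
                 33, 29, 24, 18, 13, 9, 6, 4, 2, 1, 1], float)  # made-up counts

tck = splrep(scores, freq, k=3, s=50.0)          # cubic B-spline, smoothing s
smooth = np.clip(splev(scores, tck), 0.0, None)  # forbid negative frequencies
rel_freq = smooth / smooth.sum()                 # presmoothed distribution
print(rel_freq.round(4))
```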

  12. Comparison of DNA extraction methods for meat analysis.

    PubMed

    Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum

    2017-04-15

    Preventing the adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious, and health reasons but also for fair trade practices; therefore, several methods for the identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA Method, a modified Cetyltrimethylammonium Bromide (CTAB) Method, Alkaline Method, Urea Method, Salt Method, Guanidinium Isothiocyanate (GuSCN) Method, Wizard Method, Qiagen Method, Zymogen Method, and Genespin Method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive, and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Study of New Method Combined Ultra-High Frequency (UHF) Method and Ultrasonic Method on PD Detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Zhang, Jiwei; Chen, Ning; Li, Xiaoqi; Gong, Xiaojing

    2017-09-01

    GIS (gas insulated switchgear) is an important piece of equipment in power systems. Partial discharge (PD) detection plays an important role in assessing the insulation performance of GIS, and the UHF method and the ultrasonic method are frequently used for PD detection in GIS. However, very few studies have examined a method that combines these two techniques. From the viewpoint of safety, a new method based on the UHF and ultrasonic methods is proposed in order to greatly enhance both the anti-interference capability of signal detection and the accuracy of fault localization. This paper presents a study aimed at clarifying the effect of the combined UHF-ultrasonic method. Partial discharge tests were performed in a laboratory-simulated environment, and the obtained results demonstrate the anti-interference capability of signal detection and the fault-localization accuracy of the new combined method.

  14. The multigrid preconditioned conjugate gradient method

    NASA Technical Reports Server (NTRS)

    Tatebe, Osamu

    1993-01-01

    A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner for the PCG method, is proposed. The multigrid method has inherently high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using it as a preconditioner for the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition for the multigrid preconditioner to satisfy the requirements of a PCG preconditioner is considered. Numerical experiments then show the behavior of the MGCG method and demonstrate that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood in terms of an eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
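
    To make the idea concrete, here is a minimal, self-contained sketch of a preconditioned conjugate gradient loop whose preconditioner is a single two-grid cycle (damped-Jacobi smoothing around an exact Galerkin coarse solve) applied to a 1D Poisson matrix; a full multigrid V-cycle would recurse on the coarse solve. All names, sizes and parameters are illustrative, not taken from the paper.

        import numpy as np

        def poisson1d(n):
            # Tridiagonal 1D model problem (Dirichlet); 1/h^2 scaling omitted
            return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def prolongation(nc):
            # Linear interpolation from nc coarse points to 2*nc + 1 fine points
            P = np.zeros((2 * nc + 1, nc))
            for j in range(nc):
                P[2 * j, j] += 0.5
                P[2 * j + 1, j] += 1.0
                P[2 * j + 2, j] += 0.5
            return P

        def two_grid(A, r, nu=2, omega=2.0 / 3.0):
            # One symmetric two-grid cycle: damped-Jacobi smoothing around an
            # exact Galerkin coarse correction (a full V-cycle would recurse)
            n = len(r)
            P = prolongation((n - 1) // 2)
            R = 0.5 * P.T                          # full-weighting restriction
            Ac = R @ A @ P                         # Galerkin coarse operator
            z, D = np.zeros(n), np.diag(A)
            for _ in range(nu):                    # pre-smoothing
                z += omega * (r - A @ z) / D
            z += P @ np.linalg.solve(Ac, R @ (r - A @ z))
            for _ in range(nu):                    # post-smoothing
                z += omega * (r - A @ z) / D
            return z

        def mgcg(A, b, tol=1e-10, maxit=100):
            # Conjugate gradients preconditioned by the two-grid cycle
            x = np.zeros_like(b)
            r = b - A @ x
            z = two_grid(A, r)
            p, rz = z.copy(), r @ z
            for k in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = two_grid(A, r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, maxit

        n = 2 ** 7 - 1                             # odd size so coarsening is exact
        A = poisson1d(n)
        x, iters = mgcg(A, np.ones(n))
        print(iters, np.linalg.norm(A @ x - np.ones(n)))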

  15. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter cover the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well.

  16. [Comparative study on four kinds of assessment methods of post-marketing safety of Danhong injection].

    PubMed

    Li, Xuelin; Tang, Jinfa; Meng, Fei; Li, Chunxiao; Xie, Yanming

    2011-10-01

    To study the adverse reactions of Danhong injection with four methods (central monitoring, chart review, literature study, and spontaneous reporting), to compare the differences between them, and to identify an appropriate method for the post-marketing safety evaluation of traditional Chinese medicine injections. Adverse-reaction questionnaires were designed for the central monitoring, chart review, and literature study methods, and adverse-reaction information was collected over a defined period; for the spontaneous reporting method, adverse-reaction information on Danhong injection was collected from the Henan Province spontaneous reporting system. The data were then summarized and analyzed descriptively. With the central monitoring, chart review, literature study, and spontaneous reporting methods, the rates of adverse events were 0.993%, 0.336%, 0.515%, and 0.067%, respectively. Cyanosis, arrhythmia, hypotension, sweating, erythema, hemorrhagic dermatitis, rash, irritability, bleeding gums, toothache, tinnitus, asthma, elevated aminotransferases, constipation, and pain were newly discovered adverse reactions. The central monitoring method is the appropriate method for post-marketing safety evaluation of traditional Chinese medicine injections, as it can objectively reflect real-world clinical usage.

  17. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
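
    As a minimal illustration of the ensemble idea, the sketch below aggregates hypothetical per-method target scores by averaging ranks, a simple Borda-style combination; the paper's own ensembles integrate specific methods such as Pearson, IDA and Lasso, so the data and the combination rule here are assumptions for demonstration only.

        import pandas as pd

        # Invented per-method scores for candidate targets of one miRNA
        # (higher = stronger predicted regulation); the column names only
        # echo the kinds of methods combined in the paper.
        scores = pd.DataFrame({
            "pearson": [0.9, 0.4, 0.7, 0.2],
            "ida":     [0.5, 0.8, 0.6, 0.1],
            "lasso":   [0.7, 0.3, 0.9, 0.0],
        }, index=["geneA", "geneB", "geneC", "geneD"])

        # Rank targets within each method (1 = best) and average the ranks,
        # a simple Borda-style aggregation of the individual rankings.
        ensemble = scores.rank(ascending=False, axis=0).mean(axis=1).sort_values()
        print(ensemble)  # smallest average rank = top ensemble prediction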

  18. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  19. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  20. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  1. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  2. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  3. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.

  4. Development and validation of spectrophotometric methods for estimating amisulpride in pharmaceutical preparations.

    PubMed

    Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti

    2010-01-01

    Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, λmax, are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration ranges of 2.5-12.5 μg/mL in Method A, 5-25 and 10-50 μg/mL in Method B, 4-20 μg/mL in Method C, 2.5-12.5 μg/mL in Method D and 5-15 μg/mL in Method E. The results obtained by the proposed methods are in good agreement with labeled amounts when marketed pharmaceutical preparations were analyzed.
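
    For context, quantification with such methods ultimately rests on a linear Beer's-law calibration; the sketch below fits a hypothetical calibration line for Method A and inverts it for an unknown sample (all numbers invented, not from the paper).

        import numpy as np

        # Invented calibration data for Method A (lambda_max = 523 nm):
        # concentrations (ug/mL) within the stated Beer's-law range.
        conc = np.array([2.5, 5.0, 7.5, 10.0, 12.5])
        absorbance = np.array([0.12, 0.24, 0.37, 0.48, 0.61])

        slope, intercept = np.polyfit(conc, absorbance, 1)   # A = m*c + b
        c_unknown = (0.42 - intercept) / slope               # invert for a sample
        print(f"estimated concentration: {c_unknown:.2f} ug/mL")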

  5. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to its use of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
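
    A rough sketch of the central trick, under the assumption that the cosinoidal level set replaces the sharp Heaviside with a smooth cosine-based step (the paper's exact functional form may differ):

        import numpy as np

        def heaviside_cosine(phi, eps=1.0):
            # Smooth cosine-based step (assumed form; the paper's exact
            # definition of the cosinoidal function may differ)
            h = 0.5 * (1.0 - np.cos(np.pi * (phi + eps) / (2.0 * eps)))
            return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

        # The shape indicator H(phi(x)) is smooth in phi, so a Levenberg-
        # Marquardt solver can update phi directly, with no hand-tuned step.
        print(heaviside_cosine(np.linspace(-2.0, 2.0, 9)))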

  6. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or the β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods.

  7. Method Engineering: A Service-Oriented Approach

    NASA Astrophysics Data System (ADS)

    Cauvet, Corine

    In the past, a large variety of methods have been published ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta modeling approach provides means for building methods by instantiation, the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of service dynamic composition support method construction and method adaptation to different development contexts.

  8. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods.

    PubMed

    Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A

    2015-02-25

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (¹DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (³D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.

  9. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Ramadan, Nesrin K.; El-Ragehy, Nariman A.; Ragab, Mona T.; El-Zeany, Badr A.

    2015-02-01

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (1DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (3D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.

  10. Evaluating the efficiency of spectral resolution of univariate methods manipulating ratio spectra and comparing to multivariate methods: An application to ternary mixture in common cold preparation

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia

    2015-02-01

    Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method; method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding both accuracy and precision.

  11. Methods for elimination of dampness in Building walls

    NASA Astrophysics Data System (ADS)

    Campian, Cristina; Pop, Maria

    2016-06-01

    Eliminating dampness in building walls is a sensitive and costly problem. Many methods are used, such as chemical, electro-osmotic and physical methods. The RECON method is a representative and sustainable method in Romania. The most radical method of all, used in Italy, consists of cutting the brick walls, inserting a special plastic sheet and injecting a pre-mixed anti-shrinkage mortar.

  12. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
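
    For readers who want to reproduce this kind of comparison today, the sketch below fits one invented nonlinear regression with SciPy's Levenberg-Marquardt routine (the Marquardt method), BFGS (a quasi-Newton method) and nonlinear conjugate gradients; note that SciPy's "CG" is a Polak-Ribiere-type variant rather than the original Fletcher-Reeves scheme, and all data and starting values are assumptions.

        import numpy as np
        from scipy.optimize import least_squares, minimize

        # Invented nonlinear regression problem: y = a*exp(-b*x) + noise
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 4.0, 40)
        y = 2.5 * np.exp(-1.3 * x) + 0.02 * rng.standard_normal(x.size)

        residuals = lambda p: p[0] * np.exp(-p[1] * x) - y
        sse = lambda p: 0.5 * np.sum(residuals(p) ** 2)
        p0 = np.array([1.0, 1.0])

        fits = {
            "Marquardt (LM)": least_squares(residuals, p0, method="lm").x,
            "quasi-Newton (BFGS)": minimize(sse, p0, method="BFGS").x,
            "nonlinear CG": minimize(sse, p0, method="CG").x,
        }
        for name, p in fits.items():
            print(name, np.round(p, 3))  # all should recover ~ [2.5, 1.3]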

  13. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation in this new hybrid algorithm. Numerical results show that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
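
    A minimal sketch of the DFP ingredient, assuming the standard inverse-Hessian form of the update; the hybrid search direction of the paper itself is not reproduced here, and the test problem is invented.

        import numpy as np

        def dfp_update(H, s, y):
            # DFP update of the inverse-Hessian approximation H, with
            # s = x_{k+1} - x_k and y = grad_{k+1} - grad_k
            Hy = H @ y
            return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

        # Quasi-Newton loop on a convex quadratic f(x) = 0.5*x'Ax (illustration
        # only; the paper's hybrid also blends in a CG-type direction)
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        grad = lambda x: A @ x
        x, H = np.array([1.0, 1.0]), np.eye(2)
        for _ in range(20):
            g = grad(x)
            if np.linalg.norm(g) < 1e-12:
                break
            d = -H @ g                              # quasi-Newton direction
            alpha = -(g @ d) / (d @ A @ d)          # exact line search (quadratic)
            s = alpha * d
            x, H = x + s, dfp_update(H, s, grad(x + s) - g)
        print(x)  # converges to the minimizer [0, 0]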

  14. Generalization of the Engineering Method to the UNIVERSAL METHOD.

    ERIC Educational Resources Information Center

    Koen, Billy Vaughn

    1987-01-01

    Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)

  15. Colloidal Electrolytes and the Critical Micelle Concentration

    ERIC Educational Resources Information Center

    Knowlton, L. G.

    1970-01-01

    Describes methods for determining the Critical Micelle Concentration of Colloidal Electrolytes; methods described are: (1) methods based on Colligative Properties, (2) methods based on the Electrical Conductivity of Colloidal Electrolytic Solutions, (3) Dye Method, (4) Dye Solubilization Method, and (5) Surface Tension Method. (BR)

  16. Theoretical analysis of three methods for calculating thermal insulation of clothing from thermal manikin.

    PubMed

    Huang, Jianhua

    2012-07-01

    There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
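
    The three calculations can be stated compactly; the sketch below uses the definitions commonly given in the manikin literature (global: area-weighted temperature over area-weighted heat loss; serial: area-weighted mean of local insulations; parallel: reciprocal of the area-weighted mean of local conductances) with invented segment data.

        import numpy as np

        # Invented manikin data: area fractions f, local surface temperatures
        # T (degC) and local area-specific heat losses h (W/m2); air at Ta (degC).
        f = np.array([0.3, 0.5, 0.2])
        T = np.array([33.0, 34.0, 32.0])
        h = np.array([40.0, 55.0, 70.0])
        Ta = 20.0

        I_global = (f @ T - Ta) / (f @ h)        # weighted T over weighted h
        I_serial = f @ ((T - Ta) / h)            # mean of local insulations
        I_parallel = 1.0 / (f @ (h / (T - Ta)))  # mean of local conductances
        print(I_global, I_serial, I_parallel)    # m2*K/W; divide by 0.155 for clo

    With these definitions the abstract's statements follow directly: if T is uniform, I_parallel equals I_global; if h is uniform, I_serial equals I_global; and I_serial is never below I_parallel (a weighted arithmetic mean always dominates the corresponding harmonic mean).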

  17. Comparison of five methods for the estimation of methane production from vented in vitro systems.

    PubMed

    Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J

    2018-05-23

    There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure methane proportion in a single gas sample from HS. Method B assumes the same methane proportion in the vented gas as in HS, method C assumes constant methane to carbon dioxide ratio, method D has been developed based on empirical data and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions of these methods to that of the gold standard method under different incubation scenarios, to validate these methods based on their concordance with a gold standard method. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E are preferred to estimate MP from vented in vitro systems.

  18. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration-set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used to identify blood species with spectroscopic methods, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLSDA and LSSVM methods were used for human blood discrimination. Compared with the results of the PLSDA method, the LSSVM method enhanced the performance of the identification models. The overall results show that the LSSVM method is more feasible for discriminating human and animal blood species, and sufficiently demonstrate that it is a reliable and robust method for human blood identification that can be more effective and accurate.
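
    For reference, the Kennard-Stone selection itself is a simple max-min distance algorithm; a compact sketch with an invented random matrix standing in for the spectra:

        import numpy as np

        def kennard_stone(X, k):
            # Start from the two most mutually distant samples, then repeatedly
            # add the sample whose distance to the chosen set is largest.
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            i, j = np.unravel_index(np.argmax(d), d.shape)
            chosen = [int(i), int(j)]
            while len(chosen) < k:
                rest = [p for p in range(len(X)) if p not in chosen]
                dmin = d[np.ix_(rest, chosen)].min(axis=1)
                chosen.append(rest[int(np.argmax(dmin))])
            return chosen

        rng = np.random.default_rng(1)
        spectra = rng.random((50, 8))        # invented stand-in for spectra
        print(kennard_stone(spectra, 10))    # indices of the calibration set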

  19. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method that merges existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways with a rank-based algorithm and called this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method that combines four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.

  20. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  1. Study of comparison between Ultra-high Frequency (UHF) method and ultrasonic method on PD detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia

    2017-11-01

    GIS (gas insulated switchgear) is an important piece of equipment in power systems. Partial discharge plays an important role in assessing the insulation performance of GIS, and the UHF method and the ultrasonic method are frequently used for partial discharge (PD) detection in GIS. However, few studies have compared the two methods. From the viewpoint of safety, it is necessary to investigate the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the performance of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference capability of signal detection and the fault-localization accuracy of the UHF method and the ultrasonic method. A new method based on the UHF and ultrasonic methods is then proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of localization.

  2. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction with saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but its precision, repeatability and reproducibility were not acceptable. Therefore, the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. The classic method, however, would be the method of choice for determining the different lipid classes.

  3. Birth Control Methods

    MedlinePlus

    Birth control (contraception) is any method, medicine, or ...

  4. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  5. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  6. 46 CFR 160.076-11 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... following methods: (1) Method 5100, Strength and Elongation, Breaking of Woven Cloth; Grab Method, 160.076-25; (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, 160.076-25; (3) Method 5134, Strength of Cloth, Tearing; Tongue Method, 160.076-25. Underwriters Laboratories (UL) Underwriters...

  7. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833

  8. Interior-Point Methods for Linear Programming: A Review

    ERIC Educational Resources Information Center

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  9. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  10. [Baseflow separation methods in hydrological process research: a review].

    PubMed

    Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui

    2011-11-01

    Baseflow separation is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it lacks unified standards for its concepts and methods. This paper introduces the theories of baseflow separation based on the definitions of the baseflow components, and analyzes the development of different baseflow separation methods. Among the methods developed, the graphical separation method is simple and applicable but arbitrary; the balance method accords with hydrological mechanisms but is difficult to apply; whereas the time-series separation method and the isotopic method can overcome the subjective and arbitrary defects of graphical separation and thus obtain the baseflow hydrograph quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.
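
    As one concrete example of the digital-filtering family, here is a sketch of a Lyne-Hollick-type one-parameter filter; this is one widely used variant, and the filter coefficient, number of passes and the streamflow series below are assumptions for illustration.

        import numpy as np

        def lyne_hollick(q, alpha=0.925):
            # One forward pass of a Lyne-Hollick-type digital filter
            qf = np.zeros_like(q)                     # quickflow component
            for t in range(1, len(q)):
                qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
                qf[t] = min(max(qf[t], 0.0), q[t])    # keep 0 <= baseflow <= q
            return q - qf                             # baseflow = q - quickflow

        q = np.array([5, 5, 20, 45, 30, 18, 12, 9, 7, 6, 5.5, 5.2])
        print(np.round(lyne_hollick(q), 2))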

  11. Semi top-down method combined with earth-bank, an effective method for basement construction.

    NASA Astrophysics Data System (ADS)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in technical success but also in the economics of a construction project. At present, two key methods are mainly used: the "bottom-up" and the "top-down" construction methods. This paper presents another construction method, "semi top-down combined with earth-bank", which takes the advantages and limits the weaknesses of the above methods. The bottom-up method is improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts, while the top-down method is improved by using the open-cut method for half of the earthwork quantities.

  12. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.
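
    For orientation, the core least-squares subproblem in all three methods is recovering a rotation and translation from paired marker coordinates; the sketch below solves it via SVD, which addresses the same equiform-transformation problem as the Veldpaus et al. algorithm, though not with their exact algorithm, and the test data are invented.

        import numpy as np

        def fit_rigid(P, Q):
            # Least-squares R, t with Q ~ R @ P + t, from paired marker
            # coordinates (rows = markers, cols = x, y, z), solved via SVD
            Pc, Qc = P - P.mean(0), Q - Q.mean(0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
            R = (U @ D @ Vt).T                       # proper rotation only
            t = Q.mean(0) - R @ P.mean(0)
            return R, t

        # Synthetic check with small additive noise (simulated marker error)
        rng = np.random.default_rng(2)
        P = rng.random((6, 3))
        R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        R_true *= np.sign(np.linalg.det(R_true))     # force a proper rotation
        t_true = np.array([0.1, -0.2, 0.3])
        Q = P @ R_true.T + t_true + 0.001 * rng.standard_normal((6, 3))
        R, t = fit_rigid(P, Q)
        print(np.allclose(R, R_true, atol=0.01), np.round(t - t_true, 3))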

  13. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed namely: induced dual wavelength method (IDW), dual wavelength resolution technique (DWRT), advanced amplitude modulation method (AAM) and induced amplitude modulation method (IAM). The results of the novel methods were compared to that of three well-established methods which were: dual wavelength method (DW), Vierordt's method (VD) and bivariate method (BV). The developed methods were applied for the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as topical cream accompanied by the determination of methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be alternative to HPLC techniques in quality control laboratories.

  14. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method used to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is an approximate method, much like the finite element method. This paper proposes a new approach: determining the slope safety factor with an analytical solution and searching for the critical slip surface with a Genetic-Traversal Random Method, which uses random picks to implement mutation. The analytical solution is more accurate than Fellenius' slices method. A computer program for automatic search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that given by the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
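
    For reference, the safety-factor formula of Fellenius' (ordinary) method of slices is simple to state; a sketch with invented slice data and no pore pressure:

        import numpy as np

        # Invented slice data for a circular slip surface: weight W (kN/m),
        # base inclination alpha, base length l (m); soil cohesion c (kPa)
        # and friction angle phi.
        W = np.array([120.0, 260.0, 310.0, 280.0, 150.0])
        alpha = np.radians([-8.0, 5.0, 18.0, 31.0, 45.0])
        l = np.array([2.1, 2.0, 2.1, 2.3, 2.8])
        c, phi = 15.0, np.radians(25.0)

        # FS = sum(c*l + W*cos(alpha)*tan(phi)) / sum(W*sin(alpha))
        resisting = np.sum(c * l + W * np.cos(alpha) * np.tan(phi))
        driving = np.sum(W * np.sin(alpha))
        print(f"factor of safety = {resisting / driving:.2f}")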

  15. Enumeration of total aerobic microorganisms in foods by SimPlate Total Plate Count-Color Indicator methods and conventional culture methods: collaborative study.

    PubMed

    Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E

    2003-01-01

    The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.

  16. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is assessed in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet the PJ method reduces the computational time to some extent for large grid sizes.
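
    A minimal sketch of the sequential Jacobi iteration for the 3D Poisson problem is shown below; a Gauss-Seidel sweep would instead update nodes in place, using already-updated neighbors, which converges faster but serializes the sweep. The grid size, tolerance and right-hand side are invented.

        import numpy as np

        def solve_poisson_jacobi(f, h, tol=1e-6, maxit=20000):
            # Jacobi iteration for -lap(u) = f with homogeneous Dirichlet
            # boundaries on a uniform grid of spacing h. Every interior node
            # is updated from the previous iterate only, which is what makes
            # the method embarrassingly data-parallel.
            u = np.zeros_like(f)
            for k in range(maxit):
                u_new = u.copy()
                u_new[1:-1, 1:-1, 1:-1] = (
                    u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                    u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                    u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] +
                    h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
                if np.max(np.abs(u_new - u)) < tol:
                    return u_new, k + 1
                u = u_new
            return u, maxit

        n, h = 17, 1.0 / 16
        u, iters = solve_poisson_jacobi(np.ones((n, n, n)), h)
        print(iters, u[n // 2, n // 2, n // 2])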

  17. Completed Suicide with Violent and Non-Violent Methods in Rural Shandong, China: A Psychological Autopsy Study

    PubMed Central

    Sun, Shi-Hua; Jia, Cun-Xian

    2014-01-01

    Background This study aims to describe the specific characteristics of completed suicides by violent methods and non-violent methods in rural Chinese population, and to explore the related factors for corresponding methods. Methods Data of this study came from investigation of 199 completed suicide cases and their paired controls of rural areas in three different counties in Shandong, China, by interviewing one informant of each subject using the method of Psychological Autopsy (PA). Results There were 78 (39.2%) suicides with violent methods and 121 (60.8%) suicides with non-violent methods. Ingesting pesticides, as a non-violent method, appeared to be the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and higher suicide intent score were significantly associated with choice of violent methods while committing suicide. Risk factors related to suicide death included negative life events and hopelessness. Conclusions Suicide with violent methods has different factors from suicide with non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835

  18. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature on propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the acoustic analogy; (4) more recent methods based on computational acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that a large number of noise prediction procedures are available, varying markedly in complexity. Deficiencies in the accuracy of these methods may in many cases be related not to the methods themselves but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods against the data base; (5) identify and correct weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of the improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.

  19. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This paper concerns computational methods based on numerical analysis for estimating the parameters of nonlinear regression models: the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent method, the method of scoring, and the method of quadratic hill-climbing). Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] compared gradient methods for the solution of nonlinear parameter estimation problems; this article instead develops an analytical approach to the gradient algorithm methods. The paper also describes a new iterative Gauss-Newton technique that differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
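
    The Gauss-Newton iteration discussed in the abstract can be stated compactly: linearize the residual and repeatedly solve the normal equations. The sketch below is a generic textbook version under an assumed exponential model and synthetic data, not the paper's specific variant.

    ```python
    import numpy as np

    def gauss_newton(f, jac, y, beta0, tol=1e-8, max_iter=50):
        """Gauss-Newton for nonlinear least squares:
        beta <- beta + (J^T J)^{-1} J^T r, with residual r = y - f(beta)."""
        beta = np.asarray(beta0, dtype=float)
        for _ in range(max_iter):
            r = y - f(beta)
            J = jac(beta)
            step = np.linalg.solve(J.T @ J, J.T @ r)
            beta += step
            if np.linalg.norm(step) < tol:
                break
        return beta

    # Hypothetical model y = b0 * exp(b1 * x) with synthetic noisy data
    x = np.linspace(0.0, 1.0, 20)
    rng = np.random.default_rng(0)
    y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

    f = lambda b: b[0] * np.exp(b[1] * x)
    jac = lambda b: np.column_stack([np.exp(b[1] * x),
                                     b[0] * x * np.exp(b[1] * x)])
    print(gauss_newton(f, jac, y, beta0=[1.0, -1.0]))
    ```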

  20. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; these methods can be roughly divided into three categories: single-model methods, clustering-based methods, and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method further enhances performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
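
    The abstract does not give MQAPRank's implementation details; a common way to realize learning-to-rank for decoy sorting is the pairwise reduction sketched below, in which each pair of decoys becomes one classification sample. The feature values, quality scores, and the linear SVM are stand-ins, not the authors' code.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    def pairwise_transform(X, y):
        """Pairwise reduction used by ranking SVMs: each decoy pair (i, j)
        yields the sample x_i - x_j, labeled by which decoy is better."""
        Xp, yp = [], []
        for i, j in combinations(range(len(y)), 2):
            if y[i] != y[j]:
                Xp.append(X[i] - X[j])
                yp.append(1 if y[i] > y[j] else -1)
        return np.array(Xp), np.array(yp)

    # Hypothetical decoy features (e.g., scores from other QA methods) and
    # true quality labels (e.g., GDT-TS against the native structure)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 5))
    y = X @ np.array([0.8, 0.1, 0.3, -0.2, 0.05])

    Xp, yp = pairwise_transform(X, y)
    ranker = LinearSVC(C=1.0).fit(Xp, yp)
    scores = X @ ranker.coef_.ravel()     # higher score = better predicted rank
    print(np.argsort(-scores)[:5])        # top-5 decoys
    ```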

  1. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; these methods can be roughly divided into three categories: single-model methods, clustering-based methods, and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method further enhances performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.

  2. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  3. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed, along with an alternative method that is not well known in the literature despite its great potential. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam, and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. In the low frequency range, however, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  4. Methods for environmental change; an exploratory study.

    PubMed

    Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris

    2012-11-28

    While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. This survey focuses on methods for environmental change, especially on how these are composed of methods for individual change ('Bundling') and how, within one environmental level, organizations, methods differ when directed at the management ('At') or applied by the management ('From'). The first part of this online survey dealt with the 'bundling' of individual level methods into methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. The second part of the survey asked whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled, and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.

  5. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel (IMGS) method is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than those of the SOR and Gauss-Seidel methods if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are also improved.
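
    The spectral radii being compared can be checked numerically for any small test matrix. The sketch below builds the SOR iteration matrix from the standard splitting A = D - L - U and evaluates its spectral radius (ω = 1 recovers Gauss-Seidel); the tridiagonal test matrix is hypothetical, and the IMGS preconditioning itself is not reproduced here.

    ```python
    import numpy as np

    def sor_iteration_matrix(A, omega):
        """SOR iteration matrix (D - omega*L)^{-1}((1 - omega)*D + omega*U)
        for the splitting A = D - L - U (strictly lower/upper parts)."""
        D = np.diag(np.diag(A))
        L = -np.tril(A, -1)
        U = -np.triu(A, 1)
        return np.linalg.inv(D - omega * L) @ ((1 - omega) * D + omega * U)

    def spectral_radius(M):
        """Largest eigenvalue magnitude; < 1 means the iteration converges."""
        return max(abs(np.linalg.eigvals(M)))

    n = 10
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
    for omega in (0.5, 1.0, 1.5):
        print(omega, spectral_radius(sor_iteration_matrix(A, omega)))
    ```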

  6. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. This paper summarizes the parametric methods in common use and briefly introduces their principles. Two-dimensional parametric methods include the B-spline method, the class/shape function transformation method, the parametric section method, the Hicks-Henne method, and the singular value decomposition method, all of which are widely applied in airfoil design. A comparison of their abilities in airfoil design shows that the singular value decomposition method has the best parametric accuracy. Three-dimensional parametric methods are less developed; the most popular is the free-form deformation method. Methods extended from two-dimensional parametric methods hold promise for aircraft modeling. Since parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
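
    Of the surveyed parameterizations, the class/shape function transformation (CST) is compact enough to sketch. Below is a standard CST surface with Bernstein-polynomial shape weights; the weight values are hypothetical, not taken from the paper.

    ```python
    import numpy as np
    from math import comb

    def cst_airfoil(x, weights, n1=0.5, n2=1.0):
        """Class/Shape function transformation: y(x) = C(x) * S(x), with
        class function C(x) = x^n1 (1 - x)^n2 (round nose, sharp tail for
        n1=0.5, n2=1.0) and a Bernstein-polynomial shape function S(x)."""
        n = len(weights) - 1
        C = x**n1 * (1.0 - x)**n2
        S = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                for i, w in enumerate(weights))
        return C * S

    x = np.linspace(0.0, 1.0, 101)                     # chordwise stations
    upper = cst_airfoil(x, [0.17, 0.16, 0.15, 0.14])   # hypothetical weights
    ```

    The number of shape weights directly sets the design-space dimension, which is the trade-off between flexibility and variable count that the abstract highlights.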

  7. A Review and Comparison of Methods for Recreating Individual Patient Data from Published Kaplan-Meier Survival Curves for Economic Evaluations: A Simulation Study

    PubMed Central

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their marked overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
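
    The least squares method mentioned among the traditional approaches amounts to fitting a parametric survival function to points digitized from the published curve. A minimal sketch, with hypothetical digitized points and a Weibull model:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    def weibull_survival(t, scale, shape):
        """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
        return np.exp(-(t / scale) ** shape)

    # Hypothetical points digitized from a published Kaplan-Meier curve
    t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])        # years
    s = np.array([0.95, 0.88, 0.72, 0.58, 0.47, 0.38])  # survival fractions

    (scale, shape), _ = curve_fit(weibull_survival, t, s, p0=[3.0, 1.0])
    mean_survival = scale * gamma(1.0 + 1.0 / shape)    # Weibull mean
    print(scale, shape, mean_survival)
    ```

    The Hoyle and Henley and Guyot et al. methods go further by reconstructing interval-level numbers at risk, which is what reduces the bias reported above.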

  8. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  9. AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION

    EPA Science Inventory

    Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...

  10. Capital investment analysis: three methods.

    PubMed

    Gapenski, L C

    1993-08-01

    Three cash flow/discount rate methods can be used when conducting capital budgeting financial analyses: the net operating cash flow method, the net cash flow to investors method, and the net cash flow to equity holders method. The three methods differ in how the financing mix and the benefits of debt financing are incorporated. This article explains the three methods, demonstrates that they are essentially equivalent, and recommends which method to use under specific circumstances.
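
    The equivalence of the three methods hinges on matching each cash flow definition with its discount rate. The sketch below contrasts two of them on a hypothetical project; all amounts, rates, and the debt schedule are invented for illustration.

    ```python
    def npv(rate, cash_flows):
        """Net present value; cash_flows[0] occurs at time 0."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Hypothetical project: $1000 outlay, $300/year operating cash flow
    op_flows = [-1000.0] + [300.0] * 5

    # Net operating cash flow method: discount all-capital flows at the
    # overall (weighted average) cost of capital
    print(npv(0.10, op_flows))

    # Net cash flow to equity holders: equity outlay only, flows net of a
    # hypothetical $150/year debt service, discounted at the cost of equity
    equity_flows = [-400.0] + [300.0 - 150.0] * 5
    print(npv(0.14, equity_flows))
    ```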

  11. Effective description of a 3D object for photon transportation in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Suganuma, R.; Ogawa, K.

    2000-06-01

    Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared its computation time with that of two conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method was about 50% of that with the VB method and about 70% of that with the octree method for a high-resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.

  12. Region of influence regression for estimating the 50-year flood at ungaged sites

    USGS Publications Warehouse

    Tasker, Gary D.; Hodge, S.A.; Barks, C.S.

    1996-01-01

    Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
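
    A region of influence model can be sketched as a local regression in watershed-characteristic space: for each ungaged site, select the most similar gaged sites and fit a regression on that subset only. This is a simplified ordinary-least-squares stand-in for the study's procedure; the neighbor count and the synthetic data are hypothetical.

    ```python
    import numpy as np

    def region_of_influence_estimate(x_new, X_gaged, y_gaged, n_neighbors=20):
        """Fit a local regression on the gaged sites nearest to x_new in
        standardized characteristic space; return the prediction at x_new."""
        mu, sd = X_gaged.mean(axis=0), X_gaged.std(axis=0)
        dist = np.linalg.norm((X_gaged - mu) / sd - (x_new - mu) / sd, axis=1)
        idx = np.argsort(dist)[:n_neighbors]
        A = np.column_stack([np.ones(len(idx)), X_gaged[idx]])
        coef, *_ = np.linalg.lstsq(A, y_gaged[idx], rcond=None)
        return coef @ np.concatenate([[1.0], x_new])

    # Hypothetical log-transformed watershed characteristics and 50-year floods
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 2))
    y = 3.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.standard_normal(100)
    print(region_of_influence_estimate(np.array([0.5, -0.2]), X, y))
    ```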

  13. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  14. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods, and main transaction (control) methods. Each method is attached to the proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  15. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods, based on two different trial functions but using the same simple linear test function, were developed for beam and column problems. The methods use generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. Both methods were tested on various patch test problems and passed them successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. Both methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is particularly attractive, as it is simple, accurate, and robust.

  16. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
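
    The leapfrog idea can be made concrete for Richardson's method: composing two steps x_{k+1} = x_k + a_k (b - A x_k) gives an update that moves directly from x_k to x_{k+2}, so only even-numbered iterates are formed. A minimal sketch with a hypothetical symmetric positive definite system and constant parameters (the grand-leap parameter selection via quadrature-style algorithms is not reproduced):

    ```python
    import numpy as np

    def richardson_leapfrog(A, b, x, alphas):
        """Compute only even-numbered Richardson iterates: two steps with
        parameters a1, a2 compose into
        x_{k+2} = x_k + (a1 + a2) r_k - a1*a2*(A r_k), r_k = b - A x_k."""
        for a1, a2 in zip(alphas[::2], alphas[1::2]):
            r = b - A @ x
            x = x + (a1 + a2) * r - (a1 * a2) * (A @ r)
        return x

    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
    b = np.ones(n)
    x = richardson_leapfrog(A, b, np.zeros(n), alphas=[0.4] * 200)
    print(np.linalg.norm(b - A @ x))     # residual after 100 leapfrog steps
    ```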

  17. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

    This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. The CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and the auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracies of the CT-method and the Legat method are at the same level. DG results of all three methods have systematic height errors due to inconsistent length projection distortion in the vertical and horizontal components; these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also proved with simulated data. POSPac method: the method presented in the Applanix POSPac software technical note (Hutton and Savina, 1997); it is implemented in the POSEO module of the POSPac software.
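
    The core of any such method is composing and factoring rotation matrices. The sketch below builds a rotation from (roll, pitch, heading) and extracts (omega, phi, kappa) under the common photogrammetric factoring R = R_x(omega) R_y(phi) R_z(kappa). It is a generic illustration only: the CT-method additionally inserts the datum and map-projection rotations between the two frames, and axis/sign conventions vary between systems.

    ```python
    import numpy as np

    def rotation_from_rph(roll, pitch, heading):
        """Body-to-local-level rotation from roll, pitch, heading (radians),
        using a z-y-x rotation sequence (one common GPS/INS convention)."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        ch, sh = np.cos(heading), np.sin(heading)
        Rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
        return Rz @ Ry @ Rx

    def opk_from_matrix(R):
        """Extract (omega, phi, kappa) assuming R = R_x(o) R_y(p) R_z(k)."""
        phi = np.arcsin(R[0, 2])
        omega = np.arctan2(-R[1, 2], R[2, 2])
        kappa = np.arctan2(-R[0, 1], R[0, 0])
        return omega, phi, kappa

    R = rotation_from_rph(0.01, -0.02, 1.2)
    # The extracted angles differ from (roll, pitch, heading) because the
    # same matrix is refactored in the photogrammetric rotation order.
    print(opk_from_matrix(R))
    ```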

  18. Comparison of four USEPA digestion methods for trace metal analysis using certified and Florida soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M.; Ma, L.Q.

    1998-11-01

    It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. Based on method correlation coefficients and elemental recoveries in 40 Florida surface soils, however, USEPA Method 3051a was overall a better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg in NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn) but lower recoveries for Mg in Florida soils than did the total-recoverable digestion methods.

  19. [Comparative analysis between diatom nitric acid digestion method and plankton 16S rDNA PCR method].

    PubMed

    Han, Jun-ge; Wang, Cheng-bao; Li, Xing-biao; Fan, Yan-yan; Feng, Xiang-ping

    2013-10-01

    To compare and explore the application value of the diatom nitric acid digestion method and the plankton 16S rDNA PCR method for drowning identification. Forty drowning cases from 2010 to 2011 were collected from the Department of Forensic Medicine of Wenzhou Medical University. Samples of lung, kidney, liver, and field water from each case were tested with the diatom nitric acid digestion method and the plankton 16S rDNA PCR method. The diatom nitric acid digestion method required 20 g of each organ and 15 mL of field water; the plankton 16S rDNA PCR method required 2 g and 1.5 mL, respectively. The inspection time and detection rate were compared between the two methods. The diatom nitric acid digestion method mainly detected two groups of diatoms, Centricae and Pennatae, while the plankton 16S rDNA PCR method amplified a 162 bp band. The average inspection time per case was (95.30 +/- 2.78) min for the diatom nitric acid digestion method, less than the (325.33 +/- 14.18) min of the plankton 16S rDNA PCR method (P < 0.05). The detection rates of the two methods for field water and lung were both 100%. For liver and kidney, the detection rate of the plankton 16S rDNA PCR method was 80% for each, higher than the 40% and 30%, respectively, of the diatom nitric acid digestion method (P < 0.05). The laboratory testing method should be selected according to the specific circumstances in the forensic identification of drowning. Compared with the diatom nitric acid digestion method, the plankton 16S rDNA PCR method has practical value, with advantages including smaller sample requirements, richer information, and high specificity.

  20. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality evaluation index for optical remote sensing images. Researchers have done a great deal of work on image clarity estimation. At present, common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. Frequency-domain function methods are accurate clarity measures, but their calculation is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image content. The edge acutance method is an effective approach to clarity estimation, but it requires picking out edges manually. Owing to these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images, and the calculation of edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
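
    For contrast with the improved edge-acutance approach, a gradient-function clarity measure of the kind criticized above for its sensitivity to image complexity takes only a few lines. A minimal sketch with a synthetic image:

    ```python
    import numpy as np

    def gradient_sharpness(img):
        """Tenengrad-style clarity score: mean squared gradient magnitude
        from central differences; larger values indicate sharper images."""
        img = img.astype(float)
        gx = 0.5 * (img[1:-1, 2:] - img[1:-1, :-2])
        gy = 0.5 * (img[2:, 1:-1] - img[:-2, 1:-1])
        return float(np.mean(gx ** 2 + gy ** 2))

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                    # synthetic "sharp" image
    blur = (img[:-1, :-1] + img[1:, :-1] + img[:-1, 1:] + img[1:, 1:]) / 4.0
    print(gradient_sharpness(img) > gradient_sharpness(blur))   # True
    ```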

  1. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  2. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  3. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  4. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  5. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  6. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  7. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  8. The Dramatic Methods of Hans van Dam.

    ERIC Educational Resources Information Center

    van de Water, Manon

    1994-01-01

    Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)

  9. Methods for environmental change; an exploratory study

    PubMed Central

    2012-01-01

    Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change (‘Bundling’) and how within one environmental level, organizations, methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712

  10. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation and an inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure, so conventional iterative methods can be used. Various methods for selecting the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, improved CPU time by up to 28% over the fully implicit method.

  11. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    PubMed

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  12. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  13. Bond additivity corrections for quantum chemistry methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New bond additivity correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method depend only on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between DFT and MP2 as an indicator of its accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.

  14. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods and the accuracy of DG methods to obtain a better numerical algorithm for computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, containing classical finite volume and standard DG methods as two special cases, and thus allow a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  16. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
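
    The traditional mono-exponential back-extrapolation criticized here is straightforward to state: fit ln C(t) linearly over an early post-mixing window, extrapolate to t = 0, and divide the dose by the extrapolated concentration. A minimal sketch with hypothetical sampling times, concentrations, and dose (the paper's optimal method instead derives the extrapolation from a physiological kinetic model):

    ```python
    import numpy as np

    def plasma_volume_backextrap(t, conc, dose_mg, window=(2.0, 5.0)):
        """Mono-exponential back-extrapolation: fit ln C = ln C0 - k*t over
        an early window, extrapolate to t = 0, and take V = dose / C0."""
        mask = (t >= window[0]) & (t <= window[1])
        slope, ln_c0 = np.polyfit(t[mask], np.log(conc[mask]), 1)
        c0 = np.exp(ln_c0)                 # back-extrapolated concentration
        return dose_mg / c0                # litres if conc is in mg/L

    # Hypothetical indocyanine green samples (minutes, mg/L), 25 mg dose
    t = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
    conc = 8.0 * np.exp(-0.2 * t)
    print(plasma_volume_backextrap(t, conc, dose_mg=25.0))   # 3.125 L
    ```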

  17. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The current state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of primary importance is considered. Integral methods have found wide use in different fields: heat-conduction problems with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, boundary-layer problems, simulation of fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, determination of the temperature and thermal properties of nanoparticles and nanofluids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of error norms: the Tsoi and Postol'nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of the weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
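
    As an illustration of the heat-balance integral idea underlying several of the surveyed methods, the classic Goodman construction for a semi-infinite solid with a constant surface temperature assumes a quadratic profile and determines the penetration depth from the integrated heat equation, giving delta(t) = sqrt(12*alpha*t). A minimal sketch comparing it with the exact erfc solution; the diffusivity and time are hypothetical.

    ```python
    import numpy as np
    from math import erfc, sqrt

    def hbi_profile(x, t, alpha, ts=1.0):
        """Goodman heat-balance integral approximation:
        T = ts*(1 - x/delta)^2 for x < delta, 0 beyond, with penetration
        depth delta = sqrt(12*alpha*t) from the integral heat balance."""
        delta = sqrt(12.0 * alpha * t)
        return ts * np.maximum(1.0 - x / delta, 0.0) ** 2

    alpha, t = 1.0e-5, 100.0               # hypothetical diffusivity, time
    x = np.linspace(0.0, 0.12, 7)
    exact = np.array([erfc(xi / (2.0 * sqrt(alpha * t))) for xi in x])
    print(hbi_profile(x, t, alpha))
    print(exact)                           # exact solution for comparison
    ```

    The surveyed methods differ mainly in the assumed profile and in how the integral constraints are imposed, which is what the error-norm comparison above quantifies.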

  18. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm² scan area. The invention also provides a method for preparing superconducting cavities comprising causing polishing media bound to a carrier to tumble within the cavities, and a method comprising causing polishing media in a slurry to tumble within the cavities.

  19. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final object boundary, suffer from leakage into tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  20. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.

  1. Trends in the Contraceptive Method Mix in Low- and Middle-Income Countries: Analysis Using a New “Average Deviation” Measure

    PubMed Central

    Ross, John; Keesbury, Jill; Hardee, Karen

    2015-01-01

    ABSTRACT The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply that to a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
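
    The AD measure itself is simple to compute: it is the mean absolute deviation of the 8 method shares (in percentage points) around their own mean. A quick check reproduces the stated range, since a single-method mix gives 2 x 12.5 x (1 - 1/8) = 21.9 and a uniform mix gives 0.

    ```python
    import numpy as np

    def method_mix_average_deviation(shares):
        """Average deviation (AD) of method shares (percentages summing to
        100 across the 8 methods) around their own mean."""
        shares = np.asarray(shares, dtype=float)
        return float(np.mean(np.abs(shares - shares.mean())))

    print(method_mix_average_deviation([100, 0, 0, 0, 0, 0, 0, 0]))  # 21.875
    print(method_mix_average_deviation([12.5] * 8))                  # 0.0
    ```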

  2. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their marked overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.

  3. Achieving cost-neutrality with long-acting reversible contraceptive methods.

    PubMed

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2015-01-01

    This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting that long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. A three-state economic model was developed to estimate the relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the average annual cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods. The two least expensive methods were the copper IUD ($304 per woman per year) and the LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
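
    The break-even logic of such a model can be sketched as below: a LARC method trades a high upfront acquisition/insertion cost for low ongoing costs, and becomes cost-saving once its cumulative cost drops below that of a SARC method paid for year after year. All dollar figures here are illustrative placeholders, not the model's published inputs, and failure and discontinuation costs are omitted for brevity.

    ```python
    # Minimal break-even sketch under the assumptions stated above.
    def breakeven_year(larc_upfront, larc_annual, sarc_annual, horizon=5):
        """Return the first year in which cumulative LARC cost <= SARC cost."""
        for year in range(1, horizon + 1):
            larc_cum = larc_upfront + larc_annual * year
            sarc_cum = sarc_annual * year
            if larc_cum <= sarc_cum:
                return year
        return None

    # e.g. a $900 upfront IUD with $20/yr follow-up vs. a $450/yr injection
    print(breakeven_year(900, 20, 450))  # -> 3 (cost-saving within 3 years)
    ```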

  4. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ, based on portable near infrared spectroscopy analytical technology, the modeling and analyzing methods for in-situ detection were researched with 66 rock core samples from the No. 2 well drilling of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. Modeling and analyzing experiments were performed to determine the optimum analysis model and method, using 4 different modeling data optimization methods: principal component-mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; 2 modeling methods: partial least squares (PLS) and back propagation artificial neural network (BPANN); and the same data pre-processing. The results show that the data format, the modeling data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for the two modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum data format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum data formats. Except when reflectance spectra are used with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. The model built with reflectance spectra, the UVE optimization method and the BPANN modeling method achieves the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
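
    A minimal sketch of the PLS half of such a workflow, using scikit-learn on synthetic stand-in data (the real inputs would be the 66 core-sample spectra and laboratory oil yields); it computes the two figures of merit quoted above, Rp and SEP. The component count and data shapes are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in: rows are spectra (reflectance/absorbance/K-M values),
    # oil_yield is the property to predict.
    spectra = rng.normal(size=(66, 256))
    oil_yield = spectra[:, :10].sum(axis=1) * 0.3 + rng.normal(0, 0.5, 66)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, oil_yield, random_state=0)

    pls = PLSRegression(n_components=8)
    pls.fit(X_tr, y_tr)
    y_hat = pls.predict(X_te).ravel()

    # Figures of merit used in the abstract: correlation coefficient (Rp)
    # and standard error of prediction (SEP).
    Rp = np.corrcoef(y_te, y_hat)[0, 1]
    SEP = np.sqrt(np.mean((y_te - y_hat) ** 2))
    print(f"Rp={Rp:.2f}, SEP={SEP:.2f}")
    ```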

  5. Relative effectiveness of the Bacteriological Analytical Manual method for the recovery of Salmonella from whole cantaloupes and cantaloupe rinses with selected preenrichment media and rapid methods.

    PubMed

    Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H

    2004-05-01

    Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).

  6. Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement

    PubMed Central

    Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.

    2014-01-01

    Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (p=0.36). Histological scores differed between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and improved histopathology scores. These data may also have implications for human pancreas procurement, since use of an intraductal infusion is not common practice. PMID:25040217

  7. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method

    PubMed Central

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing the Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
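
    The two counting rules compared above reduce to a small computation once per-area positive fractions are in hand. The sketch below uses three illustrative field counts; treating the densest of the three representative areas as the hot spot is an assumption made for illustration.

    ```python
    import numpy as np

    # Illustrative per-area Ki-67 positive fractions (%) for one tumor; the
    # study counted three representative areas per tumor.
    fields = np.array([8.0, 12.0, 26.0])

    average_li = fields.mean()             # average method
    hot_spot_li = fields.max()             # hot spot method (densest area)

    delta_ki67 = hot_spot_li - average_li  # analogous to the reported ΔKi-67
    ha_ratio = hot_spot_li / average_li    # analogous to the reported H/A ratio
    print(average_li, hot_spot_li, delta_ki67, round(ha_ratio, 2))
    ```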

  8. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    PubMed

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing the Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  9. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  10. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
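
    A minimal sketch of the Bayesian estimation idea, using a random-walk Metropolis sampler on a Weibull-type height-diameter curve with vague priors. The model form H = 1.3 + a(1 - exp(-b·D^c)), the synthetic data and the tuning constants are all illustrative assumptions; the paper's six candidate models and its data1/data2 are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Weibull-type height (m) vs. diameter (cm) curve, an illustrative choice.
    def model(D, a, b, c):
        return 1.3 + a * (1.0 - np.exp(-b * D ** c))

    # Synthetic data standing in for the paper's data1.
    D = rng.uniform(5, 40, 200)
    H = model(D, 25.0, 0.03, 1.2) + rng.normal(0, 1.0, D.size)

    def log_post(theta):
        a, b, c, log_sigma = theta
        if a <= 0 or b <= 0 or c <= 0:
            return -np.inf          # flat priors restricted to positive values
        sigma = np.exp(log_sigma)
        resid = H - model(D, a, b, c)
        # Gaussian log-likelihood; informative priors would add terms here.
        return -0.5 * np.sum((resid / sigma) ** 2) - D.size * log_sigma

    theta = np.array([20.0, 0.05, 1.0, 0.0])
    lp = log_post(theta)
    samples = []
    for i in range(20000):
        prop = theta + rng.normal(0, [0.5, 0.002, 0.05, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        if i >= 5000:                              # discard burn-in
            samples.append(theta.copy())

    samples = np.array(samples)
    print("posterior means (a, b, c):", samples[:, :3].mean(axis=0))
    ```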

  11. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

    The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. Initial control of apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle or apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide results similar to the manual method in the initial stage of treatment.

  12. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
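
    To make the bootstrap idea concrete, the sketch below computes a percentile confidence interval for a mean by resampling with replacement; the sample, statistic and 10,000-replicate count are illustrative choices, not taken from the review.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative sample; in the paper's setting this could be a summary
    # statistic from replicated planar point patterns.
    sample = rng.exponential(scale=2.0, size=50)

    # Bootstrap: resample with replacement, recompute the statistic each time.
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean={sample.mean():.2f}, 95% bootstrap CI=({lo:.2f}, {hi:.2f})")
    ```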

  13. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.

    PubMed

    Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-03-01

    The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.

  14. A simple high performance liquid chromatography method for analyzing paraquat in soil solution samples.

    PubMed

    Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter

    2004-01-01

    A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method; the LSC method was, therefore, 10 times more sensitive than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the higher detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means of determining paraquat concentration in soil solution compared with the 14C-LSC method.

  15. Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.

    PubMed

    Bauler, Patricia; Huber, Gary A; McCammon, J Andrew

    2012-04-28

    Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.

  16. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i); thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it is inherently weaker than the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use than the AHP and the fuzzy method.
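
    The rank-order centroid method mentioned above has a closed form: with n criteria ranked from 1 (most important) to n, the weight at rank i is w_i = (1/n) Σ_{j=i}^{n} 1/j. A small sketch follows, with four ranked impact categories chosen purely for illustration.

    ```python
    # Rank-order centroid weights: w_i = (1/n) * sum_{j=i..n} 1/j.
    def rank_order_centroid(n):
        return [sum(1.0 / j for j in range(i, n + 1)) / n
                for i in range(1, n + 1)]

    # e.g. four ranked impact categories; weights sum to 1.
    for rank, w in enumerate(rank_order_centroid(4), start=1):
        print(rank, round(w, 4))   # 0.5208, 0.2708, 0.1458, 0.0625
    ```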

  17. Utility of N-Bromosuccinimide for the Titrimetric and Spectrophotometric Determination of Famotidine in Pharmaceutical Formulations

    PubMed Central

    Zenita, O.; Basavaiah, K.

    2011-01-01

    Two titrimetric and two spectrophotometric methods are described for the assay of famotidine (FMT) in tablets using N-bromosuccinimide (NBS). The first titrimetric method is direct, in which FMT is titrated directly with NBS in HCl medium using methyl orange as indicator (method A). The remaining three methods are indirect, in which the unreacted NBS is determined after the complete reaction between FMT and NBS by iodometric back titration (method B) or by reaction with a fixed amount of either indigo carmine (method C) or neutral red (method D). Methods A and B are applicable over the ranges of 2–9 mg and 1–7 mg, respectively. In the spectrophotometric methods, Beer's law is obeyed over the concentration ranges of 0.75–6.0 μg mL−1 (method C) and 0.3–3.0 μg mL−1 (method D). The applicability of the developed methods was demonstrated by the determination of FMT in the pure drug as well as in tablets. PMID:21760785

  18. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.

  19. Which method should be the reference method to evaluate the severity of rheumatic mitral stenosis? Gorlin's method versus 3D-echo.

    PubMed

    Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis

    2007-12-01

    Several studies have shown wide variability among different methods of determining the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate whether 3D-echo planimetry is more accurate than the Gorlin method for measuring the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D-echo echocardiographic examinations and catheterization. Valve area was estimated by different methods. A median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time and the PISA method), was used as the reference method and was compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to that of the Gorlin method for the assessment of mitral valve area. These findings suggest that 3D-echo planimetry may be a better reference method than the Gorlin method for assessing the severity of rheumatic mitral stenosis.
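
    For reference, the Gorlin method computes the valve area hydraulically from catheterization data. The sketch below uses the Gorlin formula as commonly cited for the mitral valve (empirical constant 44.3 with the 0.85 mitral correction); the abstract itself does not spell out the formula, and the input values are illustrative.

    ```python
    import math

    # Commonly cited Gorlin formula for mitral valve area:
    #   MVA (cm^2) = (CO / (DFP * HR)) / (44.3 * 0.85 * sqrt(mean_gradient))
    # CO in mL/min, DFP = diastolic filling period (s/beat), HR in beats/min,
    # mean transmitral gradient in mmHg.
    def gorlin_mva(co_ml_min, dfp_s, hr_bpm, mean_gradient_mmhg):
        diastolic_flow = co_ml_min / (dfp_s * hr_bpm)   # mL/s during diastole
        return diastolic_flow / (44.3 * 0.85 * math.sqrt(mean_gradient_mmhg))

    # Illustrative values: CO 4.2 L/min, DFP 0.45 s, HR 80 bpm, gradient 12 mmHg
    print(round(gorlin_mva(4200, 0.45, 80, 12), 2))  # ~0.89 cm^2 (severe MS)
    ```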

  20. Evaluation and comparison of Abbott Jaffe and enzymatic creatinine methods: Could the old method meet the new requirements?

    PubMed

    Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza

    2018-01-01

    The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods based on the Jaffe and enzymatic principles. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer via limit of detection (LOD) and limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. Method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods and as 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method, at 0.5 and 2 mg/dL, respectively. Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were within desirable levels for both methods. High correlations between the two methods were found in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements in routine use. However, the enzymatic method was found to have better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.

  1. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    PubMed

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from patients diagnosed clinically with sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin, centrifuging at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram negative bacilli. The sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75% in this study, specificity was 98.21% and diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, a difference that was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P value 0.000) than time to growth by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.

  2. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    PubMed Central

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates, were developed. Materials and Methods: These methods are indirect and involve the addition of a known excess of oxidant in acid medium to finasteride (potassium permanganate for method A, ceric sulfate [Ce(SO4)2] for method B, and N-bromosuccinimide (NBS) for method C), followed by determination of the unreacted oxidant by measurement of the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at a suitable maximum wavelength, λmax: 663, 528, and 520 nm, for the three methods, respectively. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, and detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excesses of its oxidative degradation products, with recoveries between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478

  3. John Butcher and hybrid methods

    NASA Astrophysics Data System (ADS)

    Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif

    2017-07-01

    As is known, there are two main classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages, so a method with the better properties of both should be constructed at their junction. In the middle of the 20th century, Butcher and Gear constructed methods at the junction of the Runge-Kutta and Adams methods, which are called hybrid methods. Here we consider the construction of certain generalizations of hybrid methods with a high order of accuracy and explore their application to solving ordinary differential, Volterra integral and integro-differential equations. We have also constructed some specific hybrid methods with degree p ≤ 10.

  4. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.

  5. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.

  6. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for the enterobacteria group have markedly improved through technological advancement. Clinical microbiology testing has shifted over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of the enterobacteria group and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates and the indole, methyl red, citrate and urease tests. Commercial identification kits and automated instruments with computer-based analysis are also discussed as current methods that provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.

  7. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.

  8. A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium

    PubMed Central

    Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.

    2014-01-01

    Background Interview and chart-based methods for identifying delirium have been validated. However, the relative strengths and limitations of each method have not been described, nor has a combined approach (using both interviews and charts) been systematically examined. Objectives To compare chart and interview-based methods for identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic surgery) who were interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the characteristics of patients identified using each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n=68) by the interview-based method, 12% (n=35) by the chart-based method and 27% (n=82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
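
    The reported agreement statistics can be recovered directly from the counts in the abstract (n=300; 68 interview-positive, 35 chart-positive, 82 positive by either method), assuming the combined 82 is the union of the two:

    ```python
    # Agreement and Cohen's kappa recomputed from the reported counts.
    n = 300
    interview_pos, chart_pos, either_pos = 68, 35, 82

    both_pos = interview_pos + chart_pos - either_pos   # 21 agreed positives
    both_neg = n - either_pos                           # 218 agreed negatives

    po = (both_pos + both_neg) / n                      # observed agreement
    pe = (interview_pos / n) * (chart_pos / n) \
       + ((n - interview_pos) / n) * ((n - chart_pos) / n)
    kappa = (po - pe) / (1 - pe)
    print(round(po, 2), round(kappa, 2))                # 0.8 0.3, as reported
    ```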

  9. Inventory Management for Irregular Shipment of Goods in Distribution Centre

    NASA Astrophysics Data System (ADS)

    Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun

    2016-01-01

    The shipment amount of commodity goods (foods, confectionery, dairy products, cosmetics and pharmaceutical products) changes irregularly at a distribution centre dealing with general consumer goods. Because the shipment times and shipment amounts are irregular, demand forecasting becomes very difficult, and inventory control becomes difficult in turn. Conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes a method for inventory control based on the cumulative flow curve, in which the order quantity is decided from the cumulative flow curve. Three forecasting methods are proposed for this purpose: (1) a power method, (2) a polynomial method and (3) a revised Holt's linear method, a kind of exponential smoothing that forecasts data with trends. The paper compares the economics of the conventional method, which is managed by experienced staff, with the three proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
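
    Of the three forecasting methods, Holt's linear method is the most standard; a minimal sketch of plain Holt smoothing applied to a cumulative flow curve follows. The smoothing constants and the irregular shipment series are illustrative, and the paper's specific revision of Holt's method is not reproduced.

    ```python
    import numpy as np

    def holt_forecast(y, alpha=0.3, beta=0.1, horizon=1):
        """Holt's linear (trend-corrected) exponential smoothing forecast."""
        level, trend = y[0], y[1] - y[0]
        for obs in y[1:]:
            prev_level = level
            level = alpha * obs + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + horizon * trend

    # Irregular daily shipments (illustrative) and their cumulative flow curve.
    shipments = np.array([120, 0, 0, 310, 0, 150, 0, 0, 280, 90], dtype=float)
    cumulative = np.cumsum(shipments)

    print(round(holt_forecast(cumulative), 1))  # next cumulative-flow value
    ```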

  10. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses the theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand databases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from the literature. PMID:24381236

  11. [Primary culture of human normal epithelial cells].

    PubMed

    Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun

    2017-11-28

    The traditional primary culture methods for normal human epithelial cells have the disadvantages of low cell activity, low culture success rates and complicated operation. To solve these problems, researchers have studied the culture process of normal human primary epithelial cells extensively. In this paper, we mainly introduce methods used for the separation and purification of normal human epithelial cells, such as the tissue separation method, the enzyme digestion method, the mechanical brushing method, the red blood cell lysis method and the Percoll density gradient separation method. We also review methods used for culture and subculture, including serum-free medium combined with low-serum culture, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of normal human epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives are summarized.

  12. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a highly real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position estimate leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed in which the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those of the traditional STAR method. PMID:27999322

  13. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  14. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
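
    A minimal sketch of the remedy described above: classical RK4 on dS/dt = I(t) − O(S), with a simple backstepping control that halves the step whenever a step would push storage outside the domain of the rating curve. The inflow hydrograph, rating curve and thresholds are illustrative assumptions, not the paper's test cases.

    ```python
    # Reservoir routing: dS/dt = I(t) - O(S), solved with RK4 + backstepping.
    S_MAX = 5.0e6                                   # m^3, top of rating curve

    def inflow(t):                                  # triangular hydrograph, m^3/s
        return max(0.0, 100.0 * (1.0 - abs(t - 21600.0) / 21600.0))

    def outflow(S):                                 # rating curve O(S), m^3/s
        return 80.0 * (max(S, 0.0) / S_MAX) ** 1.5

    def dSdt(t, S):
        return inflow(t) - outflow(S)

    def rk4_step(t, S, dt):
        k1 = dSdt(t, S)
        k2 = dSdt(t + dt / 2, S + dt / 2 * k1)
        k3 = dSdt(t + dt / 2, S + dt / 2 * k2)
        k4 = dSdt(t + dt, S + dt * k3)
        return S + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    t, S, dt, t_end = 0.0, 1.0e6, 720.0, 86400.0
    while t < t_end:
        step = min(dt, t_end - t)
        S_new = rk4_step(t, S, step)
        # Backstepping: retry with a halved step if storage leaves the
        # domain of the storage function / rating curve.
        while not (0.0 <= S_new <= S_MAX) and step > 1.0:
            step /= 2.0
            S_new = rk4_step(t, S, step)
        t, S = t + step, S_new

    print(f"final storage: {S:.3e} m^3, outflow: {outflow(S):.1f} m^3/s")
    ```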

  15. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
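
    A minimal sketch of the recommended LASSO approach for NTCP modeling, using scikit-learn's L1-penalized logistic regression on synthetic stand-in data; the predictors, penalty strength and cross-validation scheme are illustrative, not those of the xerostomia study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Synthetic stand-in: candidate dose/clinical predictors in X, binary
    # complication outcome (e.g. xerostomia) in y.
    X = rng.normal(size=(200, 15))
    logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 1.0     # only 2 true predictors
    y = (rng.uniform(size=200) < 1 / (1 + np.exp(-logit))).astype(int)

    # L1 penalty drives uninformative coefficients to exactly zero,
    # giving the interpretable variable selection noted in the abstract.
    lasso_ntcp = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    auc = cross_val_score(lasso_ntcp, X, y, cv=5, scoring="roc_auc")
    lasso_ntcp.fit(X, y)

    print("mean cross-validated AUC:", auc.mean().round(2))
    print("selected predictors:", np.flatnonzero(lasso_ntcp.coef_[0]))
    ```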

  16. Construction of exponentially fitted symplectic Runge-Kutta-Nyström methods from partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.

    2014-10-01

    In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18] - [40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.

  17. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study

    PubMed Central

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-01-01

    Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, K.; Fondeur, F.; White, T.

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  19. Novel atomic absorption spectrometric and rapid spectrophotometric methods for the quantitation of paracetamol in saliva: application to pharmacokinetic studies.

    PubMed

    Issa, M M; Nejem, R M; El-Abadla, N S; Al-Kholy, M; Saleh, Akila A

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron(III) (method I) and the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron(II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron(III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron(II) at 302.1 nm. The reactions were spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. The proposed methods are highly sensitive, with sensitivity values of 0.05, 0.022 and 0.012 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to a pharmacokinetic study using salivary samples from normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%.
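
    A minimal sketch of the calibration arithmetic behind such a spectrophotometric assay: fit absorbance against concentration, then estimate detection and quantitation limits from the common 3.3·σ/slope and 10·σ/slope conventions. The calibration data below are invented for illustration, not the paper's measurements.

    ```python
    # Beer's-law style calibration with LOD/LOQ estimation (illustrative data).
    import numpy as np

    conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])          # ug/ml (assumed)
    absorbance = np.array([0.051, 0.128, 0.253, 0.379, 0.508])

    slope, intercept = np.polyfit(conc, absorbance, 1)
    residuals = absorbance - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                        # residual std. dev.

    lod = 3.3 * sigma / slope                            # limit of detection
    loq = 10.0 * sigma / slope                           # limit of quantitation
    print(f"slope={slope:.3f}, LOD={lod:.3f} ug/ml, LOQ={loq:.3f} ug/ml")

    # Unknown sample: convert a measured absorbance back to concentration.
    print(f"sample: {(0.300 - intercept) / slope:.2f} ug/ml")
    ```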

  20. Novel Atomic Absorption Spectrometric and Rapid Spectrophotometric Methods for the Quantitation of Paracetamol in Saliva: Application to Pharmacokinetic Studies

    PubMed Central

    Issa, M. M.; Nejem, R. M.; El-Abadla, N. S.; Al-Kholy, M.; Saleh, Akila. A.

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron(III) (method I) and the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron(II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron(III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron(II) at 302.1 nm. The reactions were spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. The proposed methods are highly sensitive, with sensitivity values of 0.05, 0.022 and 0.012 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to a pharmacokinetic study using salivary samples from normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%. PMID:20046743

  1. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
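
    For orientation, the brute-force baseline such estimators are measured against is simple: re-solve the optimization problem at perturbed parameter values and difference the optima. Below is a hedged sketch of that central-difference approach on a made-up two-variable problem; it is not the RQP-based method of the report.

    ```python
    # Finite-difference parameter sensitivity of an optimal solution x*(p):
    # re-solve at p-h and p+h and take a central difference. Toy problem only.
    import numpy as np
    from scipy.optimize import minimize

    def x_star(p):
        # min over x of (x0 - p)^2 + (x1 - 2p)^2 + x0*x1
        f = lambda x: (x[0] - p) ** 2 + (x[1] - 2 * p) ** 2 + x[0] * x[1]
        return minimize(f, x0=np.zeros(2), method="BFGS").x

    p, h = 1.0, 1e-4
    dx_dp = (x_star(p + h) - x_star(p - h)) / (2 * h)   # central difference
    print("d x*/d p ~=", dx_dp)   # analytic answer for this toy: (0, 2)
    ```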

  2. X-ray imaging using amorphous selenium: a photoinduced discharge readout method for digital mammography.

    PubMed

    Rowlands, J A; Hunter, D M; Araj, N

    1991-01-01

    A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.

  3. Quantitative naturalistic methods for detecting change points in psychotherapy research: an illustration with alliance ruptures.

    PubMed

    Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher

    2012-01-01

    Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
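
    A bare-bones instance of the "partitioning" family mentioned above: pick the split point that minimizes the two-segment squared error around segment means. The simulated series stands in for session-by-session alliance ratings; this is an illustration, not the paper's procedures.

    ```python
    # Single change-point detection by exhaustive two-segment mean fitting.
    import numpy as np

    def best_split(y):
        best_t, best_cost = None, np.inf
        for t in range(2, len(y) - 1):            # candidate change points
            cost = ((y[:t] - y[:t].mean()) ** 2).sum() + \
                   ((y[t:] - y[t:].mean()) ** 2).sum()
            if cost < best_cost:
                best_t, best_cost = t, cost
        return best_t

    rng = np.random.default_rng(0)
    series = np.concatenate([rng.normal(5, 1, 30), rng.normal(3, 1, 30)])
    print("estimated change point at session", best_split(series))
    ```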

  4. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimella, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while solutions with VOF using the wrong material order are considerably worse.

  5. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measurement of wound surface area among the linear, transparency, and photographic methods, for accurately monitoring the progress of wound healing, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by the three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method is statistically and significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.

  6. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
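
    A schematic numerical sketch of a kernelized ML-EM update of the general form described here (the image is represented as x = Kα and the multiplicative EM iteration is applied to the coefficients α). The tiny system matrix, kernel, and counts below are random stand-ins, not a scanner model; with K = I the loop reduces to plain ML-EM.

    ```python
    # Toy kernelized ML-EM: forward model A, kernel K, image x = K @ alpha.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_bins = 64, 96
    A = rng.random((n_bins, n_pix)) * 0.1        # toy system matrix
    K = np.eye(n_pix)                            # identity kernel -> plain MLEM;
                                                 # in practice K is built from
                                                 # anatomical feature similarity
    x_true = rng.random(n_pix) * 4
    y = rng.poisson(A @ x_true) + 1e-9           # noisy projection counts

    alpha = np.ones(n_pix)
    sens = K.T @ (A.T @ np.ones(n_bins))         # sensitivity term (A K)^T 1
    for _ in range(50):
        ratio = y / (A @ (K @ alpha) + 1e-12)
        alpha *= (K.T @ (A.T @ ratio)) / sens    # multiplicative EM update
    x_hat = K @ alpha
    print("reconstruction RMSE:", np.sqrt(((x_hat - x_true) ** 2).mean()))
    ```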

  7. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  8. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to find overlapping peaks and to improve adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in distinguishing overlapping peaks and in the precision of peak detection, and it can be applied to data processing in LIBS.
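
    SciPy ships a ridge-based CWT peak finder in the same spirit (it is not the authors' implementation). The synthetic "spectrum" below, with two overlapping peaks, is illustrative; the widths range plays the role of the scale-factor choice discussed in the abstract.

    ```python
    # CWT ridge-based peak detection with scipy.signal.find_peaks_cwt.
    import numpy as np
    from scipy.signal import find_peaks_cwt

    x = np.linspace(0, 100, 2000)
    rng = np.random.default_rng(2)
    spectrum = (np.exp(-(x - 40) ** 2 / 2.0)
                + 0.8 * np.exp(-(x - 43) ** 2 / 2.0)   # overlapping neighbor
                + 0.05 * rng.normal(size=x.size))      # noise

    peak_idx = find_peaks_cwt(spectrum, widths=np.arange(10, 60))
    print("peak positions:", x[peak_idx])
    ```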

  9. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  10. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  11. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for the estimation of (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  12. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
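
    The G.C.D. idea is easy to state: p / gcd(p, p') has the same roots as p but all simple, so a standard root finder regains its accuracy. A minimal sketch using sympy's exact polynomial gcd (not the report's FORTRAN code):

    ```python
    # Deflating multiple zeros via gcd(p, p') before root finding.
    import sympy as sp

    x = sp.symbols("x")
    p = sp.expand((x - 1) ** 3 * (x + 2) ** 2)   # zeros: 1 (triple), -2 (double)

    g = sp.gcd(p, sp.diff(p, x))                 # gcd(p, p') carries the multiplicities
    q = sp.quo(p, g)                             # square-free part of p
    print("square-free part:", sp.expand(q))     # (x - 1)*(x + 2) up to a constant
    print("simple zeros:", sp.solve(q, x))       # [-2, 1]
    ```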

  13. Evaluation of the methods for enumerating coliform bacteria from water samples using precise reference standards.

    PubMed

    Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M

    2006-04-01

    To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.

  14. Wilsonian methods of concept analysis: a critique.

    PubMed

    Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C

    1996-01-01

    Wilsonian methods of concept analysis--that is, the method proposed by Wilson and the Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described, and research that has used these methods is assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.

  15. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also performs significantly better than the spectral-fitting based method. Our method obtains results similar to the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
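
    A discrete-wavelet analogue of the idea (the paper itself uses a continuous transform): the smooth foreground concentrates in the coarse coefficients, so zeroing them leaves the small-scale signal. The spectra below are synthetic and the wavelet/level choices are assumptions; some foreground can leak into finer levels, so this is a demonstration of the principle only.

    ```python
    # Wavelet-domain removal of a smooth foreground (PyWavelets).
    import numpy as np
    import pywt

    freq = np.linspace(100.0, 200.0, 512)                 # MHz, illustrative
    foreground = 1e3 * (freq / 150.0) ** -2.6             # smooth power law
    rng = np.random.default_rng(3)
    signal21 = 0.01 * rng.normal(size=freq.size)          # small-scale "signal"

    coeffs = pywt.wavedec(foreground + signal21, "db8", level=5)
    coeffs[0][:] = 0.0        # zero the approximation (smoothest scales)
    coeffs[1][:] = 0.0        # and the coarsest detail level
    recovered = pywt.waverec(coeffs, "db8")[: freq.size]
    print("residual rms vs injected rms:", recovered.std(), signal21.std())
    ```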

  16. Study report on a double isotope method of calcium absorption

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.

  17. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
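
    A toy sketch of the weighting step: blend the genomic matrix G with a pedigree matrix (weight w on the pedigree part, with w = 0.20 echoing the abstract), then compute BLUP breeding values for a simple y = u + e model. All matrices and data below are random stand-ins, and the real single-step equations are considerably richer.

    ```python
    # Blended relationship matrix + BLUP on a toy y = u + e model.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 50
    L = rng.normal(size=(n, n)) / np.sqrt(n)
    G = L @ L.T + 0.05 * np.eye(n)        # mock genomic relationship matrix
    A22 = np.eye(n)                       # mock pedigree relationships
    w = 0.20                              # relative weight on the pedigree part

    Gw = (1 - w) * G + w * A22            # combined relationship matrix
    lam = 1.0                             # sigma_e^2 / sigma_u^2 (assumed)
    y = rng.normal(size=n)                # centered de-regressed proofs (mock)

    # BLUP of u for y = u + e: u_hat = Gw (Gw + lam*I)^{-1} y
    u_hat = Gw @ np.linalg.solve(Gw + lam * np.eye(n), y)
    print("first five GEBVs:", np.round(u_hat[:5], 3))
    ```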

  18. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), these samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, the probability of detection analysis of the results showed better or equivalent performance compared to the reference methods.

  19. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed together with the new propagation method. With it, we can calculate appropriate mesh grids on the target board to get satisfying results; for a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally. Finally, comparison with theoretical results shows that simulations with the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions. That is to say, the method can simulate propagation results efficiently and correctly for propagation distances from almost zero to infinity, and can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
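
    For reference, a compact fixed-grid angular spectrum propagator (the adaptive grid selection described above is not reproduced): FFT the field, multiply by the free-space transfer function, inverse FFT. The aperture and geometry below are arbitrary.

    ```python
    # Classic angular spectrum propagation on a fixed mesh.
    import numpy as np

    def angular_spectrum(u0, wavelength, dx, z):
        n = u0.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components dropped
        return np.fft.ifft2(np.fft.fft2(u0) * H)

    # Example: propagate a 1 mm square aperture by 0.5 m at 633 nm.
    n, dx = 512, 10e-6
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    u0 = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)
    u1 = angular_spectrum(u0, 633e-9, dx, 0.5)
    print("on-axis intensity:", abs(u1[n // 2, n // 2]) ** 2)
    ```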

  20. Reliability and accuracy of real-time visualization techniques for measuring school cafeteria tray waste: validating the quarter-waste method.

    PubMed

    Hanks, Andrew S; Wansink, Brian; Just, David R

    2014-03-01

    Measuring food waste is essential to determine the impact of school interventions on what children eat. There are multiple methods used for measuring food waste, yet it is unclear which method is most appropriate in large-scale interventions with restricted resources. This study examines which of three visual tray waste measurement methods is most reliable, accurate, and cost-effective compared with the gold standard of individually weighing leftovers. School cafeteria researchers used the following three visual methods to capture tray waste in addition to actual food waste weights for 197 lunch trays: the quarter-waste method, the half-waste method, and the photograph method. Inter-rater and inter-method reliability were highest for on-site visual methods (0.90 for the quarter-waste method and 0.83 for the half-waste method) and lowest for the photograph method (0.48). This low reliability is partially due to the inability of photographs to determine whether packaged items (such as milk or yogurt) are empty or full. In sum, the quarter-waste method was the most appropriate for calculating accurate amounts of tray waste, and the photograph method might be appropriate if researchers only wish to detect significant differences in waste or consumption of selected, unpackaged food. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  1. Modified flotation method with the use of Percoll for the detection of Isospora suis oocysts in suckling piglet faeces.

    PubMed

    Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek

    2008-10-01

    A modified flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was developed. The method is based on removing the fat fraction from the faecal sample by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison with the McMaster method. Of five variants of the Percoll flotation method, the best results were obtained when 2 ml of flotation liquid per 1 g of faeces were used. The limit of detection of the Percoll flotation method was 160 oocysts per 1 g, better than that of the McMaster method. The efficacy of the modified method was confirmed by the results obtained in the examination of I. suis-infected piglets. Across all faecal samples, the Percoll flotation method yielded twice as many positive samples as the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter as determined by the Percoll flotation method was higher than with the McMaster method. The results show that the modified flotation method with the use of Percoll could be applied in the diagnosis of suckling piglet isosporosis.

  2. Comparison of concentration methods for rapid detection of hookworm ova in wastewater matrices using quantitative PCR.

    PubMed

    Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S

    2015-12-01

    Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, where the use of wastewater for crop production is increasing. The effective recovery of hookworm ova from wastewater matrices is difficult due to their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods for municipal wastewater, and (ii) two concentration methods for sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) were used. Among the four methods tested, the filtration method (Method C) consistently recovered the highest concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a recovery rate 1-2 orders of magnitude higher than Method E (flotation). Based on our results, it can be concluded that the recovery of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  3. Achieving cost-neutrality with long-acting reversible contraceptive methods

    PubMed Central

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2014-01-01

    Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. Results The two least expensive methods were copper IUD ($304 per women, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per women, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
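
    The break-even logic is simple arithmetic: a LARC method front-loads its cost, a SARC method accrues cost every year. The sketch below makes that concrete with invented dollar figures (the paper's own estimates, $304-$730 per woman-year, are not reproduced here).

    ```python
    # Back-of-envelope LARC vs. SARC cumulative-cost comparison.
    LARC_UPFRONT = 900.0     # device + insertion (assumed)
    LARC_ANNUAL = 30.0       # follow-up / failure costs per year (assumed)
    SARC_ANNUAL = 500.0      # e.g., pills or ring, per year (assumed)

    for years in range(1, 6):
        larc = LARC_UPFRONT + LARC_ANNUAL * years
        sarc = SARC_ANNUAL * years
        flag = "<-- LARC becomes cheaper" if larc <= sarc else ""
        print(f"year {years}: LARC ${larc:7.0f} vs SARC ${sarc:7.0f} {flag}")
    ```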

  4. A method for addressing differences in concentrations of fipronil and three degradates obtained by two different laboratory methods

    USGS Publications Warehouse

    Crawford, Charles G.; Martin, Jeffrey D.

    2017-07-21

    In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method—in use since 2002—that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates. Concentrations of these four chemical compounds determined by the DAI LC–MS/MS method are substantially lower than the GC/MS method. A method was developed to correct for the difference in concentrations obtained by the two laboratory methods based on a methods comparison field study done in 2012. Environmental and field matrix spike samples to be analyzed by both methods from 48 stream sites from across the United States were sampled approximately three times each for this study. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences in the methods. The coefficients of the equations obtained from the regressions were used to calibrate over 16,600 observations of fipronil, as well as the three degradates determined by the GC/MS method retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and to the three degradates determined by the DAI LC–MS/MS method also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
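
    The core of such a correction is a regression calibration: fit paired results from the two methods, then map archived old-method values onto the new method's scale. A minimal sketch with invented paired concentrations (not USGS data):

    ```python
    # Regression calibration between two laboratory methods.
    import numpy as np

    gcms = np.array([0.010, 0.025, 0.050, 0.080, 0.120])   # ug/L, old method
    lcms = np.array([0.006, 0.016, 0.031, 0.052, 0.075])   # ug/L, new method

    slope, intercept = np.polyfit(gcms, lcms, 1)

    def calibrate(old_values):
        """Map old-method concentrations onto the new method's scale."""
        return slope * np.asarray(old_values) + intercept

    print(f"calibration: new = {slope:.3f} * old + {intercept:.4f}")
    print("calibrated archive values:", calibrate([0.020, 0.060, 0.100]))
    ```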

  5. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  6. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  7. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  8. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  9. 77 FR 48733 - Transitional Program for Covered Business Method Patents-Definitions of Covered Business Method...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention; Final Rule. Federal Register / Vol. 77, No. 157... Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention AGENCY...

  10. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  11. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  12. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
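
    One of the model-based approaches such reviews favor is regression-based iterative imputation, available in scikit-learn behind an (explicitly experimental) import; this illustrates the idea rather than reproducing the review's EM or multiple-imputation procedures.

    ```python
    # Model-based imputation with scikit-learn's IterativeImputer.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    X = np.array([[1.0, 2.0, np.nan],
                  [3.0, np.nan, 6.0],
                  [5.0, 6.0, 11.0],
                  [7.0, 8.0, np.nan],
                  [9.0, 10.0, 19.0]])

    imputer = IterativeImputer(random_state=0, max_iter=25)
    # Each missing cell is filled by regressing its column on the others.
    print(np.round(imputer.fit_transform(X), 2))
    ```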

  13. The Views of Turkish Pre-Service Teachers about Effectiveness of Cluster Method as a Teaching Writing Method

    ERIC Educational Resources Information Center

    Kitis, Emine; Türkel, Ali

    2017-01-01

    The aim of this study is to find out Turkish pre-service teachers' views on the effectiveness of the cluster method as a method for teaching writing. The Cluster Method can be defined as a connotative creative writing method. The way the method works is that the person brainstorms on connotations of a word or a concept in absence of any kind of…

  14. Assay of fluoxetine hydrochloride by titrimetric and HPLC methods.

    PubMed

    Bueno, F; Bergold, A M; Fröehlich, P E

    2000-01-01

    Two alternative methods were proposed to assay Fluoxetine Hydrochloride: a titrimetric method, and an HPLC method using water (pH 3.5):acetonitrile (65:35) as the mobile phase. These methods were applied to the determination of Fluoxetine as such or in formulations (capsules). The titrimetric method is an alternative for pharmacies and small industries. Both methods showed accuracy and precision and are alternatives to the official methods.

  15. Thermophysical Properties of Matter - The TPRC Data Series. Volume 3. Thermal Conductivity - Nonmetallic Liquids and Gases

    DTIC Science & Technology

    1970-01-01

    [OCR fragments from the report; recoverable content: a survey of thermal-conductivity measurement techniques, including the line-source flow method, the hot-wire thermal diffusion column method, the shock-tube method (introduced by Smiley [546] and noted for its adaptability to high-temperature measurement), the arc method, and the ultrasonic method.]

  16. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  17. Comparison of measurement methods for capacitive tactile sensors and their implementation

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Sienkiewicz, Rafał

    2015-09-01

    This paper presents a review of the ideas behind, and implementations of, measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, the generation method, and the integration method. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors made in-house. The tactile sensors tested in this work were fabricated entirely with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for the measurement of the tactile sensors.

  18. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
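
    A generic illustration of the spectral-in-space, Runge-Kutta-in-time pairing discussed above (a textbook RK4 driver for the periodic advection equation, not the paper's specific schemes):

    ```python
    # Fourier spectral derivative + RK4 time stepping for u_t = -c u_x.
    import numpy as np

    n, L, c = 128, 2 * np.pi, 1.0
    x = np.arange(n) * L / n
    k = 1j * np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # spectral wavenumbers

    def rhs(u):
        # u_x computed spectrally; u stays real
        return -c * np.real(np.fft.ifft(k * np.fft.fft(u)))

    u, dt = np.exp(np.sin(x)), 1e-3
    for _ in range(int(1.0 / dt)):                    # advance to t = 1
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    exact = np.exp(np.sin(x - c * 1.0))               # advected initial profile
    print("max error at t=1:", np.abs(u - exact).max())
    ```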

  19. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
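
    A small sketch of the idea using SciPy's built-in RBF interpolator (scipy >= 1.7; this stands in for, and is not, the paper's implementations): fit scattered samples, then estimate derivatives by differencing the surrogate.

    ```python
    # RBF response surface + derivative estimation on the surrogate.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(5)
    pts = rng.uniform(-1, 1, size=(80, 2))
    vals = np.sin(np.pi * pts[:, 0]) * pts[:, 1] ** 2   # sampled "response"

    surface = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

    def grad(xy, h=1e-4):
        """Central-difference gradient of the fitted surface."""
        xy = np.asarray(xy, float)
        g = np.empty(2)
        for i in range(2):
            e = np.zeros(2); e[i] = h
            g[i] = (surface([xy + e])[0] - surface([xy - e])[0]) / (2 * h)
        return g

    print("surrogate gradient at (0.3, 0.5):", grad([0.3, 0.5]))
    print("analytic gradient:",
          [np.pi * np.cos(np.pi * 0.3) * 0.25, 2 * 0.5 * np.sin(np.pi * 0.3)])
    ```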

  20. Two smart spectrophotometric methods for the simultaneous estimation of Simvastatin and Ezetimibe in combined dosage form

    NASA Astrophysics Data System (ADS)

    Magdy, Nancy; Ayad, Miriam F.

    2015-02-01

    Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
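
    The moving-window polynomial fit named above (a Savitzky-Golay filter) is available directly in SciPy. The sketch below applies it to the derivative of a ratio spectrum of two simulated Gaussian bands (not Simvastatin/Ezetimibe data): after division by the pure component-2 spectrum, that component becomes a flat offset which the derivative removes.

    ```python
    # Ratio-spectra first derivative via scipy.signal.savgol_filter.
    import numpy as np
    from scipy.signal import savgol_filter

    wl = np.linspace(220, 320, 501)                     # wavelengths, nm
    band = lambda c, w: np.exp(-((wl - c) / w) ** 2)
    mix = 1.0 * band(245, 12) + 0.6 * band(268, 10)     # binary mixture spectrum
    div = band(268, 10)                                 # pure component-2 spectrum

    valid = div > 0.05              # keep only where the divisor is appreciable
    ratio = mix[valid] / div[valid] # component 2 -> flat 0.6 offset here
    # first derivative of the ratio spectrum, 21-point cubic window
    d_ratio = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)
    print("max |d(ratio)| at", wl[valid][np.argmax(np.abs(d_ratio))], "nm")
    ```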

  1. Application of LC/MS/MS Techniques to Development of US ...

    EPA Pesticide Factsheets

    This presentation will describe the U.S. EPA’s drinking water and ambient water method development program in relation to the process employed and the typical challenges encountered in developing standardized LC/MS/MS methods for chemicals of emerging concern. The EPA’s Drinking Water Contaminant Candidate List and Unregulated Contaminant Monitoring Regulations, which are the driving forces behind drinking water method development, will be introduced. Three drinking water LC/MS/MS methods (Methods 537, 544 and a new method for nonylphenol) and two ambient water LC/MS/MS methods for cyanotoxins will be described that highlight some of the challenges encountered during development of these methods. This presentation will provide the audience with basic understanding of EPA's drinking water method development program and an introduction to two new ambient water EPA methods.

  2. The Roche Immunoturbidimetric Albumin Method on Cobas c 501 Gives Higher Values Than the Abbott and Roche BCP Methods When Analyzing Patient Plasma Samples.

    PubMed

    Helmersson-Karlqvist, Johanna; Flodin, Mats; Havelka, Aleksandra Mandic; Xu, Xiao Yan; Larsson, Anders

    2016-09-01

    Serum/plasma albumin is an important and widely used laboratory marker and it is important that we measure albumin correctly without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels. © 2016 Wiley Periodicals, Inc.

  3. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

    This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone-application cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were evaluated based on 21 landmarks. The durations of the performances of the two methods were also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, the cephalometric tracing proceeded faster with the app method than with the manual method.

  4. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  5. [A study for testing the antifungal susceptibility of yeast by the Japanese Society for Medical Mycology (JSMM) method. The proposal of the modified JSMM method 2009].

    PubMed

    Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru

    2010-01-01

    In the Japanese Society for Medical Mycology (JSMM) method for testing the antifungal susceptibility of yeasts, the MIC end point for azole antifungal agents is currently set at IC(80). It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read as IC(50). To resolve this discrepancy and reassess the JSMM method, the MICs of three azoles (fluconazole, itraconazole and voriconazole) were compared using the JSMM method, a modified JSMM method, and the CLSI method against 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis and C. krusei, for a total of 25 comparisons. The results showed that when the MIC end point criterion of the JSMM method was changed from IC(80) to IC(50) (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, which uses a spectrophotometer for MIC measurement, was superior in both stability and reproducibility to the CLSI method, in which growth is assessed by visual observation.

  6. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
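
    For context, the classic fully stressed resizing rule that FUD builds on is a one-line update: scale each member area by its stress ratio until every member sits at the allowable stress. The toy below assumes fixed member forces (a statically determinate case, where the update converges immediately); MFUD's displacement handling is not reproduced.

    ```python
    # Fully stressed design resizing loop on a two-member toy structure.
    import numpy as np

    allowable = 150.0                        # MPa
    forces = np.array([12_000.0, 5_000.0])   # member axial forces, N (fixed here;
                                             # in general re-analyzed each pass)
    areas = np.array([50.0, 50.0])           # initial areas, mm^2

    for it in range(20):
        stress = forces / areas              # MPa (N/mm^2)
        areas *= stress / allowable          # fully stressed update
        if np.allclose(forces / areas, allowable, rtol=1e-6):
            break
    print("final areas (mm^2):", np.round(areas, 2))   # -> forces / allowable
    ```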

  7. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    PubMed

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time periods (1990-2000, 2001-2008), using Chi-square tests. Kappa statistics were used to assess the agreement of the positional accuracy of methods A and B with the reference method, overall, by time period and by urban/rural status of addresses. With methods A and B respectively, 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%). In the reference layer, geocoding accuracy was higher in urban areas than in rural areas (74.4% vs. 10.5% of addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of methods A and B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, both for recent addresses and for residence locations more than 20 years ago. The accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed better control of the geocoding process and was less time-consuming.
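
    Positional error between an automatic geocode and its manually corrected reference reduces to a great-circle distance. A standard haversine helper (not the study's tooling; the coordinates are arbitrary):

    ```python
    # Great-circle distance between two geocoded points.
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Distance in meters between two WGS84 points."""
        r = 6_371_000.0                      # mean Earth radius, m
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    # Example: automatic geocode vs. manually relocated reference position.
    print(f"{haversine_m(45.7640, 4.8357, 45.7643, 4.8361):.1f} m")
    ```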

  8. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility of natural head position obtained using two methods, i.e. the mirror method and the fluid level device method, and to establish which shows the smaller variation. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months.
    Inclusion criteria:
    • Subjects randomly selected, aged between 18 and 26 years
    Exclusion criteria:
    • History of orthodontic treatment
    • Any history of respiratory tract problems or chronic mouth breathing
    • Any congenital deformity
    • History of traumatically induced deformity
    • History of myofascial pain syndrome
    • Any previous history of head and neck surgery
    The results showed that the two methods for obtaining natural head position were comparable, without any significant difference; however, maximum reproducibility was obtained with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and minimum variance was seen with the fluid level device method, as shown by precision and the Pearson correlation. In conclusion, the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.

  9. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important measure for assessing aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of them is the most accurate. The aim was to compare four non-invasive methods for determining the VAT from respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET): the V-slope method, the ventilatory equivalent (VentEq) method, the PET-O2 method, and the RER = 1 method. A secondary objective was to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most from the other methods, showing significantly higher results in all six variables. The PET-O2 method differed significantly on five of six and four of six exercise variables from the V-slope method and the VentEq method, respectively. The V-slope and VentEq methods differed significantly on one of six exercise variables. Ten of the thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods for determining the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
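    The V-slope idea can be sketched as a two-segment linear fit to VCO2 versus VO2, taking the breakpoint with the smallest total squared error as the VAT estimate; the Python fragment below is a simplified stand-in with synthetic ramp-test data, not the study's procedure.

        import numpy as np

        def vslope_breakpoint(vo2, vco2):
            """Return the VO2 value at the best two-segment linear breakpoint."""
            best_i, best_sse = None, np.inf
            for i in range(3, len(vo2) - 3):          # a few points on each side
                sse = 0.0
                for lo, hi in ((0, i), (i, len(vo2))):
                    x, y = vo2[lo:hi], vco2[lo:hi]
                    coef = np.polyfit(x, y, 1)        # straight-line fit per segment
                    sse += np.sum((np.polyval(coef, x) - y) ** 2)
                if sse < best_sse:
                    best_i, best_sse = i, sse
            return vo2[best_i]

        # synthetic ramp test: slope 0.9 below the threshold at 1.6 L/min, 1.2 above
        rng = np.random.default_rng(0)
        vo2 = np.linspace(0.5, 2.5, 60)
        vco2 = np.where(vo2 < 1.6, 0.9 * vo2, 1.2 * vo2 - 0.48) + rng.normal(0, 0.01, 60)
        print(vslope_breakpoint(vo2, vco2))           # recovers roughly 1.6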

  10. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 Korean pine (Pinus koraiensis) trees were destructively sampled from plantations in Heilongjiang Province, P.R. China. The sample trees were measured to obtain the biomass and carbon stocks of the tree components (i.e., stem, branch, foliage and root). Compatible biomass and carbon stock models were developed with the total biomass and total carbon stock, respectively, as constraints. Four methods were used to evaluate the carbon stocks of the tree components. The first method predicted carbon stocks directly from the compatible carbon stock models (Method 1). The other three methods predicted the carbon stocks indirectly in two steps: (1) estimating the biomass with the compatible biomass models, and (2) multiplying the estimated biomass by one of three different carbon conversion factors (i.e., a carbon conversion factor of 0.5 (Method 2), the average carbon concentration of the sample trees (Method 3), and the average carbon concentration of each tree component (Method 4)). The prediction errors in estimating the carbon stocks were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, and Method 1 was the best method for predicting the carbon stocks of the tree components and the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Method 3 and Method 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates when carbon stock models are not available. PMID:26659257
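    The indirect estimation described above (biomass from a diameter-based allometric model, then carbon via a conversion factor) can be sketched as follows; the coefficients are invented for illustration and are not the fitted compatible models from the paper.

        # (a, b) in B = a * D**b for each tree component; invented coefficients
        coef = {"stem": (0.08, 2.4), "branch": (0.02, 2.2),
                "foliage": (0.01, 1.9), "root": (0.03, 2.3)}

        def carbon_stocks(D, factor=0.5):
            """Indirect estimate: component biomass from diameter D, then carbon."""
            biomass = {c: a * D ** b for c, (a, b) in coef.items()}
            carbon = {c: factor * m for c, m in biomass.items()}
            return carbon, sum(carbon.values())

        by_component, total = carbon_stocks(D=20.0)                  # Method 2: factor 0.5
        by_component3, total3 = carbon_stocks(D=20.0, factor=0.47)   # Method 3-style mean concentration
        print(total, total3)

    Method 4 would replace the single factor with a per-component concentration, and Method 1 would instead fit carbon models to D directly.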

  11. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    To develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method, which uses Lanczos bidiagonalization, is known to be computationally efficient in performing the reconstruction in diffuse optical tomography. Here it is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it the preferable technique. The LSQR-type method overcomes the inherent limitation of the MRM-based approach, namely the computational expense of automatically finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for deployment in real time.
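    A minimal sketch of the idea, under assumed details: pick the damping (regularization) parameter of LSQR by minimizing a residual-based criterion with the Nelder-Mead simplex method. The toy Jacobian, data, and the crude L-curve-like criterion below are stand-ins, not the paper's forward model or exact objective.

        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        J = rng.normal(size=(120, 80))       # toy sensitivity (Jacobian) matrix
        x_true = rng.normal(size=80)
        y = J @ x_true + rng.normal(scale=0.05, size=120)

        def criterion(log_lam):
            lam = np.exp(log_lam[0])         # optimize in log-space to stay positive
            x = lsqr(J, y, damp=lam)[0]      # damped LSQR via Lanczos bidiagonalization
            r = y - J @ x
            # crude L-curve-like product of residual and solution norms (illustrative)
            return np.linalg.norm(r) * np.linalg.norm(x)

        res = minimize(criterion, x0=[np.log(0.1)], method="Nelder-Mead")
        print("chosen damping:", np.exp(res.x[0]))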

  12. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques are widely employed to calibrate new items because of their advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; the deviation of the estimates from the true values may therefore yield inaccurate item calibration when the deviation is non-ignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂, which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI significantly improved the ML ability estimates, and that MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  13. Qualitative versus quantitative methods in psychiatric research.

    PubMed

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are regaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of events using quantitative methods; qualitative methods, however, provide a broader understanding and a more thorough reasoning behind the events, and are therefore considered of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, the establishment of diagnostic criteria, feasibility studies, and studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be confirmed statistically with quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  14. Fractional and fractal dynamics approach to anomalous diffusion in porous media: application to landslide behavior

    NASA Astrophysics Data System (ADS)

    Martelloni, Gianluca; Bagnoli, Franco

    2016-04-01

    In the past three decades, fractional and fractal calculus (that is, the calculus of derivatives and integrals of arbitrary real or complex order) has become an important tool with applications in many fields of science and engineering. The theory makes it possible to treat, analytically and/or numerically, fractional differential equations and fractional partial differential equations. In particular, one of its several applications deals with anomalous diffusion processes, which are best described from the statistical viewpoint. Indeed, in various complex systems, diffusion processes no longer follow Gaussian statistics, and thus Fick's second law fails to describe the related transport behavior. In particular, one observes deviations from the linear time dependence of the mean squared displacement ⟨x²(t)⟩ ∝ t (1), which is characteristic of Brownian motion, i.e., a direct consequence of the central limit theorem and the Markovian nature of the underlying stochastic process [1-17]. Anomalous diffusion is instead found in a wide diversity of systems; its signature is the non-linear growth of the mean squared displacement over time. The power-law pattern with exponent γ different from 1, ⟨x²(t)⟩ ∝ t^γ (2), characterizes many systems [18, 19], but a variety of other rules, such as a logarithmic time dependence, exist [20]. Anomalous diffusion as expressed in Eq. (2) is connected with the breakdown of the central limit theorem, caused by either broad distributions or long-range correlations, e.g., the extreme statistics and the power-law distributions typical of self-organized criticality [42, 43]; it rests instead on the validity of the Levy-Gnedenko generalized central limit theorem [21-23]. In particular, broad distributions of spatial jumps or waiting times lead to a non-Gaussian distribution and a non-Markovian time evolution of the system. Anomalous diffusion has been known since Richardson's treatise on turbulent diffusion in 1926 [24], and today the list of systems displaying anomalous dynamical behavior is quite extensive. We report only some examples: charge carrier transport in amorphous semiconductors [25], porous systems [26], reptation dynamics in polymeric systems [27, 28], transport on fractal geometries [29], and the long-time dynamics of DNA sequences [30]. In this scenario, fractional calculus is used to generalize the linear Fokker-Planck equation ∂P(x,t)/∂t = D ∇²P(x,t) (3), where P(x,t) is the probability density in the space x = [x₁, x₂, x₃] and time t, while D > 0 is the diffusion coefficient; such processes are characterized by Eq. (1). One generalization of Eq. (3) is ∂P(x,t)/∂t = D ∇^α P^β(x,t), with −∞ < α ≤ 2 and β > −1 (4), where the fractional Laplacian Σᵢ ∂^α/∂xᵢ^α (i = 1, 2, 3) of the non-linear term P^β(x,t) is taken into account [31]. Another generalized form is ∂^δ P(x,t)/∂t^δ = D ∇^α P(x,t), with δ > 0 and α ≤ 2 (5), which also involves a fractional time derivative [32]. These fractionally described processes exhibit the power-law pattern expressed by Eq. (2). This general introduction frames the present work, whose aim is to develop a theoretical model to forecast the triggering and propagation of landslides using the techniques of fractional calculus, which are suitable for modeling water infiltration (i.e., the diffusion of pore water pressure in the soil) and dynamical processes in fractal media [33].
Alternatively, the fractal representation of the temporal and spatial derivatives (where the fractal order appears only in the denominator of the derivative) is considered, and the results are compared with the fractional ones. The prediction of landslides and the discovery of their triggering mechanism is one of the challenging problems in earth science. Landslides can be triggered by different factors, but in most cases the trigger is an intense or prolonged rain that percolates into the soil, causing an increase in the pore water pressure. In the literature, two types of models exist for attempting to forecast landslide triggering: statistical or empirical models based on rainfall thresholds derived from the analysis of temporal series of daily rain [34], and geotechnical models, i.e., slope stability models that take into account water infiltration by rainfall through classical Richards equations [35-39]. Regarding the propagation of landslides, models follow either a Eulerian approach (e.g., finite element methods [40]) or a Lagrangian approach (e.g., particle or molecular dynamics methods [41-46]). In a preliminary work [44], the integration of fractional-based infiltration modeling with a molecular dynamics approach, to model both triggering and propagation, was investigated in order to characterize the granular material by varying the order of the fractional derivative in the equation ∂^δ θ(z,t)/∂t^δ = D ∂²θ(z,t)/∂z² (6), where θ(z,t) is the water content as a function of time t and soil depth z [47], and the parameter δ, with 0.5 ≤ δ < 1, is the fractional derivative order accounting for anomalous sub-diffusion [48]; δ = 1 recovers the classical derivative, i.e., normal diffusion, while δ > 1 gives super-diffusion [32]. To sum up, in [44] a three-dimensional model was developed in which the water content is expressed in terms of pore pressure (interpreted as a scalar field acting on the particles), whose increase induces a reduction of the shear strength. The latter is taken into account by means of the Mohr-Coulomb criterion, a failure criterion based on limit equilibrium theory [49, 50]. Moreover, position-dependent fluctuations of the pore pressure are also considered. Concerning the interaction between particles, a Lennard-Jones potential is adopted, and other active forces such as gravity, dynamic friction and viscosity are also included. Positions are updated with the Verlet algorithm [51]. The outcomes of the simulations are quite satisfactory and, although the model proposed in [44] is still quite schematic, the results encourage further investigation in this direction, as this type of modeling can provide a new method to simulate landslides triggered by rainfall. In particular, the results are consistent with the behavior of real landslides; e.g., it is possible to apply the method of the inverse surface displacement velocity for predicting the failure time (the Fukuzono method [52]). Interesting behavior also emerges from the dynamic and statistical points of view: the simulations show emerging phenomena such as detachments, fractures and arching. Finally, in the simulated system, a transition of the mean energy increment distribution from Gaussian to power law is observed when varying the value of some parameters (e.g., the viscosity coefficient) or, with all parameters fixed, the same behavior can be observed in time during a single simulation, due to the stick and slip phases.
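A minimal sketch of how Eq. (6) can be integrated numerically, assuming the standard explicit L1 discretization of the Caputo time derivative (a common choice, not necessarily the authors'); grid sizes, D and delta are arbitrary demonstration values, and the explicit update needs a small time step for stability.

        import numpy as np
        from math import gamma

        delta, D = 0.8, 1e-3          # fractional order and diffusion coefficient
        nz, nt, dz, dt = 50, 400, 0.02, 0.01
        theta = np.zeros((nt, nz))
        theta[0, 0] = 1.0             # wet surface at t = 0
        # L1 weights b_j = (j+1)^(1-delta) - j^(1-delta)
        b = [(j + 1) ** (1 - delta) - j ** (1 - delta) for j in range(nt)]

        for n in range(1, nt):
            lap = np.zeros(nz)
            lap[1:-1] = (theta[n-1, 2:] - 2 * theta[n-1, 1:-1] + theta[n-1, :-2]) / dz**2
            # memory term of the Caputo L1 discretization
            hist = sum(b[j] * (theta[n-j] - theta[n-j-1]) for j in range(1, n))
            theta[n] = theta[n-1] + gamma(2 - delta) * dt**delta * D * lap - hist
            theta[n, 0], theta[n, -1] = 1.0, 0.0   # re-impose boundary values
        print(theta[-1, :5])          # infiltration profile near the surface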
As mentioned, given that our understanding of the triggering mechanisms is limited and that alternative approaches based on interconnected elements are well suited to reproducing the transition from a slowly moving mass to a catastrophic mass release, we are motivated to investigate mathematical methods, such as fractional calculus, for understanding the non-linearity of infiltration phenomena, and particle-based approaches for achieving a realistic description of the behavior of granular materials. References [1] A. Einstein, in: R. Furth (Ed.), Investigations on the Theory of the Brownian Movement, Dover, New York, 1956. [2] N. Wax (Ed.), Selected Papers on Noise and Stochastic Processes, Dover, New York, 1954. [3] H.S. Carslaw, J.C. Jaeger, Conduction of Heat in Solids, Clarendon Press, Oxford, 1959. [4] E. Nelson, Dynamical Theories of Brownian Motion, Princeton University Press, Princeton, 1967. [5] P. Levy, Processus stochastiques et mouvement Brownien, Gauthier-Villars, Paris, 1965. [6] R. Becker, Theorie der Warme, Heidelberger Taschenbucher, Vol. 10, Springer, Berlin, 1966; Theory of Heat, Springer, Berlin, 1967. [7] S.R. de Groot, P. Mazur, Non-equilibrium Thermodynamics, North-Holland, Amsterdam, 1969. [8] J.L. Doob, Stochastic Processes, Wiley, New York, 1953. [9] J. Crank, The Mathematics of Diffusion, Clarendon Press, Oxford, 1970. [10] D.R. Cox, H.D. Miller, The Theory of Stochastic Processes, Methuen, London, 1965. [11] R. Aris, The Mathematical Theory of Diffusion and Reaction in Permeable Catalysis, Vols. I and II, Clarendon Press, Oxford, 1975. [12] L.D. Landau, E.M. Lifschitz, Statistische Physik, Akademie, Leipzig, 1989; Statistical Physics, Pergamon, Oxford, 1980. [13] N.G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1981. [14] H. Risken, The Fokker-Planck Equation, Springer, Berlin, 1989. [15] W.T. Coffey, Yu.P. Kalmykov, J.T. Waldron, The Langevin Equation, World Scientific, Singapore, 1996. [16] B.D. Hughes, Random Walks and Random Environments, Vol. 1: Random Walks, Oxford University Press, Oxford, 1995. [17] G.H. Weiss, R.J. Rubin, Adv. Chem. Phys. 52 (1983) 363. [18] A. Blumen, J. Klafter, G. Zumofen, in: I. Zschokke (Ed.), Optical Spectroscopy of Glasses, Reidel, Dordrecht, 1986. [19] G.M. Zaslavsky, S. Benkadda, Chaos, Kinetics and Nonlinear Dynamics in Fluids and Plasmas, Springer, Berlin, 1998. [20] R. Metzler, J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Physics Reports 339 (2000) 1-77. [21] P. Levy, Calcul des Probabilites, Gauthier-Villars, Paris, 1925. [22] P. Levy, Theorie de l'addition des variables aleatoires, Gauthier-Villars, Paris, 1954. [23] B.V. Gnedenko, A.N. Kolmogorov, Limit Distributions for Sums of Random Variables, Addison-Wesley, Reading, MA, 1954. [24] L.F. Richardson, Atmospheric diffusion shown on a distance-neighbour graph, Proc. R. Soc. Lond. A 110, 709-737, 1926. [25] H. Scher, E.W. Montroll, Phys. Rev. B 12 (1975) 2455. [26] J.P. Bouchaud, A. Georges, Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications, Physics Reports, 195(4-5), 127-293, 1990. [27] P.-G. de Gennes, Scaling Concepts in Polymer Physics, Cornell University Press, Ithaca, 1979. [28] M. Doi, S.F. Edwards, The Theory of Polymer Dynamics, Clarendon Press, Oxford, 1986. [29] M. Porto, A. Bunde, S. Havlin, H.E. Roman, Phys. Rev. E 56 (2), 1997. [30] P. Allegrini, M. Buiatti, P. Grigolini, B.J. West, Non-Gaussian statistics of anomalous diffusion: The DNA sequences of prokaryotes, Physical Review E 58(3), 1998. [31] M. Bologna, C. Tsallis, P. Grigolini, Anomalous diffusion associated with nonlinear fractional derivative Fokker-Planck-like equation: Exact time-dependent solutions, Physical Review E, 62(2), 2000. [32] W. Chen, H. Sun, X. Zhang, D. Korosak, Anomalous diffusion modeling by fractal and fractional derivatives, Computers and Mathematics with Applications, 59, 1754-1758, 2010. [33] V.E. Tarasov, Fractional hydrodynamic equations for fractal media, Annals of Physics, 318(2), 286-307, 2005. [34] G. Martelloni, S. Segoni, R. Fanti, F. Catani, Rainfall thresholds for the forecasting of landslide occurrence at regional scale, Landslides, 9(4), 485-495, 2012. [35] M.G. Anderson, S. Howes, Development and application of a combined soil water-slope stability model, Q. J. Eng. Geol. London, 18: 225-236, 1985. [36] R.M. Iverson, Landslide triggering by rain infiltration, Water Resources Research 36(7): 1897-1910, 2000. [37] N. Lu, J. Godt, Infinite slope stability under steady unsaturated seepage conditions, Water Resources Research, Vol. 44, W11404, doi:10.1029/2008WR006976, 2008. [38] W. Wu, R.C. Sidle, A distributed slope stability model for steep forested basins, Water Resour. Res., 31(8), 2097-2110, doi:10.1029/95WR01136, 1995. [39] G.B. Crosta, P. Frattini, Distributed modelling of shallow landslides triggered by intense rainfall, Natural Hazards and Earth System Sciences 3: 81-93, 2003. [40] A. Patra, A. Bauer, C. Nichita, E. Pitman, M. Sheridan, M. Bursik, et al., Parallel adaptive numerical simulation of dry avalanches over natural terrain, J. Volcanol. Geotherm. Res., 1-21, 2005. [41] E. Massaro, G. Martelloni, F. Bagnoli, Particle based method for shallow landslides: modeling sliding surface lubrication by rainfall, CMSIM International Journal of Nonlinear Science, ISSN 2241-0503, 147-158, 2011. [42] G. Martelloni, E. Massaro, F. Bagnoli, A computational toy model for shallow landslides: Molecular Dynamics approach, Communications in Nonlinear Science and Numerical Simulation, 18(9), 2479-2492, 2013. [43] G. Martelloni, E. Massaro, F. Bagnoli, Computational modelling for landslides: molecular dynamics 2D application to shallow and deep landslides, in: EGU General Assembly 2012, Vienna (AT), Vol. 14, EGU2012-12219. [44] G. Martelloni, F. Bagnoli, Particle-based models for hydrologically triggered deep seated landslides, in: EGU General Assembly 2013, Vienna (AT), Vol. 15, EGU2013-10599-1. [45] P.A. Cundall, O.D.L. Strack, A discrete numerical model for granular assemblies, Geotechnique 29(1), 47-65, 1979. [46] G. Martelloni, F. Bagnoli, Infiltration effects on a two-dimensional molecular dynamics model of landslides, Natural Hazards, special issue "Modeling in landslide research: advanced methods", 2014. [47] Y. Pachepsky, D. Timlin, W. Rawls, Generalized Richards' equation to simulate water transport in unsaturated soils, Journal of Hydrology 272: 3-13, 2003. [48] G. Drazer, D.H. Zanette, Experimental evidence of power-law trapping-time distributions in porous media, Physical Review E, 60(5), 1999. [49] C.A. Coulomb, Essai sur une application des regles de maximis et minimis a quelques problemes de statique, relatifs a l'architecture, Mem. Acad. Roy. Div. Sav., 7: 343-387, 1776. [50] K. Terzaghi, Theoretical Soil Mechanics, Wiley, New York, 1943. [51] L. Verlet, Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules, Physical Review, 159: 98, 1967. [52] T. Fukuzono, A new method for predicting the failure time of a slope, Proc. 4th Int. Conf. and Field Workshop on Landslides, 145-150, Tokyo: Jpn. Landslide Soc., 1985.

  15. Methods of Farm Guidance

    ERIC Educational Resources Information Center

    Vir, Dharm

    1971-01-01

    A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)

  16. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective.

    PubMed

    Bishop, Felicity L

    2015-02-01

    To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of the methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering the formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address the philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution. What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when the component studies of mixed methods projects are published separately. What does this study add? Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.

  17. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study.

    PubMed

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-06-14

    Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study--often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods--particularly surveys and individual interviews--but used methods in a wide range of roles. Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations.

  18. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameters β_k, of the DY method and the HS method, and put forward a hybrid of DY and HS. We also propose a hybrid of FR and PRP by the same means. Additionally, to implement the two hybrid methods, we generalize the Wolfe line search to compute the step size α_k for each of them. With the new Wolfe line search, the descent property and the global convergence of the two hybrid methods can be proved.
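    The following Python sketch shows one common DY/HS hybridization, beta = max(0, min(beta_HS, beta_DY)), driven by SciPy's standard (strong) Wolfe line search; both the combination rule and the line search are assumptions here, since the paper's generalized Wolfe conditions and exact parameters are not reproduced in the abstract.

        import numpy as np
        from scipy.optimize import line_search

        A = np.diag([1.0, 4.0, 16.0])   # simple convex quadratic test problem
        b = np.array([1.0, 1.0, 1.0])

        def f(x):
            return 0.5 * x @ A @ x - b @ x

        def grad(x):
            return A @ x - b

        x = np.zeros(3)
        g = grad(x)
        d = -g
        for _ in range(100):
            alpha = line_search(f, grad, x, d)[0]     # standard Wolfe conditions
            if alpha is None:
                break
            x_new = x + alpha * d
            g_new = grad(x_new)
            y = g_new - g
            denom = d @ y
            beta_hs = (g_new @ y) / denom             # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom         # Dai-Yuan
            beta = max(0.0, min(beta_hs, beta_dy))    # assumed hybrid rule
            d = -g_new + beta * d
            x, g = x_new, g_new
            if np.linalg.norm(g) < 1e-10:
                break
        print(x)  # approaches the minimizer, i.e., the solution of A x = b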

  19. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters, and compares the calibration results of the two methods through principle analysis and experimental verification. After the same radiation luminance meter was calibrated with both methods, the data obtained verify that the results of the two methods are both reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and better suited to on-site calibration; moreover, it has a wider range and can test the linear performance of the instruments.

  20. The change and development of statistical methods used in research articles in child development 1930-2010.

    PubMed

    Køppe, Simo; Dammeyer, Jesper

    2014-09-01

    The evolution of developmental psychology has been characterized by the use of different quantitative and qualitative methods and procedures. But how does the use of methods and procedures change over time? This study explores the change and development of the statistical methods used in articles published in Child Development from 1930 to 2010. The methods used in every article in the first issue of every volume were sorted into four categories. Until 1980, relatively simple statistical methods were used; over the last 30 years there has been an explosive growth in the use of more advanced statistical methods, and articles using no statistical methods, or only simple ones, have all but disappeared.

  1. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    The nature of an object often shapes the methods used to resolve issues related to it. The same holds for methods of extracting social networks from the Web, which involve structured data of different types. This paper presents several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive complexity inequalities between the methods and, correspondingly, between their computations. In this case, we find that different results from the same tools separate the more complex from the simpler: extracting a social network from co-occurrences is more complex than extracting one from occurrences.

  2. Performance of a proposed determinative method for p-TSA in rainbow trout fillet tissue and bridging the proposed method with a method for total chloramine-T residues in rainbow trout fillet tissue

    USGS Publications Warehouse

    Meinertz, J.R.; Stehly, G.R.; Gingerich, W.H.; Greseth, Shari L.

    2001-01-01

    Chloramine-T is an effective drug for controlling fish mortality caused by bacterial gill disease. As part of the data required for approval of chloramine-T use in aquaculture, depletion of the chloramine-T marker residue (para-toluenesulfonamide; p-TSA) from the edible fillet tissue of fish must be characterized. The declaration of p-TSA as the marker residue for chloramine-T in rainbow trout was based on total residue depletion studies using a method that relied on time-consuming and cumbersome techniques. A simple and robust method recently developed is being proposed as a determinative method for p-TSA in fish fillet tissue. The proposed determinative method was evaluated by comparing accuracy and precision data with U.S. Food and Drug Administration criteria and by bridging the method to the former method for chloramine-T residues. The method's accuracy and precision fulfilled the criteria for determinative methods; accuracy was 92.6, 93.4, and 94.6% for samples fortified at 0.5X, 1X, and 2X the expected 1000 ng/g tolerance limit for p-TSA, respectively. Method precision with tissue containing incurred p-TSA at a nominal concentration of 1000 ng/g ranged from 0.80 to 8.4%. The proposed determinative method was successfully bridged with the former method: the p-TSA concentrations obtained with the proposed method were not statistically different at p < 0.05 from those obtained with the former method.

  3. Standard setting: comparison of two methods.

    PubMed

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of an assessment is determined by the standard-setting method used. There is a wide range of standard-setting methods; the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard setting (mean minus 1 SD) was applied to the 'raw' scores of 78 fourth-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods, and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that with the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69%-87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of the two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
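    A toy comparison of the two standards can be sketched as follows; the scores and judges' estimates are invented, with the norm-referenced cut taken at mean minus 1 SD and the Angoff cut taken as the mean of the judges' estimates for a minimally competent candidate.

        import numpy as np

        rng = np.random.default_rng(1)
        scores = rng.normal(60, 10, 78).clip(0, 100)   # 78 candidates, % correct
        norm_cut = scores.mean() - scores.std()        # norm-referenced standard
        angoff_cut = np.mean([52, 48, 55, 50])         # judges' item-based estimates

        pass_norm = scores >= norm_cut
        pass_angoff = scores >= angoff_cut
        agreement = (pass_norm == pass_angoff).mean()  # fraction classified alike
        print(pass_norm.mean(), pass_angoff.mean(), agreement)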

  4. Women's Contraceptive Preference-Use Mismatch

    PubMed Central

    He, Katherine; Dalton, Vanessa K.; Zochowski, Melissa K.

    2017-01-01

    Abstract Background: Family planning research has not adequately addressed women's preferences for different contraceptive methods and whether women's contraceptive experiences match their preferences. Methods: Data were drawn from the Women's Healthcare Experiences and Preferences Study, an Internet survey of 1,078 women aged 18–55 randomly sampled from a national probability panel. Survey items assessed women's preferences for contraceptive methods, match between methods preferred and used, and perceived reasons for mismatch. We estimated predictors of contraceptive preference with multinomial logistic regression models. Results: Among women at risk for pregnancy who responded with their preferred method (n = 363), hormonal methods (non-LARC [long-acting reversible contraception]) were the most preferred method (34%), followed by no method (23%) and LARC (18%). Sociodemographic differences in contraception method preferences were noted (p-values <0.05), generally with minority, married, and older women having higher rates of preferring less effective methods, compared to their counterparts. Thirty-six percent of women reported preference-use mismatch, with the majority preferring more effective methods than those they were using. Rates of match between preferred and usual methods were highest for LARC (76%), hormonal (non-LARC) (65%), and no method (65%). The most common reasons for mismatch were cost/insurance (41%), lack of perceived/actual need (34%), and method-specific preference concerns (19%). Conclusion: While preference for effective contraception was common among this sample of women, we found substantial mismatch between preferred and usual methods, notably among women of lower socioeconomic status and women using less effective methods. Findings may have implications for patient-centered contraceptive interventions. PMID:27710196

  5. Water-resources reconnaissance of Île de la Gonâve, Haiti

    NASA Astrophysics Data System (ADS)

    Troester, Joseph W.; Turvey, Michael D.

    Île de la Gonâve is a 750-km2 island off the coast of Haiti. The depth to the water table ranges from less than 30 m in the Eocene and Upper Miocene limestones to over 60 m in the 300-m-thick Quaternary limestone. Annual precipitation ranges from 800-1,400 mm. Most precipitation is lost through evapotranspiration and there is virtually no surface water. Roughly estimated from chloride mass balance, about 4% of the precipitation recharges the karst aquifer. Cave pools and springs are a common source for water. Hand-dug wells provide water in coastal areas. Few productive wells have been drilled deeper than 60 m. Reconnaissance field analyses indicate that groundwater in the interior is a calcium-bicarbonate type, whereas water at the coast is a sodium-chloride type that exceeds World Health Organization recommended values for sodium and chloride. Tests for the presence of hydrogen sulfide-producing bacteria were negative in most drilled wells, but positive in cave pools, hand-dug wells, and most springs, indicating bacterial contamination of most water sources. Because of the difficulties in obtaining freshwater, the 110,000 inhabitants use an average of only 7 L per person per day.
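    The chloride mass-balance estimate mentioned above reduces to a ratio of concentrations: the recharge fraction is the chloride concentration in precipitation divided by that in groundwater. The concentrations in this sketch are invented, chosen only so the ratio lands near the reported 4%.

        precip_mm = 1100.0        # annual precipitation, midpoint of 800-1,400 mm
        cl_precip = 1.0           # chloride in rainfall [mg/L], assumed
        cl_groundwater = 25.0     # chloride in karst groundwater [mg/L], assumed

        recharge_fraction = cl_precip / cl_groundwater   # mass balance: R/P = Cl_P / Cl_GW
        print(recharge_fraction, recharge_fraction * precip_mm, "mm/yr")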

  6. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of this broad range of adaptive threshold methods, applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB, makes it possible to select method/image-type pairs for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. These three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated as a whole. The best results, however, are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.91 and 0.74 respectively, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects, but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. Virtual Slides: The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405

  7. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of this broad range of adaptive threshold methods, applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB, makes it possible to select method/image-type pairs for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. These three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated as a whole. The best results, however, are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.91 and 0.74 respectively, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects, but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
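    Several of the thresholding methods compared above are available in scikit-image; the sketch below applies Niblack and Sauvola thresholds to a stock greyscale image as a stand-in for the stained channel, with an assumed window size and k, not the study's settings.

        from skimage import data
        from skimage.filters import threshold_niblack, threshold_sauvola

        image = data.page()          # any greyscale image; a stand-in for the DAB channel
        window = 25                  # local window size (odd), an assumed value
        t_nib = threshold_niblack(image, window_size=window, k=0.2)
        t_sau = threshold_sauvola(image, window_size=window, k=0.2)

        mask_nib = image > t_nib     # per-pixel foreground/background decision
        mask_sau = image > t_sau
        print(mask_nib.mean(), mask_sau.mean())   # fraction of pixels marked foreground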

  8. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods ... JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values ... preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute ... in a Jacobi method, we have already ... Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR) ...
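    The distinction drawn in the report between the two sweeps can be illustrated in a few lines of Python (not code from the report): Jacobi builds each update entirely from the previous iterate, while Gauss-Seidel reuses components already updated within the current sweep.

        import numpy as np

        def jacobi_sweep(A, b, x):
            x_new = x.copy()
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]      # uses only the previous iterate
                x_new[i] = (b[i] - s) / A[i, i]
            return x_new

        def gauss_seidel_sweep(A, b, x):
            x = x.copy()
            for i in range(len(b)):
                s = A[i] @ x - A[i, i] * x[i]      # rows < i already hold new values
                x[i] = (b[i] - s) / A[i, i]
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])     # diagonally dominant, so both converge
        b = np.array([1.0, 2.0])
        x = np.zeros(2)
        for _ in range(25):
            x = gauss_seidel_sweep(A, b, x)
        print(x)  # approaches the solution of A x = b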

  9. Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley

    Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting, and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and adapt several successful methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. The error and standard deviation in the assignment of phase bins for the four methods, compared to a manual method considered to be ground truth, were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods, while the Fourier transform and center of mass methods showed a noticeable dependence on ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.
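    Of the four approaches, the pixel intensity method is the simplest to illustrate: average the pixel values in a fixed ROI over every projection and use the resulting trace as the respiratory signal. The sketch below runs on simulated projections; the ROI location and breathing period are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_proj, h, w = 200, 64, 64
        proj = rng.random((n_proj, h, w)) * 0.1            # noisy projection stack
        breath = 0.5 * (1 + np.sin(2 * np.pi * np.arange(n_proj) / 25))
        proj[:, 30:40, 20:30] += breath[:, None, None]     # moving-anatomy intensity

        signal = proj[:, 30:40, 20:30].mean(axis=(1, 2))   # mean intensity in the ROI
        signal = (signal - signal.mean()) / signal.std()   # normalized breathing trace
        print(np.argmax(signal[:25]))                      # index of the first peak

    Sorting the projections into phase bins from this trace, then reconstructing each bin, would complete the 4D pipeline.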

  10. 26 CFR 1.167(b)-2 - Declining balance method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is.... While salvage is not taken into account in determining the annual allowances under this method, in no...
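    The mechanics of the declining balance method reduce to applying a uniform rate to the remaining (undepreciated) basis each year. The sketch below is a hedged illustration with made-up basis, rate and life, not figures from the regulation, and it ignores the salvage limitation the section goes on to describe.

        def declining_balance(basis, rate, years):
            """Annual allowances under a uniform-rate declining balance."""
            schedule = []
            for _ in range(years):
                deduction = basis * rate
                basis -= deduction          # each allowance reduces the balance
                schedule.append(round(deduction, 2))
            return schedule

        print(declining_balance(10_000.0, 0.2, 5))   # illustrative 20% rate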

  11. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  12. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  13. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED..., App. A Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of...

  14. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  15. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  16. 78 FR 22540 - Notice of Public Meeting/Webinar: EPA Method Development Update on Drinking Water Testing Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-16

    ...: EPA Method Development Update on Drinking Water Testing Methods for Contaminant Candidate List... Division will describe methods currently in development for many CCL contaminants, with an expectation that several of these methods will support future cycles of the Unregulated Contaminant Monitoring Rule (UCMR...

  17. Problems d'elaboration d'une methode locale: la methode "Paris-Khartoum" (Problems in Implementing a Local Method: the Paris-Khartoum Method)

    ERIC Educational Resources Information Center

    Penhoat, Loick; Sakow, Kostia

    1978-01-01

    A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)

  18. Exponentially fitted symplectic Runge-Kutta-Nyström methods derived by partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2013-10-01

    In this work we derive symplectic EF/TF RKN methods by symplectic EF/TF PRK methods. Also EF/TF symplectic RKN methods are constructed directly from classical symplectic RKN methods. Several numerical examples will be given in order to decide which is the most favourable implementation.

  19. Standard methods for chemical analysis of steel, cast iron, open-hearth iron, and wrought iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1973-01-01

    Methods are described for determining manganese, phosphorus, sulfur, selenium, copper, nickel, chromium, vanadium, tungsten, titanium, lead, boron, molybdenum (alpha-benzoin oxime method), zirconium (cupferron-phosphate method), niobium and tantalum (hydrolysis with perchloric and sulfurous acids; gravimetric, titrimetric, and photometric methods), and beryllium (oxide method). (DHM)

  20. Detection of coupling delay: A problem not yet solved

    NASA Astrophysics Data System (ADS)

    Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan

    2017-08-01

    Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two methods of detection are assessed—the method based on conditional mutual information—the CMI method (also known as the transfer entropy method) and the method of convergent cross mapping—the CCM method. Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable in a broader range of coupling parameters than the CCM method. In the case of tested discrete-time dynamical systems, the CCM method has been found to be more sensitive, while the CMI method required much stronger coupling strength in order to bring correct results. However, when studied systems contain a strong oscillatory component in their dynamics, results of both methods become ambiguous. The presented study suggests that results of the tested algorithms should be interpreted with utmost care and the nonparametric detection of coupling delay, in general, is a problem not yet solved.

  1. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    A review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is a family of variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  2. Identifying Outliers of Non-Gaussian Groundwater State Data Based on Ensemble Estimation for Long-Term Trends

    NASA Astrophysics Data System (ADS)

    Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.

    2016-12-01

    Three modified outlier identification methods are proposed: the three sigma rule (3s), the interquartile range (IQR), and the median absolute deviation (MAD), each taking advantage of an ensemble regression method. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in that it falsely identifies excessive outliers, which may be remedied by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection by training models on currently available data.
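    A minimal sketch of the three detection rules applied to residuals from a trend estimate. The toy series, the bootstrap line fits standing in for the paper's ensemble regression, and all thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def outliers_3s(r):
    # three sigma rule: flag residuals more than 3 standard deviations out
    return np.abs(r - r.mean()) > 3 * r.std()

def outliers_iqr(r, k=1.5):
    # interquartile range rule: outside [Q1 - k*IQR, Q3 + k*IQR]
    q1, q3 = np.percentile(r, [25, 75])
    return (r < q1 - k * (q3 - q1)) | (r > q3 + k * (q3 - q1))

def outliers_mad(r, k=3.5):
    # median absolute deviation rule with the usual 0.6745 scaling
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    return np.abs(0.6745 * (r - med)) > k * mad

# toy groundwater-level series: linear trend, noise, three injected outliers
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
h = 0.3 * t + rng.normal(0, 0.1, t.size)
h[[20, 80, 150]] += [1.5, -2.0, 1.0]

# stand-in for the ensemble trend: average of bootstrap straight-line fits
idx = [rng.integers(0, t.size, t.size) for _ in range(50)]
trend = np.mean([np.polyval(np.polyfit(t[i], h[i], 1), t) for i in idx], axis=0)
residuals = h - trend

for name, rule in [("3s", outliers_3s), ("IQR", outliers_iqr), ("MAD", outliers_mad)]:
    print(name, np.flatnonzero(rule(residuals)))   # all flag indices 20, 80, 150
```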

  3. Overview of paint removal methods

    NASA Astrophysics Data System (ADS)

    Foster, Terry

    1995-04-01

    With the introduction of strict environmental regulations governing the use and disposal of methylene chloride and phenols, major components of chemical paint strippers, many new environmentally safe and effective methods of paint removal have been developed. The new methods developed for removing coatings from aircraft and aircraft components include: mechanical methods using abrasive media such as plastic, wheat starch, walnut shells, ice and dry ice; environmentally safe chemical strippers and paint softeners; and optical methods such as lasers and flash lamps. Each method has its advantages and disadvantages, and some have unique applications. For example, mechanical and abrasive methods can damage sensitive surfaces such as composite materials, and strict control of blast parameters and conditions is required. Optical methods can be slow, leaving paint residues, and chemical methods may not remove all of the coating or may require special coating formulations to be effective. This paper is an overview of the various environmentally safe and effective paint removal methods available and serves as an introduction to them.

  4. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the more recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
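    A minimal sketch of the line-search approach the paper analyzes: the Newton direction damped by Armijo backtracking, applied to the Rosenbrock test function. The test function, constants, and stopping rules are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def newton_ls(f, grad, hess, x, tol=1e-10, max_iter=50):
    # damped Newton: Newton direction plus Armijo backtracking line search
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)            # Newton direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5                                # backtrack until Armijo holds
        x = x + t * p
    return x

# Rosenbrock function as a standard unconstrained test problem
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(newton_ls(f, grad, hess, np.array([-1.2, 1.0])))   # converges to [1, 1]
```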

  5. [Comparison of two nucleic acid extraction methods for norovirus in oysters].

    PubMed

    Yuan, Qiao; Li, Hui; Deng, Xiaoling; Mo, Yanling; Fang, Ling; Ke, Changwen

    2013-04-01

    To explore a convenient and effective method for norovirus nucleic acid extraction from oysters suitable for long-term viral surveillance. Two methods, namely method A (glycine washing and polyethylene glycol precipitation of the virus followed by silica gel centrifugal column) and method B (protease K digestion followed by application of paramagnetic silicon) were compared for their performance in norovirus nucleic acid extraction from oysters. Real-time RT-PCR was used to detect norovirus in naturally infected oysters and in oysters with induced infection. The two methods yielded comparable positive detection rates for the samples, but the recovery rate of the virus was higher with method B than with method A. Method B is a more convenient and rapid method for norovirus nucleic acid extraction from oysters and suitable for long-term surveillance of norovirus.

  6. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A moving least squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.

  7. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method is introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian-free Newton-generalized minimum residual (JFNGMRes) method with an adaptive preconditioner. In this work a nonlinear PDE is converted into a nonlinear system of algebraic equations using the collocation method based on Bessel functions, without any linearization, discretization, or assistance from any other method. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is obtained. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation and compare our results with other methods.
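    A minimal sketch of the Jacobian-free Newton-Krylov idea on Fisher's equation, using SciPy's matrix-free newton_krylov solver for an implicit Euler step. A finite-difference Laplacian stands in for the paper's Bessel-function collocation, and the grid, time step, and boundary values are assumptions.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Implicit Euler step for Fisher's equation u_t = u_xx + u(1 - u), solved
# matrix-free: the Krylov solver only needs residual evaluations, never an
# explicitly assembled Jacobian.
N, L, dt = 200, 40.0, 0.5
x = np.linspace(0, L, N)
dx = x[1] - x[0]
u = 1.0 / (1.0 + np.exp(x - 10.0))       # smooth front as initial condition

def residual(v, u_old):
    # F(v) = v - u_old - dt * (v_xx + v(1 - v)), with Dirichlet end conditions
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    F = v - u_old - dt * (lap + v * (1 - v))
    F[0], F[-1] = v[0] - 1.0, v[-1]      # u(0) = 1, u(L) = 0
    return F

for step in range(20):
    u = newton_krylov(lambda v: residual(v, u), u, method="lgmres", f_tol=1e-9)
print("front position:", x[np.argmin(np.abs(u - 0.5))])   # front has advanced
```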

  8. Mending the Gap, An Effort to Aid the Transfer of Formal Methods Technology

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly

    2009-01-01

    Formal methods can be applied to many of the development and verification activities required for civil avionics software. RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, gives a brief description of using formal methods as an alternate method of compliance with the objectives of that standard. Despite this, the avionics industry at large has been hesitant to adopt formal methods, and few developers have actually used formal methods for certification credit. Why is this so, given the volume of evidence of the benefits of formal methods? This presentation will explore some of the challenges to using formal methods in a certification context and describe the effort by the Formal Methods Subgroup of RTCA SC-205/EUROCAE WG-71 to develop guidance to make the use of formal methods a recognized approach.

  9. Methods for the calculation of axial wave numbers in lined ducts with mean flow

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1981-01-01

    A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.

  10. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance to the corresponding indefinite kernel methods on a number of real world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise optimal-λ determination method in an unconstrained optimization framework. We developed a perturbed von Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von Neumann divergence, aiming at finding a better λ for the projection method. Results on a number of real world data sets show that the projection method with the optimal λ by Logdet divergence demonstrates near-optimal performance, and the perturbed von Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way in kernel SVMs for varied objectives.
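    A minimal sketch of the spectrum methods the paper generalizes: clipping ("denoising") or flipping the negative eigenvalues of an indefinite similarity matrix to restore positive semi-definiteness. The toy matrix is an assumption for illustration.

```python
import numpy as np

def spectrum_fix(K, mode="clip"):
    # make an indefinite similarity matrix positive semi-definite:
    # 'clip' (denoising) zeroes negative eigenvalues, 'flip' takes |eigenvalue|
    w, V = np.linalg.eigh((K + K.T) / 2)        # symmetrize, eigendecompose
    w = np.maximum(w, 0.0) if mode == "clip" else np.abs(w)
    return V @ np.diag(w) @ V.T

# toy indefinite "kernel": a symmetric but non-PSD similarity matrix
rng = np.random.default_rng(1)
S = rng.normal(size=(6, 6))
K = (S + S.T) / 2
print("min eigenvalue before:", np.linalg.eigvalsh(K).min())
for mode in ("clip", "flip"):
    print(mode, np.linalg.eigvalsh(spectrum_fix(K, mode)).min().round(12))
```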

  11. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
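    The ADI algorithm itself is not spelled out in the abstract; as a hedged illustration, the sketch below implements the unweighted least-squares unwrapping of Ghiglia and Romero (GR), the classical baseline the evaluation compares against, using wrapped phase differences as noise-robust derivative estimates and a DCT-based Poisson solve. The synthetic test field is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    # wrap to (-pi, pi]; applied to differences it estimates the true gradient
    return np.angle(np.exp(1j * p))

def unwrap_ls(psi):
    # Ghiglia-Romero unweighted least-squares unwrapping via a DCT Poisson solve
    M, N = psi.shape
    dy = wrap(np.diff(psi, axis=0))
    dx = wrap(np.diff(psi, axis=1))
    rho = np.zeros((M, N))                      # divergence of wrapped gradients
    rho[:-1, :] += dy; rho[1:, :] -= dy
    rho[:, :-1] += dx; rho[:, 1:] -= dx
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2 * (np.cos(np.pi * i / M) - 1) + 2 * (np.cos(np.pi * j / N) - 1)
    phi = dctn(rho, norm="ortho")
    phi[0, 0] = 0.0                             # free additive constant
    nonzero = denom != 0
    phi[nonzero] /= denom[nonzero]
    return idctn(phi, norm="ortho")

# synthetic surface: tilt plus a smooth bump, then wrapped
y, x = np.mgrid[0:128, 0:128].astype(float)
true = 0.2 * x + 8 * np.exp(-((x - 64)**2 + (y - 64)**2) / 500)
est = unwrap_ls(wrap(true))
err = (est - est.mean()) - (true - true.mean())
print("max abs error:", np.abs(err).max())      # tiny for this clean field
```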

  12. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
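    A hedged sketch of the pipeline the abstract describes, on hypothetical data: per-species logistic regressions, a generalized Mahalanobis distance between coefficient vectors weighted by their summed covariances, and hierarchical clustering of the resulting distance matrix. The covariates, species count, and cluster count are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n_species, n_sites = 8, 400
X = np.column_stack([np.ones(n_sites), rng.normal(size=(n_sites, 2))])

def fit_logistic(X, y, iters=25):
    # Newton-Raphson (IRLS) for logistic regression; returns (coef, coef cov)
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])      # Fisher information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b, np.linalg.inv(H)

# simulate presence/absence for species falling into two habitat groups
true = np.vstack([np.tile([0.5, 2.0, -1.0], (4, 1)),
                  np.tile([-0.5, -2.0, 1.0], (4, 1))])
betas, covs = [], []
for s in range(n_species):
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ true[s])))
    b, C = fit_logistic(X, y)
    betas.append(b); covs.append(C)

# generalized Mahalanobis distance between coefficient vectors
D = np.zeros((n_species, n_species))
for i in range(n_species):
    for j in range(i):
        d = betas[i] - betas[j]
        D[i, j] = D[j, i] = np.sqrt(d @ np.linalg.solve(covs[i] + covs[j], d))

groups = fcluster(linkage(squareform(D), "average"), t=2, criterion="maxclust")
print(groups)        # the two simulated habitat groups are recovered
```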

  13. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  14. Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.

  15. Statistical Methods Used to Test for Agreement of Medical Instruments Measuring Continuous Variables in Method Comparison Studies: A Systematic Review

    PubMed Central

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Background Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Methodology/Findings Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. Conclusions This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future. PMID:22662248
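    Since both records center on the Bland-Altman method, a minimal sketch of its core computation follows: the bias (mean difference) and 95% limits of agreement between two instruments. The simulated instrument readings are assumptions for illustration.

```python
import numpy as np

def bland_altman(a, b):
    # Bland-Altman agreement: mean bias and 95% limits of agreement
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# toy example: a new instrument vs. a gold standard
rng = np.random.default_rng(3)
gold = rng.normal(100, 15, 60)
new = gold + 2.0 + rng.normal(0, 4, 60)      # constant bias 2, random error sd 4
bias, lo, hi = bland_altman(new, gold)
print(f"bias={bias:.2f}, 95% limits of agreement=({lo:.2f}, {hi:.2f})")
```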

  16. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  17. [The research and application of pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi].

    PubMed

    Huang, Y F; Chang, Z; Bai, J; Zhu, M; Zhang, M X; Wang, M; Zhang, G; Li, X Y; Tong, Y G; Wang, J L; Lu, X X

    2017-08-08

    Objective: To establish and evaluate the feasibility of a pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi developed by the laboratory. Methods: Three hundred and eighty strains of filamentous fungi from January 2014 to December 2016 were recovered and cultured on sabouraud dextrose agar (SDA) plates at 28 ℃ to the mature state. Meanwhile, the fungi were cultured in liquid sabouraud medium with a vertical rotation method recommended by Bruker and a horizontal vibration method developed by the laboratory until an adequate amount of colonies was observed. For the strains cultured with the three methods, protein was extracted with a modified magnetic bead-based extraction method for mass spectrum identification. Results: For the 380 fungal strains, culture took 3-10 d with the SDA culture method, and the ratios of identification to species and genus level were 47% and 81%, respectively; culture took 5-7 d with the vertical rotation method, and the ratios of identification to species and genus level were 76% and 94%, respectively; culture took 1-2 d with the horizontal vibration method, and the ratios of identification to species and genus level were 96% and 99%, respectively. For the comparison between the horizontal vibration method and the SDA culture method, the difference was statistically significant (χ²=39.026, P<0.01); for the comparison between the horizontal vibration method and the vertical rotation method recommended by Bruker, the difference was statistically significant (χ²=11.310, P<0.01). Conclusion: The horizontal vibration method and the modified magnetic bead-based extraction method developed by the laboratory are superior to the method recommended by Bruker and the SDA culture method in terms of identification capacity for filamentous fungi, and can be applied in the clinic.

  18. Development of a practical costing method for hospitals.

    PubMed

    Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei

    2006-03-01

    To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. Among traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues or the number of patients). However, this method often produces rough and inaccurate results. The activity-based costing (ABC) method introduced in the mid 1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests, and results similar in accuracy to those of the ABC method were obtained (the largest difference was 2.64%). Simultaneously, this new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to certify the effectiveness of this new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using few cost drivers after the second round of costing.
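    A minimal sketch of the ABC allocation step the abstract describes, with hypothetical activities, cost drivers, and driver volumes: each activity's cost pool is spread over the cost objects in proportion to their consumption of that activity's driver.

```python
# activity -> (cost pool, cost driver); all figures are hypothetical
activities = {
    "specimen handling": (12000.0, "samples"),
    "instrument runs":   (30000.0, "machine hours"),
    "reporting":         ( 8000.0, "reports"),
}
# driver consumption per cost object (here, two laboratory tests)
usage = {
    "test A": {"samples": 700, "machine hours": 120, "reports": 650},
    "test B": {"samples": 300, "machine hours": 280, "reports": 350},
}

# total driver volume per driver, then proportional allocation to each test
totals = {driver: sum(u[driver] for u in usage.values())
          for _, driver in activities.values()}
cost = {obj: sum(pool * u[driver] / totals[driver]
                 for pool, driver in activities.values())
        for obj, u in usage.items()}
print(cost)   # {'test A': 22600.0, 'test B': 27400.0}; sums to the 50000 pool
```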

  19. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride

    NASA Astrophysics Data System (ADS)

    Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.

    2016-04-01

    A comparative study was carried out between two classical spectrophotometric methods (the dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (the ratio difference method and the first derivative of ratio spectra method) for the simultaneous determination of antazoline hydrochloride (AN) and tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative, without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance at those two wavelengths is zero for the other drug. Vierordt's method is based upon measuring the absorbance and the absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution into the corresponding Vierordt's equation. The recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ, in the case of the ratio difference method, or computing the first derivative of the ratio spectra for each drug and then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ, in the case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory-prepared mixtures of the two drugs. All methods were applied successfully to the determination of the selected drugs in their combined dosage form, proving that the classical spectrophotometric methods, which require minimal data manipulation, can still be used as successfully in the analysis of a binary mixture as the recent methods, which require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories.
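    A hedged sketch of the dual-wavelength principle on synthetic Gaussian spectra (all spectra, concentrations, and wavelength choices are assumptions, not the paper's data): for the analyte of interest, two wavelengths are chosen at which the interfering drug absorbs equally, so the absorbance difference of the mixture tracks the analyte alone.

```python
import numpy as np

wl = np.linspace(200, 320, 601)                   # wavelength grid, nm
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
eps_AN = 1.0 * gauss(248, 18)                     # hypothetical AN spectrum
eps_TZ = 0.8 * gauss(219, 14)                     # hypothetical TZ spectrum

def equal_absorbance_pair(eps_other, off=40):
    # two wavelengths straddling the interferent's peak with equal absorptivity
    k = int(eps_other.argmax())
    j = int(np.argmin(np.abs(eps_other[k + off:] - eps_other[k - off])) + k + off)
    return k - off, j

i1, i2 = equal_absorbance_pair(eps_TZ)            # pair for quantifying AN
mix = 0.6 * eps_AN + 1.3 * eps_TZ                 # mixture; true AN conc. = 0.6
slope = eps_AN[i2] - eps_AN[i1]                   # calibration from pure AN
print("wavelengths [nm]:", wl[i1], wl[i2])
print("estimated AN concentration:", (mix[i2] - mix[i1]) / slope)   # ~0.6
```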

  20. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

    Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, although only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45±15.3 years (mean±SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.

  2. [Significance of bacteria detection with filter paper method on diagnosis of diabetic foot wound infection].

    PubMed

    Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H

    2017-02-20

    Objective: To evaluate the significance of bacteria detection with the filter paper method in the diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcers conforming to the study criteria were hospitalized in Liyuan Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and the general condition of patients with wounds of different Texas grades was compared. Exudate and tissue of wounds were obtained, and the filter paper method and the biopsy method were adopted to detect the bacteria of the wounds of the patients, respectively. The filter paper method was regarded as the evaluation method, and the biopsy method was regarded as the control method. The relevance, difference, and consistency of the detection results of the two methods were tested. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were calculated. A receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of the filter paper method in bacteria detection of the 18 patients to predict the detection effect of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients who tested positive for bacteria by the biopsy method, the correlation between the bacteria number detected by the biopsy method and that by the filter paper method was analyzed with Pearson correlation analysis. Results: (1) There were no statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, and urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in white blood cell count and alanine aminotransferase (with F values of 4.688 and 6.833, respectively, P<0.05 or P<0.01). (2) According to the results of the biopsy method, 6 patients tested negative for bacteria and 12 patients tested positive, among whom 10 patients had a bacterial number above 1×10⁵/g and 2 patients a bacterial number below 1×10⁵/g. According to the results of the filter paper method, 8 patients tested negative for bacteria and 10 patients tested positive, among whom 7 patients had a bacterial number above 1×10⁵/g and 3 patients a bacterial number below 1×10⁵/g. There were 7 patients who tested positive for bacteria by both the biopsy method and the filter paper method, 8 patients who tested negative by both methods, and 3 patients who tested positive by the biopsy method but negative by the filter paper method. No patient who tested negative by the biopsy method tested positive by the filter paper method. There was a directional association between the detection results of the two methods (P=0.004), i.e., if the result of the biopsy method was positive, the result of the filter paper method could also be positive. There was no obvious difference in the detection results of the two methods (P=0.250). The consistency between the detection results of the two methods was moderate (Kappa=0.68, P=0.002). (3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve of bacteria detection by the filter paper method in the 18 patients was 0.919 (with 95% confidence interval 0-1.000, P=0.030). (4) There were 13 strains of bacteria detected by the biopsy method, with 5 strains of Acinetobacter baumannii, 5 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of bird Enterococcus. There were 11 strains of bacteria detected by the filter paper method, with 5 strains of Acinetobacter baumannii, 3 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of bird Enterococcus. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in the detection of the other 4 bacteria were all 100%. The consistency between the filter paper method and the biopsy method in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was moderate (Kappa=0.68, P<0.05). (5) There was no obvious correlation between the bacteria number of wounds detected by the filter paper method and that by the biopsy method (r=0.257, P=0.419). There was an obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 1 and 2 (with r values of 0.999, P values of 0.001). There was no obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection result of the filter paper method is in accordance with that of the biopsy method in the determination of bacterial infection, and it is of great importance in the diagnosis of local infection of diabetic foot wounds.
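    As a quick check of the reported diagnostic metrics, the 2×2 table implied by the abstract (7 true positives, 8 true negatives, 3 false negatives, 0 false positives, with biopsy as the reference standard) reproduces the stated values:

```python
# 2x2 table implied by the abstract: biopsy method as reference standard
tp, fn = 7, 3      # biopsy-positive patients: filter paper positive/negative
tn, fp = 8, 0      # biopsy-negative patients: filter paper negative/positive

sensitivity = tp / (tp + fn)                 # 0.70  -> reported 70%
specificity = tn / (tn + fp)                 # 1.00  -> reported 100%
ppv = tp / (tp + fp)                         # 1.00  -> reported 1.00
npv = tn / (tn + fn)                         # 0.727 -> reported 0.73
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.833 -> reported 83.3%
print(sensitivity, specificity, ppv, round(npv, 2), round(accuracy, 3))
```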

  3. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
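    A minimal 1-D sketch of the k-t space propagator for a homogeneous medium, where the temporal iteration is exact (the paper's contribution is the extension to variable sound speed and density); grid, medium, and pulse parameters are assumptions.

```python
import numpy as np

N, dx, c0, dt = 512, 1e-3, 1500.0, 2e-7          # grid and medium (assumed)
k = 2 * np.pi * np.fft.fftfreq(N, dx)
kprop = -4 * np.sin(c0 * k * dt / 2) ** 2        # k-t space propagator

x = (np.arange(N) - N // 2) * dx
p_old = p = np.exp(-(x / (10 * dx)) ** 2)        # Gaussian initial pressure

for _ in range(400):
    # spatial part evaluated spectrally; exact in time for a homogeneous medium
    lap = np.real(np.fft.ifft(kprop * np.fft.fft(p)))
    p, p_old = 2 * p - p_old + lap, p

# the pulse splits into two fronts travelling at +/- c0 (0.12 m after 80 us)
left, right = p[:N // 2], p[N // 2:]
print("fronts at [m]:", x[np.argmax(left)], x[N // 2 + np.argmax(right)])
```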

  4. The method of planning the energy consumption for electricity market

    NASA Astrophysics Data System (ADS)

    Russkov, O. V.; Saradgishvili, S. E.

    2017-10-01

    The limitations of existing forecast models are defined. The proposed method is based on game theory, probability theory, and forecasting of energy price relations. The new method is the basis for planning the uneven energy consumption of an industrial enterprise. The ecological aspect of the proposed method is also discussed. A program module implementing the method's algorithm is described. Successful tests of the method at an industrial enterprise are reported. The proposed method optimizes the difference between planned and actual energy consumption for every hour of the day. Conclusions are drawn about the applicability of the method to economic and ecological challenges.

  5. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution, as well as to those of other methods. The proposed method is computationally more effective and leads to more accurate results compared to other methods from the literature.
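    A hedged sketch of the collocation mechanism described above, reduced to a second-order test problem for brevity (the paper treats sixth-order problems with a wavelet basis): expanding the solution in Legendre polynomials and enforcing the ODE at collocation points turns the BVP into a linear algebraic system.

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 16                                           # number of Legendre modes
nodes = np.cos(np.pi * (np.arange(n - 2) + 0.5) / (n - 2))   # interior points

# Test problem: u'' = -pi^2 sin(pi x) on [-1, 1], u(-1) = u(1) = 0,
# with exact solution u = sin(pi x).
A = np.zeros((n, n))
b = np.zeros(n)
for j in range(n):
    c = np.zeros(n); c[j] = 1.0                  # j-th Legendre basis function
    A[:n - 2, j] = L.legval(nodes, L.legder(c, 2))   # u'' at collocation points
    A[n - 2, j] = L.legval(-1.0, c)              # boundary condition rows
    A[n - 1, j] = L.legval(1.0, c)
b[:n - 2] = -np.pi**2 * np.sin(np.pi * nodes)

coef = np.linalg.solve(A, b)
xs = np.linspace(-1, 1, 201)
print("max error:", np.abs(L.legval(xs, coef) - np.sin(np.pi * xs)).max())
```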

  6. Modifications of the PCPT method for HJB equations

    NASA Astrophysics Data System (ADS)

    Kossaczký, I.; Ehrhardt, M.; Günther, M.

    2016-10-01

    In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called the piecewise predicted policy timestepping (PPPT) method and, if properly used, it may be significantly faster. We quickly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method, and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.

  7. Rapid Method for Sodium Hydroxide/Sodium Peroxide Fusion ...

    EPA Pesticide Factsheets

    Technical Fact Sheet. Analysis purpose: qualitative analysis. Technique: alpha spectrometry. Method developed for: plutonium-238 and plutonium-239 in water and air filters. Method selected for: SAM lists this method as a pre-treatment technique supporting analysis of refractory radioisotopic forms of plutonium in drinking water and air filters using the following qualitative techniques: • rapid methods for acid or fusion digestion; • Rapid Radiochemical Method for Plutonium-238 and Plutonium-239/240 in Building Materials for Environmental Remediation Following Radiological Incidents. A summary of the subject analytical method will be posted to the SAM website to allow access to the method.

  8. The Importance of Method Selection in Determining Product Integrity for Nutrition Research1234

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

    The American Herbal Products Association estimates that there are as many as 3,000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  9. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time-consuming and difficult, and their margins become scarious. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method, using Canada balsam as their mounting media. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost-effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  10. An evaluation of the efficiency of cleaning methods in a bacon factory

    PubMed Central

    Dempster, J. F.

    1971-01-01

    The germicidal efficiencies of hot water (140-150° F.) under pressure (method 1), hot water + 2% (w/v) detergent solution (method 2) and hot water + detergent + 200 p.p.m. solution of available chlorine (method 3) were compared at six sites in a bacon factory. Results indicated that sites 1 and 2 (tiled walls) were satisfactorily cleaned by each method. It was therefore considered more economical to clean such surfaces routinely by method 1. However, this method was much less efficient (31% survival of micro-organisms) on site 3 (wooden surface) than methods 2 (7% survival) and 3 (1% survival). Likewise the remaining sites (dehairing machine, black scraper and table) were least efficiently cleaned by method 1. The most satisfactory results were obtained when these surfaces were treated by method 3. Pig carcasses were shown to be contaminated by an improperly cleaned black scraper. Repeated cleaning and sterilizing (method 3) of this equipment reduced the contamination on carcasses from about 70% to less than 10%. PMID:5291745

  11. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  12. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

    Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 presented greater than one dilution discrepancies in the titers with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184

  13. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  14. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1987-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  15. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations, but a finer mesh, than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.
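    The exponential-window variant of the discrete Fourier inverse transformation can be sketched in a few lines: sample the transform on the line s = sigma + i*omega, invert with an FFT, and undo the damping with exp(sigma*t). The test transform and parameter choices below are illustrative, not the paper's.

    ```python
    # Minimal sketch of the exponential-window FFT inverse Laplace transform.
    # Test transform: F(s) = 1/(s+1), whose inverse is f(t) = exp(-t).
    import numpy as np

    F = lambda s: 1.0 / (s + 1.0)

    N, T = 4096, 40.0                 # number of samples and time window
    dt = T / N
    t = np.arange(N) * dt
    sigma = 6.0 / T                   # damping chosen so exp(-sigma*T) is small
    omega = 2*np.pi*np.fft.fftfreq(N, d=dt)   # FFT-ordered angular frequencies

    Fk = F(sigma + 1j*omega)          # samples of F on the shifted line
    f = np.real(np.fft.ifft(Fk)) * N / T * np.exp(sigma * t)

    print("max error on [0, T/2]:",
          np.abs(f[: N//2] - np.exp(-t[: N//2])).max())
    ```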

  16. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long-term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to the original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to the general-purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
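    A minimal sketch of the extended-phase-space leapfrog, without the coordinate-mixing maps the paper adds, for a toy inseparable Hamiltonian H(q, p) = (p**2 + q**2)/2 + (p**2 * q**2)/2 (not one of the paper's test problems): duplicate the state into (q, p, x, y) and alternate the exactly solvable flows of H(q, y) and H(x, p).

    ```python
    # Extended phase space leapfrog sketch for an inseparable Hamiltonian.
    # Without mixing maps the two copies can slowly drift apart; the paper's
    # mixing transformations are what keep them coherent over long times.
    H  = lambda q, p: 0.5*(p**2 + q**2) + 0.5*(p**2)*(q**2)
    Hq = lambda q, p: q * (1.0 + p**2)    # dH/dq
    Hp = lambda q, p: p * (1.0 + q**2)    # dH/dp

    def step(q, p, x, y, h):
        # half step of H(q, y): advances x and p, holding (q, y) fixed
        x += 0.5*h*Hp(q, y); p -= 0.5*h*Hq(q, y)
        # full step of H(x, p): advances q and y, holding (x, p) fixed
        q += h*Hp(x, p);     y -= h*Hq(x, p)
        # second half step of H(q, y)
        x += 0.5*h*Hp(q, y); p -= 0.5*h*Hq(q, y)
        return q, p, x, y

    q = x = 1.0
    p = y = 0.5
    h, E0 = 0.01, H(1.0, 0.5)
    for _ in range(10_000):
        q, p, x, y = step(q, p, x, y, h)
    print("relative energy drift:", abs(H(q, p) - E0) / E0)
    print("copy separation:", abs(q - x) + abs(p - y))
    ```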

  17. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

  18. Comparison of gravimetric, creamatocrit and esterified fatty acid methods for determination of total fat content in human milk.

    PubMed

    Du, Jian; Gay, Melvin C L; Lai, Ching Tat; Trengove, Robert D; Hartmann, Peter E; Geddes, Donna T

    2017-02-15

    The gravimetric method is considered the gold standard for measuring the fat content of human milk. However, it is labor intensive and requires large volumes of human milk. Other methods, such as the creamatocrit and the esterified fatty acid assay (EFA), have also been used widely in fat analysis. However, these methods have not been compared concurrently with the gravimetric method. Comparison of the three methods was conducted with human milk of varying fat content. Correlations between these methods were high (r² = 0.99). Statistical differences (P < 0.001) were observed in the overall fat measurements and within each group (low, medium and high fat milk) using the three methods. Overall, the creamatocrit method showed a stronger correlation and a lower mean difference (4.73 g/L) and percentage difference (5.16%) than the EFA method when compared to the gravimetric method. Furthermore, the ease of operation and real-time analysis make the creamatocrit method preferable. Copyright © 2016. Published by Elsevier Ltd.

  19. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has mesh-generation difficulties and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.

  20. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
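    The comparison step itself, ranking classifiers by the area under the ROC curve (AUC), is easy to sketch. The scores below are synthetic stand-ins; a MARS model itself would come from a separate (third-party) implementation that is not shown here.

    ```python
    # Minimal sketch of an AUC comparison between two classifiers' scores.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=500)                       # true class labels
    score_a = y * 0.8 + rng.normal(0.0, 0.5, size=500)     # synthetic "MARS" scores
    score_b = y * 0.5 + rng.normal(0.0, 0.5, size=500)     # synthetic "ML" scores

    print("AUC, method A:", roc_auc_score(y, score_a))
    print("AUC, method B:", roc_auc_score(y, score_b))
    ```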

  1. Monitoring the chemical production of citrus-derived bioactive 5-demethylnobiletin using surface enhanced Raman spectroscopy

    PubMed Central

    Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili

    2013-01-01

    To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was found simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986

  2. Flow “Fine” Synthesis: High Yielding and Selective Organic Synthesis by Flow Methods

    PubMed Central

    2015-01-01

    The concept of flow “fine” synthesis, that is, high yielding and selective organic synthesis by flow methods, is described. Some examples of flow “fine” synthesis of natural products and APIs are discussed. Flow methods have several advantages over batch methods in terms of environmental compatibility, efficiency, and safety. However, synthesis by flow methods is more difficult than synthesis by batch methods. Indeed, it has been considered that synthesis by flow methods is applicable to the production of simple gases but difficult to apply to the synthesis of complex molecules such as natural products and APIs. Therefore, organic synthesis of such complex molecules has been conducted by batch methods. On the other hand, syntheses and reactions that attain high yields and high selectivities by flow methods are increasingly reported. Flow methods are leading candidates for the next generation of manufacturing methods that can mitigate environmental concerns toward a sustainable society. PMID:26337828

  3. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. The results indicate that the SM method is an efficient alternative method for calculating the melting curves of materials.

  4. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3,000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrices, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  5. Student Preferences Regarding Teaching Methods in a Drug-Induced Diseases and Clinical Toxicology Course

    PubMed Central

    Gim, Suzanna

    2013-01-01

    Objectives. To determine which teaching method in a drug-induced diseases and clinical toxicology course was preferred by students and whether their preference correlated with their learning of drug-induced diseases. Design. Three teaching methods incorporating active-learning exercises were implemented. A survey instrument was developed to analyze students’ perceptions of the active-learning methods used and how they compared to the traditional teaching method (lecture). Examination performance was then correlated to students’ perceptions of various teaching methods. Assessment. The majority of the 107 students who responded to the survey found traditional lecture significantly more helpful than active-learning methods (p=0.01 for all comparisons). None of the 3 active-learning methods were preferred over the others. No significant correlations were found between students’ survey responses and examination performance. Conclusions. Students preferred traditional lecture to other instructional methods. Learning was not influenced by the teaching method or by preference for a teaching method. PMID:23966726

  6. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. The new method satisfies the three features of an effective sampling method and produces a beard with two symmetrical ends, which can be scanned from the holding line to obtain two full fibrograms per sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.

  7. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  8. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  9. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    DTIC Science & Technology

    2016-06-12

    ... buried in soil, viz., (1) coupled discrete element and particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The ... DEM_PGM and identify the limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and ...

  10. Two Project Methods: Preliminary Observations on the Similarities and Differences between William Heard Kilpatrick's Project Method and John Dewey's Problem-Solving Method

    ERIC Educational Resources Information Center

    Sutinen, Ari

    2013-01-01

    The project method became a famous teaching method when William Heard Kilpatrick published his article "Project Method" in 1918. The key idea in Kilpatrick's project method is to try to explain how pupils learn things when they work in projects toward different common objects. The same idea of pupils learning by work or action in an…

  11. Using an Ordinal Outranking Method Supporting the Acquisition of Military Equipment

    DTIC Science & Technology

    2009-10-01

    ... will concentrate on the well-known ORESTE method ([10],[12]), which is complementary to the PROMETHEE methods. There are other methods belonging to ... the PROMETHEE methods. This MCDM method is taught in the curriculum of the High Staff College for Military Administrators of the Belgian MoD. ... C(b,a), similar to the preference indicators π(a,b) and π(b,a) of the PROMETHEE methods (see [4] and SAS-080 14 and SAS-080 15). These ...

  12. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344
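    As one concrete instance from category (VI), here is a minimal two-part model sketch on simulated zero-inflated, right-skewed costs: a logistic model for the probability of any cost, OLS on log(cost) among users, and Duan's smearing factor to return predictions to the cost scale. All data and coefficients are synthetic.

    ```python
    # Two-part cost model sketch on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=n)
    any_cost = rng.random(n) < 1/(1 + np.exp(-(0.2 + 0.8*x)))   # excess zeros
    cost = np.where(any_cost, np.exp(1.0 + 0.5*x + rng.normal(0, 0.8, n)), 0.0)

    X = sm.add_constant(x)
    part1 = sm.Logit(any_cost.astype(float), X).fit(disp=0)     # P(cost > 0)
    part2 = sm.OLS(np.log(cost[any_cost]), X[any_cost]).fit()   # log-cost | cost > 0
    smear = np.mean(np.exp(part2.resid))        # Duan's smearing estimator

    expected_cost = part1.predict(X) * np.exp(part2.predict(X)) * smear
    print("mean observed:", cost.mean(), "mean predicted:", expected_cost.mean())
    ```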

  13. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a main and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local property of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated here on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantage of this method and the disadvantage of the ITGM method. Comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide range of parameters.
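    The POD step underlying all of these variants can be sketched generically (this is not the paper's IGM adaptation): stack solution snapshots into a matrix, take an SVD, and keep the leading left singular vectors that capture a chosen fraction of the energy. The snapshot data below are synthetic.

    ```python
    # Generic POD sketch: build a reduced basis from solution snapshots.
    import numpy as np

    rng = np.random.default_rng(2)
    n_dof, n_snap = 66, 200
    # synthetic snapshots with low-dimensional structure plus noise
    modes_true = rng.normal(size=(n_dof, 3))
    amps = rng.normal(size=(3, n_snap))
    snapshots = modes_true @ amps + 0.01 * rng.normal(size=(n_dof, n_snap))

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
    Phi = U[:, :r]                                # POD basis

    x = snapshots[:, 0]
    x_red = Phi.T @ x                             # r reduced coordinates
    print("kept", r, "modes; reconstruction error:",
          np.linalg.norm(Phi @ x_red - x) / np.linalg.norm(x))
    ```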

  14. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596
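    All four protocols feed the same underwater-weighing computation: body density from the dry and submerged weights, corrected for the lung volume held at the time of weighing (RV for the seated method, TLC for the prone methods), then percent fat from a density equation such as Siri's. The input values below are illustrative, not from the study.

    ```python
    # Hydrostatic weighing sketch: body density and Siri percent fat.
    def body_density(mass_air_kg, mass_water_kg, water_density, lung_volume_l,
                     gi_gas_l=0.1):
        # displaced volume = (Ma - Mw)/Dw, minus air in lungs and gut (litres)
        volume_l = ((mass_air_kg - mass_water_kg) / water_density
                    - lung_volume_l - gi_gas_l)
        return mass_air_kg / volume_l          # kg/L, numerically equal to g/mL

    def siri_percent_fat(density):
        return 495.0 / density - 450.0

    Db = body_density(mass_air_kg=75.0, mass_water_kg=2.56,
                      water_density=0.9951,    # water density near 30 deg C
                      lung_volume_l=1.3)       # RV held at weighing (seated case)
    print(f"density = {Db:.4f} g/mL, fat = {siri_percent_fat(Db):.1f}%")
    ```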

  15. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing.

  16. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to the completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%). This indicated that the CAD/CAM method took about one-third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no statistically significant dimensional differences (at the 0.05 level) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.

  17. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
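    A minimal sketch of the idea: the plain Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n), made adaptive with a simple step-halving safeguard. This is one common adaptive variant, not necessarily the article's exact algorithm.

    ```python
    # Adaptive (damped) Newton-Raphson sketch.
    def adaptive_newton(f, df, x0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            step = fx / df(x)
            lam = 1.0
            # halve the step until |f| actually decreases (the adaptive part)
            while abs(f(x - lam*step)) >= abs(fx) and lam > 1e-8:
                lam *= 0.5
            x -= lam * step
        raise RuntimeError("no convergence")

    root = adaptive_newton(lambda x: x**3 - 2*x - 5,
                           lambda x: 3*x**2 - 2, x0=2.0)
    print(root)   # ~2.0945514815 for Wallis's classic cubic
    ```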

  18. Symplectic test particle encounters: a comparison of methods

    NASA Astrophysics Data System (ADS)

    Wisdom, Jack

    2017-01-01

    A new symplectic method for handling encounters of test particles with massive bodies is presented. The new method is compared with several popular methods (RMVS3, SYMBA, and MERCURY). The new method compares favourably.

  19. The Tongue and Quill

    DTIC Science & Technology

    2004-08-01

    Outline fragments recovered from the source: I. Qualitative Research Methods (the historical method, ethnography, phenomenological study, grounded theory study, and content analysis); II. Quantitative Research Methods ...

  20. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    26 CFR 1.412(c)(1)-2 (Income Taxes; Pension, Profit-Sharing, Stock Bonus Plans, Etc.), Shortfall method: (a) In general. (1) Shortfall method. The shortfall method is a funding method that adapts a plan's...

  1. Comparisons of two methods of harvesting biomass for energy

    Treesearch

    W.F. Watson; B.J. Stokes; I.W. Savelle

    1986-01-01

    Two harvesting methods for utilization of understory biomass were tested against a conventional harvesting method to determine relative costs. The conventional harvesting method tested removed all pine 6 inches diameter at breast height (DBH) and larger and hardwood sawlogs as tree length logs. The two intensive harvesting methods were a one-pass and a two-pass method...

  2. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  3. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is ... part 53, a new equivalent method for measuring concentrations of PM2.5 in the ambient air. ...

  4. 26 CFR 1.446-2 - Method of accounting for interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    26 CFR 1.446-2 (Income Taxes; Methods of Accounting), Method of accounting for interest: (a) ... account by a taxpayer under the taxpayer's regular method of accounting (e.g., an accrual method or the ...

  5. Rapid Radiochemical Method for Radium-226 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet. Analysis purpose: qualitative analysis. Technique: alpha spectrometry. Method developed for: radium-226 in building materials. Method selected for: SAM lists this method for qualitative analysis of radium-226 in concrete or brick building materials. A summary of the subject analytical method will be posted to the SAM website to allow access to the method.

  6. Rapid Radiochemical Method for Americium-241 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet. Analysis purpose: qualitative analysis. Technique: alpha spectrometry. Method developed for: americium-241 in building materials. Method selected for: SAM lists this method for qualitative analysis of americium-241 in concrete or brick building materials. A summary of the subject analytical method will be posted to the SAM website to allow access to the method.

  7. Draft Environmental Impact Statement: Peacekeeper Rail Garrison Program

    DTIC Science & Technology

    1988-06-01

    Table-of-contents fragments recovered from the source: 3.0 Environmental Analysis Methods; 3.1 Methods for Assessing Nationwide Impacts; 3.1.1 Methods for Assessing National Economic Impacts; 3.1.2 Methods for Assessing Railroad Network ...; 3.2.4 Methods for Assessing Existing and Future Baseline Conditions; 3.2.5 Methods for Assessing ...

  8. A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann

    The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…

  9. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  10. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
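    The intensity-based bearing estimate that such methods build on is compact enough to sketch: with pressure p and particle-velocity components (vx, vy) from a single AVS, the time-averaged acoustic intensity gives azimuth = atan2(<p*vy>, <p*vx>). The simulation below is synthetic, and the matched filtering shown is a crude stand-in for the paper's processing.

    ```python
    # Single-AVS azimuth estimate from time-averaged acoustic intensity.
    import numpy as np

    rng = np.random.default_rng(3)
    fs, f0, n = 10_000, 500, 4096
    t = np.arange(n) / fs
    theta_true = np.deg2rad(40.0)

    pulse = np.sin(2*np.pi*f0*t)                 # known transmitted waveform
    p  = pulse + 0.5*rng.normal(size=n)
    vx = np.cos(theta_true)*pulse + 0.5*rng.normal(size=n)
    vy = np.sin(theta_true)*pulse + 0.5*rng.normal(size=n)

    # matched-filter each channel with the replica (crude stand-in)
    mf = lambda x: np.convolve(x, pulse[::-1], mode='same')
    pm, vxm, vym = mf(p), mf(vx), mf(vy)

    azimuth = np.arctan2(np.mean(pm*vym), np.mean(pm*vxm))
    print("estimated azimuth (deg):", np.degrees(azimuth))   # ~40
    ```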

  11. Comparison of Instream and Laboratory Methods of Measuring Sediment Oxygen Demand

    USGS Publications Warehouse

    Hall, Dennis C.; Berkas, Wayne R.

    1988-01-01

    Sediment oxygen demand (SOD) was determined at three sites in a gravel-bottomed central Missouri stream by: (1) two variations of an instream method, and (2) a laboratory method. SOD generally was greatest by the instream methods, which are considered more accurate, and least by the laboratory method. Disturbing stream sediment did not significantly decrease SOD by the instream method. Temperature ranges of up to 12 degree Celsius had no significant effect on the SOD. In the gravel-bottomed stream, the placement of chambers was critical to obtain reliable measurements. SOD rates were dependent on the method; therefore, care should be taken in comparing SOD data obtained by different methods. There is a need for a carefully researched standardized method for SOD determinations.
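    Whatever the chamber type, an SOD rate is typically computed the same way: regress chamber dissolved oxygen on time and scale the (negative) slope by the chamber's volume-to-bottom-area ratio. The readings and chamber geometry below are hypothetical.

    ```python
    # SOD rate sketch from hypothetical chamber dissolved-oxygen readings.
    import numpy as np

    hours = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
    do_mg_per_l = np.array([8.4, 8.1, 7.9, 7.6, 7.4, 7.1])   # chamber DO

    slope, _ = np.polyfit(hours, do_mg_per_l, 1)   # mg/L per hour (negative)
    volume_l, area_m2 = 65.0, 0.25                 # chamber volume and footprint

    # mg/L/h * L/m^2 -> mg/m^2/h; * 24 -> per day; / 1000 -> grams
    sod = -slope * (volume_l / area_m2) * 24 / 1000
    print(f"SOD = {sod:.2f} g O2 per m^2 per day")
    ```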

  12. Echo movement and evolution from real-time processing.

    NASA Technical Reports Server (NTRS)

    Schaffner, M. R.

    1972-01-01

    Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center of gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.
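    The contour cross-correlation idea mentioned above reduces to finding the lag that maximizes the 2-D cross-correlation of successive echo fields; a synthetic sketch:

    ```python
    # Echo displacement via 2-D cross-correlation of successive fields.
    import numpy as np
    from scipy.signal import correlate2d

    rng = np.random.default_rng(4)
    field0 = rng.random((64, 64))                    # synthetic reflectivity field
    field1 = np.roll(field0, (3, -5), axis=(0, 1))   # same echo, displaced

    c = correlate2d(field1 - field1.mean(), field0 - field0.mean(), mode='full')
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    print("estimated (row, col) displacement:", (iy - 63, ix - 63))  # (3, -5)
    ```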

  13. Comparison of methods for measuring cholinesterase inhibition by carbamates

    PubMed Central

    Wilhelm, K.; Vandekar, M.; Reiner, E.

    1973-01-01

    The Acholest and tintometric methods are used widely for measuring blood cholinesterase activity after exposure to organophosphorus compounds. However, if applied for measuring blood cholinesterase activity in persons exposed to carbamates, the accuracy of the methods requires verification since carbamylated cholinesterases are unstable. The spectrophotometric method was used as a reference method and the two field methods were employed under controlled conditions. Human blood cholinesterases were inhibited in vitro by four methylcarbamates that are used as insecticides. When plasma cholinesterase activity was measured by the Acholest and spectrophotometric methods, no difference was found. The enzyme activity in whole blood determined by the tintometric method was ≤ 11% higher than when the same sample was measured by the spectrophotometric method. PMID:4541147

  14. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
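    A toy sketch of the contrast being drawn: the mean-based estimate linearizes g at the input means, while the AMV correction re-evaluates the exact g at the most-probable-point locus implied by that linearization. The performance function and input statistics below are made up.

    ```python
    # Mean value (MV) vs. advanced mean value (AMV) sketch on a toy function.
    import numpy as np

    def g(x):                       # stand-in "performance function"
        return x[0]**2 * x[1] - 5.0

    mu = np.array([2.0, 1.5])       # input means
    sd = np.array([0.2, 0.3])       # input standard deviations

    eps = 1e-6                      # numerical gradient at the mean
    grad = np.array([(g(mu + eps*e) - g(mu - eps*e)) / (2*eps)
                     for e in np.eye(2)])
    sigma_g = np.sqrt(np.sum((grad * sd)**2))   # first-order std of g

    for z in (-2.0, 0.0, 2.0):      # probe at +/- 2 standard deviations
        g_mv = g(mu) + z * sigma_g                    # linearized estimate
        x_star = mu + z * (grad * sd**2) / sigma_g    # MPP locus (linearized)
        g_amv = g(x_star)                             # AMV: exact g at the locus
        print(f"z={z:+.0f}: MV={g_mv:.3f}  AMV={g_amv:.3f}")
    ```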

  15. Formal methods technology transfer: Some lessons learned

    NASA Technical Reports Server (NTRS)

    Hamilton, David

    1992-01-01

    IBM has a long history in the application of formal methods to software development and verification. There have been many successes in the development of methods, tools and training to support formal methods. And formal methods have been very successful on several projects. However, the use of formal methods has not been as widespread as hoped. This presentation summarizes several approaches that have been taken to encourage more widespread use of formal methods, and discusses the results so far. The basic problem is one of technology transfer, which is a very difficult problem. It is even more difficult for formal methods. General problems of technology transfer, especially the transfer of formal methods technology, are also discussed. Finally, some prospects for the future are mentioned.

  16. Method Development in Forensic Toxicology.

    PubMed

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  17. Implementation of Leak Test Methods for the International Space Station (ISS) Elements, Systems and Components

    NASA Technical Reports Server (NTRS)

    Underwood, Steve; Lvovsky, Oleg

    2007-01-01

    The International Space Station (ISS) has a Qualification and Acceptance Environmental Test Requirements document, SSP 41172, that includes many environmental tests such as thermal vacuum and cycling, depress/repress, sinusoidal, random, and acoustic vibration, pyro shock, acceleration, humidity, pressure, and Electromagnetic Interference (EMI)/Electromagnetic Compatibility (EMC). This document also includes 13 leak test methods for pressure integrity verification of the ISS elements, systems, and components. These leak test methods are well known; however, the test procedure for a specific leak test method shall be written and implemented paying attention to the important procedural steps/details that, if omitted or deviated from, could impact the quality of the final product and affect crew safety. Such procedural steps/details for the different methods include, but are not limited to: the sequence of testing, for example, the pressurization and submersion steps for Method I (Immersion); stabilization of the mass spectrometer leak detector outputs for Method II (Vacuum Chamber or Bell Jar); proper data processing and taking a conservative approach while making predictions of on-orbit leakage rate for Method III (Pressure Change); proper calibration of the mass spectrometer leak detector for all the tracer gas (mostly helium) methods such as Method V (Detector Probe), Method VI (Hood), Method VII (Tracer Probe), and Method VIII (Accumulation); and the usage of visibility aids for Method I (Immersion), Method IV (Chemical Indicator), Method XII (Foam/Liquid Application), and Method XIII (Hydrostatic/Visual Inspection). While some methods can be used for total leakage (either internal-to-external or external-to-internal) rate requirement verification (Vacuum Chamber, Pressure Decay, Hood, Accumulation), other methods shall be used only as a pass/fail test for individual joints (e.g., welds, fittings, and plugs) or for troubleshooting purposes (Chemical Indicator, Detector Probe, Tracer Probe, Local Vacuum Chamber, Foam/Liquid Application, and Hydrostatic/Visual Inspection). Deviations from SSP 41172 requirements have led to either retesting of hardware or accepting a risk associated with a potential system or component pressure integrity problem during flight.

  18. Temperature profiles of different cooling methods in porcine pancreas procurement.

    PubMed

    Weegman, Bradley P; Suszynski, Thomas M; Scott, William E; Ferrer Fábrega, Joana; Avgoustiniatos, Efstathios S; Anazawa, Takayuki; O'Brien, Timothy D; Rizzari, Michael D; Karatzas, Theodore; Jie, Tun; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K

    2014-01-01

    Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. This study examines the effect of four different cooling Methods on core porcine pancreas temperature (n = 24) and histopathology (n = 16). All Methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all three cooling Methods. Surface cooling alone (Method A) gradually decreased core pancreas temperature to <10 °C after 30 min. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature 15-20 °C within the first 2 min of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between Methods (P = 0.36). Histological scores were different between the cooling Methods (P = 0.02) and the worst with Method A. There were differences in histological scores between Methods A and C (P = 0.02) and Methods A and D (P = 0.02), but not between Methods C and D (P = 0.95), which may highlight the importance of early cooling using an intraductal infusion. In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications on human pancreas procurement as use of an intraductal infusion is not common practice. © 2014 John Wiley & Sons A/S Published by John Wiley & Sons Ltd.

  19. The PneuCarriage Project: A Multi-Centre Comparative Study to Identify the Best Serotyping Methods for Examining Pneumococcal Carriage in Vaccine Evaluation Studies

    PubMed Central

    Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim

    2015-01-01

    Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033

  20. [The clinical value of sentinel lymph node detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative neck by methylene blue method and radiolabeled tracer method].

    PubMed

    Zhao, Xin; Xiao, Dajiang; Ni, Jianming; Zhu, Guochen; Yuan, Yuan; Xu, Ting; Zhang, Yongsheng

    2014-11-01

    To investigate the clinical value of sentinel lymph node (SLN) detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative necks (cN0) by the methylene blue method, the radiolabeled tracer method, and the combination of these two methods. Thirty-three patients with cN0 laryngeal carcinoma and six patients with cN0 hypopharyngeal carcinoma underwent SLN detection using both the methylene blue and radiolabeled tracer methods. All these patients received an injection of the radioactive isotope (99m)Tc-sulfur colloid (SC) and of methylene blue into the carcinoma before surgery; all patients then underwent intraoperative lymphatic mapping with a handheld gamma-detecting probe and identification of blue-dyed SLNs. After the mapping of SLNs, selective neck dissections and tumor resections were performed. The results of SLN detection by the radiolabeled tracer, the dye, and the combination of both methods were compared. The detection rates of SLNs by the radiolabeled tracer, methylene blue, and combined methods were 89.7%, 79.5%, and 92.3%, respectively. The number of detected SLNs differed significantly between the radiolabeled tracer method and the combined method, and also between the methylene blue method and the combined method. The detection rates of the methylene blue and radiolabeled tracer methods were significantly different from that of the combined method (P < 0.05). Nine patients were found to have lymph node metastasis by final pathological examination. The accuracy and false-negative rate of SLN detection by the combined method were 97.2% and 11.1%, respectively. The combined method using the radiolabeled tracer and methylene blue can improve the detection rate and accuracy of sentinel lymph node detection. Furthermore, sentinel lymph node detection can accurately represent the cervical lymph node status in cN0 laryngeal and hypopharyngeal carcinoma.

  1. Slump sitting X-ray of the lumbar spine is superior to the conventional flexion view in assessing lumbar spine instability.

    PubMed

    Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2017-03-01

    Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This was a prospective study on the radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001 and p=.001, respectively). There was no significant difference between the two methods when measuring lumbar displacements (p=.814). The new method of slump sitting dynamic radiography was shown to be superior to the conventional method in measuring the angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework, and two variants called the kNPV and kODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
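    For reference, the widely used position-specific scoring matrix (PSSM) baseline mentioned above can be sketched in a few lines: build a log-odds matrix from known sites (with pseudocounts) and score every window of a query sequence. The sites and sequence are toy data.

    ```python
    # PSSM scan sketch: log-odds scoring of sequence windows.
    import numpy as np

    sites = ["TATAAT", "TATACT", "TACAAT", "TATGAT"]   # toy known binding sites
    alphabet = "ACGT"
    L = len(sites[0])

    counts = np.ones((L, 4))                 # add-one pseudocounts
    for s in sites:
        for i, ch in enumerate(s):
            counts[i, alphabet.index(ch)] += 1
    # log-odds against a uniform 0.25 background
    pssm = np.log2(counts / counts.sum(axis=1, keepdims=True) / 0.25)

    seq = "GGCTATAATGCCTTACAATCG"
    scores = [(sum(pssm[i, alphabet.index(seq[j+i])] for i in range(L)), j)
              for j in range(len(seq) - L + 1)]
    for score, j in sorted(scores, reverse=True)[:3]:
        print(f"pos {j}: {seq[j:j+L]} score={score:.2f}")
    ```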

  3. [Social aspects of natural methods (author's transl)].

    PubMed

    Linhard, J

    1981-01-01

    It is rather difficult to distinguish between "natural methods" and "no natural methods" or "unnatural methods". "Natural methods" should therefore be defined as those which are used without any additional product. Use and success depend on the motivation and control of the couple. These methods are: postcoital douching, prolonged lactation, rhythm method according to Knaus or to Ogino by observing BBT, observation of cervical mucus according to Billings, coitus interruptus, and coitus reservatus. As far as we know, these methods have been used since primeval times and have been commented on during different periods and at different places as being used with the support of all 3 monotheistic religions until the era of Augustinus and Thomas of Aquinas. From then on the Christian and later on the Catholic faith saw human production as the purpose of matrimony and therefore banned all methods with the exception of the rhythm method. It has been assumed that the decrease of fertility in Europe since the industrial revolution was a result of using these methods--primarily coitus interruptus, which still seems to be widely spread. It is therefore unintelligible why so little is known about the impact of these methods on the medical and social sector. As long as the ideal method is not available the natural methods should be given a place in the development of a contraceptive methodology. Since the natural methods do not cost anything, they could help to carry forward family planning in countries with low-income population. But before employing them for the purpose they have to be studied in view of their medicobiological as well as their social aspects in order to learn more about these old and much used methods. (Author's)

  4. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and the techniques used for applying them are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on the data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of the results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
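
    As a toy illustration of the discharge-ratio idea described above (a sketch of the general principle, not the USGS procedure itself), one can interpolate the ratio of measured under-ice discharge to the open-water rating discharge between field visits and apply that ratio to the daily rating values. All numbers below are invented.

```python
import numpy as np

def discharge_ratio_estimate(t, rating_q, meas_t, meas_q):
    """Hypothetical discharge-ratio estimate of ice-affected flow:
    ratios of measured discharge to the open-water rating discharge
    are interpolated linearly between field visits, then applied to
    the rating-curve discharge for each day t."""
    ratios = meas_q / np.interp(meas_t, t, rating_q)
    return rating_q * np.interp(t, meas_t, ratios)

days = np.arange(0, 42)                          # one 6-week period
rating = np.full(days.shape, 10.0)               # open-water rating (m^3/s)
visit_days = np.array([0, 14, 28, 41])           # measurement dates
visit_q = np.array([8.0, 6.5, 7.0, 9.0])         # discharge measured under ice
print(discharge_ratio_estimate(days, rating, visit_days, visit_q)[:5])
```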

  5. Fatigue properties of JIS H3300 C1220 copper for strain life prediction

    NASA Astrophysics Data System (ADS)

    Harun, Muhammad Faiz; Mohammad, Roslina

    2018-05-01

    The existing methods for estimating strain-life parameters are dependent on the material's monotonic tensile properties. However, a few of these methods yield quite complicated expressions for calculating fatigue parameters, and are specific to certain groups of materials only. The Universal Slopes method, the Modified Universal Slopes method, the Uniform Material Law, the Hardness method, and the Medians method are a few existing methods for predicting strain-life fatigue based on monotonic tensile material properties and material hardness. In the present study, nine methods for estimating fatigue life and properties are applied to JIS H3300 C1220 copper to determine the best methods for strain-life estimation of this ductile material. Experimental strain-life curves are compared to estimations obtained using each method. Muralidharan-Manson's Modified Universal Slopes method and Bäumel-Seeger's method for unalloyed and low-alloy steels are found to yield better accuracy in estimating fatigue life, with a deviation of less than 25%. However, both methods only yield much better accuracy for lives below 1000 cycles, i.e., for strain amplitudes of more than 1% and less than 6%. Manson's Original Universal Slopes method and Ong's Modified Four-Point Correlation method are found to predict the strain-life fatigue of copper with better accuracy for high numbers of cycles, at strain amplitudes of less than 1%. The differences between mechanical behavior under monotonic and cyclic loading, and the complexity of deciding the coefficients in an equation, are probably the reasons for the lack of a reliable method for estimating fatigue behavior from the monotonic properties of a group of materials. It is therefore suggested that a differential approach and new expressions be developed to estimate the strain-life fatigue parameters for ductile materials such as copper.
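
    Manson's Original Universal Slopes relation referred to above is a classical formula, so a small sketch is easy to give. The copper property values used here are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

def universal_slopes(N_f, sigma_u, E, RA):
    """Manson's Original Universal Slopes estimate of total strain range
    from monotonic tensile properties:
        d_eps = 3.5*(sigma_u/E)*N_f**-0.12 + eps_f**0.6 * N_f**-0.6
    where eps_f = ln(1/(1-RA)) is the true fracture ductility."""
    eps_f = np.log(1.0 / (1.0 - RA))
    elastic = 3.5 * (sigma_u / E) * N_f**-0.12
    plastic = eps_f**0.6 * N_f**-0.6
    return elastic + plastic  # total strain range

# Illustrative values for an annealed copper (assumed, not from the paper)
N = np.array([1e2, 1e3, 1e4, 1e5])
print(universal_slopes(N, sigma_u=220e6, E=115e9, RA=0.55) / 2)  # amplitudes
```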

  6. Innovative application of the moisture analyzer for determination of dry mass content of processed cheese

    NASA Astrophysics Data System (ADS)

    Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena

    2018-04-01

    The aim of this work was the presentation of an alternative method for the determination of the total dry mass content in processed cheese. The authors claim that the presented method can be used in industry's quality control laboratories for routine testing and for quick in-process control. For the test purposes, both the reference method of determination of dry mass in processed cheese and the moisture analyzer method were used. The tests were carried out on three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas using the moisture analyzer method it was 38.74%. The average analysis time for the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37%, and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88%, and the alternative method, of 37.02%. The average time of those determinations was 16 min. The obtained results confirmed that use of the moisture analyzer is effective: consistent values of dry mass content were obtained with both methods. According to the authors, the fact that the measurement takes far less time with the moisture analyzer is a key criterion in selecting methods for in-process control and final quality control.
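
    The gravimetric principle behind both the reference oven method and the moisture analyzer reduces to one ratio. The sample masses below are invented so that the result matches the Hochland figure reported above.

```python
def dry_mass_percent(m_initial, m_final):
    """Total dry mass content from a gravimetric drying test:
    the residue mass as a percentage of the starting sample mass."""
    return 100.0 * m_final / m_initial

# Toy figures chosen to reproduce the moisture analyzer result above
print(round(dry_mass_percent(5.000, 1.937), 2))  # -> 38.74 (%)
```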

  7. Alternative microbial methods: An overview and selection criteria.

    PubMed

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview of, and criteria for, the selection of a method other than the reference method for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included, with reference to the relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions (and thus the prospective use of the microbial test results) with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Study on ABO and RhD blood grouping: Comparison between conventional tile method and a new solid phase method (InTec Blood Grouping Test Kit).

    PubMed

    Yousuf, R; Abdul Ghani, S A; Abdul Khalid, N; Leong, C F

    2018-04-01

    The 'InTec Blood Grouping Test Kit', which uses solid-phase technology, is a new method that may be used at outdoor blood donation sites or at the bedside as an alternative to the conventional tile method, in view of its stability at room temperature and its fulfilment of the criteria for a point-of-care test. This study aimed to compare the efficiency of this solid-phase method (InTec Blood Grouping Test Kit) with the conventional tile method in determining the ABO and RhD blood groups of healthy donors. A total of 760 voluntary donors who attended the Blood Bank, Penang Hospital, or offsite blood donation campaigns from April to May 2014 were recruited. The ABO and RhD blood groups were determined by the conventional tile method and the solid-phase method, with the tube method used as the gold standard. For ABO blood grouping, the tile method showed 100% concordance with the gold-standard tube method, whereas the solid-phase method showed concordant results for only 754/760 samples (99.2%). Therefore, for ABO grouping, the tile method had 100% sensitivity and specificity, while the solid-phase method had a slightly lower sensitivity of 97.7%, though both methods had a specificity of 100%. For RhD grouping, the tile and solid-phase methods each grouped one RhD-positive specimen as negative, giving both methods a sensitivity of 99.9% and a specificity of 100%. The 'InTec Blood Grouping Test Kit' is suitable for offsite use because of its simplicity and user-friendliness. However, adding an internal quality control may further increase the test's sensitivity and the validity of the test results.
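
    The sensitivity and specificity figures quoted above follow from simple count ratios. The sketch below shows the arithmetic only; the abstract does not give the full two-by-two table, so the counts are illustrative.

```python
def sensitivity(tp, fn):
    """Fraction of true positives correctly identified: TP/(TP+FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives correctly identified: TN/(TN+FP)."""
    return tn / (tn + fp)

# Illustrative counts only (concordance-style proportion for ABO grouping)
print(round(sensitivity(tp=754, fn=6), 3))   # -> 0.992
print(round(specificity(tn=100, fp=0), 3))   # -> 1.0
```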

  9. Knowledge, beliefs and use of nursing methods in preventing pressure sores in Dutch hospitals.

    PubMed

    Halfens, R J; Eggink, M

    1995-02-01

    Different methods have been developed in the past to prevent patients from developing pressure sores. The consensus guidelines developed in the Netherlands make a distinction between preventive methods useful for all patients, methods useful only in individual cases, and methods which are not useful at all. This study explores the extent of use of the different methods within Dutch hospitals, and the knowledge and beliefs of nurses regarding the usefulness of these methods. A mail questionnaire was sent to a representative sample of nurses working within Dutch hospitals. A total of 373 questionnaires were returned and used for the analyses. The results showed that many methods judged by the consensus report as not useful, or useful only in individual cases, are still being used. Some methods which are judged as useful, like the use of a risk assessment scale, are used on only a few wards. The opinions of nurses regarding the usefulness of the methods differ from the guidelines of the consensus committee. Although there is agreement about most of the useful methods, there is less agreement about the methods which are useful in individual cases or the methods which are not useful at all. In particular, massage and creams are, in the opinion of the nurses, useful in individual cases or in all cases.

  10. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. Moreover, there is great demand for automatic allograft bone selection methods, as automatic methods could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. Firstly, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, involves more of the local structure of the defect segment. Therefore, our method achieves robust alignment and high registration accuracy between the allograft and the recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms the surface method and the contour method.
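
    A minimal sketch of the two ingredients named above: selecting the band of recipient points between the lifted and lowered cutting planes, and one rigid-alignment step (the Kabsch solution used inside ICP-style surface registration). The function names and interface are assumptions; the paper's actual pipeline is more involved.

```python
import numpy as np

def band_points(points, plane_point, plane_normal, lift, lower):
    """Keep recipient points whose signed distance to the cutting plane
    lies between -lower and +lift (the 'band' around the cut)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n
    return points[(d >= -lower) & (d <= lift)]

def kabsch(P, Q):
    """Best rigid rotation R and translation t mapping point set P onto Q
    (corresponding rows), via SVD of the cross-covariance matrix."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T          # reflection-corrected rotation
    return R, cQ - R @ cP
```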

  11. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method, to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD: 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating the extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by the Weighed Menstrual Loss and the Menstrual Record was rs = 0.47 (p = 0.012), and between the Weighed Menstrual Loss and the Menstrual Recall it was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference-method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.
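
    The agreement statistics above are plain Spearman rank correlations; with SciPy the computation looks like the following (toy data, not the study's).

```python
from scipy.stats import spearmanr

# Weighed reference loss vs. questionnaire estimate (invented values, mL)
reference = [12, 35, 60, 22, 80, 45, 15, 55]
recall    = [10, 40, 55, 30, 70, 50, 20, 60]

rho, p = spearmanr(reference, recall)  # rank-based correlation and p-value
print(f"rs = {rho:.2f}, p = {p:.3f}")
```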

  12. A comparative study of novel spectrophotometric methods based on isosbestic points; application on a pharmaceutical ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    This work demonstrates the application of the isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction (AS) method, utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation (AM) method, utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation (A-Sum) method, utilizing the isosbestic point in derivative spectra. The three methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at this point, with no need for a complementary method. The obtained results were statistically compared to each other and to those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, and no significant difference was observed.
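
    The property all three methods exploit is that, at an isosbestic point, the overlapping species share one molar absorptivity, so a single absorbance reading yields their total concentration via Beer's law. A minimal sketch of that core step, with invented numbers:

```python
def total_concentration(A_iso, epsilon_iso, path_cm=1.0):
    """At an isosbestic point the two species share the same molar
    absorptivity, so Beer's law gives the *sum* of their concentrations
    from one absorbance reading:
        A = eps * l * (c1 + c2)  =>  c1 + c2 = A / (eps * l)"""
    return A_iso / (epsilon_iso * path_cm)

# Illustrative numbers only (not from the paper)
print(total_concentration(A_iso=0.52, epsilon_iso=1.3e4))  # mol/L
```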

  13. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that, for the immediate needs of the aircraft industry, the semi-empirical method based on recent high-quality acoustic databases is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and the propagation methods need to mature further for this method to become useful. Nonetheless, the authors propose that the method based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.
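
    For reference, the permeable-surface FW-H equation mentioned above is usually written as follows (standard textbook form, not transcribed from this paper):

```latex
\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)
\bigl[p'\,H(f)\bigr]
  = \frac{\partial^{2}}{\partial x_i\,\partial x_j}\bigl[T_{ij}\,H(f)\bigr]
  - \frac{\partial}{\partial x_i}\bigl[L_i\,\delta(f)\bigr]
  + \frac{\partial}{\partial t}\bigl[Q\,\delta(f)\bigr]
```

    where f = 0 defines the (penetrable) data surface, H and delta are the Heaviside and Dirac functions, T_ij is the Lighthill stress tensor, L_i = P_ij n_j + rho u_i (u_n - v_n), and Q = rho_0 v_n + rho (u_n - v_n).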

  14. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luquing Luo; Robert Nourgaliev

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available, and yet invaluable, information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy and efficiency. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than its second-order DG counterpart and provides an increase in performance over the third-order DG method in terms of computing time and storage requirement.
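
    A one-dimensional toy version of the least-squares reconstruction described above (an assumed simplification; the paper works on arbitrary multi-dimensional grids): given the mean and slope of a piecewise-linear DG solution on a cell and its two face neighbors, recover a quadratic.

```python
import numpy as np

def reconstruct_quadratic(means, slopes):
    """From linear DG data (cell mean + centroid slope) on a cell and its
    two face neighbors, recover u(xi) = a0 + a1*xi + a2*xi^2 by least
    squares (xi measured in cell widths from the middle cell's center).
    The mean of xi^2 over a unit cell centered at offset c is c^2 + 1/12."""
    rows, rhs = [], []
    for c, m, s in zip((-1, 0, 1), means, slopes):
        rows.append([1.0, c, c * c + 1.0 / 12.0]); rhs.append(m)  # cell mean
        rows.append([0.0, 1.0, 2.0 * c]);          rhs.append(s)  # slope
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a

# Data sampled from u = 1 + 2*xi + 3*xi^2: recovery should be exact.
means  = [2.25, 1.25, 6.25]   # cell averages at offsets -1, 0, +1
slopes = [-4.0, 2.0, 8.0]     # centroid slopes at the same offsets
print(reconstruct_quadratic(means, slopes))  # ~ [1, 2, 3]
```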

  15. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods of handling HIV viral load (VL) data under various missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data, with different missing-value mechanisms, from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization algorithm (EM), the regressive method, mean imputation, the delete method, and Markov Chain Monte Carlo (MCMC) were each used to supplement the missing data. The results of the different methods were compared according to distribution characteristics, accuracy and precision. Results: HIV VL data could not be transformed into a normal distribution. All the methods performed well on data missing completely at random (MCAR). For the other types of missing data, the regressive and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases under the different methods were all close to the original one. The EM, regressive, mean imputation, and delete methods under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for mean HIV VL estimation in the investigated population.
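
    A small simulation in the spirit of the comparison above (a numpy stand-in, not the study's SPSS procedures): under MCAR, mean imputation preserves the mean but shrinks the variance, while regression-based imputation preserves more of the distribution. All variable names and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
cd4 = rng.normal(500, 150, n)                        # correlated covariate
log_vl = 5.0 - 0.004 * cd4 + rng.normal(0, 0.5, n)   # log10 viral load

mask = rng.random(n) < 0.3        # 30% missing completely at random
observed = ~mask

# Mean imputation: fills with one constant, shrinking the variance.
mean_imp = np.where(mask, log_vl[observed].mean(), log_vl)

# Regressive imputation: predict missing values from the covariate.
b1, b0 = np.polyfit(cd4[observed], log_vl[observed], 1)
reg_imp = np.where(mask, b0 + b1 * cd4, log_vl)

for name, x in [("true", log_vl), ("mean", mean_imp), ("regress", reg_imp)]:
    print(f"{name:8s} mean={x.mean():.3f} sd={x.std():.3f}")
```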

  16. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, the parameter estimation of the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations whose solution is difficult to find, so an approximate solution is needed. There are two popular classes of numerical methods: Newton's method and Quasi-Newton (QN) methods. Newton's method requires substantial computation time since it involves the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing the derivative computation with direct function evaluations. A QN method approximates the Hessian matrix, for instance with the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is categorized as a QN method and shares the DFP formula's property of maintaining a positive definite Hessian approximation. The BFGS method requires a large amount of memory when executing the program, so another algorithm is needed to decrease memory usage, namely Limited Memory BFGS (LBFGS). The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. From the research findings, we found that the BFGS and LBFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
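
    With SciPy, L-BFGS is available off the shelf. The sketch below minimizes a plain binary logistic negative log-likelihood as a stand-in for the GWOLR likelihood, which additionally carries spatial kernel weights and ordered-category thresholds; the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]   # design matrix
beta_true = np.array([-0.5, 1.0, 2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def neg_log_lik(beta):
    """Negative log-likelihood of a logistic model (a simplified stand-in
    for the geographically weighted ordinal likelihood)."""
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z)

# L-BFGS stores only a few vector pairs instead of the full Hessian.
res = minimize(neg_log_lik, x0=np.zeros(3), method="L-BFGS-B")
print(res.x)  # close to beta_true
```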

  17. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerically controlled (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming to solve the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponentially transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering failure severity and failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
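
    The paper's cubic transformed function is not reproduced in the abstract, so the following only sketches the allocation pattern such a function plugs into: transform each subsystem's FMEA-style criticality score with an increasing function (a bare cube here, as a placeholder), normalize into weights, and split the system failure-rate budget. The convention chosen here, where more critical subsystems receive a smaller (stricter) share, and all ratings are assumptions for illustration.

```python
import numpy as np

def allocate_failure_rate(system_lambda, severity, occurrence,
                          transform=lambda s: s**3):
    """Sketch of FMEA-weighted allocation: criticality scores are passed
    through an increasing transform (placeholder for the paper's cubic
    transformed function); under the convention used here, subsystems
    with higher transformed scores get a *smaller* share of the system
    failure-rate budget, i.e. a stricter reliability requirement."""
    score = transform(np.asarray(severity, float) *
                      np.asarray(occurrence, float))
    inv = 1.0 / score
    weights = inv / inv.sum()
    return system_lambda * weights

# Toy CNC-lathe subsystems (spindle, feed, turret) with 1-10 ratings
print(allocate_failure_rate(1e-4, severity=[8, 5, 3], occurrence=[4, 6, 2]))
```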

  18. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose a method for extracting the informative DOM node from a Web page as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, and its learning set consists of hundreds of Web pages together with annotations of the informative DOM nodes of those pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative-DOM-node extraction method (either the proposed method or an existing one) with existing noise elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also occur in other Web pages of the same Web site as the target page. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
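
    A toy scorer in the spirit of layout-based filtering, assuming each DOM node has already been annotated with its rendered bounding box and text (e.g. harvested with an instrumented browser). The real LM method learns its decision from hundreds of annotated pages rather than using a hand-made formula like this.

```python
def informative_score(node):
    """Rank candidate DOM nodes by rendered area times text density:
    large, text-dense blocks beat narrow ad rails and link lists."""
    area = node["width"] * node["height"]
    density = len(node["text"]) / max(area, 1.0)
    return area * density

# Invented node records: a sidebar ad vs. the article body
nodes = [
    {"text": "Buy now! Ad...", "width": 160, "height": 600},
    {"text": "Long article body " * 50, "width": 640, "height": 800},
]
print(max(nodes, key=informative_score)["text"][:20])
```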

  19. Comparative study between the hand-wrist method and cervical vertebral maturation method for evaluation skeletal maturity in cleft patients.

    PubMed

    Manosudprasit, Montian; Wangsrimongkol, Tasanee; Pisek, Poonsak; Chantaramungkorn, Melissa

    2013-09-01

    To test the measure of agreement between the Skeletal Maturation Index (SMI) method of Fishman, using hand-wrist radiographs, and the Cervical Vertebral Maturation Index (CVMI) method for assessing the skeletal maturity of cleft patients. Hand-wrist and lateral cephalometric radiographs of 60 cleft subjects (35 females and 25 males, age range: 7-16 years) were used. Skeletal age was assessed using an adjustment of the SMI method of Fishman for comparison with the CVMI method of Hassel and Farman. Agreement between skeletal age assessed by the two methods, and the intra- and inter-examiner reliability of both methods, were tested by weighted kappa analysis. There was good agreement between the two methods, with a kappa value of 0.80 (95% CI = 0.66-0.88, p-value <0.001). Intra- and inter-examiner reliability of both methods was very good, with kappa values ranging from 0.91 to 0.99. The CVMI method can be used as an alternative to the SMI method in the skeletal age assessment of cleft patients, with the benefit of requiring no additional radiograph and thus avoiding extra radiation exposure. Comparing the two methods, the present study found better agreement from the peak of adolescence onwards.
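
    The weighted kappa agreement reported above can be reproduced mechanically with scikit-learn; the ratings below are invented stand-ins for the SMI and CVMI stages, coded on a common ordinal scale.

```python
from sklearn.metrics import cohen_kappa_score

# Toy stage ratings for 10 patients (not the study's data)
smi  = [1, 2, 2, 3, 4, 4, 5, 5, 6, 6]
cvmi = [1, 2, 3, 3, 4, 5, 5, 5, 6, 5]

# Linear weights penalize disagreements by their ordinal distance.
print(cohen_kappa_score(smi, cvmi, weights="linear"))
```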

  20. Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.

    PubMed

    Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y

    2015-11-01

    Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method for this purpose is still needed. We performed computer image analysis (CIA) of the psoriatic area using the Image J freeware to determine whether this method could be used for objective evaluation of the psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, the gold standard). The results of the three methods were analyzed, and their reliability and influencing factors were evaluated. Both the T- and E-methods correlated strongly with the M-method, with the T-method having a slightly stronger correlation. Both the T- and E-methods showed good consistency between evaluators. All three methods were able to detect the change in the psoriatic area after treatment, although the E-method tended to overestimate it. CIA with the Image J freeware is reliable and practical for quantitatively assessing the lesional area of psoriasis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
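
    The T-method reduces, in miniature, to counting pixels beyond a chosen threshold. The sketch below shows that core step only; a random array stands in for a standardized photograph, and the threshold is arbitrary.

```python
import numpy as np

def lesion_area_fraction(gray, threshold):
    """Threshold-based area measurement in miniature: pixels whose value
    exceeds the chosen cutoff are counted as lesional, and the area is
    reported as a fraction of the imaged region."""
    lesion = gray > threshold
    return lesion.mean()

img = np.random.default_rng(2).random((100, 100))  # stand-in for a photo
print(f"{lesion_area_fraction(img, 0.8):.1%}")     # ~20% above the cutoff
```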
