Methodes de calcul des forces aerodynamiques pour les etudes des interactions aeroservoelastiques (Methods for Computing Aerodynamic Forces for Aeroservoelastic Interaction Studies)
NASA Astrophysics Data System (ADS)
Biskri, Djallel Eddine
Aeroservoelasticity is a field in which an aircraft's flexible structure, its aerodynamics, and its flight controls interact. Flight control, for its part, treats the aircraft as a rigid structure and studies the influence of the control system on flight dynamics. In this thesis, we implemented three new methods for approximating aerodynamic forces: Corrected Least Squares, Corrected Minimum State, and Combined States. In the first two methods, the approximation errors between the aerodynamic forces approximated by the classical methods and those obtained by the new methods have the same analytical forms as the aerodynamic forces computed by LS or MS. The third method combines the formulations of the forces approximated by the standard LS and MS methods. The flutter speeds and frequencies and the execution times computed by the new methods were analyzed against those computed by the classical methods.
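The least-squares (LS) fit at the core of these approximation methods can be illustrated with a minimal sketch. This is a toy polynomial fit with hypothetical data, not the thesis's actual rational-function approximation of unsteady aerodynamic forces, which is considerably more involved:

```python
import numpy as np

# Illustrative least-squares (LS) fit: approximate tabulated data with a
# low-order model and inspect the approximation error, the quantity that
# the corrected methods re-model with the same analytic form.
# (Toy data only; not the thesis's aerodynamic force tables.)
k = np.linspace(0.0, 1.0, 50)            # reduced frequencies (hypothetical)
Q = 2.0 + 3.0 * k + 0.5 * k**2           # "tabulated" aerodynamic coefficient

A = np.column_stack([np.ones_like(k), k, k**2])   # design matrix
coef, *_ = np.linalg.lstsq(A, Q, rcond=None)      # LS solution
residual = Q - A @ coef                           # approximation error

print(np.round(coef, 6))                 # recovers [2.0, 3.0, 0.5]
```

Because the toy data are exactly quadratic, the LS solution recovers the generating coefficients and the residual vanishes to machine precision; with real tabulated forces the residual is the approximation error the corrected methods target.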
Access. Challenge for Change/Societe Nouvelle Number Eleven.
ERIC Educational Resources Information Center
Prinn, Elizabeth, Ed.
Access is a journal published three or four times a year by Challenge for Change/Societe Nouvelle (CCSN). CCSN is an experimental program established by the Government of Canada as a cooperative effort between the National Film Board of Canada and certain of the Government's departments. Its purposes are to improve communications, create greater…
Une ecole elementaire nouvelle pour une societe nouvelle (New Elementary Schools for a New Society).
ERIC Educational Resources Information Center
Pare, Andre; Pelletier, Louise
1971-01-01
This article, translated into Spanish from the French, discusses changes and increasing complications in society which call for educational reform and improvement. Elementary education, traditionally based on memory skills, should become the setting for the development of mental processes and intellectual skills through the activity of problem…
75 FR 74608 - Airworthiness Directives; CENTRAIR Models 101, 101A, 101P, and 101AP Gliders
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-01
... Airworthiness Directives; CENTRAIR Models 101, 101A, 101P, and 101AP Gliders AGENCY: Federal Aviation...Société Nouvelle (SN) Centrair. This tube had been reinforced in 1984 with a modification. Gliders...
Access. Challenge for Change/Societe Nouvelle Number Twelve.
ERIC Educational Resources Information Center
Prinn, Elizabeth, Ed.; Henaut, Dorothy Todd, Ed.
This issue of Access, the journal issued periodically by Challenge for Change/Societe Nouvelle, contains two groups of articles. The first focuses upon the Skyriver Project, relating how a project was developed which used film and video tape as a means of helping Alaskan communities to assess their own needs and to advocate for themselves the…
ERIC Educational Resources Information Center
Starets, Moshe
1983-01-01
A television series for Acadians was evaluated by the Ministry of Education according to its stated objectives of vocabulary development and strengthening of ethnic appreciation in the Acadian community. The series was found to be an effective method of both teaching Acadian culture and reinforcing students' competence in French. (MSE)
Second Supplement to A Catalog of the Mosquitoes of the World (Diptera: Culicidae)
1984-01-01
104. Brunhes, J. 1977a. Les moustiques de l'archipel des Comores I. - Inventaire, répartition et description de quatre espèces ou sous-espèces nouvelles. Cah. O.R.S.T.O.M. Ser. Entomol. Med. Parasitol. 15:131-152. Brunhes, J. 1977b. Les moustiques de l'archipel des Comores II. - Description de...Dieng. 1978. Aedes (Stegomyia) neoafricanus une nouvelle espèce de moustique capturée au Sénégal Oriental (Diptera: Culicidae). Cah. O.R.S.T.O.M
Experimental Investigations on the Distortion of ISAR Images Using Different Radar Waveforms
2003-09-01
target identification will be an integral part of a new approach to managing battlespace data, such as the new initiatives for ... the risk of fratricide fire in combat conditions. In sum, target identification, particularly in non-cooperative mode, will play a ... role...Defence, 2003 © Her Majesty the Queen, represented by the Minister of National Defence, 2003 Abstract Experimental results have shown that the
1989-11-01
preliminary data, security classification, proprietary, or other reasons. Details on the availability of these publications may be obtained from: Graphics...which the Litton IRS was used for all flights, so it provided a good opportunity to compare the different wind derivation methods. The principal...as determined by the visual landmark locations) always falls within these bounds. This is a good indication of a robust Kalman filter design. Of
1983-01-01
disturbance theory. The main feature of the method is the use of measured pressure along lines in the flow direction near the tunnel walls. This method...disturbance theory...AUTHOR(S)/AUTEUR(S): H. Sawada, visiting scientist, 2nd Aerodynamics Division, National Aerospace Laboratory, Japan. SERIES/SERIE: Aeronautical Note
ERIC Educational Resources Information Center
Roe, Peter J.
1981-01-01
Addresses those readers who are not familiar with EAP, offering an introductory discussion of its objectives and methods, in two parts. Devotes the first part to the needs that justify an EAP approach, and the second to its methodology, with particular attention to interdisciplinary, task-oriented instruction. Societe Nouvelle Didier Erudition, 40…
ERIC Educational Resources Information Center
Py, Bernard, Ed.
1995-01-01
Conference papers on group methods of speech therapy include: "Donnees nouvelles sur les competences du jeune enfant. Proposition de nouveaux concepts" (New Data on the Competences of the Young Child. Proposition of New Concepts) (Hubert Montagner); "Interactions sociales et apprentissages: quels savoirs en jeu" (Social Interactions and Teaching:…
Proceedings of the Fourth Annual U.S. Army Conference on Applied Statistics, 21-23 October 1998.
1999-11-01
1833) published a memoir Nouvelles méthodes pour la détermination des orbites des comètes in which he introduced and named the method of least squares. In 1809...251, 1972. 2. Sprott, D. A. "Gauss's Contributions to Statistics." Historia Mathematica, vol. 5, pp. 183-203, 1978. 3. Stigler, S. M. "An Attack on Gauss...Published by Legendre in 1820." Historia Mathematica, vol. 4, pp. 31-35, 1977. 4. Stigler, S. M. "Gauss and the Invention of Least Squares." The
1990-03-01
4. Note any shortcomings of the inspection equipment currently available and the standardization needs regarding...and rapid, and their particular properties, notably anisotropy, mean that little or no data are available. Specific methods have...Illuminators are available depending on the material and the environment (Fig. 8): flash-tube and infrared-tube types. These illuminators are
Une nouvelle voie pour la conception des implants intervertébraux (A New Approach to the Design of Intervertebral Implants)
NASA Astrophysics Data System (ADS)
Gradel, T.; Tabourot, L.; Arrieux, R.; Balland, P.
2002-12-01
The objective of our work is the design of a new generation of interbody implants that adapt perfectly, by deforming, to the geometry of the vertebral endplates. To this end, we used a new approach consisting of fully simulating the manufacturing process, in this case deep drawing. By retaining the history of the loads applied to the material during forming, this simulation makes it possible to validate the implant's mechanical strength at the end of the cycle very precisely. In this study we conducted two "cooperative" analyses in parallel: one based on a phenomenological HILL-type model, and the other on a multi-scale model accounting for more physical phenomena, in order to gain a good understanding of the material's behavior during deformation. We chose T40 (pure titanium) as the material for its good strength, biocompatibility, and radiological properties.
A Formal Characterization of Relevant Information in Multi-Agent Systems
2009-10-01
Conference iTrust. (2004) [17] Sadek, D.: Le dialogue homme-machine : de l'ergonomie des interfaces à l'agent intelligent dialoguant. In: Nouvelles interfaces homme-machine, Lavoisier Editeur, Arago 18 (1996) 277–321
L'Ecole Maternelle Belge: Son Histoire, Ses Nouvelles Preoccupations (The Belgian Nursery School: Its History, Its New Concerns).
ERIC Educational Resources Information Center
Delhaxhe, Arlette
1988-01-01
Notes two new concerns of nursery schools in Belgium: (1) increased numbers of very young children participating in programs not designed for children their age; and (2) the inability of after-school programs to meet the educational needs of the participants. (SKC)
ERIC Educational Resources Information Center
Hajdu-Vaughn, Susan, Ed.; Coyle, Barbara, Ed.
1997-01-01
This collection includes four quarterly issues of "Interaction," a publication of the Canadian Child Care Federation. Each issue addresses several topics and is arranged in four sections: opinions, practice/pratique, focus/a propos, and news/nouvelles. The opinions section includes letters and editorial/review columns, the practice…
Prevention of Acute Rheumatic Fever and Rheumatic Heart Disease
... Canada Henry Ford Hospital, Detroit, Michigan Instituto Nacional de Cardiología Ignacio Chávez – INCICh México City, México Kuang- ... Pompidou Hospital, Paris, France; the Centre Hospitalier Territorial de Nouvelle Calédonie, New Caledonia; Maputo Heart Institute, Maputo, ...
Fokam, Z.; Ngassam, P.; Nana, P.A.; Bricheux, G.; Bouchard, P.; Sime Ngando, T.
2012-01-01
Five new species of astome ciliates, found in the digestive tract of oligochaete worms of the genus Alma from Cameroon, are described. The techniques used were vital observation, DAPI staining, scanning electron microscopy, and silver impregnation following Fernandez Galiano, 1966. This work confirms the presence of the genera Paracoelophrya and Dicoelophrya in the digestive tract of oligochaetes of the genus Alma from Gabon and Cameroon; it also provides a summary synthesis of the subfamily Metaracoelophryinae. Furthermore, the homogeneity of this group is confirmed, and the question of the phylogenetic kinship of the Hoplitophryida is raised anew. PMID:22314239
ERIC Educational Resources Information Center
Capelle, Guy
1983-01-01
Serious problems in education in Latin America arising from political, economic, and social change periodically put in question the status, objectives, and manner of French second-language instruction. A number of solutions to general and specific pedagogical problems are proposed. (MSE)
Learner Autonomy: New Insights. = Autonomie de l'apprenant: nouvelles pistes.
ERIC Educational Resources Information Center
Dam, Leni, Ed.
2001-01-01
This book presents papers, written in both English and French, from a symposium entitled "Promoting Learner Autonomy: New Insights." After "Introduction" (Leni Dam), the eight papers include the following: "Examining the Discourse of Learner Advisory Sessions" (David Crabbe, Alison Hoffman, and Sara Cotteral);…
Grammaire nouvelle? Questions pour des questions (New Grammar? Questions about Questions).
ERIC Educational Resources Information Center
Lamy, Andre
1987-01-01
Implications of the "new grammar" approach to teaching French are examined, including the issues of nomenclature, definition, rules and generalizations, and native language use in the second language class. Grammar itself has not changed, and the principal concern is still good usage. (MSE)
Historique en grandes enjambées de la thermodynamique de l'équilibre (A History of Equilibrium Thermodynamics in Broad Strides)
NASA Astrophysics Data System (ADS)
Hertz, J.
2004-12-01
Thermodynamics, a wholly new science in the nineteenth century, germinated in France as a counterpoint to the ideas of the Enlightenment, in the particular milieu of former students of the Ecole Polytechnique: senior officers trained for the republican or Napoleonic army who no longer found a place in the army of the Restoration. They converted themselves into civil engineers for the booming industrial trades, such as the development of the steam engine and the railways. Most of them, rather free-thinkers, adhered to the scientistic ideas of "positivism" propagated in the Masonic lodges of the Grand Orient de France, and especially in Saint-Simonian circles, the first adherents of industrial socialism. Thus it was that in 1824, in the subtle but disorderly mind of Sadi Carnot, the whole visionary outline of this new science was born, incomprehensible to his contemporaries. It was revived in 1834 by the pen of Emile Clapeyron, who had grasped the immensity of Sadi Carnot's work. But Clapeyron's reminder likewise went unheard for ten years. The reawakening of thermodynamics would henceforth take place outside France, through men of strong religious practice, generally Protestants. Thus William Thomson in Scotland and Rudolph Clausius, from Prussia, completed the work of their two predecessors, and mechano-thermal thermodynamics was definitively established by 1864. Chemical thermodynamics can be attributed to a single mathematical genius, Josiah Willard Gibbs, who worked alone at Yale College in New Haven, Connecticut, and wrote his new theory between 1875 and 1878.
Finally, the statistical interpretation of the second law was the work, in 1877, of an Austrian, Ludwig Boltzmann, a brilliant but fragile man who had time to pass on his ideas on the quantization of energy to the German Max Planck, the first Nobel laureate of the new discipline.
NASA Astrophysics Data System (ADS)
Toureille, A.; Reboul, J.-P.; Merle, P.
1991-01-01
A non-destructive method for the measurement of space charge densities in solid insulating materials is described. This method, called "the thermal step technique", is concerned with the diffusion of a step of heat applied to one side of the sample and with the resulting nonuniform thermal expansion. From the solution of the equation of heat, we have set up the relations between the measured current and the space charge densities. The deconvolution procedure leading to these charge densities is presented. Some results obtained with this method on XLPE and polypropylene slabs are given.
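The deconvolution step described here — recovering charge densities from a measured current — can be sketched generically as a regularized linear inversion. This is an illustrative Tikhonov-regularized sketch with a hypothetical Gaussian response kernel, not the authors' published thermal-step procedure:

```python
import numpy as np

# Model the measurement as y = K @ x: the unknown density profile x is
# blurred by a smooth response kernel K (hypothetical Gaussian here; the
# actual thermal-step kernel follows from solving the heat equation).
n = 80
z = np.linspace(0.0, 1.0, n)
dz = z[1] - z[0]
K = np.exp(-((z[:, None] - z[None, :]) ** 2) / 0.005) * dz

# Hypothetical smooth charge-density profile and its noiseless "measurement".
x_true = np.exp(-((z - 0.3) ** 2) / 0.02) - 0.8 * np.exp(-((z - 0.7) ** 2) / 0.02)
y = K @ x_true

# Direct inversion of K is ill-conditioned, so solve the Tikhonov-regularized
# normal equations (K^T K + lam I) x = K^T y instead.
lam = 1e-6
x_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

print(float(np.max(np.abs(x_rec - x_true))))   # small reconstruction error
```

The regularization parameter `lam` trades noise amplification against bias; in a real measurement it would be tuned to the noise level of the current signal.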
Aedes (Stegomyia) Corneti, A New Species of the Africanus Subgroup (Diptera: Culicidae)
1986-10-14
C. Jan, J. Renaudet, P. L. Dieng, J. F. Bangoura, and A. Lorand. 1979. Isolements d'arbovirus au Senegal oriental a partir de moustiques (1972...149-163. Cornet, M., M. Valade, and P. Y. Dieng. 1978. Aedes (Stegomyia) neoafricanus une nouvelle espèce de moustique capturée au Sénégal
Fistule vesico-sigmoïdienne compliquant une hydatidose intestinale: à propos d'un cas rare (Vesico-sigmoid fistula complicating intestinal hydatid disease: report of a rare case)
Lahyani, Mounir; Jabbour, Younes; Karmouni, Tarik; Elkhader, Khalid; Koutani, Abdellatif; Andaloussi, Ahmed Ibn attya
2014-01-01
Colovesical fistula due to sigmoid hydatid disease is an exceptionally rare pathological entity. We report a new case, reviewing the main diagnostic and therapeutic features of this condition. PMID:25722790
Une nouvelle facon de decouvrir la grammaire (A New Way of Discovering Grammar).
ERIC Educational Resources Information Center
Gutierrez, Alma Rosa Aguilar; Duarte, Delma Gonzalez
1993-01-01
A discussion of French grammar instruction looks at the relationship between linguistic competence and communicative competence. It offers exercises emphasizing the logic of language, using four different approaches. Three of the exercises use texts (included), and the fourth requires the student to describe a room. (MSE)
2011-02-28
daily activities as well as during crisis-response operations. New technologies and applications exist that can increase the...Motorola NEC Nortel Nokia-Siemens Samsung Qualcomm Texas Instruments 2. The introduction of new technology into a network will
ERIC Educational Resources Information Center
Moeschler, Jacques
1981-01-01
Analyzes the strategies employed in terminating conversational exchanges, with particular attention to argumentative sequences. Examines the features that distinguish these sequences from those that have a transactional character, and discusses the patterns of verbal interaction attendant to negative responses. Societe Nouvelle Didier Erudition,…
Les nouveaux critères de la Maladie d'Alzheimer – Perspective gériatrique* (The New Criteria for Alzheimer's Disease: A Geriatric Perspective)
Molin, Pierre; Rockwood, Kenneth
2016-01-01
ABSTRACT: Two new sets of criteria for the diagnosis of Alzheimer's disease are now in effect, including a set published in 2014. A "new lexicon" conceptualizing the disease has also been proposed. In 2012, the Canadian Consensus Conference affirmed that, for the time being, neither the new criteria nor the new terminology changed primary-care practice. Nevertheless, for specialist dementia consultants, the advent of these criteria opens the door to many challenges and opportunities. In general, the new criteria give a growing place to biomarkers. However, the evidence underlying their use remains incomplete. Since the study of community-based subjects has refined our understanding of the neuropathological criteria for dementia, our experience with biomarkers would likely benefit in the same way. For now, these criteria are reserved for research. Their wider adoption is anticipated, however, particularly in the United States. Canadian geriatricians must be aware of the terminology now in use and of the fundamental change it entails: a diagnosis of Alzheimer's disease no longer requires a diagnosis of dementia. Pending new data — to which geriatricians can contribute — caution is warranted in adopting the new criteria, as they may apply less well to older adults. PMID:27403215
1981-01-01
Ter. Inini, Publ. no. 112. 6 p. 1947. Distribution des Moustiques du Genre Culex en Guyane. Inst. Pasteur Guyane Ter. Inini, Publ. 146. 9 p. ...Culicidae). Ann. Am. Entomol. Soc. 43:75-114. Sénevet, G. and E. Abonnenc. 1941. Les moustiques de la Guyane Française. Le genre Culex. - 2. Nouvelle
Training in Toronto's "New Economy"=La formation dans la "nouvelle" economie de Toronto.
ERIC Educational Resources Information Center
Community Perspectives Series, 2002
2002-01-01
This Community Perspectives Series document includes statements about the new economy in Toronto made by four participants in a March 2001 forum. The new economy was defined by the moderator as "an economy that emphasizes knowledge and technical processes put to the production of goods and other outputs so that an individual's knowledge is…
ERIC Educational Resources Information Center
Auchlin, Antoine
1981-01-01
Examines morphemic markers that signal the opening and closing of discourse units, emphasizing their complexity and their central role for a descriptive model of conversation. Then proceeds to analyze their functions within the overall structure of conversation, classifying them according to their properties and uses. Societe Nouvelle Didier…
75 FR 43103 - Airworthiness Directives; CENTRAIR Models 101, 101A, 101P, and 101AP Gliders
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... tube and, if necessary, replacing it. You may obtain further information by examining the MCAI in the... the rudder bar locking adjustment tube of a non-reinforced version have been reported to Société Nouvelle (SN) Centrair. This tube had been reinforced in 1984 with a modification. Gliders...
ERIC Educational Resources Information Center
Canadian Association for Distance Education, Montreal (Quebec).
This document contains a collection of 85 abstracts, of which 56 are written in English, 27 in French, and 2 in both languages. These abstracts discuss research studies, evaluation studies, needs assessments, and the advantages and disadvantages of distance education and related issues. Some of the titles are: "Activation of learning in…
NASA Astrophysics Data System (ADS)
Kermène, V.; Chabanol, M.; Mugnier, A.; Barthélémy, A.
2002-06-01
We present a new pumping method for increasing the brightness of a nanosecond Optical Parametric Oscillator. The method is based on spatio-temporal shaping of the pump beam, causing the OPO to behave as a self-injected source.
We Are the New Nation (Nous Sommes La Nouvelle Nation). The Metis and National Native Policy.
ERIC Educational Resources Information Center
Daniels, Harry W.
A compilation of six policy statements, the booklet is intended to draw attention to the suppression of the rights of indigenous peoples (specifically, the Canadian Metis) by an inflexible federalist system of government, misguided national policies, and land claim settlements such as the 1978 COPE settlement. It is also intended to propose…
1983-09-15
Markka 38 FRANCE CNPF, Government Clash Over Business Expense Figures "(Alain Barbanel; L’USINE NOUVELLE, 14 Jul 83) 39 ICELAND... FRANCE Leftwing Militants Oppose Economic, Foreign Policies (LE MONDE, 24 Jun 83) 72 NETHERLANDS Developments, Plans of Peace...either domestic or international) from taking place in such countries as France , England, Italy, Belgium and the FRG. In Portugal’s case there are two
ERIC Educational Resources Information Center
Guindon, Rene; Poulin, Pierre
This text examines the ties that bind Francophones across Canada to illustrate the diversity and depth of the Canadian Francophone community. Observations are organized into seven chapters. The first looks at the kinship ties of Canadian Francophones, including common ancestral origins, settlement of the Francophone regions, and existence of two…
JPRS Report Science & Technology Europe.
1992-10-22
Potatoes for More Sugar [Frankfurt/Main FRANKFURTER ALLGEMEINE, 12 Aug 92] 26 COMPUTERS French Devise Operating System for Parallel, Failure-Tolerant and Real-Time Systems [Munich COMPUTERWOCHE, 5 Jun 92] 27 Germany Markets External Mass Memory for IBM-Compatible Parallel Interfaces...Infrared Detection System [Thierry Lucas; Paris L'USINE NOUVELLE TECHNOLOGIES, 16 Jul 92] 28 Streamlined ACE Fighter Airplane Approved [Paris AFP
JPRS Report, Science & Technology, Europe
1992-05-07
Knowledge of Systems and Their Scope: Within their own domains, computer assistants must be able to provide information about their scope and its limits...Described 19 Effluent Treatment System [Bonn WISSENSCHAFT WIRTSCHAFT POLITIK, 25 Mar 92] 19 Waste Disposal by Pyrolysis [Bonn WISSENSCHAFT...Ditterich; Wuerzburg UMWELTMAGAZIN, Apr 92] 23 France Plans Nuclear Plant Monitoring System [Patrick Levy; Paris L'USINE NOUVELLE, 12 Mar 92] 24
Stoneflies of the genus Zwicknia Murányi, 2014 (Plecoptera: Capniidae) from western Switzerland.
Reding, Jean-Paul G
2018-02-21
The genus Zwicknia Murányi, of the family Capniidae, was recently proposed to include Capnia bifrons (Newman), which made it necessary to revise the material collected in western Switzerland and previously identified as that species. This revision, which has since led to the description of a species new to science, Zwicknia ledoarei Reding et al., is now completed, allowing the presence of two additional species to be reported: Z. bifrons and Z. westermanni Boumans & Murányi, the latter new for Switzerland. Information on the distribution, ecological preferences, zoogeography, and conservation status of these species in western Switzerland is also provided. Finally, an identification key to the larvae and adult males of the species above is proposed.
Une nouvelle famille de scorpions du Crétacé inférieur du Brésil (A New Family of Scorpions from the Lower Cretaceous of Brazil)
NASA Astrophysics Data System (ADS)
de Carvalho, Maria da Gloria P.; Lourenço, Wilson R.
2001-06-01
A new family, new genus and species of fossil scorpions are described from the Early Cretaceous of Brazil, Santana formation, Crato area in the state of Ceará. These fossils can be classified together with extant families within the Scorpionoidea. This suggests that these modern scorpions belong to lineages present at least for 110 Myr.
Tramontana, Joseph
2016-01-01
Dr. Dabney Ewin was a major factor in the revitalization of the New Orleans Society for Clinical Hypnosis (NOSCH) after it had been dormant for many years. This article briefly presents the fascinating history of the society as a tribute to Dr. Ewin, a remarkable physician.
JPRS Report, Science & Technology, Europe & Latin America.
1988-01-22
Rex Malik; ZERO UN INFORMATIQUE, 31 Aug 87) 25 FACTORY AUTOMATION, ROBOTICS West Europe Seeks To Halt Japanese Inroads in Machine Tool Sector...aircraft. 25048 CSO: 3698/A014 26 FACTORY AUTOMATION, ROBOTICS WEST EUROPE SEEKS TO HALT JAPANESE INROADS IN MACHINE TOOL SECTOR...Trumpf, by the same journalist; first paragraph is L'USINE NOUVELLE introduction] [Excerpts] European machine-tool builders are stepping up mutual
ERIC Educational Resources Information Center
Roulet, Eddy
1981-01-01
Attempts to show how the surface structure of conversation can be described by means of a few principles and simple categories, regardless of its level of complexity. Accordingly, proposes a model that emphasizes the pragmatic functions of certain connectors and markers in the context of conversation exchanges. Societe Nouvelle Didier Erudition,…
ERIC Educational Resources Information Center
Tschoumy, Jacques-Andre
This document examines the trend of school partnership both inside and outside the educational system. The report asks three questions: what is motivating European partners?; is the phenomenon of partnership really European?; and is this the end of the school of Jules Ferry? School partnership history, strategy, and axiomatics or rules are…
ERIC Educational Resources Information Center
Abou, Selim
This study, the result of interviews conducted in Quebec and Montreal in the spring of 1975, deals with the adaptation, integration, and acculturation of the Lebanese immigrants in Quebec since the end of World War II. This new immigration wave is contrasted with the one that took place around 1880. Generally speaking, the situation in both the…
ERIC Educational Resources Information Center
Colbert, Vicky; Arboleda, Jairo
For the first time, Colombia is in a position to comply with the article of her constitution which guarantees a primary education to all citizens. The country now has the technical, political and financial conditions necessary to universalize primary education, particularly in rural areas where low coverage and inefficiency of the system have…
ERIC Educational Resources Information Center
Barkatoolah, Amina B. S.
Two of Unesco's major education programs, the universalizaation of basic education and literacy education, have emphasized the need for teachers trained in their particular disciplines and pedagogical techniques. Developing countries, faced with a shortage of qualified teachers, have realized that they cannot rely solely on the formal education…
JPRS Report, Science & Technology, China: Energy
1988-03-18
NETHERLANDS, Sep 87] 28 SUPERCONDUCTIVITY Rhone-Poulenc World Leader in Rare Earths, Superconducting Powders [Philippe Lanone, et al; L’USINE NOUVELLE...scientist founded Ferrofluidics Corporation, a company which is the world leader in the field and which offers not only magnetic liquids, but also...University of Perm, as well as by A. Gailitis of the University of Riga. (It should be noted that the next world congress on this subject will be
Possibilité d'une nouvelle technologie de traitement des minerais de fer de l'Ouenza par radiométrie (Possibility of a New Radiometric Technology for Processing Ouenza Iron Ores)
NASA Astrophysics Data System (ADS)
Idres, A.; Bounouala, M.
2005-05-01
In the absence of a reliable technology for processing iron-ore waste dumps, the complex mineralogical and chemical characteristics and the harmful effects of mining residues pose a real environmental problem. To this end, a mineralogical and chemical study of the iron ore was carried out using multiple techniques (optical microscopy, XRD, XRF, SEM). Taking the nature of the residues into account, representative samples were tested by radiometric separation. Several parameters were characterized, such as conveyor-belt speed, gamma-ray emission time, and process feed particle size. The results obtained with this separation method are highly significant in terms of both recovery and iron grade. This new technology thus allows better beneficiation of the iron ores on the one hand and, on the other, a reduction in the tonnage stockpiled at the mine site.
Miroirs multicouches C/Si a incidence normale pour la region spectrale 25-40 nanometres (Normal-Incidence C/Si Multilayer Mirrors for the 25-40 nm Spectral Region)
NASA Astrophysics Data System (ADS)
Grigonis, Marius
We have proposed a new material combination, C/Si, for fabricating normal-incidence multilayer mirrors in the 25-40 nm spectral region. Experimental results show that this combination achieves a reflectivity of about 25% in the 25-33 nm region and about 23% in the 33-40 nm region. These reflectivity values are the highest obtained to date in the 25-40 nm region. The multilayer mirrors were subsequently characterized by transmission electron microscopy, by various X-ray diffraction techniques, and by AES and ESCA electron spectroscopies. The mirrors' resistance to elevated temperatures was also studied. The results of these characterization methods indicate that this combination has very promising characteristics for application as a soft X-ray mirror.
Modeles numeriques de la stimulation optique de neurones assistee par nanoparticules plasmoniques (Numerical Models of Plasmonic-Nanoparticle-Assisted Optical Stimulation of Neurons)
NASA Astrophysics Data System (ADS)
Le Hir, Nicolas
La stimulation de neurones par laser emerge depuis plusieurs annees comme une alternative aux techniques plus traditionnelles de stimulation artificielle. Contrairement a celles-ci, la stimulation lumineuse ne necessite pas d'interagir directement avec le tissu organique, comme c'est le cas pour une stimulation par electrodes, et ne necessite pas de manipulation genetique comme c'est le cas pour les methodes optogenetiques. Plus recemment, la stimulation lumineuse de neurones assistee par nanoparticules a emerge comme un complement a la stimulation simplement lumineuse. L'utilisation de nanoparticules complementaires permet d'augmenter la precision spatiale du procede et de diminuer la fluence necessaire pour observer le phenomene. Ceci vient des proprietes d'interaction entre les nanoparticules et le faisceau laser, comme par exemple les proprietes d'absorption des nanoparticules. Deux phenomenes princpaux sont observes. Dans certains cas, il s'agit d'une depolarisation de la membrane, ou d'un potentiel d'action. Dans d'autres experiences, un influx de calcium vers l'interieur du neurone est detecte par une augmentation de la fluorescence d'une proteine sensible a la concentration calcique. Certaines stimulations sont globales, c'est a dire qu'une perturbation se propage a l'ensemble du neurone : c'est le cas d'un potentiel d'action. D'autres sont, au contraire, locales et ne se propagent pas a l'ensemble de la cellule. Si une stimulation lumineuse globale est rendue possible par des techniques relativement bien maitrisees a l'heure actuelle, comme l'optogenetique, une stimulation uniquement locale est plus difficile a realiser. Or, il semblerait que les methodes de stimulation lumineuse assistees par nanoparticules puissent, dans certaines conditions, offrir cette possibilite. 
This would greatly help in conducting new studies of how neurons function, by offering new experimental possibilities alongside current ones. However, the physical mechanism underlying light stimulation of neurons, as well as the one underlying nanoparticle-assisted light stimulation, is not yet fully understood. Hypotheses have been put forward: the mechanism could be photothermal, photomechanical, or photochemical. Several mechanisms may also be at work together, given the variety of observations. The literature does not converge on this point, and the existence of a mechanism common to the different situations has not been demonstrated.
ERIC Educational Resources Information Center
Seaton, Ian
1981-01-01
Describes the criteria behind the development of the English Language Testing Service system, a new language proficiency evaluation instrument developed by the British Council. This system is based on an analysis of foreign students' communication needs in the context of their professional training and academic life. Societe Nouvelle Didier…
ERIC Educational Resources Information Center
Human Resources Development Canada, Hull (Quebec). Office of Learning Technologies.
This document is the product of a study that was conducted to identify the target audience of Canada's Office of Learning Technologies (OLT), determine which stakeholders should be involved in developing the OLT's action program, and recommend specific actions for the OLT to take. Chapter 1 provides an overview of the study methodology, which…
New Leader: a Documentary on the Orleans ROTC Battalion.
1991-05-01
Robert Flaherty’s Non-Preconception, Italian Neo-Realism, French Nouvelle Vague, England’s free cinema, and electronic news gathering for television...Kino-Eye has been a relentless struggle to modify the course of world cinema, to place in cinematic production a new emphasis of the "unplayed...diverted from the conventional style to a cinematic presentation that showed audiences as true a portrait of themselves and their surroundings as
Investigation of a Neural Network Implementation of a TCP Packet Anomaly Detection System
2004-05-01
recognize new attack variants. Artificial neural networks (ANNs) have the ability to learn from patterns and to...Computational Intelligence Techniques in Intrusion Detection Systems. In IASTED International Conference on Neural Networks and Computational Intelligence, pp...Neural Network Training: Overfitting May be Harder than Expected. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI-97
Trust Repair between a Military Organization and a Local Population: A Pilot Study
2011-02-01
the "public" aspect of the IIMP paradigm, a relatively new approach for many military personnel and the greatest of the challenges for the...questionnaires were reviewed and approved by the DRDC Human Research Ethics Committee (HREC) and all participants received remuneration according to DRDC...Form. The Human Research Ethics Committee (HREC) of Defence Research and Development Canada (DRDC) has approved this study (L-701A). If you have
New bounds and estimates for porous media with a rigid perfectly plastic matrix
NASA Astrophysics Data System (ADS)
Bilger, Nicolas; Auslender, François; Bornert, Michel; Masson, Renaud
We derive new rigorous bounds and self-consistent estimates for the effective yield surface of porous media with a rigid perfectly plastic matrix and a microstructure similar to Hashin's composite spheres assemblage. These results arise from a homogenisation technique that combines a pattern-based modelling for linear composite materials and a variational formulation for nonlinear media. To cite this article: N. Bilger et al., C. R. Mecanique 330 (2002) 127-132.
Information Services: Their Organization, Control and Use.
1981-01-01
in terms of information dissemination. Governments must define new information policies in the form of action plans...documentary systems. The outlook for the ARIANE data bank in '80: what is the outlook today? For more than 8...the same, with server computers. Only data-bank systems, in my view, will be able to meet the needs of users who will demand
Critique, Explore, Compare, and Adapt (CECA): A New Model for Command Decision Making
2003-07-01
in the C2 context. Compared with the OODA loop, the CECA loop offers the advantage of shedding new light on the nature of the...these is the Cognitive Hierarchy, which was designed to describe the relationships between various kinds of data and understanding by commanders and...needed to design training, doctrine, and procedures to be consistent with natural human reasoning and enhance its strengths and mitigate its
2001-04-01
part of the following report: TITLE: New Information Processing Techniques for Military Systems [Les Nouvelles techniques de traitement de l’information]...rapidly developing information technology has until now not yet resulted in a substantial...an increasing amount of time is needed for gathering and...Information Processing Techniques for Military Systems", held in Istanbul, Turkey, 9-11 October 2000, and published in RTO MP-049. organisations. The
Canadian Paramedic Services Standards Report: A Strategic Planning Report
2014-06-01
Program management 7. New technology 8. Emergency management. Section 4 of the report on the strategy for ...Standards Association (CSA) and the Canadian Safety and Security Program (CSSP). This project brought together key experts from the Canadian community of ...existing for current paramedic services in Canada to the present and future needs of the community
Real Time Intrusion Detection (la detection des intrusions en temps reel)
2003-06-01
promising current and emerging technologies likely to be used for real-time applications, and thus anticipates the technologies and the...components, to survivability, as a risk management problem requiring the involvement of the whole organization to support the survival of the organization’s...this topic. In all fairness, until recently “reaction” has not been part of IDS’s functionality. Above all and as stated previously, traditional RT
2001-01-01
“kinetic Properties”, or “IP”. In fact, these IPs are nothing other than components (example: a missile in free flight)...probably does not radically solve the problem at the level of electronic components...can be either purchased or derived from previous designs. The
2010-04-01
that are now digitally enhanced and also part-task trainers (mock-ups). Paper 12 – Using Advanced Prosthetics for Stress Inoculation Training and to...concept of minimum treatment called the “Just seven procedure”: just bridge the fracture, just align the limb, just stiff enough to allow evacuation...require energy, some more than others. Oxygen-economizing systems like on-demand valves should be compulsory for spontaneous ventilation. Pulse oximeters
NASA Astrophysics Data System (ADS)
Brunet, Michel; White, Tim D.
2001-01-01
Two new small suines, Kolpochoerus deheinzelini sp. nov. and Kolpochoerus cookei sp. nov., are described. The first represents the earliest appearance of the genus and the earliest African suine. K. deheinzelini n. sp. is the ancestor of K. afarensis, previously considered a Eurasian migrant into the Middle Pliocene of Africa (Pickford, 1994). The second is the smallest species of the genus.
NASA Astrophysics Data System (ADS)
Tremblay, Jose-Philippe
Avionics systems have been evolving continuously since digital technologies appeared around the 1960s. After passing through several development paradigms, these systems have followed the Integrated Modular Avionics (IMA) approach since the early 2000s. Unlike earlier methods, this approach is based on a modular design, the sharing of generic resources among several systems, and more extensive use of multiplexed buses. Most of the concepts used by the IMA architecture, although already known in distributed computing, represent a marked change from earlier models in the avionics world. They come on top of the major constraints of classical avionics, such as determinism, real time, certification, and high reliability targets. The adoption of the IMA approach has triggered a revision of several aspects of the design, certification, and implementation of an IMA system in order to exploit it fully. This revision, slowed by avionics constraints, is still under way and still offers opportunities to develop new tools, methods, and models at every level of the implementation process of an IMA system. In the context of proposing and validating a new IMA architecture for a generic on-board aircraft sensor network, we identified several aspects of the traditional approaches to building this type of architecture that could be improved. To remedy some of the identified shortcomings, we proposed a validation approach based on a reconfigurable hardware platform, as well as a new redundancy-management approach for meeting reliability targets.
Unlike the more limited static tools that meet the needs of designing a federated architecture, our validation approach is specifically developed to ease the design of an IMA architecture. Within this thesis, three main axes of original contributions emerged from the work carried out under the research objectives stated above. The first axis is the proposal of a hierarchical sensor-network architecture based on the core model of the IEEE 1451 standard. This standard eases the integration of smart sensors and actuators into any control system through standardized, generic interfaces.
NASA Astrophysics Data System (ADS)
La Madeleine, Carole
This thesis is presented to the Faculty of Medicine and Health Sciences of the Université de Sherbrooke for the degree of Master of Science (M.Sc.) in radiobiology (2009). A jury reviewed the information contained in this thesis. It was composed of professors from the Faculty of Medicine and Health Sciences: Darel Hunting PhD, research supervisor (Department of Nuclear Medicine and Radiobiology), Léon Sanche PhD, research supervisor (Department of Nuclear Medicine and Radiobiology), Richard Wagner PhD, program member (Department of Nuclear Medicine and Radiobiology), and Guylain Boissonneault PhD, external member (Department of Biochemistry). 5-Bromodeoxyuridine (BrdU) is a halogenated analogue of thymidine recognized since the 1960s as an excellent radiosensitizer. The most widespread hypothesis about the radiosensitizing effect of BrdU is that it increases the number of single- and double-strand breaks when it is incorporated into cellular DNA and exposed to ionizing radiation. However, new research appears to call these earlier observations into question. The latest studies confirmed that BrdU is a good radiosensitizer, because it increases radiation-induced DNA damage; but it is when incorporated into a single-stranded region that BrdU radiosensitizes DNA. This research also revealed, for the first time, a new type of damage produced upon irradiation of BrdU-containing DNA: interstrand dimers. The goal of this work is to determine whether the presence of bromodeoxyuridine in DNA increases the induction of single- and/or double-strand breaks in irradiated cells, using new techniques that are more sensitive and specific than those used before.
To this end, comet assays and the detection of phosphorylated H2AX foci could establish the effects of BrdU at the cellular level. Our hypothesis (based on preliminary results from our laboratory) is that irradiating cellular DNA in the presence of BrdU will increase the number of single-strand breaks without increasing the number of double-strand breaks. The results presented in this thesis appear to corroborate this hypothesis. The new analysis methods, namely the comet assay and the detection of gamma-H2AX foci, call into question what has been said for years about BrdU and the induction of double-strand breaks. Taken together, these new results, obtained with cells that had incorporated BrdU, correlate with previous results obtained in our laboratory on brominated oligonucleotides. They reaffirm that irradiation combined with BrdU increases the induction of single-strand breaks but not of double-strand breaks. In-depth investigation of the still unelucidated mechanisms of action of BrdU at the cellular level, and its use at strategic times during radiotherapy treatment, could increase its efficacy for clinical use. Keywords: 5-bromodeoxyuridine, interstrand dimers, DNA damage, comet assay, H2AX, radiosensitizer, radiotherapy
1990-01-01
BANGOURA and A. LORAND. 1979. Isolements d'arbovirus au Sénégal oriental à partir de moustiques (1972-1977) et notes sur l'épidémiologie des virus...Dengue 2 au Sénégal oriental : une poussée épizootique en milieu selvatique ; isolements du virus à partir de moustiques et d'un singe et...neoafricanus, une nouvelle espèce de moustique capturée au Sénégal oriental (Diptera : Culicidae). Cah. O.R.S.T.O.M. Sér. Entomol. Méd. Parasitol. 16
Managed Readiness Simulator (MARS) V2: Implementation of the Managed Readiness Model
2010-12-01
able to meet the needs specific to a set of operational tasks. The first version of the MARS program (V1)...implementation of the MARS V2 managed readiness model in the new software architecture. The target audience for this...establishment. The purpose of the present study is to document the implementation of the model of
2012-11-01
at distance without correction, refractive surgery allows young recruits or members of the ranks to meet the uncorrected vision standards required for the military occupation of their choice, hence an increase in the number of candidates and the possibility for individuals...refractive surgery risks causing visual impairments and cutting military careers short. For this reason, the quality of the service provided
ERIC Educational Resources Information Center
Saraceni, Luisa; And Others
1993-01-01
Four activities are offered for French second-language classroom use: an exercise to aid comprehension of indicative and subjunctive mood; a lesson in making bread and jam, designed for young children; a study of narration within a novel, using a Guy de Maupassant story; and an exercise in discourse analysis. (MSE)
1985-02-01
steady. The pseudo-unsteady system is constructed by replacing the energy equation (e) with the condition of uniform rothalpy I = const, and...the definition of new propellers yields substantial gains over the whole flight envelope, notably at takeoff and in climb. We...studies of such a system are carried out, and the test of a model in the large S1 transonic wind tunnel at Modane is planned for 1985. We present
Toward a reconciliation of evaluation theory and practice: future perspectives
Brousselle, Astrid; Champagne, François; Contandriopoulos, André-Pierre
2013-01-01
Evaluation is a highly prolific field in several respects. On the theoretical side, new approaches are added every year. Practice is also expanding rapidly. This growing demand for evaluations, in a field where theoretical developments are substantial, paradoxically creates difficulties in transposing new knowledge into evaluation practice. We propose, first, to illustrate three major difficulties evaluators face in practice: defining the intervention, accounting for change, and concern for the use of the evaluation. Second, we present the three main theoretical responses offered by the evaluation field. Third, we discuss the issues at this interface and the possible avenues for fostering a reconciliation between evaluation practice and theory. This discussion will illustrate the tension currently emerging between the questions raised by practice and the proliferation of theory, and will outline forthcoming theoretical developments that move closer to evaluators' practical concerns. PMID:23997420
Cooperatives and the rural electrification of Quebec, 1945-1964
NASA Astrophysics Data System (ADS)
Dorion, Marie-Josee
This thesis is devoted to the history of rural electrification in Quebec and, more specifically, to the history of electricity cooperatives. Founded in successive waves from 1945 onward, rural electricity cooperatives were active in several regions of Quebec and electrified a significant share of rural areas. To understand the context in which the electricity cooperatives were created, the thesis begins (Part One) with an analysis of the sociopolitical climate of the years preceding the birth of the cooperative rural electrification system. We see how, from the late 1920s onward, rural electrification gradually became a topical issue for which successive governments tried to find a solution without committing state funds, or committing very few. In this sense, the first nationalization and the creation of Hydro-Québec in 1944 marked a break with the mode of action favoured until then. The new Crown corporation, however, was stripped of its mandate to electrify rural areas a year after its founding, because the Duplessis government, back in power, preferred to set up its own rural electrification model. That system rested on electricity cooperatives supported by a public body, the Office de l'électrification rurale (OER). The OER raised great expectations among rural people, who came forward by the hundreds. This enthusiasm for the cooperatives complicated the task of the OER, which had to supervise new societies while completing its own organization. Despite hesitations and some delays caused by a lack of technical knowledge and qualified staff, the OER's commissioners proved perceptive and managed to set up a cooperative rural electrification system that produced rapid results.
They nevertheless had to count on the help of the other actors involved in electrification: public bodies and private electricity companies. This start-up and organizational period, covered in Part Two of the thesis, ended in 1947-48, when the OER and the cooperatives consolidated their command of the cooperative rural electrification system. The years 1948 to 1955 (Part Three) correspond to a period of growth for the cooperative movement. This part thus examines the development of the cooperatives, the vast construction projects, and the injection of millions of dollars into rural electrification. Part Three also takes note of the first signs that something was amiss in the cooperative world. We also see rural people at work: as members first, but also as volunteers and then as employees of the cooperatives. The fourth and final part, covering 1956 to 1964, addresses the major changes under way in the cooperative world; it was a new and difficult era for the cooperative movement, whose networks seemed ill-suited to changes in users' electricity consumption profiles. The OER then felt the need to tighten its control over the cooperatives, anticipating the problems and challenges they would have to face. Our study ends with the acquisition of the cooperatives by Hydro-Québec in 1963-64. Based on rich and varied sources, our approach sheds new light on an important dimension of the history of electricity in Quebec. In so doing, it makes it possible to grasp the workings and action of the state from a particular angle, before the profound transformation begun during the 1960s. It also provides some new keys to a better understanding of the dynamics of rural communities in that period.
NASA Astrophysics Data System (ADS)
Aboutajeddine, Ahmed
Micromechanical scale-transition models, which make it possible to determine the effective properties of heterogeneous materials from their microstructure, are considered in this work. The objective is to account for the presence of an interphase between the matrix and the reinforcement in classical micromechanical models, and to reconsider the basic approximations of these models in order to treat multiphase materials. A new micromechanical model is then proposed to account for the presence of a thin elastic interphase when determining effective properties. This model is built from the integral equation, Hill's interfacial operators, and the Mori-Tanaka method. The expressions obtained for the overall moduli and for the fields in the coating are analytical. The model's basic approximation is subsequently improved in a new model that addresses coated inclusions with a thin or thick coating. The resolution relies on a double homogenization carried out at the level of the coated inclusion and at that of the material. This new approach makes it possible to fully grasp the implications of the modelling approximations. The results obtained are then exploited in the solution of the Hashin assemblage. Thus, several classical micromechanical models of different origins are unified and connected, in this work, to Hashin's geometric representation. Besides allowing a full appreciation of the relevance of each model's approximation within this single vision, the correct extension of these models to multiphase materials becomes possible. Several analytical and explicit models are then proposed, following solutions of different orders of the Hashin assemblage.
One of the explicit models appears as a direct correction of the Mori-Tanaka model in cases where the latter fails to give good results. Finally, this corrected Mori-Tanaka model is used with Hill's operators to build a scale-transition model for materials with an elastoplastic interphase. The effective constitutive law obtained is incremental in nature and is coupled with the plasticity relation of the interphase. Simulations of mechanical tests for several properties of the plastic interphase made it possible to identify coating profiles that give the material better behaviour.
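For context on the classical Mori-Tanaka estimate that the corrected model builds on: for an isotropic matrix containing spherical inclusions, the estimate reduces to a closed form (coinciding with the Hashin-Shtrikman lower bound when the inclusions are the stiffer phase). The sketch below shows that classical baseline with illustrative moduli in GPa; it is not the thesis's corrected, interphase-aware model:

```python
def mori_tanaka_spheres(Km, Gm, Ki, Gi, f):
    """Classical Mori-Tanaka effective bulk/shear moduli for a matrix (Km, Gm)
    containing a volume fraction f of spherical inclusions (Ki, Gi)."""
    # Bulk: K_eff = Km + f (Ki - Km) / (1 + (1 - f)(Ki - Km) / (Km + 4 Gm / 3))
    K_eff = Km + f * (Ki - Km) / (1.0 + (1.0 - f) * (Ki - Km) / (Km + 4.0 * Gm / 3.0))
    # Shear: same structure with F_m = Gm (9 Km + 8 Gm) / (6 (Km + 2 Gm))
    Fm = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    G_eff = Gm + f * (Gi - Gm) / (1.0 + (1.0 - f) * (Gi - Gm) / (Gm + Fm))
    return K_eff, G_eff

# Sanity checks: f = 0 recovers the matrix, f = 1 recovers the inclusion.
print(mori_tanaka_spheres(75.0, 26.0, 220.0, 180.0, 0.0))  # (75.0, 26.0)
print(mori_tanaka_spheres(75.0, 26.0, 220.0, 180.0, 1.0))  # (220.0, 180.0)
print(mori_tanaka_spheres(75.0, 26.0, 220.0, 180.0, 0.3))  # intermediate moduli
```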
NASA Astrophysics Data System (ADS)
Benammi, Mouloud
2001-08-01
The Aït Kandoula basin is known for the abundance of its micromammal fauna. The systematic study of the fauna from two layers of the Afoud section (AF6 and AF5) highlights the presence of two new species: Myocricetodon afoudensis nov. sp. and Myocricetodon jaegeri nov. sp. These two layers are dated at 10.1 and 6.3 Ma, respectively, by magnetostratigraphic correlation.
2003-11-01
defence on pre-existing equipment, personnel, and doctrines, but instead starts from an analysis of the threats and of the...personnel and doctrines. As will be seen later, this new defence-system engineering approach, which is intended to be proactive rather than...are resolved under constraints of zero fatalities, or at least minimal losses, whose "acceptability" is essentially a factor of
Shin Clearance in the Hawk Mk115
2008-01-01
represented by the Minister of National Defence, 2008 © Sa Majesté la Reine (en droit du Canada), telle que représentée par le ministre de la Défense...anthropometric selection standard for pilots. The new standard bases acceptance or rejection of candidates on whether they are physically...compatible with the cockpits of the fleet; the previous standard was not as specific. As a result, the compatibility of student pilots
NASA Astrophysics Data System (ADS)
Petriglieri, Jasmine Rita; Laporte-Magoni, Christine; Salvioli-Mariani, Emma; Gunkel-Grillon, Peggy; Tribaudino, Mario; Mantovani, Luciana; Bersani, Danilo; Lottici, Pier Paolo; Tomatis, Maura
2017-04-01
New Caledonia is covered by ultrabasic units over more than a third of its surface, and it is one of the largest producers of nickel ore in the world. Mining activity, focused on extraction from lateritic ore deposits formed by the alteration of ultramafic rocks, must deal with the natural occurrence of asbestos and fibrous minerals. Almost all outcrops of geological units in open mines contain serpentine and amphibole, including asbestos varieties (Lahondère, 2007). Owing to the humid tropical to sub-tropical conditions, weathering processes and supergene mineralization are among the main factors responsible for the fibrogenesis of asbestos minerals. The presence of fibrous minerals in mining and storage sites demands attention because of public health problems and for the safety of operators. In this context, evaluating the risk and health hazard in order to prevent the effects of exposure is closely linked to the formation, alteration, and release of fibres into the environment. It has been demonstrated that different fibrous minerals have different toxicities (Fubini & Otero-Arean, 1999; Fubini & Fenoglio, 2007). An analytical strategy to discriminate and characterize, with certainty, the different varieties of the asbestiform phases is required to establish an environmental monitoring system. We have therefore analyzed, by different methods, a set of about fifty asbestos samples collected for mapping the environmental risk from fibrous minerals in New Caledonia. The samples contain serpentines (chrysotile, antigorite) and amphibole (tremolite), all fibrous, and have been sorted by their degree of alteration. Data obtained with the more traditional mineralogical and petrological analytical techniques - such as optical microscopy, X-ray diffraction, scanning electron microscopy (SEM-EDS), and transmission electron microscopy - have been complemented by more specialized tools such as phase contrast microscopy (PCM), Raman spectroscopy, and thermal analysis (DTA).
Moreover, the analytical performance of a portable Raman instrument, intended for use in field observations, was assessed against the other laboratory methods. The portable Raman was first tested in the laboratory to check its reliability, and then in the field, directly on the mining front under normal environmental conditions (sun, strong wind, high temperature, etc.). The ability of each analytical method to identify fibres was thus tested. This project is part of the French-Italian program "Amiantes et Bonnes Pratiques", financed by the CNRT "Nickel and its environment" of New Caledonia. Fubini, B., & Otero-Arean, C. (1999): Chemical aspects of the toxicity of inhaled mineral dusts. Chemical Society Reviews, 28(6), 373-381. Fubini, B., & Fenoglio, I. (2007): Toxic potential of mineral dusts. Elements, 3(6), 407-414. Lahondère, D. (2007): L'amiante environnemental en Nouvelle-Calédonie : expertise géologique des zones amiantifères. Évaluation des actions engagées. BRGM/RP-55894-FR, 55 p.
NASA Astrophysics Data System (ADS)
Laoufi, Fatiha; Belbachir, Ahmed-Hafid; Benabadji, Noureddine; Zanoun, Abdelouahab
2011-10-01
We have mapped the region of Oran, Algeria, using multispectral remote sensing at different resolutions. To identify objects on the ground from their spectral signatures, two methods were applied to images from SPOT, LANDSAT, IRS-1C, and ASTER. The first, the Base Rule (BR) method, is based on a set of rules that must be satisfied by each pixel in the different reflectance-calibrated bands before it is assigned to a given class. These rules are constructed from the spectral profiles of the common classes in the scene studied. The second, the Spectral Angle Mapper (SAM) method, is based on direct computation of the spectral angle between the target vector, representing the spectral profile of the desired class, and the pixel vector, whose components are the digital numbers in the different bands of the reflectance-calibrated image. This method was implemented using the PCSATWIN software developed by our own laboratory, LAAR. After assembling a library of spectral signatures from several libraries, a detailed study of the principles and physical processes that can influence a spectral signature was conducted. The final goal is to establish the range of variation of a spectral profile for a well-defined class and thereby obtain a precise basis for the spectral rules. From the results obtained, we find that supervised classification of pixels by the BR method derived from spectral signatures reduces the uncertainty associated with identifying objects, significantly improving the percentage of correct classification for very distinct classes.
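The spectral angle at the heart of the SAM method can be sketched in a few lines; the vectors below are hypothetical signatures, not values from the Oran dataset:

```python
import math

def spectral_angle(target, pixel):
    """Spectral angle (radians) between a class reference spectrum and a pixel
    spectrum; small angles mean similar spectral shape regardless of brightness."""
    dot = sum(t * p for t, p in zip(target, pixel))
    nt = math.sqrt(sum(t * t for t in target))
    npx = math.sqrt(sum(p * p for p in pixel))
    # Clamp to guard against floating-point overshoot before acos.
    return math.acos(max(-1.0, min(1.0, dot / (nt * npx))))

ref = [0.12, 0.25, 0.40, 0.35]     # hypothetical class signature
bright = [0.24, 0.50, 0.80, 0.70]  # same shape, twice as bright
other = [0.40, 0.25, 0.12, 0.05]   # different shape
print(spectral_angle(ref, bright))  # ~0: SAM ignores illumination scaling
print(spectral_angle(ref, other))   # clearly larger angle
```

Because the angle depends only on the direction of the vectors, SAM is largely insensitive to overall brightness differences, which is what makes it attractive for reflectance-calibrated imagery.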
On the importance of periodic orbits: detection and applications
NASA Astrophysics Data System (ADS)
Doyon, Bernard
The set of Unstable Periodic Orbits (UPOs) of a chaotic system is intimately related to its dynamical properties. From the (in principle infinite) set of UPOs hidden in phase space, one can obtain important dynamical quantities such as the Lyapunov exponents, the invariant measure, the topological entropy, and the fractal dimension. In quantum chaos (i.e., the study of quantum systems whose classical limit is chaotic), these same UPOs bridge the classical and quantum behaviour of non-integrable systems. Locating these fundamental cycles is a difficult problem. This thesis first addresses the detection of UPOs in chaotic systems. A comparative study of two recent algorithms is presented. We develop both methods further in order to apply them to various systems, including dissipative and conservative continuous flows. An analysis of the algorithms' convergence rates is also carried out to identify the strengths and limits of these numerical schemes. The detection methods we use rely on a particular transformation of the original dynamics. This trick inspired an alternative method for targeting and stabilizing an arbitrary periodic orbit in a chaotic system. Targeting is generally combined with control methods to stabilize a given cycle quickly; in general, the position and stability of the cycle in question must be known. The new targeting method we present does not require prior knowledge of the position and stability of the periodic orbits. It could be a complementary tool to current targeting and control methods.
A forgotten cosmologist: Jean Henri Lambert
NASA Astrophysics Data System (ADS)
Débarbat, Suzanne; Lévy, Jacques
While Kepler's work had a broad influence on the progress of astronomy during the 17th century, the Age of Enlightenment saw new conceptions emerge. The short life of J.H. Lambert belongs to the 18th century. His name is well known in several fields (photometry, map projections, applied mathematics, etc.), yet it is hardly mentioned in cosmology, even though Lambert made an original contribution to it, offering some surprising anticipations...
Germain, J-F; Matile-Ferrero, D; Kaydan, M B; Malausa, T; Williams, D J
2015-07-01
A new species of Dysmicoccus harmful to lavender in Provence, France (Hemiptera, Sternorrhyncha, Pseudococcidae). Dysmicoccus lavandulae Germain, Matile-Ferrero & Williams n. sp. is described and illustrated, and its DNA sequences are presented. The species lives on Lavandula x intermedia grown for lavender-oil production in Provence. A list of the pseudococcine species living on wild lavenders in France is provided. The status of the two related genera Trionymus Berg and Dysmicoccus Ferris is discussed.
TERS, or how to obtain a chemical signature at the nanometre scale
NASA Astrophysics Data System (ADS)
Hsia, Patrick; Chaigneau, Marc
2018-02-01
TERS, for tip-enhanced Raman scattering, is a cutting-edge technique (in every sense of the term) based on the amplification of the Raman signal by surface plasmon resonance. This non-destructive technique, which requires no particular labelling, makes it possible to characterize a sample at the nanometre scale. TERS has now established itself as a tool for probing the physico-chemical properties of nanomaterials, thereby contributing to the development of new applications in nanotechnology.
NASA Astrophysics Data System (ADS)
Allen, Steve
2000-10-01
In this thesis we present a new non-perturbative method for calculating the properties of a fermion system. Our method generalizes the two-particle self-consistent approximation proposed by Vilk and Tremblay for the repulsive Hubbard model, and applies to the study of precritical behaviour when the symmetry of the order parameter is sufficiently high. We apply the method to the pseudogap problem in the attractive Hubbard model. Our results show excellent agreement with Monte Carlo data for small systems. We observe that the regime where the pseudogap appears in the single-particle spectral weight is a renormalized classical regime, characterized by a characteristic frequency of the superconducting fluctuations lower than the temperature. Another characteristic is the low superfluid density of this phase, showing that we are not in the presence of preformed pairs. The results obtained suggest that the high symmetry of the order parameter and the two-dimensionality of the system enlarge the temperature range over which the pseudogap regime is observed. We argue that this result carries over to high-critical-temperature superconductors, where the pseudogap appears at temperatures much higher than the critical temperature; the high symmetry in those systems could be related to Zhang's SO(5) theory. In an appendix, we prove a very recent result that would ensure self-consistency between one- and two-particle properties through the addition of dynamics to the irreducible vertex. This addition opens the possibility of extending the method to the strong-interaction case.
LiCo2As3O10: a new structure with interconnected tunnels
Ben Smida, Youssef; Guesmi, Abderrahmen; Driss, Ahmed
2013-01-01
The title compound, lithium dicobalt(II) triarsenate, LiCo2As3O10, was synthesized by a solid-state reaction. The As atoms and four out of seven O atoms lie on special positions, all with site symmetry m. The Li atoms are disordered over two independent special (site symmetry -1) and general positions with occupancies of 0.54 (7) and 0.23 (4), respectively. The structure model is supported by bond-valence-sum (BVS) and charge-distribution (CHARDI) methods. The structure can be described as a three-dimensional framework constructed from bi-octahedral Co2O10 dimers edge-connected to As3O10 groups. It delimits two sets of tunnels, running parallel to the a and b axes, the latter being the larger. The Li+ ions are located within the intersections of the tunnels. The possible motion of the alkali cations has been investigated by means of the BVS model. This simulation shows that the Li+ motion appears to be easier mainly along the b-axis direction and that this material may possess interesting conduction properties. PMID:23794970
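The bond-valence-sum (BVS) check used in this structure validation reduces to a one-line formula: each bond of length d contributes a valence exp((r0 - d)/b), and the contributions around a site should sum to the formal valence of the ion. The sketch below uses the standard Brown-Altermatt parameters for the Li-O pair; the bond lengths are hypothetical, not taken from this structure:

```python
import math

def bond_valence_sum(distances, r0, b=0.37):
    """Bond-valence sum V = sum_i exp((r0 - d_i) / b) over the
    cation-anion bond lengths d_i (in angstroms), with tabulated
    Brown-Altermatt parameters r0 and b for the cation-anion pair."""
    return sum(math.exp((r0 - d) / b) for d in distances)

# Hypothetical check: a Li+ site with four Li-O bonds near 1.97 A should
# give a BVS close to the formal valence of +1 (Li-O: r0 = 1.466 A).
v_li = bond_valence_sum([1.95, 1.97, 1.99, 2.01], r0=1.466)
```

A site whose BVS deviates strongly from the formal valence signals strain or a misassigned atom, which is why BVS maps are also used, as in the abstract, to trace low-energy migration pathways for mobile cations.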
NASA Astrophysics Data System (ADS)
Lepage, Martin
1998-12-01
This thesis is presented to the Faculty of Medicine of the Université de Sherbrooke for the degree of Ph.D. in Radiobiology. It contains experimental results recorded with a high-resolution electron spectrometer, concerning the formation of electron resonances in the condensed phase and the different channels for their decay. First, we present measurements of vibrational excitations of oxygen diluted in an argon matrix, for incident electron energies from 1 to 20 eV. The results suggest that the lifetime of the oxygen resonances is modified by the density of electron states in the conduction band of argon. We also present electron energy-loss spectra of tetrahydrofuran (THF) and acetone. In both cases, the energy positions of the losses associated with vibrational excitations are in excellent agreement with values found in the literature. The excitation functions of these modes reveal several new electron resonances. We compare the resonances of THF with those of gas-phase cyclopentane and propose a common origin, which implies that they are not necessarily attributable to excitation of the non-bonding electrons of the oxygen in THF. We propose a new method, based on electron energy-loss spectroscopy, to detect the production of neutral fragments that remain inside a thin film condensed at low temperature. The method rests on detecting the electronic excitations of the neutral product. We present results on the production of CO in a methanol film; the CO production rate as a function of incident electron energy is calibrated in terms of a total electron-scattering cross section.
The results show a linear increase of the CO production rate with film thickness and with the electron dose incident on the film. These experimental data fit a simple model in which a single electron causes fragmentation of the molecule without reaction with neighbouring molecules. The mechanism proposed for the unimolecular fragmentation of methanol is the formation of resonances that decay into an excited electronic state. We suggest the combined action of a hole in a core orbital of methanol and of two electrons in the first unoccupied orbital to explain the complete dehydrogenation of methanol at electron energies between 8 and 18 eV. At higher energies, fragmentation via ionization of the molecule has already been suggested. Detecting electronic states offers an alternative to detecting vibrational excitations, since the electron energy-loss spectra of polyatomic molecules are congested in that energy region.
New methodologies for performing tests in the Price-Paidoussis wind tunnel
NASA Astrophysics Data System (ADS)
Flores Salinas, Manuel
This master's thesis in automated manufacturing engineering describes work carried out in the Price-Paidoussis wind tunnel of the LARCASE laboratory to establish the experimental methodologies and test procedures to be used with the wing models currently at the laboratory. The methodologies and procedures presented here will serve to prepare the wind-tunnel tests of project MDO-505, Deformable architectures and technologies for improving wing performance, which will take place during 2015. First, a brief history of subsonic wind tunnels is given. The different sections of the Price-Paidoussis wind tunnel are then described, with emphasis on their influence on the quality of the flow in the test chamber. Next comes an introduction to pressure, its measurement during wind-tunnel tests, and the instruments used for the tests at the LARCASE laboratory, in particular the XCQ-062 piezoelectric sensor. Particular attention is paid to its operating principle, its installation, frequency measurement and detection, and the sources of error when using high-precision sensors such as the XCQ-062 series from Kulite. Finally, the procedures and methodologies developed for tests in the Price-Paidoussis wind tunnel are applied to four different types of wings. The article "New methodology for wind tunnel calibration using neural networks - EGD approach", on a new way of predicting the flow characteristics inside the Price-Paidoussis wind tunnel, is given in Appendix 2 of this document. That article covers the construction and training of a multilayer neural network, followed by a comparison of the network's results with values simulated with the Fluent software.
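The neural-network calibration idea mentioned in the appendix article can be illustrated with a toy one-hidden-layer network trained by plain gradient descent. Everything here is a stand-in: the calibration curve, data, and network size are hypothetical, whereas the real work trained on experimental and Fluent-simulated flow data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: normalized fan setting -> normalized
# test-section airspeed. Real data would come from pressure-probe runs.
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = 0.8 * X + 0.1 * np.sin(3.0 * X)          # stand-in calibration curve

# One hidden layer of 16 tanh units, linear output.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagate the mean-squared-error gradient.
    g2 = h.T @ err / len(X)
    gh = (err @ W2.T) * (1.0 - h ** 2)
    g1 = X.T @ gh / len(X)
    W2 -= lr * g2; b2 -= lr * err.mean(0)
    W1 -= lr * g1; b1 -= lr * gh.mean(0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, such a network interpolates the calibration map at settings never measured, which is the practical payoff of the approach described in the article.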
NASA Astrophysics Data System (ADS)
Régipa, R.
Starting from a theory for determining the shapes and overall stresses of a balloon of revolution (or nearly so), a new family of balloons has been defined. Current balloons, of so-called "natural shape", are generally designed for zero circumferential tension, so that for a given mission the longitudinal tension and the shape of the envelope are strictly imposed. Balloons of the new generation are globally cylindrical, and their poles are joined by an axial cable that transmits part of the loads from the hook (lower pole) directly to the upper pole. In addition, the cylindrical side wall carries a small circumferential tension field. Two parameters thus allow the tension distribution and the envelope shape to be adjusted: the tension of the cable linking the poles (or the length of that cable), and the desired mean circumferential tension (or the balloon radius). One can therefore design and build either balloons of adapted shape, such as flat-bottomed balloons for the proper operation of infrared hot-air balloons (MIR project), or balloons optimized for a good stress distribution and better use of the envelope materials, for all stratospheric programmes. The result is an appreciable saving in manufacturing costs, increased operational reliability of these balloons and much better operational performance, making it possible, among other things, to consider flights at very high altitude with very light materials.
Sound advice from birth to kindergarten
Rourke, Leslie; Leduc, Denis; Constantin, Evelyn; Carsley, Sarah; Rourke, James; Li, Patricia
2013-01-01
Abstract. Objective: To provide an overview of the 2011 edition of the Rourke Baby Record (RBR), which also features updates to its website and new related initiatives, and which incorporates the recent scientific literature on preventive health care for children aged 0 to 5 years. Quality of evidence: As with previous editions of the RBR, the new edition presents recommendations identified as being based on good, fair or consensus-level evidence, according to the classifications adopted by the Canadian Task Force on Preventive Health Care in 2011. Main message: New information and recommendations are presented concerning growth monitoring, nutrition, and immunization against varicella, pneumococcus, meningococcus and rotavirus. There is now good evidence supporting the Canadian adaptation of the World Health Organization growth charts, universal newborn hearing screening, and the use of strategies to reduce vaccination pain. Anticipatory guidance has been updated on safe sleep, health supervision of children in foster care, fetal alcohol spectrum disorder, risk factors warranting screening for lead exposure and anemia, and dental and oral health. New features of the RBR website include a section of resources for parents, modifications for specific populations such as those living in Nunavut, a version of the RBR that highlights at a glance the changes from the 2009 edition, and an expanded RBR navigation function with hyperlinks to related information. The RBR is also available in a one-page-per-visit format.
The College of Family Physicians of Canada and the Canadian Paediatric Society have endorsed the 2011 RBR. National and Ontario versions are available in French and English. Conclusion: The 2011 RBR is a practical, up-to-date, evidence-based knowledge-translation tool for preventive health care of children from birth to 5 years of age. It offers comprehensive web-based resources for health professionals, students, residents and parents.
ERIC Educational Resources Information Center
Leclerc, Jacques, Ed.; Maurais, Jacques, Ed.
The volume is one of a series of six listing language-related legislation around the world. It contains the texts, in French, of laws of Algeria, Austria, China, Denmark, Finland, Hungary, Malta, Morocco, Norway, New Zealand, the Netherlands, the United Kingdom, Tunisia, Turkey, and the former Soviet Union. The laws concern official languages,…
NASA Astrophysics Data System (ADS)
Durniak, C.; Coulibaly, S.; Taki, M.
2004-11-01
The dynamics of transverse pattern formation in a continuously pumped degenerate Optical Parametric Oscillator (OPO) is marked, in a new region of parameter space, by the appearance of spatial soliton arrays (modulated fronts), characterized analytically by their velocity and period. Numerical integration of the two-dimensional system confirms these results. These fronts play a key role in the formation of localized spatial structures.
New discoveries of Miocene vertebrates in the Dera Bugti syncline (Balochistan, Pakistan)
NASA Astrophysics Data System (ADS)
Welcomme, Jean-Loup; Antoine, Pierre-Olivier; Duranthon, Francis; Mein, Pierre; Ginsburg, Léonard
1997-10-01
Since Forster-Cooper in 1910, no paleontologist had visited the area of Dera Bugti in Baluchistan (Pakistan). In 1995 and 1996, two small French expeditions prospected the syncline of Dera Bugti. They established stratigraphical sections and discovered many fossils, mainly reptiles and mammals. On top of the Eocene marine limestone of the Kirthar lies a Burdigalian marine falun. Above it, about 250 m of continental marls, sands and sandstones were deposited. The first 100 m have yielded five fossiliferous levels of MN3b age and one of MN4, surrounded by a more sandy series. The top of the series has yielded Hipparion of the Upper Miocene.
A new theory of the kinetics of radical-radical reactions
NASA Astrophysics Data System (ADS)
Green, N. J. B.; Rickerby, A. G.
1999-01-01
Radical recombination reactions are of central importance in radiation chemistry. In general such reactions are slower than diffusion-controlled because of the radical spins. The rate constant is corrected by a multiplicative spin statistical factor, which represents the probability that the radicals encounter one another in a reactive state. However, this method does not account for the possibility that the reactivity of a pair may recover following an unreactive encounter, for example by spin relaxation or by a coherent evolution of the spin function. In this paper we show how the spin statistical factor can be corrected for the recovery of reactivity. The new theory covers a large range of mechanisms for the recovery of reactivity, and gives simple analytical results. Both steady-state and transient solutions are presented, and the former are tested against experiment for the reaction between the hydrated electron and oxygen, and for the magnetic field effect on the rate constant of an elementary reaction.
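The uncorrected spin-statistical picture that this paper refines can be illustrated numerically: for two doublet radicals, only the singlet encounters (one quarter of them) are reactive, so the naive rate constant is the Smoluchowski diffusion limit scaled by sigma = 1/4. The values of D and R below are order-of-magnitude assumptions for small species in water, not figures from the paper:

```python
import math

N_A = 6.02214076e23          # Avogadro constant, mol^-1

def smoluchowski_k(D, R):
    """Diffusion-limited rate constant k = 4*pi*D*R*N_A, with D the
    mutual diffusion coefficient (m^2/s) and R the reaction radius (m).
    Returned in m^3 mol^-1 s^-1 (multiply by 1e3 for L mol^-1 s^-1)."""
    return 4.0 * math.pi * D * R * N_A

# Two doublet radicals meet in a singlet state 1/4 of the time, so the
# naive spin-statistical correction multiplies k by sigma = 1/4.
sigma = 0.25
D = 5.0e-9    # m^2/s, assumed mutual diffusion coefficient
R = 0.5e-9    # m, assumed reaction radius
k_spin = sigma * smoluchowski_k(D, R)
```

The paper's point is precisely that sigma is not a fixed geometric constant: re-encounters, spin relaxation and coherent spin evolution let an initially unreactive pair recover reactivity, pushing the effective factor above 1/4.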
A new hyracoid species of the genus Bunohyrax (Mammalia) from the Eocene of Bir El Ater (Algeria).
NASA Astrophysics Data System (ADS)
Tabuce, Rodolphe; Coiffait, Brigitte; Coiffait, Philippe-Emmanuel; Mahboubi, Mohamed; Jaeger, Jean-Jacques
2000-07-01
A new species of Bunohyrax, B. matsumotoi n. sp. (Hyracoidea, Mammalia) from the Bir El Ater locality (Algeria) is described and compared with the two known species from the Oligocene of the Fayum (Egypt). This new hyracoid is documented by fragmentary remains, but the characters are significant enough to establish a new species, particularly because of its extremely small size. B. matsumotoi appears to be more primitive than the Egyptian Bunohyrax. The Algerian species, together with geological and palaeontological data, argues for a late Middle to Late Eocene age for the Bir El Ater site, rather than an Oligocene age, equivalent to the upper sequence of the Fayum, as suggested by Rasmussen et al. [17].
International Education Law: A New Discipline?
NASA Astrophysics Data System (ADS)
Monteiro, Agostinho Reis
2008-03-01
INTERNATIONAL EDUCATION LAW: A NEW DISCIPLINE? - Education is one of the most highly regarded human rights and one of those developed most extensively within International Human Rights Law, so that its normative corpus already forms a veritable International Education Law. The right to education means a new right to a new education that, under the Rule of Law, may be qualified as Rightful Education. That expression is an operating concept for a human rights-based approach to education; that is, for an education no longer envisaged as a right of man over man. It amounts to a new paradigm. It is therefore high time to systematize International Education Law in order to promote its study and the introduction of a legal dimension into pedagogic culture.
NASA Astrophysics Data System (ADS)
Rahmouni, Lyes; Adrian, Simon B.; Cools, Kristof; Andriulli, Francesco P.
2018-01-01
In this paper, we present a new discretization strategy for the boundary element formulation of the Electroencephalography (EEG) forward problem. Boundary integral formulations, classically solved with the Boundary Element Method (BEM), are widely used in high-resolution EEG imaging because of their recognized advantages, in several real-case scenarios, in terms of numerical stability and effectiveness compared with other techniques based on differential equations. Unfortunately, it is widely reported in the literature that the accuracy of standard BEM schemes for the forward EEG problem is often limited, especially when the current source density is dipolar and its location approaches one of the brain boundary surfaces. This is particularly limiting given that a high-resolution EEG imaging procedure requires several solutions of the forward problem in which the source currents are near or on a boundary surface. This work first presents an analysis of the classically discretized EEG forward-problem operators, reporting a theoretical issue with some of the formulations used so far in the community: several standard discretizations of these formulations are consistent only within an L2 framework, requiring the expansion term in a Petrov-Galerkin scheme to be a square-integrable function. Those techniques are not consistent, however, when a more appropriate mapping in terms of fractional-order Sobolev spaces is considered. Such a mapping allows the expansion term to be a less regular function, sensibly reducing the need for the mesh refinements and low-precision handling strategies currently required. These more favorable mappings, however, require a different, conforming discretization suitably adapted to them.
To fulfill this requirement, we adopt a mixed discretization based on dual boundary elements residing on a suitably defined dual mesh. We also devote particular attention to implementation-oriented details of our new technique, to allow its rapid incorporation into one's own EEG forward-solution technology. We conclude by showing that the resulting forward EEG problems have favorable properties with respect to previously proposed schemes, and we show their applicability to real-case modeling scenarios obtained from Magnetic Resonance Imaging (MRI) data. The numerical results corroborate the theoretical developments and highlight the positive impact of the new approach.
NASA Astrophysics Data System (ADS)
Rebaine, Ali
1997-08-01
This work consists of the numerical simulation of two-dimensional laminar and turbulent compressible internal flows, with particular interest in flows in supersonic ejectors. The Navier-Stokes equations are written in conservative form using, as independent variables, the so-called enthalpic variables: static pressure, momentum and specific total enthalpy. A stable variational formulation of the Navier-Stokes equations is used, based on the SUPG (Streamline Upwinding Petrov-Galerkin) method with an operator for capturing strong gradients. A turbulence model for simulating ejector flows is developed; it separates two distinct regions, one near the solid wall, where the Baldwin-Lomax model is used, and one far from the wall, where a new formulation based on Schlichting's jet model is proposed. A technique for computing the turbulent viscosity on an unstructured mesh is implemented. The spatial discretization of the variational form is performed with the finite element method using a mixed approximation: quadratic for the momentum and velocity components, linear for the remaining variables. Time discretization uses a finite difference method with the implicit Euler scheme. The matrix system resulting from the space-time discretization is solved with the GMRES algorithm using a diagonal preconditioner. Numerical validations were carried out on several types of nozzles and ejectors; the main validation is the simulation of the flow in the ejector tested at the NASA Lewis research center.
The results obtained compare very well with those of previous studies and are clearly superior for turbulent flows in ejectors.
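The final solver stage described above, GMRES with a diagonal (Jacobi) preconditioner, can be sketched with SciPy on a stand-in system. The tridiagonal matrix below is hypothetical, merely playing the role of the nonsymmetric matrix produced by one implicit Euler step of the discretized equations:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Hypothetical sparse, nonsymmetric, diagonally dominant system standing
# in for the linearized equations at one implicit time step.
n = 200
A = diags([-1.0, 4.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: approximate A^-1 by 1 / diag(A).
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = gmres(A, b, M=M)                    # info == 0 on convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

A diagonal preconditioner is the cheapest choice, requiring only one vector multiply per iteration, which is why it is a common default for large finite-element systems even though stronger preconditioners converge in fewer iterations.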
Pediatric-onset systemic lupus erythematosus: a case report
Khlif, Sana; Hachicha, Hend; Frikha, Faten; Feki, Sawsan; Ben Ayed, Mourad; Bahloul, Zouhir; Masmoudi, Hatem
2015-01-01
Systemic lupus erythematosus (SLE) is an autoimmune systemic disease of unknown etiology that mainly affects adult women; pediatric lupus is a rare entity. We report a new case: a 7-month-old infant who presented with purpuric skin lesions and febrile polyarthritis. The immunological workup was positive (antinuclear and anti-DNA antibodies). Clinical and biological improvement was noted under systemic corticosteroid therapy, with relapse when the treatment was tapered. PMID:26015845
NASA Astrophysics Data System (ADS)
Martinez, Nicolas
Currently Canada, and especially Quebec, has many remote sites whose electrification relies essentially on diesel generators, mainly because of their distance from the central electricity distribution grid. Although considered a reliable and continuous source, diesel generation is becoming increasingly problematic from an energy, economic and environmental standpoint. To address this and to propose a more efficient, less costly and more environmentally friendly supply method, the use of renewable energy has become indispensable. Various studies have shown that coupling renewables with diesel generators, forming hybrid systems, appears to be one of the best solutions. Among them, the hybrid wind-diesel system with compressed air storage (SHEDAC) stands out as an optimal configuration for electrifying remote sites: several studies have highlighted the efficiency of compressed-air storage, compared with other storage technologies, as a complement to a wind-diesel hybrid system. More precisely, the system is composed of wind turbines, diesel generators, and a compression and storage chain whose compressed air is then used to supercharge the generators. This process reduces fuel consumption while increasing the share of renewable energy in electricity production. To date, various research projects have demonstrated the effectiveness of such a system and presented a variety of possible configurations. In this thesis, an energy-sizing software tool is developed with the aim of standardizing the energy approach to this technology.
This tool is intended as an innovation in the field, since it is currently impossible to size a SHEDAC system with existing tools. A specific state of the art, combined with a validation of the results, was carried out in order to offer reliable, high-performance software. With a view to installing a SHEDAC at a remote site in northern Quebec, the software was then used for an energy study identifying the optimal solution to implement. Finally, using the tools and results obtained, new operating strategies are presented to show how the system could be optimized to meet various technical constraints. This thesis is presented as three original articles, submitted to peer-reviewed scientific journals, plus a chapter devoted to the new operating strategies. Together they describe the work outlined above and support a conclusive and relevant use of SHEDAC at a northern remote site.
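The wind-diesel-storage interplay described above can be illustrated by a toy hourly dispatch model: wind serves the load first, surplus wind is stored as compressed air, and any remaining deficit is met by the diesel generator. All numbers and the single lumped efficiency are hypothetical simplifications, not the thesis's sizing software:

```python
def dispatch(load, wind, store=0.0, store_max=500.0, eta_store=0.6):
    """For each hour (kWh): wind serves the load first; surplus wind is
    stored as compressed air (round-trip efficiency eta_store, applied on
    input); any remaining deficit is met by the diesel generator."""
    diesel = []
    for demand, w in zip(load, wind):
        if w >= demand:
            store = min(store_max, store + eta_store * (w - demand))
            diesel.append(0.0)
        else:
            deficit = demand - w
            used = min(store, deficit)     # stored air offsets diesel output
            store -= used
            diesel.append(deficit - used)
    return diesel, store

# Hypothetical 4-hour profile: constant 100 kWh load, variable wind.
load = [100.0, 100.0, 100.0, 100.0]
wind = [150.0, 30.0, 0.0, 180.0]
diesel, final_store = dispatch(load, wind)
```

Even this toy model shows the design trade-off the sizing software explores: storage capacity and wind penetration jointly determine how many diesel-hours, and hence how much fuel, can be displaced.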
[Structured treatment interruption in HIV-infected adolescents].
Vidal, P; Lalande, M; Rodiere, M
2009-07-01
Structured treatment interruption in HIV is now being debated. There are two cases in which it may be discussed: when the initial treatment was started early, and when there is poor compliance with treatment [Yeni P, et al. Les nouvelles recommandations de prise en charge des personnes infectées par le VIH 2006. Paris: Flammarion médecine-sciences; 2006]. Noncompliant behavior is one of the characteristics of chronic illness during adolescence. In HIV infection, however, it worsens the prognosis because the resulting resistance to antiretroviral therapy can further reduce therapeutic options. Therefore, it is important in such a critical period to consider both what is consciously and unconsciously at stake and what responsible action could be taken when a specialist is faced with spontaneous (unplanned) treatment interruption. We report here examples of follow-up care, interruption, and resumption of treatment in 4 female adolescents.
Mechanisms of non-stoichiometry in the new high-T_c superconductors
NASA Astrophysics Data System (ADS)
Hervieu, M.; Michel, C.; Martin, C.; Huvé, M.; van Tendeloo, G.; Maignan, A.; Pelloquin, D.; Goutenoire, F.; Raveau, B.
1994-11-01
Two new families of high-T_c superconductors have recently been discovered: the mercury-based oxides and the oxycarbonates. The main characteristics of the structural mechanisms were studied by high-resolution electron microscopy. This paper describes the numerous order-disorder phenomena observed on both the cation and anion networks.
Report of foreign travel to Paris, France, June 1, 1990--June 12, 1990
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hoesen, S.D.; Jones, L.S.
1990-07-01
The Martin Marietta Energy Systems, Inc., team, consisting of representatives of the Engineering Division and Central Waste Management Division, participated in a technology exchange program on French-US low-level radioactive waste (LLW) management facility design, construction, and operation. Visits were made to the new French LLW disposal facility currently under construction, the Centre de Stockage de l'Aube (CSA); to the La Hague reprocessing facility to visit LLW conditioning and storage facilities; and to the operating LLW disposal facility, the Centre de Stockage de la Manche (CSM). A meeting was also held with representatives of the Agence Nationale pour la Gestion des Dechets Radioactifs (ANDRA) to discuss overall French and Oak Ridge LLW disposal facility development programs and to review the status of the efforts being conducted under the current subcontract with NUMATEC/Societe Generale pour les Techniques Nouvelles (SGN)/ANDRA.
(Low-level waste disposal facility siting and site characterization)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezga, L.J.; Ketelle, R.H.; Pin, F.G.
A US team consisting of representatives of Oak Ridge National Laboratory (ORNL), Savannah River Plant (SRP), Savannah River Laboratory (SRL), and the Department of Energy Office of Defense Waste and Byproducts Management participated in the fourth meeting held under the US/French Radioactive Waste Management Agreement between the US Department of Energy and the Commissariat a l'Energie Atomique. This meeting, held at the Agence Nationale pour la Gestion des Dechets Radioactifs' (ANDRA's) headquarters in Paris, was a detailed technical topical workshop focusing on "Low-Level Waste Disposal Facility Siting and Site Characterization." The meeting also included a visit to the Centre de la Manche waste management facility operated by ANDRA to discuss and observe the French approach to low-level waste management. The final day of the meeting was spent at the offices of Societe Generale pour les Techniques Nouvelles (SGN) discussing potential areas of future cooperation and exchange. 20 figs.
Study of the electronic and magnetic structure of CrO_2
NASA Astrophysics Data System (ADS)
Matar, S.; Demazeau, G.; Sticht, J.; Eyert, V.; Kübler, J.
1992-03-01
The electronic and magnetic properties of CrO2 were investigated using the self-consistent augmented spherical wave (ASW) method, in a new approach aimed at studying the evolution of its magnetic properties with decreasing volume and at assessing recent photoemission results on thin films. The results show that a ferro-to-antiferromagnetic transition is likely to be induced under pressure. The experimental results could be explained by a compression of the CrO6 octahedron within the unit cell.
NASA Astrophysics Data System (ADS)
Valentin, Olivier
According to the World Health Organization, the number of workers exposed daily to noise levels harmful to their hearing rose from 120 million in 1995 to 250 million in 2004. Even though noise reduction at the source should always be preferred, individual hearing protection remains the most widely used defense against occupational noise. Unfortunately, workers do not always wear their hearing protectors, because it is difficult to provide a protector whose effective attenuation suits an individual's work environment. Moreover, occluding the ear canal alters speech perception, creating a discomfort that prompts workers to remove their protectors. Both problems exist because current methods for measuring the occlusion effect and attenuation are limited. Objective methods based on in-ear microphone measurements do not account for direct bone-conduction transmission of sound to the cochlea. Subjective measurements at the hearing threshold are biased by the low-frequency masking effect induced by physiological noise. The main objective of this doctoral work is to improve the measurement of the attenuation and occlusion effect of in-ear hearing protectors. The general approach consists of: (i) verifying whether the attenuation of hearing protectors can be measured using multiple auditory steady-state responses (ASSRs) recorded with and without the protector (protocol 1); (ii) adapting this methodology to measure the occlusion effect induced by wearing in-ear protectors (protocol 2); and (iii) validating each protocol through measurements on human subjects.
The results of protocol 1 show that ASSRs can be used to measure hearing-protector attenuation objectively: the results at 500 Hz and 1 kHz show that ASSR-derived attenuation is roughly equivalent to attenuation computed with the REAT method, as expected, since the masking effect induced by physiological noise is fairly negligible at these frequencies. The results of protocol 2 show that ASSRs can also be used to measure objectively the occlusion effect induced by wearing hearing protectors: the ASSR-derived occlusion effect at 500 Hz is higher than that computed with the subjective threshold method, as expected, since below 1 kHz the low-frequency masking effect of physiological noise biases the subjective results by overestimating hearing thresholds while protectors are worn. However, the results at 250 Hz contradict expectations. Scientifically, this thesis produced two innovative methods for objectively measuring the attenuation and occlusion effect of in-ear hearing protectors by electroencephalography. From an occupational health and safety standpoint, the advances presented here could help design better-performing hearing protectors.
Indeed, if these two new objective methods were standardized for characterizing in-ear hearing protectors, they could make it possible (i) to better grasp the real effectiveness of hearing protection and (ii) to provide a measure of the discomfort induced by occlusion of the ear canal. Providing a hearing protector whose real effectiveness suits the work environment and whose comfort is optimized would, in the long run, improve workers' conditions by minimizing the risk of damage to their hearing. The perspectives proposed at the end of this thesis consist mainly of: (i) exploiting both methods over a wider frequency range; (ii) exploring the intra-individual variability of each method; (iii) comparing the results of the two methods with those obtained with the Microphone in Real Ear (MIRE) method; and (iv) verifying the compatibility of each method with all types of hearing protectors. In addition, for the ASSR-based occlusion-effect method, a complementary study is needed to resolve the contradiction observed at 250 Hz.
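Once the ASSR-derived levels are in hand, the quantities the two protocols estimate reduce to simple level differences. The sketch below (hypothetical numbers, not data from the thesis) fixes the sign conventions:

```python
# Attenuation and occlusion effect as dB level differences, the two
# quantities estimated by protocols 1 and 2. All numbers are made up.

def attenuation_db(threshold_protected, threshold_open):
    """REAT-like attenuation: the threshold shift (dB) caused by wearing
    the protector; higher protected thresholds mean more attenuation."""
    return threshold_protected - threshold_open

def occlusion_effect_db(level_occluded, level_open):
    """Occlusion effect: the boost (dB) of bone-conducted sound when the
    ear canal is sealed, relative to the open ear."""
    return level_occluded - level_open

print(attenuation_db(38.0, 12.0))       # e.g. 26 dB of attenuation
print(occlusion_effect_db(24.0, 15.0))  # e.g. a 9 dB occlusion effect
```

The thesis's point is that deriving the two input levels from ASSRs rather than from behavioral thresholds removes the physiological-noise masking bias at low frequencies.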
Optical control of qubits bound to nitrogen isoelectronic centers in GaAs
NASA Astrophysics Data System (ADS)
Ethier-Majcher, Gabriel
Quantum information processing is currently a booming research field, as it promises a revolution in how we process and exchange information. On the one hand, the quantum computer promises to solve problems such as factorization far more efficiently than a classical computer. On the other hand, quantum communications promise fundamentally tamper-proof information exchange. To take full advantage of these new technologies, it will be advantageous to build quantum networks, in which quantum processors (the nodes) are connected by photons traveling through optical fibers. Quantum networks will allow quantum communications to be deployed at large scale and quantum supercomputers to be built. Building quantum networks will require optical interfaces that can coherently exchange information between a qubit (a quantum bit of information) and a photon. Implementing such interfaces in a physical system is a major scientific and technological challenge, and the systems currently considered for this purpose suffer from weak coupling to light or from large inhomogeneities, which are obstacles to large-scale networks. This thesis evaluates the potential of isoelectronic centers for optical interfaces. Two types of qubits bound to nitrogen pairs in GaAs are considered: excitonic qubits and electron-spin qubits, the latter controllable through charged excitons. Complete optical control of excitonic qubits is demonstrated, a first for isoelectronic centers.
The observation of charged excitons in this system, binding both heavy and light holes, opens new possibilities for manipulating electron spins. Excitons and charged excitons bound to nitrogen pairs are studied by spatially resolved photoluminescence. Excitonic qubits are controlled with laser pulses resonant with an excitonic state, and the qubit state is read out through its resonance fluorescence. A Rabi-rotation experiment demonstrates control over the qubit population and yields an average dipole moment of 27 D for the exciton.
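The Rabi-rotation readout described above has a simple two-level description: a resonant pulse of area θ leaves the excited-state population at sin²(θ/2), and the measured 27 D dipole moment sets how fast θ accumulates in a given optical field. The field value in the sketch below is an illustrative assumption:

```python
import math

# Two-level Rabi physics behind the population-control experiment.
DEBYE = 3.33564e-30     # C*m per debye
HBAR = 1.054571817e-34  # J*s

def excited_population(pulse_area):
    """Excited-state population after a resonant pulse of area theta (rad):
    P = sin^2(theta / 2). A pi pulse fully inverts the qubit."""
    return math.sin(pulse_area / 2.0) ** 2

def rabi_frequency(dipole_debye, field_v_per_m):
    """Resonant Rabi frequency Omega = d * E / hbar (rad/s)."""
    return dipole_debye * DEBYE * field_v_per_m / HBAR

print(excited_population(math.pi))  # pi pulse -> 1.0
```

Sweeping the pulse area (via pulse power) and plotting `excited_population` reproduces the Rabi oscillation whose period, for a known field, gives the dipole moment.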
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois
Applications related to power generation motivate the search for materials with a large thermoelectric power (S). Moreover, S probes fundamental properties of materials, such as the crossover between coherent and incoherent quasiparticle behavior as temperature increases. Empirically, strong electron-electron interactions can lead to a giant thermopower, so we studied the simplest model that captures such interactions, the Hubbard model. Dynamical mean-field theory (DMFT) is well suited to this case. We focused on a three-dimensional (3d) face-centered cubic (fcc) lattice for several reasons: (a) this crystal structure is very common in nature; (b) DMFT performs very well in 3d, so this choice also serves as a proof of principle of the method; and (c) because of the intrinsic electronic frustration of the fcc lattice, it has no particle-hole symmetry, which is very favorable to the appearance of a large S. This work shows that when the material is a half-filled insulator driven by strong interactions (a Mott insulator), large thermopowers can be obtained by doping it lightly, an important practical result. Methodologically, we showed how the infinite-frequency limit of S and the so-called Kelvin approach, which takes the zero-frequency limit before the thermodynamic limit, give reliable estimates of the true DC limit in the appropriate temperature ranges. Both approaches greatly simplify the calculations by short-circuiting the need for problematic analytic continuations. We found that the infinite-frequency method works well when the energy scales are relatively low.
In other words, this approach represents S well once the system becomes coherent. The calculations also show that the Kelvin formula is accurate when the electron spectral function becomes incoherent, i.e., at higher temperature. In the Kelvin limit, S is essentially the entropy per particle, as proposed long ago. Our results thus show that the purely entropic view of S is correct in the incoherent regime, whereas the infinite-frequency approach is better in the coherent regime. We used a state-of-the-art method, continuous-time quantum Monte Carlo, to solve the DMFT equations. To allow fast exploration of the phase diagram, we had to develop a new version of iterated perturbation theory applicable at strong interaction, beyond the critical value of the Mott transition. Another topic was also addressed. The orbital effect of a magnetic field in strongly correlated electron systems is a very important and underdeveloped question, all the more essential since the discovery of quantum oscillations in high-temperature (high-Tc) superconductors. Seeking the least biased method possible, we derived the DMFT equations in the presence of a field coupled to the kinetic-energy operator through the Peierls substitution. This type of approach is needed to understand, among other things, the effect of Mott physics on phenomena such as quantum oscillations. We obtained a very important result by rigorously showing that the DMFT self-consistency relation and the intermediate quantum impurity problem remain the same; the field's effect can be contained entirely in the local Green's function, which is the key difference from the usual case.
This makes it possible to keep using standard impurity solvers, which are becoming ever more powerful. We also developed the method for a stack of two-dimensional planes along z, which allows studying the orbital effect of the field in nanostructures and even in bulk materials, if the number of planes is large enough to reach the three-dimensional limit. Keywords: thermoelectric power, dynamical mean-field theory, Hubbard model, orbital effect of the magnetic field, strongly correlated electrons, quantum materials, iterated perturbation theory
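For reference, the Kelvin estimator discussed above is commonly written as follows; this is a sketch of the standard form found in the thermopower literature (sign conventions vary between references), not of the thesis's own notation:

```latex
% Kelvin formula: the limit \omega \to 0 is taken before the thermodynamic
% limit, so S reduces to a purely thermodynamic quantity, essentially the
% entropy per added carrier (s: entropy density, n: carrier density,
% \mu: chemical potential, q: carrier charge):
S_{\mathrm{Kelvin}}
  \;=\; \frac{1}{q}\left(\frac{\partial s}{\partial n}\right)_{T}
  \;=\; -\frac{1}{q}\left(\frac{\partial \mu}{\partial T}\right)_{n}
```

The two forms are equivalent through a Maxwell relation, which is what makes the "entropy per particle" reading of S in the incoherent regime precise.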
VHDL/FPGA implementation of a digital video display (AVN) for aerospace applications
NASA Astrophysics Data System (ADS)
Pelletier, Sebastien
The objective of this project is to develop a video controller in VHDL to replace the specialized component currently used at CMC Electronique. A thorough survey of trends and current practice in video controllers was carried out to define the system specifications, and the image storage and display techniques required to carry the project through are explained. The new controller is developed on an electronic platform with an FPGA, a VGA port, and memory for storing image data. It is programmable and occupies little FPGA area, allowing it to fit into any new low-cost mass-market technology. It adapts quickly to any display resolution because it is modular and configurable. In the short term, this project will allow improved control over the specifications and quality standards tied to avionics constraints.
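As a concrete illustration of what a resolution-configurable controller must derive from its configuration, the sketch below computes the pixel clock implied by a display mode. The blanking defaults are the classic 640x480 VGA values; they are textbook figures used for illustration, not CMC parameters:

```python
# Pixel clock = total pixels per frame (active + blanking) * refresh rate.
# A configurable video controller derives its counters from these totals.

def pixel_clock_hz(h_active, v_active, refresh_hz, h_blank=160, v_blank=45):
    """Pixel clock (Hz) for a mode with the given active resolution,
    refresh rate, and horizontal/vertical blanking intervals."""
    h_total = h_active + h_blank   # pixels per scanline, incl. blanking
    v_total = v_active + v_blank   # scanlines per frame, incl. blanking
    return h_total * v_total * refresh_hz

# 640x480 @ 60 Hz with standard VGA blanking (160 px, 45 lines):
print(pixel_clock_hz(640, 480, 60))  # 25200000, the familiar ~25.2 MHz
```

The same totals set the horizontal and vertical counter limits and sync-pulse positions in the VHDL design.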
NASA Astrophysics Data System (ADS)
Lhomme, J. P.; Monteny, B.
1982-03-01
This paper begins by recalling the new concepts concerning evapotranspiration as specified by the round-table conference held in Budapest in May 1977. Potential evaporation (EP) is now defined as the evaporation of a crop all of whose exchange surfaces (leaves, stalks, ...) are saturated, i.e., covered with a thin film of water. It can be calculated by a theoretical formula of the Penman type. We give the reasons why it is useful to use grass potential evaporation (EP_g) as a reference. The empirical relationships for estimating, in this case, the net radiation and the aerodynamic component of the formula were derived from measurements made in Ivory Coast (West Africa), yielding relationship (8), which gives the daily value of EP_g in millimeters of water per day (mm/d). The values calculated by this formula are compared with measurements of grass maximal evapotranspiration (ETM_g).
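Relationship (8) and its locally fitted coefficients are not reproduced in the abstract, so the sketch below uses a generic textbook Penman combination with standard coefficients, only to illustrate the structure (a radiative term plus an aerodynamic term) of such a formula:

```python
import math

# Generic Penman-type potential evaporation (mm/day). Coefficients are
# common textbook values (Penman's open-water wind function), not the
# paper's locally calibrated relationship (8).

def penman_ep_mm_per_day(rn_mj, t_c, u2, es_kpa, ea_kpa, gamma=0.066):
    """rn_mj: net radiation [MJ/m2/day], t_c: air temperature [C],
    u2: wind speed at 2 m [m/s], es/ea: saturation/actual vapour
    pressure [kPa], gamma: psychrometric constant [kPa/C]."""
    # slope of the saturation vapour-pressure curve at t_c [kPa/C]
    delta = 4098.0 * (0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))) \
            / (t_c + 237.3) ** 2
    lam = 2.45  # latent heat of vaporization [MJ/kg]
    radiative = (delta / (delta + gamma)) * rn_mj / lam
    aerodynamic = (gamma / (delta + gamma)) \
                  * 6.43 * (1.0 + 0.536 * u2) * (es_kpa - ea_kpa) / lam
    return radiative + aerodynamic
```

For a warm, moderately windy day (Rn = 15 MJ/m2/day, 25 °C, 2 m/s, 1.2 kPa vapour-pressure deficit) this gives roughly 6 mm/day, the right order of magnitude for tropical grass EP.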
Modeling micrometric and nanometric particle emissions in machining
NASA Astrophysics Data System (ADS)
Khettabi, Riad
Machining parts emits particles of microscopic and nanometric sizes that can be hazardous to health. The goal of this work is to study these particle emissions with a view to prevention and reduction at the source. The approach is both experimental and theoretical, at the microscopic and macroscopic scales. The work begins with tests to determine how the material, the tool, and the machining parameters influence particle emissions. A new parameter characterizing the emissions, called the dust unit, is then developed, and a predictive model is proposed. The model is based on a new hybrid theory that integrates energy, tribology, and plastic-deformation approaches, and includes tool geometry, material properties, cutting conditions, and chip segmentation. It was validated in turning on four materials: Al6061-T6, AISI 1018, AISI 4140, and grey cast iron.
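The abstract introduces the dust unit without defining it. A common way to normalize such emissions, assumed here for illustration only (this is not necessarily the thesis's definition), is the mass of airborne dust produced per unit mass of material removed:

```python
# Hypothetical normalization of machining dust emissions: airborne dust
# mass per unit mass of removed material (dimensionless mass fraction).
# The definition is an assumption for illustration, not the thesis's own.

def dust_unit(dust_mass_mg, removed_mass_g):
    """Emission ratio: dust mass [mg] over removed-material mass [g],
    converted to a dimensionless mass fraction."""
    return (dust_mass_mg * 1e-3) / removed_mass_g

# 0.5 mg of captured dust for 200 g of chips:
ratio = dust_unit(0.5, 200.0)
```

Such a ratio lets emissions be compared across materials and cutting conditions independently of how much stock each test removed.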
Ironic inversion in Eliza Haywood's fiction: fantomina and "The History of the Invisible Mistress".
Hinnant, Charles H
2010-01-01
This article contends that Fantomina can best be understood as an ironic inversion of “The History of the Invisible Mistress”, a Spanish nouvelle published as an interpolated tale in Paul Scarron's Le Roman Comique. Both works revolve around the efforts of a heroine to capture and hold the attention of a young man upon whom she has set her heart. The plot of both works concerns the successive disguises by which the heroines seek to test their heroes. But where Don Carlos, the hero of “The History of the Invisible Mistress”, remains chaste and faithful to his initial pledge, Beauplaisir, with whom Fantomina becomes sexually involved, displays the inconstancy of a roving libertine. Hence, where “The History of the Invisible Mistress” ends happily in marriage, Fantomina concludes with its heroine's pregnancy and exile in disgrace by her mother to a monastery in France. A comparison of the two texts affords a fascinating glimpse into Haywood's aims and strategies in Fantomina.
Santos-Barrera, Georgina; Pacheco, Jesus; Mendoza-Quijano, Fernando; Bolaños, Federico; Cháves, Gerardo; Daily, Gretchen C; Ehrlich, Paul R; Ceballos, Gerardo
2008-06-01
We present an inventory of the amphibians and reptiles of the San Vito de Coto Brus region, including the Las Cruces Biological Station, in southern Costa Rica, the result of a survey of the herpetofauna occurring in mountain forest fragments, pastures, coffee plantations, and other disturbed areas. We found 67 species, including 26 species of amphibians and 41 of reptiles. We describe the distribution patterns of the community on the basis of life zones, elevation, fragmentation, and degree of anthropogenic impact. We also provide new data on the systematics of some select taxa, their geographical ranges, microhabitats, activity, and other relevant ecological and natural history features. Finally, we comment on the present conservation status of the herpetofauna in the region. Previous literature and collection records indicate a higher number of species occurring in this area, which suggests that some declines have occurred, especially of amphibians, in recent decades.
NASA Astrophysics Data System (ADS)
Izart, Alain; Tahiri, Abdelfatah; El Boursoumi, Abdou; Vachard, Daniel; Saidi, Mariam; Chèvremont, Philippe; Berkhli, Mostafa
2001-02-01
New Visean formations and foraminiferal biozones were defined on the Bouqachmir map. The new biozonation concerns the Moroccan biozone Cfm1, which is subdivided into two subzones, Cfm1a and Cfm1b. The map exhibits, from north-west to south-east, the Tilouine, Bouqachmir-Tougouroulmès and Fourhal turbiditic basins. The first, from the Tournaisian to the Late Visean, was the equivalent of the Sidi Bettache basin located to the west. The second extended the Tilouine basin eastwards during the Visean. The third was a basin from the Visean to the Westphalian. They were separated by the Zaer-Oulmes and El Hammam horsts, either emergent or submerged, bordered by faults, whose material fed chaotic deposits.
NASA Astrophysics Data System (ADS)
Salissou, Yacoubou
The overall objective of this thesis is to improve the characterization of the macroscopic properties of rigid- or limp-frame porous materials through inverse and indirect approaches based on acoustic measurements in an impedance tube. The accuracy of the inverse and indirect approaches used today is mainly limited by the quality of the impedance-tube measurements. Consequently, this thesis addresses four problems that contribute to the overall objective. The first concerns the accurate characterization of the open porosity of porous materials. This property links the measured dynamic acoustic properties of a porous material to the effective properties of its fluid phase as described by semi-phenomenological models. The second problem deals with the assumption that porous materials are symmetric through their thickness; an index and a criterion are proposed to quantify a material's asymmetry. This assumption is often a source of inaccuracy in inverse and indirect impedance-tube characterization methods, and the proposed asymmetry criterion makes it possible to check the applicability and accuracy of these methods for a given material. The third problem aims at a better understanding of the sound-transmission problem in an impedance tube, presenting for the first time an exact wave-decomposition treatment. This development clearly establishes the limits of the many existing methods based on 2-, 3-, or 4-microphone transmission tubes. A better understanding of this transmission problem matters because such measurements are what allow methods to extract, successively, the transfer matrix of a porous material and its intrinsic dynamic properties, such as its characteristic impedance and complex wavenumber.
Finally, the fourth problem concerns the development of a new exact three-microphone transmission method applicable to symmetric and asymmetric materials or systems. In the symmetric case, this approach is shown to clearly improve the characterization of a material's intrinsic dynamic properties. Keywords: porous materials, impedance tube, sound transmission, sound absorption, acoustic impedance, symmetry, porosity, transfer matrix.
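The thesis's three-microphone method is not reproduced here, but the two-microphone transfer-function relation that impedance-tube characterization builds on (the standard ISO 10534-2 expression) can be sketched as follows; the geometry values in the usage line are illustrative:

```python
import cmath

# Standard two-microphone transfer-function method for an impedance tube:
# from the measured transfer function h12 = p2/p1 between two microphones,
# recover the normal-incidence reflection and absorption coefficients.

def reflection_coefficient(h12, k, s, x1):
    """h12 = p2/p1 between microphone 1 (farther from the sample, at
    distance x1 from its surface) and microphone 2 (nearer, at x1 - s);
    s is the microphone spacing, k the real wavenumber in the tube."""
    num = h12 - cmath.exp(-1j * k * s)   # subtract incident-wave transfer
    den = cmath.exp(1j * k * s) - h12    # reflected-wave transfer minus h12
    return (num / den) * cmath.exp(2j * k * x1)

def absorption_coefficient(r):
    """Normal-incidence absorption: the power not reflected."""
    return 1.0 - abs(r) ** 2
```

As a sanity check, a rigid termination produces the standing field p(x) = 2 cos(kx), whose transfer function yields R = 1 and zero absorption, exactly what the formula returns.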
Analytical ion microscopy of biological tissues
NASA Astrophysics Data System (ADS)
Galle, P.
Proposed in 1960 by R. Castaing and G. Slodzian, secondary-ion-emission microanalysis is a microanalytical method now widely used for the study of inert materials. The instrument, called the analytical ion microscope, can also be used to study biological specimens; images representing the distribution of a given stable or radioactive isotope in a tissue section are obtained with a resolution of 0.5 μm. Two characteristics of this method are of particular interest in biological research: its capacity for isotopic analysis, and its very high sensitivity, which makes possible for the first time the chemical analysis of elements at very low or even trace concentrations in a microvolume.
NASA Astrophysics Data System (ADS)
Fournier, Marie-Claude
A characterization of atmospheric emissions from operating stationary sources fired with gas and light oil was conducted at the targeted installations of sites no. 1 and no. 2. The characterization and the theoretical emission calculations at both sites give results below regulatory limits for normal winter operating conditions, hence at higher energy demand. At lower energy demand, the contaminant levels in the emissions should therefore also be below the applicable municipal and provincial regulations. In view of a new provincial regulation, whose terms have been under discussion since 2005, it would be desirable for the owner of the installations to take part in the exchanges with Quebec's Ministere du Developpement Durable, de l'Environnement et des Parcs (MDDEP). Indeed, even if grandfathering could exempt the installations from the new regulation, applying such a principle is not consistent with sustainable development. The advanced age of the installations studied calls for rigorous maintenance planning to ensure optimal combustion conditions for each fuel type; regular combustion tests are therefore recommended. To support the monitoring and evaluation of the environmental performance of the stationary sources, a tool to assist environmental information management was developed. In this context, continuing its development would ease not only the work of the staff assigned to the annual inventories but also communication among the various stakeholders, both within and between establishments.
This tool would also be a good way to make staff aware of their energy consumption and of their role in the fight against polluting emissions and greenhouse gases. Moreover, the main function of this type of tool is to generate dynamic reports that can be tailored to specific needs. The coherent partitioning of the information, combined with modular development, opens the prospect of applying the tool to other types of activities; in that case, the part shared with the existing modules must be identified and the specific development activities planned following the same approach as presented here.
Jenkins, G; Wainwright, L J; Holland, R; Barrett, K E; Casey, J
2014-01-01
Synopsis. Objective: The maintenance of a youthful skin appearance is strongly desired by a large proportion of the world's population. The aim of the present study was therefore to evaluate the effect on skin wrinkling of a combination of ingredients reported to influence key factors involved in skin ageing, namely inflammation, collagen synthesis, and oxidative/UV stress. A supplemented drink was developed containing soy isoflavones, lycopene, vitamin C, and vitamin E, and given to post-menopausal women together with a capsule containing fish oil. Method: We performed a double-blind randomized controlled human clinical study to assess whether this cocktail of dietary ingredients can significantly improve the appearance of facial wrinkles. Results: We have shown that this unique combination of micronutrients can significantly reduce the depth of facial wrinkles and that this improvement is associated with increased deposition of new collagen fibres in the dermis. Conclusion: This study demonstrates that consumption of a mixture of soy isoflavones, lycopene, vitamin C, vitamin E, and fish oil can induce a clinically measurable improvement in the depth of facial wrinkles following long-term use. We have also shown, for the first time with an oral product, that the improvement is associated with increased deposition of new collagen fibres in the dermis. PMID:23927381
The conceptualization model problem—surprise
NASA Astrophysics Data System (ADS)
Bredehoeft, John
2005-03-01
The foundation of model analysis is the conceptual model. Surprise is defined as new data that renders the prevailing conceptual model invalid; as defined here it represents a paradigm shift. Limited empirical data indicate that surprises occur in 20-30% of model analyses. These data suggest that groundwater analysts have difficulty selecting the appropriate conceptual model. There is no ready remedy to the conceptual model problem other than (1) to collect as much data as is feasible, using all applicable methods—a complementary data collection methodology can lead to new information that changes the prevailing conceptual model, and (2) for the analyst to remain open to the fact that the conceptual model can change dramatically as more information is collected. In the final analysis, the hydrogeologist makes a subjective decision on the appropriate conceptual model. The conceptualization problem does not render models unusable. The problem introduces an uncertainty that often is not widely recognized. Conceptual model uncertainty is exacerbated in making long-term predictions of system performance.
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aérospatiale du Québec) projects aimed at developing embedded approaches for detecting defects in aeronautical structures. The originality of this thesis lies in the development and validation of a new method for detecting, quantifying and locating a notch in a lap-joint structure through the propagation of vibration waves. The first part reviews the state of knowledge on defect identification in the context of Structural Health Monitoring (SHM), as well as lap-joint modelling. Chapter 3 develops a wave-propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission-line model (TLM) is built to represent a one-dimensional (1D) joint. This 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane wavefront incident perpendicular to the joint. A parametric identification method is then developed that allows both the calibration of the model of the healthy lap joint and the detection and characterization of the notch located on the joint. This method is coupled with an algorithm that performs an exhaustive search of the entire parameter space. This technique makes it possible to extract an uncertainty zone related to the parameters of the optimal model. A sensitivity study of the identification is also carried out. Several measurements on 1D and 2D lap joints were performed, allowing the study of the repeatability of the results and the variability of different damage cases. The results of this study first demonstrate that the proposed detection method is very effective and can track the progression of damage.
Very good notch quantification and localization results were obtained in the various joints tested (1D and 2D). It is expected that the use of Lamb waves would extend the method's range of validity to smaller damage. This work is aimed primarily at in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and other complex structures are also conceivable. Keywords: lap joint, in-situ monitoring, damage localization and characterization
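The exhaustive parameter-space search described above can be sketched as follows. This is a minimal illustration, not the thesis code: the forward model below is a toy stand-in for the transmission-line model of the joint, and the tolerance defining the "uncertainty zone" is an assumed value.

```python
# Hypothetical sketch of exhaustive parametric identification:
# score every point of a (notch depth, notch position) grid against a
# measured response, keep the best fit and the near-optimal "zone".
import itertools
import math

def forward_model(depth, position, n=64):
    """Toy stand-in for the TLM joint model: a transmitted-amplitude
    profile as a function of notch depth (0-1) and position (0-1)."""
    return [math.exp(-depth * math.exp(-((i / n - position) ** 2) / 0.01))
            for i in range(n)]

def identify(measured, depths, positions, tol=1.05):
    """Exhaustive grid search; returns the best (depth, position) pair
    and the uncertainty zone of parameters within tol of the best error."""
    errors = {}
    for d, p in itertools.product(depths, positions):
        pred = forward_model(d, p)
        errors[(d, p)] = sum((a - b) ** 2 for a, b in zip(pred, measured))
    best = min(errors, key=errors.get)
    zone = [k for k, e in errors.items() if e <= tol * errors[best]]
    return best, zone

# Synthetic "measurement" generated from a known damage state:
measured = forward_model(0.5, 0.3)
grid = [i / 10 for i in range(11)]
best, zone = identify(measured, grid, grid)
```

Because the search is exhaustive, no gradient information is needed and the near-optimal zone directly exposes parameter uncertainty, at the cost of a grid that grows with the number of parameters.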
Strategies facilitating pre-certification testing for radiation robustness
NASA Astrophysics Data System (ADS)
Souari, Anis
The effects of cosmic radiation on embedded electronics have concerned researchers interested in the robustness of integrated circuits for several decades. Much research has been conducted in this direction, mainly for space applications, where the deployment environment is hostile. Indeed, these environments are dense in particles which, when they interact with integrated circuits, can lead to their malfunction or even their destruction. Moreover, radiation effects become more pronounced in new generations of integrated circuits, where shrinking transistor sizes and growing circuit complexity raise the probability of faults and, consequently, the need for testing. The expansion of commercial off-the-shelf (COTS) electronics and the adoption of these components for critical applications, such as avionics and space applications, also push researchers to redouble their efforts to verify the reliability of these circuits. COTS components, despite their better characteristics compared with radiation-hardened circuits, which are expensive and lag in terms of technology, are vulnerable to radiation. To improve the reliability of these circuits, an evaluation of their vulnerability at the various abstraction levels of the design flow is recommended. This helps designers take the necessary mitigation measures on the design at the abstraction level in question. Finally, to satisfy fault-tolerance requirements, very expensive certification tests, performed by particle bombardment (protons, neutrons, etc.), are necessary. In this thesis, we focus mainly on defining a pre-certification strategy
that allows a realistic evaluation of the sensitivity of integrated circuits to radiation effects, in order to avoid sending non-robust circuits to the very expensive certification phase. The circuits targeted by our work are SRAM-based field-programmable gate arrays (FPGAs), and the targeted radiation-induced fault type is the single event upset (SEU), a flip of the logic state of a memory element to its complement. SRAM-based FPGAs are indeed increasingly in demand in the aerospace community thanks to their rapid-prototyping and in-field reconfiguration capabilities, but they are vulnerable to radiation, with SEUs being the most frequent faults in SRAM-type memory elements. We propose a new emulation-based fault-injection approach that mimics the effects of radiation on the FPGA configuration memory and generates results as faithful as possible to certification test results. This approach incorporates, in the test-sequence generation procedure, the difference in sensitivity of configuration memory elements in the '1' state versus the '0' state, observed under accelerated proton-beam tests at the renowned TRIUMF laboratory, in order to mimic the fault distribution in the configuration memory. Validation experiments show that the proposed strategy is effective and generates realistic results. These results reveal that not considering the sensitivity difference can lead to an underestimation of circuit sensitivity to radiation. With the same aim of optimizing the emulation-based fault-injection procedure, namely pre-certification testing, we propose a methodology to maximize
the detection of critical bits (bits that cause a functional failure if they change state) for a given number of SEUs (the adopted fault model), or to maximize the accuracy of the estimate of the number of critical bits. To this end, the configuration bits are first classified into different sets according to their content, the resources they configure, and their criticality. Then, the sensitivity of each set is evaluated. Finally, prioritizing fault injection into the most sensitive sets is recommended. Several fault-injection optimization scenarios are proposed, and the results are compared with those of the conventional random fault-injection method. The proposed optimization methodology provides an improvement of more than two orders of magnitude. A final approach is presented that facilitates the sensitivity evaluation of the bits configuring the look-up tables (LUTs) used by a design, the smallest configurable entities of the FPGA, which implement combinational functions. It allows easy identification of LUT bits, at no cost in terms of hardware usage or external tools. The proposed approach is simple and effective, offers 100% fault coverage, and is applicable to new generations of Xilinx FPGAs. The proposed approaches contribute to meeting the requirements set out for this thesis and to achieving its objectives. The realism and the maximized vulnerability estimation offered by the new approaches for circuits under test ensure the development of an effective pre-certification test strategy. Indeed, the first fault-injection approach, which considers the relative sensitivity difference of memory elements according to their content, generates results with a relative error reaching
3.1% when compared with the results obtained at TRIUMF, whereas the relative error given by comparing the results of conventional random fault injection with those of TRIUMF can reach 75%. Moreover, applying this approach to more conventional circuits shows that 2.3 times more errors are detected compared with random fault injection. This suggests that not considering the relative sensitivity difference in the emulation procedure can lead to an underestimation of the design's sensitivity to radiation. The results of the second proposed approach were also compared with those of random fault injection. The proposed approach, which maximizes the number of flipped critical bits, achieves an acceleration factor of 108 for the fault-injection procedure compared with the random approach. It also minimizes the estimation error on the number of critical bits, down to ±1.1% for a 95% confidence interval, whereas the critical-bit estimation error generated by random fault injection for the same confidence interval can reach ±8.6%. Finally, the last proposed approach, fault injection into LUTs, stands out from other approaches available in the literature by its simplicity while ensuring maximum fault coverage of 100%. Indeed, the proposed approach is independent of external tools for identifying LUT configuration bits, tools which are obsolete or do not support new generations of FPGAs. It acts directly on the files generated by the adopted synthesis tool.
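The idea of weighting SEU injection by the state-dependent sensitivity of configuration bits can be sketched as follows. This is a hypothetical illustration, not the thesis tooling: the 2:1 weight ratio between '1' and '0' bits is an assumed value for demonstration, not the TRIUMF-measured ratio.

```python
# Hypothetical sketch of sensitivity-weighted SEU injection: bits at '1'
# are sampled more often than bits at '0', mimicking a state-dependent
# upset probability, and each selected bit is flipped to its complement.
import random

def inject_seus(config_bits, n_faults, w1=2.0, w0=1.0, rng=None):
    """Flip n_faults distinct bits, sampling '1' bits w1/w0 times more
    often than '0' bits; returns the faulty copy and flipped indices."""
    rng = rng or random.Random(0)
    bits = list(config_bits)
    weights = [w1 if b else w0 for b in bits]
    idx = list(range(len(bits)))
    flipped = []
    for _ in range(n_faults):
        (i,) = rng.choices(idx, weights=weights, k=1)
        bits[i] ^= 1           # SEU: logic state flips to its complement
        weights[i] = 0.0       # sample without replacement
        flipped.append(i)
    return bits, flipped

config = [0, 1] * 8            # toy 16-bit configuration memory
faulty, flipped = inject_seus(config, n_faults=4)
```

A real campaign would replace the toy bit list with the device's configuration memory map and derive the weights from beam-test data, but the sampling logic is the same.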
TGBA and TGBC phases in some chiral tolan derivatives
NASA Astrophysics Data System (ADS)
Nguyen, H. T.; Bouchta, A.; Navailles, L.; Barois, P.; Isaert, N.; Twieg, R. J.; Maaroufi, A.; Destrade, C.
1992-10-01
Three chiral compounds (n=10, 11, 12) belonging to the optically active series 3-fluoro-4-[(R) or (S)-1-methylheptyloxy]-4'-(4''-alkoxy-2'',3''-difluorobenzoyloxy) tolans (nF{2}BTFO{1}M{7}) have been synthesized. The helical SA^{*} phase, or TGBA phase, is found in the decyloxy derivative. The most interesting compound is obtained with n=11. It displays, for the first time, two TGB phases (TGBA and TGBC). The nature of these helical smectic phases is confirmed by different studies: optical observation, DSC, the contact method, mixtures, X-ray diffraction, and helical-pitch measurements. The first two compounds exhibit the helical or twisted SA^{*} phase (TGBA). The existence of the new TGBC phase, predicted by Renn and Lubensky, was found in the last two materials and proved by several studies: microscopic observation, DSC, the contact method, binary mixtures, X-ray diffraction, and helical-pitch measurements. The phase diagram constructed between these three materials is similar to that predicted by Renn. The electrooptical properties of the ferroelectric SC^{*} phase have also been studied.
A new series of oxides derived from the α-U3O8 structure: MIIUMo4O16
NASA Astrophysics Data System (ADS)
Lee, M. R.; Jaulmes, S.
1987-04-01
A new family of isotypical oxides MIIUMo4O16 (MII = Mg, Mn, Cd, Ca, Hg, Sr, Pb) is identified. The structure of the compound with Ca was determined by X-ray diffraction. It is triclinic, space group P1̄, with a = 13.239(5) Å, b = 6.651(2) Å, c = 8.236(3) Å, α = 90°00(4), β = 90°38(4), γ = 120°16(3), Z = 2. The final R index and the weighted Rw index are 0.049 and 0.040, respectively. The cell is related to the orthorhombic cell of α-U3O8 by a = 2a0, b = -(a0 + b0)/2, c = 2c0. The structure, reminiscent of that of α-U3O8, consists of chains of [Ca,U]O7 pentagonal bipyramids and MoO6 octahedra running parallel to the c axis. The U-O distances along the U-O-Ca-O chains are shortened to 1.77(1) Å. The uranyl ion was characterized by its IR spectrum.
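The reported cell relation can be checked numerically. The check below is ours, not from the paper: writing a = 2a0, b = -(a0 + b0)/2, c = 2c0 as a matrix acting on the α-U3O8 basis vectors, the absolute determinant gives the volume of the new cell in units of the parent subcell.

```python
# Consistency check of the cell transformation: |det M| is the ratio of
# the triclinic MIIUMo4O16 cell volume to the α-U3O8 subcell volume.
from fractions import Fraction as F

M = [[F(2),     F(0),     F(0)],   # a = 2*a0
     [F(-1, 2), F(-1, 2), F(0)],   # b = -(a0 + b0)/2
     [F(0),     F(0),     F(2)]]   # c = 2*c0

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

volume_ratio = abs(det3(M))   # new cell volume / parent subcell volume
```

The ratio comes out to 2, consistent with the reported Z = 2.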
Speakable and Unspeakable in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Bell, J. S.; Aspect, Introduction by Alain
2004-06-01
List of papers on quantum philosophy by J. S. Bell; Preface; Acknowledgements; Introduction by Alain Aspect; 1. On the problem of hidden variables in quantum mechanics; 2. On the Einstein-Rosen-Podolsky paradox; 3. The moral aspects of quantum mechanics; 4. Introduction to the hidden-variable question; 5. Subject and object; 6. On wave packet reduction in the Coleman-Hepp model; 7. The theory of local beables; 8. Locality in quantum mechanics: reply to critics; 9. How to teach special relativity; 10. Einstein-Podolsky-Rosen experiments; 11. The measurement theory of Everett and de Broglie's pilot wave; 12. Free variables and local causality; 13. Atomic-cascade photons and quantum-mechanical nonlocality; 14. de Broglie-Bohm delayed choice double-slit experiments and density matrix; 15. Quantum mechanics for cosmologists; 16. Bertlmann's socks and the nature of reality; 17. On the impossible pilot wave; 18. Speakable and unspeakable in quantum mechanics; 19. Beables for quantum field theory; 20. Six possible worlds of quantum mechanics; 21. EPR correlations and EPR distributions; 22. Are there quantum jumps?; 23. Against 'measurement'; 24. La Nouvelle cuisine.
What do younger siblings teach us about the early signs of autism?
Rogers, Sally J.
2010-01-01
The aim of this review is to synthesize the answers that can currently be given to the question of which early behavioural characteristics predict the development of autism. The article focuses on five points: the presence of Autism Spectrum Disorders (ASD) in groups of younger siblings of children already diagnosed; the patterns and characteristics of motor development; the patterns and characteristics of social and emotional development; the patterns and characteristics of intentional verbal and non-verbal communication; and the patterns marking the onset of behaviours pathognomonic of ASD. The discussion addresses the unexpected aspects of the results and the new research directions they may open. PMID:20890377
Ontological Standardization for Historical Map Collections: Studying the Greek Borderlines of 1881
NASA Astrophysics Data System (ADS)
Gkadolou, E.; Tomai, E.; Stefanakis, E.; Kritikos, G.
2012-07-01
Historical maps deliver valuable historical information applicable in several domains, while documenting the spatiotemporal evolution of the geographical entities depicted in them. In order to use historical cartographic information effectively, semantic documentation of the maps becomes a necessity for resolving semantic ambiguities and structuring the relationship between historical and current geographical space. This paper examines cartographic ontologies as a proposed methodology and presents the first outcomes of the methodology applied to the historical map series «Carte de la nouvelle frontière Turco-Grecque», which set the borderline between Greece and the Ottoman Empire in 1881. The map entities were modelled and compared to current ones so as to record the changes in their spatial and thematic attributes, and an ontology was developed in the Protégé OWL Editor 3.4.4 for the attributes that thoroughly define a historical map and the digitised spatial entities. Special focus was given to the Greek borderline and the changes it caused to other geographic entities.
Study of the weakening of the mechanical behaviour of permafrost due to climate warming
NASA Astrophysics Data System (ADS)
Buteau, Sylvie
The climate warming predicted for the coming decades will have major impacts on permafrost that are as yet poorly documented. The aim of the present study is to evaluate these impacts on the mechanical properties of permafrost and on its long-term stability. A new constant-strain-rate cone penetration test technique was developed to characterize permafrost in place. These geotechnical tests, together with measurements of various physical properties, were carried out on a permafrost mound during the spring of 2000. The development and use of a 1D geothermal model accounting for the temperature dependence of the mechanical behaviour showed that expanses of warm permafrost would become unstable following a warming of about 5°C over one hundred years. Indeed, the mechanical strength of the permafrost would then decrease rapidly to 11.6 MPa, corresponding to a relative loss of 98% of the strength compared with a no-warming scenario.
NASA Astrophysics Data System (ADS)
Danouj, Boujemaa
An important issue affecting the sustainability of power transformers is the systematic and progressive deterioration of the insulation system by the action of partial discharges. Ideally, online, non-destructive techniques should be used for the detection and diagnosis of insulation-system failures, in order to determine whether preventive maintenance action is required. Huge material losses can thus be avoided, while improving reliability and system availability. Building on a new generation of piezoelectric sensors (High Temperature Ultrasonic Transducers, HTUTs), recently developed by the Industrial Materials Institute (IMI) in Boucherville (Qc, Canada), which offer very attractive features (broadband frequency response; flexible, miniature, and economical design), we propose in this thesis an investigation of the applicability of this technology to the problem of partial discharges. This work presents an analysis of the metrological performance of these sensors and demonstrates empirically the consistency of their measurements. It outlines validation results from a comparative study against the measurements of a standard detection circuit. In addition, it presents the potential of these sensors to locate the position of a partial discharge source by acoustic emission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapon, Emilien
The landscape of particle physics underwent major changes between the beginning of this thesis, in September 2010, and its end in June 2013. The year 2012 can notably be described as a key date in the history of particle physics. In 2012, a new particle was discovered at the LHC [1, 2], which most of the community now agrees is very probably the Higgs boson. This event came shortly after a sort of passing of the torch between the Tevatron, shut down on 30 September 2011, and the LHC, whose very first collisions took place on 23 November 2009.
NASA Astrophysics Data System (ADS)
Marnas, F.; Chazette, P.; Flamant, C.; Royer, P.; Sodemman, H.; Derimian, Y.
2012-04-01
In the framework of the FENNEC experiment (6 to 30 June 2011), an effort was dedicated to characterizing Saharan dust plumes transported towards southern Europe. Hence, a multi-instrumented field campaign was conducted. A ground-based nitrogen Raman lidar (GBNRL) was deployed in southern Spain close to Marbella, simultaneously with an airborne lidar (AL) performing measurements over both the tropical Atlantic Ocean and western Africa (from 2 to 23 June). The GBNRL was equipped with co-polar and cross-polar channels to perform continuous measurements of the dust aerosols trapped in the troposphere. It was developed by LSCE with the support of the LEOSPHERE Company. The French FALCON 20 research aircraft operated by SAFIRE (Service des Avions Français Instrumentés pour la Recherche en Environnement) carried the AL Leandre Nouvelle Generation (LNG) as well as a dropsonde releasing system and radiometers. A major, week-long dust event was sampled over Spain from 25 June to 1 July, with high optical depth (>0.5 at 355 nm) and particulate depolarization ratios of 15 to 25%. Back-trajectory studies suggest that the dust particles observed came from dust uplifts that occurred in southern Morocco and northern Mauritania. The event had also been documented three days earlier by the AL flying over Mauritania. AERONET sunphotometer measurements of aerosol properties along the dust plume transport path appear to be coherent with both the lidar and the back-trajectory analyses. These analyses point to a likely major contribution from Western Sahara sources to southern Europe. Such a contribution may impact visibility and thus air traffic, modify tropospheric chemistry, and add nutrients to both the Mediterranean Sea and continental surfaces. It can also affect the health of European populations. We will present the strategy of the experiment and the case study built from measurements performed at the end of June.
Hannigan, John
2017-02-01
Despite covering around 70 percent of the Earth's surface, the ocean has long been ignored by sociology or treated as merely an extension of land-based systems. Increasingly, however, oceans are assuming a higher profile, emerging as a new resource frontier, a medium for geopolitical rivalry and conflict, and a unique and threatened ecological hot spot. In this article, I propose a new sociological specialty area, the "sociology of oceans", to be situated at the interface between environmental sociology and traditional maritime studies. After reviewing existing sociological research on maritime topics and the consideration (or lack of consideration) of the sea by classic sociological theorists, I briefly discuss several contemporary sociological approaches to the ocean that have attracted some notice. In the final section of the paper, I make the case for a distinct sociology of oceans and briefly sketch what this might look like. One possible trajectory for creating a shared vision or common paradigm, I argue, is to draw on Deleuze and Guattari's dialectical theory of the smooth and the striated.
© 2017 Canadian Sociological Association/La Société canadienne de sociologie.
NASA Astrophysics Data System (ADS)
Blondin, Andre
In a constructivist context, an individual's prior knowledge is essential to the construction of new knowledge. Whatever its source (some of this knowledge was developed in class, some through the individual's personal interaction with his or her physical and social environment), this knowledge, once acquired, constitutes the raw material for the development of that individual's new conceptions. Generally, this influence is considered positive. However, in a school setting where the learning of certain conceptions embedded in a curriculum and endorsed by an entire community is compulsory, some prior knowledge can hinder the construction of the conceptions required by the community. The literature abounds with such examples. Moreover, certain prior knowledge, in itself entirely consistent with the accepted body of knowledge, can also hinder the construction of a conception required by the community when it is applied in an irrelevant way. Here the literature gives few examples of this type, but we provide some in the theoretical framework, and one of them serves as the basis for our discussion. Indeed, a large proportion of students enrolled in a fourth-year secondary physical science course, in response to a problem already solved during the year and given again in a summative examination, "Why does the Moon always show us the same face?", attribute the cause of this phenomenon mainly to the rotation of the Earth on its axis. As the person responsible for the teaching of this curriculum, several questions came to our mind, among others: how, in a constructivist context, is it possible to reduce, for a student, the impact of this prior knowledge in developing the solution, and thus prevent the construction of a misconception?
We tested our hypotheses with the following cohort of students, for whom the same learning conditions were repeated. We used Campbell and Stanley's "posttest only" research design. In May, after the time scheduled in the curriculum planning for giving the problem to the students, we suggested two different ways of reviewing the solution to this problem. The students in the first experimental group reviewed it without activation of the anticipated prior knowledge of the Earth's rotation. The students in the second experimental group were confronted, through questions and a simulation, with the fact that the Earth's rotation is not relevant knowledge for solving the problem. The control and experimental groups were chosen at random from the pool of secondary schools of the school board. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Mazière, D.
2002-04-01
Following the two previous colloquia, "Matériaux pour les machines thermiques" and "Matériaux pour le nucléaire", the 2001 INSTN colloquium, "Matériaux pour les énergies propres", focused on the materials problems still to be solved in this industrial sector. The metallurgy colloquium is traditionally organized by instructors of the DEA Métallurgie et Matériaux together with a scientific committee chosen each year according to the theme. The students of this DEA, which is jointly accredited by Paris XI, Paris VI, the École des Mines de Paris, the École Centrale de Paris and the INSTN, are invited to take part in the colloquium and in the scientific discussions held there. Invited lectures of a pedagogical character introduce the themes, which are then developed in more innovative presentations. The event aims to bring scientists from diverse academic and industrial backgrounds together with one another and with the students and doctoral candidates. This 44th edition, whose proceedings are published here, reviewed the materials problems encountered in the production, storage and conversion of so-called clean energies, including the steady progress of the automotive industry. The colloquium, held from 26 to 28 June 2001, gathered 63 participants from universities and grandes écoles (18), the CEA (17), the CNRS (10), and industry or associated research centres. The materials problems of this sector were examined in the six sessions below: exhaust-gas depollution; catalytic combustion in thermal power production; new batteries; fuel cells; hydrogen production and storage; solar energy production and storage. Twenty-eight communications, twenty-two of them oral, illustrated current developments; seventeen of them are developed in this volume.
Issue 44 of Clefs CEA, "Nouvelles Technologies de l'énergie", may be read with profit as a complement to the present volume.
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (for example, by using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail in one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use purposive sampling for the study.
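As a minimal illustration of the two probability designs mentioned above (the sampling frame, stratum labels, and sample sizes here are hypothetical, not from the article), simple random and stratified random sampling can be sketched with Python's standard library:

```python
import random

random.seed(1)

# Hypothetical sampling frame: 100 patients, each tagged with a clinic stratum.
frame = [{"id": i, "clinic": "A" if i < 60 else "B"} for i in range(100)]

# 1) Simple random sample: every unit has an equal chance of selection.
srs = random.sample(frame, k=10)

# 2) Stratified random sample: draw within each stratum, proportionally.
def stratified_sample(frame, key, k):
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    out = []
    for units in strata.values():
        # proportional allocation, at least one unit per stratum
        n = max(1, round(k * len(units) / len(frame)))
        out.extend(random.sample(units, n))
    return out

strat = stratified_sample(frame, "clinic", 10)
print(len(srs), len(strat))
```

Stratification guarantees that both clinics appear in the sample, which a simple random draw does not.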
Symétries et nomenclature des baryons: Proposition d'une nouvelle nomenclature
NASA Astrophysics Data System (ADS)
Landry, Gaëtan
Baryons, such as protons and neutrons, are matter particles made of three quarks. Their current nomenclature is based on the concept of isospin, introduced by Werner Heisenberg in 1932 to explain the similarity between the masses of protons and neutrons, as well as the similarity of their behaviour under the strong interaction. It is a refinement of a nomenclature designed in 1964, before the acceptance of the quark model, for light baryons. A historical review of baryon physics before the advent of the quark model is given to understand the motivations behind the light baryon nomenclature. Then, an overview of the quark model is given to understand the extensions done to this nomenclature in 1986, as well as to understand the physics of baryons and of properties such as isospin and flavour quantum numbers. Since baryon properties are in general explained by the quark model, a nomenclature based on isospin leads to several issues of physics and of clarity. To resolve these issues, the concepts of isospin and mass groups are generalized to all flavours of quarks, the Gell-Mann--Okubo formalism is extended to generalized mass groups, and a baryon nomenclature based on the quark model, reflecting modern knowledge, is proposed.
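The Gell-Mann--Okubo formalism mentioned above rests, for the spin-1/2 baryon octet, on the standard mass relation (a textbook result, quoted here for orientation rather than taken from the thesis itself):

```latex
% Gell-Mann–Okubo mass relation for the spin-1/2 baryon octet
\frac{m_N + m_\Xi}{2} = \frac{3\,m_\Lambda + m_\Sigma}{4}
% Empirically: (939 + 1318)/2 ≈ 1129 MeV vs (3·1116 + 1193)/4 ≈ 1135 MeV
```

The thesis's generalization extends this kind of relation to mass groups built from all quark flavours.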
Rotation of a rigid satellite with a fluid component: a new light onto Titan's obliquity
NASA Astrophysics Data System (ADS)
Boué, Gwenaël; Rambaux, Nicolas; Richard, Andy
2017-12-01
We revisit the rotation dynamics of a rigid satellite with either a liquid core or a global subsurface ocean. In both problems, the flow of the fluid component is assumed inviscid. The study of a hollow satellite with a liquid core is based on the Poincaré-Hough model, which provides exact equations of motion. We introduce an approximation valid when the ellipticity of the cavity is low; this simplification allows both types of satellite to be modelled in the same manner. The analysis of their rotation is done in a non-canonical Hamiltonian formalism closely related to Poincaré's "forme nouvelle des équations de la mécanique". In the case of a satellite with a global ocean, we obtain a seven-degree-of-freedom system: six degrees of freedom account for the motion of the two rigid components, and the last is associated with the fluid layer. We apply our model to Titan, for which the origin of the obliquity is still a debated question. We show that the observed value is compatible with Titan slightly departing from hydrostatic equilibrium and being in a Cassini equilibrium state.
Cancer métaplasique du sein: à propos d'un cas
Babahabib, Moulay Abdellah; Chennana, Adil; Hachi, Aymen; Kouach, Jaoud; Moussaoui, Driss; Dhayni, Mohammed
2014-01-01
Metaplastic carcinomas of the breast are rare tumours. They form a heterogeneous group of tumours defined by the World Health Organization as infiltrating ductal carcinomas containing areas of metaplastic change (squamous, spindle-cell, chondroid, osseous, or mixed), ranging from a few microscopic foci to complete replacement of the glandular tissue. The clinical and radiological features are not specific. Treatment combines surgery, radiotherapy and chemotherapy; hormone therapy has no role. The prognosis is poor. Histopathology combined with immunohistochemistry allows a reliable diagnosis. Given that therapeutic management is limited, a new molecular approach could alter the weak and poorly defined contribution of classical systemic treatments. Patients with metaplastic breast carcinoma could benefit from targeted therapies, which remains to be confirmed by clinical trials. PMID:25870723
La problématique du coût des nouvelles thérapeutiques en oncologie: qu'en-est-il du Maroc?
Brahmi, Sami Aziz; Zahra, Ziani Fatima; Seddik, Youssef; Afqir, Said
2016-01-01
Cancer is a major public health problem in Africa. The progress achieved in cancer treatment over the past ten years is undeniable. The emergence of targeted therapies in oncology has changed the natural history of some cancers known for their poor prognosis. Despite their efficacy, these treatments pose a major cost problem that makes them inaccessible to most patients in developing countries. In Morocco, cancer is recognized as a long-term condition, and patients therefore benefit from full medical coverage. The involvement of civil society has also improved care and broadened access to innovative drugs for the poorest patients. PMID:27642392
Cost and surface optimization of a remote photovoltaic system for two kinds of panels' technologies
NASA Astrophysics Data System (ADS)
Avril, S.; Arnaud, G.; Colin, H.; Montignac, F.; Mansilla, C.; Vinard, M.
2011-10-01
Stand-alone photovoltaic (PV) systems are one of the promising electrification solutions for covering the demand of remote consumers, especially when coupled with a storage solution that both increases the productivity of the power plant and reduces the area dedicated to energy production. This short communication presents a multi-objective design of a remote PV system coupled to battery and hydrogen storage systems, simultaneously minimizing the total levelized cost and the occupied area while fulfilling a constraint of consumer satisfaction. For this task, a multi-objective code based on particle swarm optimization has been used to find the best combination of the different energy devices. Both the short term and the mid term, based on forecast assumptions, have been investigated. An application to the site of La Nouvelle on the French overseas island of La Réunion is proposed. It shows a strong cost advantage for Heterojunction with Intrinsic Thin layer (HIT) cells over crystalline silicon (c-Si) cells in the short term. However, the discrimination between these two PV cell technologies is less obvious for the mid term: a strong constraint on the occupied area will favour HIT, whereas a strong constraint on the cost will favour c-Si.
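The study's design code is not public. As a hedged sketch of the underlying technique (particle swarm optimization), the example below uses two toy objective functions standing in for levelized cost and occupied area, scalarized with a fixed weight; a full multi-objective PSO would instead maintain a Pareto archive. All function names and constants here are illustrative assumptions:

```python
import random

random.seed(0)

# Toy stand-ins for the two objectives: levelized cost and occupied area,
# both functions of a 2-D design vector x = (pv_size, storage_size).
def cost(x): return (x[0] - 3.0) ** 2 + 0.5 * x[1]
def area(x): return x[0] + (x[1] - 2.0) ** 2

def fitness(x, w=0.5):
    # Weighted-sum scalarization of the two objectives.
    return w * cost(x) + (1 - w) * area(x)

DIM, N, ITERS = 2, 20, 200
pos = [[random.uniform(0, 6) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                 # personal best positions
gbest = min(pos, key=fitness)[:]            # global best position

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # inertia + pull toward personal best + pull toward global best
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pos[i]) < fitness(gbest):
                gbest = pos[i][:]

print(gbest, fitness(gbest))
```

On this convex toy problem the swarm converges near the weighted optimum; the real design problem adds the consumer-satisfaction constraint and discrete device choices.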
Optimisation thermique de moules d'injection construits par des processus génératifs
NASA Astrophysics Data System (ADS)
Boillat, E.; Glardon, R.; Paraschivescu, D.
2002-12-01
One of the most remarkable possibilities of generative manufacturing processes, such as selective laser sintering, is their ability to build injection moulds directly equipped with conformal cooling channels perfectly fitted to the cavities. For the injection-moulding industry to take full advantage of this new opportunity, mould makers need simulation software able to evaluate the productivity and quality gains achievable with better-adapted cooling systems. Such software should also be able, where appropriate, to design the optimal cooling system in situations where the mould cavity is complex. Given the lack of available tools in this area, the aim of this article is to propose a simple model of injection moulds. The model makes it possible to compare different cooling strategies and can be coupled with an optimization algorithm.
Repenser ensemble le concept d'autonomie alimentaire.
Bélisle, Micheline; Labarthe, Jenni; Moreau, Cynthia; Landry, Elise; Adam, Gracia; Bourque Bouliane, Mijanou; Dupéré, Sophie
2017-03-01
It was at the heart of the participatory action-research project « Vers une autonomie alimentaire pour tou-te-s : agir et vivre ensemble le changement » (VAATAVEC) that a new and evolving definition of food autonomy was elaborated. From this project, which brought together people living in poverty, researchers and practitioners, emerged a conceptualization resulting from both a collective reflection method and a collective analysis process, inspired by grounded theory and by the conceptualizing analysis of Paillé and Mucchielli (2003). Drawing on the expertise of people living in poverty, experts in the lived experience of food insecurity, helped anchor this definition in the structural causes of food insecurity, in a perspective of developing the collective agency of everyone concerned. The result is a definition of food autonomy whose contribution is above all methodological and theoretical.
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small subset of high-degree vertices can carry most of the structural information of a complex network. The two proposed methods sample high-degree nodes efficiently, so they remain useful even at low sampling rates, which makes them cost-efficient. The first new method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new methods, we compare them with existing sampling methods in three commonly used simulated networks (scale-free, random, and small-world networks) and in two real networks. The experimental results show that the two proposed methods recover the true network structure characteristics, as reflected by the clustering coefficient, Bonacich centrality and average path length, much better than existing methods, especially when the sampling rate is low.
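The paper's exact algorithms are not reproduced here. As a hedged sketch of the degree-biased idea behind an improved snowball method, the example below expands the highest-degree frontier node first, so hubs enter the sample early; the graph and function name are illustrative assumptions, not the paper's:

```python
# Toy graph as adjacency sets (hypothetical; the paper's networks differ).
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2),
         (2, 3), (4, 5), (5, 6), (6, 7), (2, 7)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree_biased_snowball(adj, seed, n):
    """Snowball sampling that always expands the highest-degree frontier node,
    so high-degree vertices are reached quickly even at low sampling rates."""
    sampled, frontier = set(), {seed}
    while frontier and len(sampled) < n:
        node = max(frontier, key=lambda u: len(adj[u]))  # prefer hubs
        frontier.discard(node)
        sampled.add(node)
        frontier |= adj[node] - sampled
    return sampled

print(degree_biased_snowball(adj, seed=5, n=4))
```

Plain snowball sampling expands neighbours in arbitrary order; biasing the expansion toward high-degree nodes is one way to realize the paper's premise that hubs hold most of the structural information.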
NASA Astrophysics Data System (ADS)
Ayoub, Simon
The electricity distribution and transmission grid is being modernized in several countries, including Canada. The new generation of this grid, known as the smart grid, enables among other things the automation of production, of distribution, and of load management on the customer side. At the same time, smart home appliances equipped with a communication interface for smart grid applications are beginning to appear on the market. These smart appliances could form a virtual community to optimize their consumption in a distributed way. Distributed management of these smart loads requires communication among a large number of electrical devices, a significant challenge if the cost of infrastructure and maintenance is to be kept down. In this thesis, two distinct systems were designed: a peer-to-peer communication system, called Ring-Tree, enabling communication among a large number of nodes (up to on the order of a million), such as communicating electrical appliances; and a distributed load-management technique for the electrical grid. The Ring-Tree communication system includes a new network topology, never previously defined or exploited, together with algorithms for creating, operating and maintaining the network. It is simple enough to be implemented on controllers attached to devices such as water heaters, storage heaters, electric-vehicle charging stations, and so on. It uses no centralized server (or almost none: one is contacted only when a node wants to join the network). It offers a distributed solution that can be deployed with no infrastructure other than the controllers on the target devices.
Finally, a response time of a few seconds to reach the whole network can be obtained, which is sufficient for the needs of the target applications. The communication protocols rely on a transport protocol that can be one of those used on the Internet, such as TCP or UDP. To validate the distributed control technique and the Ring-Tree communication system, a simulator was developed, and a water-heater model was integrated into it as an example load. Simulating a community of smart water heaters showed that the load-management technique, combined with thermal energy storage, yields varied consumption profiles, including a uniform consumption profile corresponding to a load factor of 100%, without affecting user comfort. Keywords: Distributed Algorithm, Demand Response, Electric Load Management, M2M (Machine-to-Machine), P2P (Peer-to-Peer), Smart Electrical Grid, Ring-Tree, Smart Grid
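The Ring-Tree protocol itself is specific to the thesis, so it is not reproduced here. As a back-of-the-envelope check on the claim that a few seconds suffice to reach on the order of a million nodes, a tree-shaped overlay in which each node forwards a message to a fixed number of peers covers the network in logarithmically many hops (the fan-out and per-hop latency below are illustrative assumptions, not values from the thesis):

```python
def hops_to_reach(n_nodes, fanout):
    """Hops needed for a forwarded message to cover at least n_nodes when
    each node relays to `fanout` peers (conservative k^h >= n bound)."""
    hops, reach = 0, 1
    while reach < n_nodes:
        reach *= fanout
        hops += 1
    return hops

# With an assumed fan-out of 10 and 50 ms per hop:
hops = hops_to_reach(1_000_000, 10)
print(hops, "hops;", hops * 0.05, "s at 50 ms per hop")
```

Six hops at tens of milliseconds each lands well under a second, which makes a few-second bound for a million-node overlay plausible even with slower links.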
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 26 2011-07-01 2011-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. The new method meets the three features of an effective sampling method and produces a beard with two symmetrical ends, which can be scanned from the holding line to obtain two full fibrograms per sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that it is an effective sampling method.
Log sampling methods and software for stand and landscape analyses.
Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
Archfield, Stacey A.; LeBlanc, Denis R.
2005-01-01
To evaluate diffusion sampling as an alternative method to monitor volatile organic compound (VOC) concentrations in ground water, concentrations in samples collected by traditional pumped-sampling methods were compared to concentrations in samples collected by diffusion-sampling methods for 89 monitoring wells at or near the Massachusetts Military Reservation, Cape Cod. Samples were analyzed for 36 VOCs. There was no substantial difference between the utility of diffusion and pumped samples to detect the presence or absence of a VOC. In wells where VOCs were detected, diffusion-sample concentrations of tetrachloroethene (PCE) and trichloroethene (TCE) were significantly lower than pumped-sample concentrations. Because PCE and TCE concentrations detected in the wells dominated the calculation of many of the total VOC concentrations, when VOC concentrations were summed and compared by sampling method, visual inspection also showed a downward concentration bias in the diffusion-sample concentration. The degree to which pumped- and diffusion-sample concentrations agreed was not a result of variability inherent within the sampling methods or the diffusion process itself. A comparison of the degree of agreement in the results from the two methods to 13 quantifiable characteristics external to the sampling methods offered only well-screen length as being related to the degree of agreement between the methods; however, there is also evidence to indicate that the flushing rate of water through the well screen affected the agreement between the sampling methods. Despite poor agreement between the concentrations obtained by the two methods at some wells, the degree to which the concentrations agree at a given well is repeatable. A one-time, well-by-well comparison between diffusion- and pumped-sampling methods could determine which wells are good candidates for the use of diffusion samplers.
For wells with good method agreement, the diffusion-sampling method is a time-saving and cost-effective alternative to pumped-sampling methods in a long-term monitoring program, such as at the Massachusetts Military Reservation.
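One common way to quantify per-well agreement between two sampling methods is the relative percent difference (RPD) of paired results; the report's own agreement statistic may differ, and the paired concentrations below are hypothetical, not the study's data:

```python
def rpd(a, b):
    """Relative percent difference between a pair of measurements:
    the absolute difference as a percentage of the pair's mean."""
    return 100 * abs(a - b) / ((a + b) / 2)

# Hypothetical paired PCE results in ug/L: (pumped, diffusion)
pairs = [(10.0, 9.0), (4.0, 2.0), (0.5, 0.45)]
print([round(rpd(a, b)) for a, b in pairs])  # → [11, 67, 11]
```

A screening rule might flag wells whose RPD exceeds some threshold (say 30%) as poor candidates for diffusion samplers; the threshold here is illustrative.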
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
Modified electrokinetic sample injection method in chromatography and electrophoresis analysis
Davidson, J. Courtney; Balch, Joseph W.
2001-01-01
A sample injection method for horizontal configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This method for loading when taken in conjunction with horizontal microchannels allows much reduced sample volumes and a means of sample stacking to greatly reduce the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in preparation of the input of the separation channel, the physical sample introduction, and subsequent removal of excess material. By this method, sample volumes of 100 nanoliter to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m quadrat was selected as an experimental field in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes for the simple random, systematic and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
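The paper derives its minimum sample sizes from its own survey data. For orientation only, the standard survey-statistics route to a minimum sample size is Cochran's formula followed by a finite-population correction; the confidence level, proportion, margin of error, and population size below are conventional illustrative values, not the paper's:

```python
import math

def min_sample_size(z=1.96, p=0.5, e=0.05):
    """Cochran's formula for an infinite population:
    n0 = z^2 * p * (1 - p) / e^2, rounded up."""
    n0 = z ** 2 * p * (1 - p) / e ** 2
    return math.ceil(n0)

def finite_correction(n0, N):
    """Adjust n0 for a finite population of size N."""
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n0 = min_sample_size()               # 95% confidence, p = 0.5, ±5% margin
print(n0, finite_correction(n0, 2500))
```

Stratification (as in the paper's altitude-based design) typically reduces the required sample size further because within-stratum variance is smaller than overall variance.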
Methods for purifying carbon materials
Dailly, Anne [Pasadena, CA; Ahn, Channing [Pasadena, CA; Yazami, Rachid [Los Angeles, CA; Fultz, Brent T [Pasadena, CA
2009-05-26
Methods of purifying samples are provided that are capable of removing carbonaceous and noncarbonaceous impurities from a sample containing a carbon material having a selected structure. Purification methods are provided for removing residual metal catalyst particles enclosed in multilayer carbonaceous impurities in samples generated by catalytic synthesis methods. Purification methods are provided wherein carbonaceous impurities in a sample are at least partially exfoliated, thereby facilitating subsequent removal of carbonaceous and noncarbonaceous impurities from the sample. Methods of purifying carbon nanotube-containing samples are provided wherein an intercalant is added to the sample and subsequently reacted with an exfoliation initiator to achieve exfoliation of carbonaceous impurities.
Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.
Joost, P Houston; Riley, David G
2004-08-01
Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method of determining abundance of thrips on tomato foliage early in the growing season. Three relative sampling techniques, including a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared for accuracy to an absolute method and to themselves for precision and efficiency of sampling thrips. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) to the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV) or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
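The relative variation (RV) statistic used above is conventionally the standard error expressed as a percentage of the mean, as in the insect-sampling literature; the computation below uses that definition with toy counts, not the study's data:

```python
import math
import statistics

def relative_variation(counts):
    """RV = 100 * SE / mean, where SE = sample standard deviation / sqrt(n).
    Lower RV means a more precise (less variable) sampling method."""
    n = len(counts)
    se = statistics.stdev(counts) / math.sqrt(n)
    return 100 * se / statistics.mean(counts)

samples = [12, 15, 9, 14, 11, 13]      # hypothetical thrips counts per sample
print(round(relative_variation(samples), 1))  # → 7.2
```

An RV below 25, the threshold the study applies to the beat cup method, is a common rule of thumb for adequate precision in pest sampling programs.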
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...
Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna
Gunzburger, M.S.
2007-01-01
To design effective and efficient research and monitoring programs, researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to employ seven aquatic sampling methods simultaneously in 10 wetlands to compare amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected were related to the relative lengths of the larval and breeding periods and to tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is also essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides, by simulation, the sample sizes required to achieve 80% power under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effect, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by their relatively smaller required sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation); a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are also provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method outperform Sobel's method; the product method is recommended in practice because it requires less computation time than bootstrapping. An R package has been developed for sample size determination by the product method in longitudinal mediation study design.
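As a rough sketch of this kind of power analysis, the block below simulates the power of Sobel's test in a single-level mediation model; this is a deliberate simplification of the paper's multilevel, longitudinal setting, and the effect sizes and sample sizes are arbitrary:

```python
import math, random

random.seed(1)

def ols_slope(x, y):
    """Slope and its standard error from a simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - my - beta * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return beta, math.sqrt(s2 / sxx)

def simulate_power(n, a=0.3, b=0.3, reps=300, alpha_z=1.96):
    """Monte Carlo power of Sobel's test for a single-level mediation model
    X -> M -> Y (a simplification of the multilevel longitudinal case)."""
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        m = [a * xi + random.gauss(0, 1) for xi in x]
        y = [b * mi + random.gauss(0, 1) for mi in m]
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)
        sobel_se = math.sqrt(b_hat ** 2 * se_a ** 2 + a_hat ** 2 * se_b ** 2)
        hits += abs(a_hat * b_hat) / sobel_se > alpha_z
    return hits / reps

# Power rises with n; planning means searching for the smallest adequate n.
print(simulate_power(50) < simulate_power(400))
```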
Monitoring benthic algal communities: A comparison of targeted and coefficient sampling methods
Edwards, Matthew S.; Tinker, M. Tim
2009-01-01
Choosing an appropriate sample unit is a fundamental decision in the design of ecological studies. While numerous methods have been developed to estimate organism abundance, they differ in cost, accuracy, and precision. Using both field data and computer simulation modeling, we evaluated the costs and benefits associated with two methods commonly used to sample benthic organisms in temperate kelp forests. One of these methods, the Targeted Sampling method, relies on different sample units, each "targeted" for a specific species or group of species, while the other method relies on coefficients that represent ranges of bottom cover obtained from visual estimates within standardized sample units. Both the field data and the computer simulations suggest that the two methods yield remarkably similar estimates of organism abundance and among-site variability, although the Coefficient method slightly underestimates variability among sample units when abundances are low. In contrast, the two methods differ considerably in the effort needed to sample these communities; the Targeted Sampling requires more time and twice the personnel to complete. We conclude that the Coefficient Sampling method may be better for environmental monitoring programs where changes in mean abundance are of central concern and resources are limited, but that the Targeted Sampling method may be better for ecological studies where quantitative relationships among species and small-scale variability in abundance are of central concern.
NASA Astrophysics Data System (ADS)
Woirgard, J.; Salmon, E.; Gaboriaud, R. J.; Rabier, J.
1994-03-01
A very sensitive apparatus using the vibrating reed technique in a magnetic field is described. This new technique, based on internal friction measurement, has been developed and applied to the study of vortex pinning in high-T_c type II superconductors. The vibrating reed is simply used as a sample holder for the superconductor, which can be oriented thin films, bulk samples, or powders. The salient feature of this experimental set-up is the excitation mode of the reed, for which the imposed vibration frequency can be freely chosen in the range 10^{-4}-10 Hz. Furthermore, the measurement sensitivity improves on the performance obtained to date by similar apparatus such as forced torsion pendulums. Damping values corresponding to phase lags between 10^{-5} and 10^{-4} radian can be readily obtained for vibration frequencies in the range 10^{-1}-10 Hz. Preliminary tests on an epitaxial thin film of YBaCuO, 1000 Å thick, and on textured bulk YBaCuO samples revealed a damping peak of large amplitude that might be due to the so-called fusion of the vortex network. The apparatus will next be used to study the pinning of flux lines on crystal lattice defects, whether natural or created artificially by ion implantation in the thin films or by plastic deformation in the bulk samples.
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
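Designs that select logs with unequal probability typically rely on Horvitz-Thompson estimation for design-unbiased totals; the sketch below shows the generic estimator with invented log volumes, not the perpendicular distance design itself:

```python
import random

random.seed(7)

# Hypothetical downed logs with known volumes (m^3).
volumes = [0.2, 0.5, 1.1, 0.8, 2.0, 0.3, 1.6, 0.9]
true_total = sum(volumes)

# Size-biased selection: each log's inclusion probability is proportional
# to its volume, as in probability-proportional-to-size CWD designs.
pi = [v / max(volumes) for v in volumes]

def horvitz_thompson_draw():
    """One Poisson sample: each log enters independently with probability
    pi[i]; the Horvitz-Thompson estimator divides each selected volume by
    its inclusion probability, which makes the estimate design-unbiased."""
    return sum(v / p for v, p in zip(volumes, pi) if random.random() < p)

estimates = [horvitz_thompson_draw() for _ in range(20000)]
mean_est = sum(estimates) / len(estimates)
print(abs(mean_est - true_total) < 0.1)  # averaging recovers the true total
```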
Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W
2015-06-01
Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. 
Comparability of river suspended-sediment sampling and laboratory analysis methods
Groten, Joel T.; Johnson, Gregory D.
2018-03-06
Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both the field sampling and the laboratory analysis methods caused grab and TSS results to be biased substantially low; the difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods: grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The difference was smaller between grab samples analyzed for TSS and the fine-particle fraction of SSC. Even though differences are present, the strong correlation between SSC and TSS concentrations provides the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
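The site-specific SSC-TSS relation the authors propose can be sketched as an ordinary least-squares fit; the concentration values below are invented for illustration:

```python
# Least-squares fit of a site-specific SSC-to-TSS relation, the kind of
# adjustment the abstract suggests; all concentration values are made up.
tss = [20.0, 35.0, 50.0, 80.0, 120.0, 150.0]   # mg/L, grab + TSS method
ssc = [31.0, 50.0, 74.0, 118.0, 170.0, 214.0]  # mg/L, EWDI + SSC method

n = len(tss)
mx, my = sum(tss) / n, sum(ssc) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(tss, ssc))
         / sum((x - mx) ** 2 for x in tss))
intercept = my - slope * mx

# Because grab/TSS misses much of the sand load, SSC runs higher than TSS,
# so the fitted slope exceeds 1.
print(slope > 1.0)
predicted_ssc = slope * 60.0 + intercept  # adjust a new TSS measurement
```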
Some connections between importance sampling and enhanced sampling methods in molecular dynamics.
Lie, H C; Quer, J
2017-11-21
In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
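The change of measure at the heart of importance sampling can be shown on a textbook rare-event example (a Gaussian tail probability, unrelated to the paper's molecular systems):

```python
import math, random

random.seed(0)

def phi(x, mu=0.0):
    """Normal density with mean mu and unit variance."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Rare event: P(X > 5) for X ~ N(0, 1); the exact tail is about 2.867e-7.
N = 100_000
exact = 2.867e-7

# Naive Monte Carlo essentially never sees the event at this sample size.
naive = sum(random.gauss(0, 1) > 5 for _ in range(N)) / N

# Importance sampling: draw from the shifted proposal N(5, 1) and reweight
# each point by the density ratio w(x) = phi(x) / phi(x - 5). This change
# of measure is the common core the paper identifies in enhanced sampling.
acc = 0.0
for _ in range(N):
    x = random.gauss(5, 1)
    if x > 5:
        acc += phi(x) / phi(x, mu=5.0)
is_est = acc / N

print(abs(is_est - exact) / exact < 0.05)  # IS lands within a few percent
```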
Rapid detection of Salmonella spp. in food by use of the ISO-GRID hydrophobic grid membrane filter.
Entis, P; Brodsky, M H; Sharpe, A N; Jarvis, G A
1982-01-01
A rapid hydrophobic grid-membrane filter (HGMF) method was developed and compared with the Health Protection Branch cultural method for the detection of Salmonella spp. in 798 spiked samples and 265 naturally contaminated samples of food. With the HGMF method, Salmonella spp. were isolated from 618 of the spiked samples and 190 of the naturally contaminated samples. The conventional method recovered Salmonella spp. from 622 spiked samples and 204 unspiked samples. The isolation rates from Salmonella-positive samples for the two methods were not significantly different (94.6% overall for the HGMF method and 96.7% for the conventional approach), but the HGMF results were available in only 2 to 3 days after sample receipt compared with 3 to 4 days by the conventional method. PMID:7059168
Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei
2015-01-01
A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling-site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in rows and columns, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs of different shape and size by four sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience score of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in rows and columns was infinite, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrate that the uniform sampling method has good accuracy and reproducibility and can be put into use in drug analysis.
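Gray (grey) correlation analysis of the kind used above can be sketched as follows; the method names, criteria, and scores are hypothetical, and the distinguishing coefficient of 0.5 is the conventional choice:

```python
# Grey relational analysis (GRA) sketch: score candidate sampling methods
# against an ideal reference series. All criteria values are hypothetical.
# Rows: [convenience, relative accuracy %, 100 - RSD] for three methods.
methods = {
    "uniform": [1.00, 99.7, 99.3],
    "random":  [0.80, 99.1, 98.6],
    "grid":    [0.90, 99.4, 98.9],
}

# Normalize each criterion to [0, 1] (larger is better for all three here).
cols = list(zip(*methods.values()))
lo = [min(c) for c in cols]
hi = [max(c) for c in cols]
norm = {k: [(v - l) / (h - l) for v, l, h in zip(row, lo, hi)]
        for k, row in methods.items()}

rho = 0.5              # distinguishing coefficient, the conventional choice
ref = [1.0, 1.0, 1.0]  # ideal reference: best value on every criterion

def grey_grade(series):
    deltas = [abs(r - s) for r, s in zip(ref, series)]
    dmin, dmax = 0.0, 1.0  # bounds after normalization
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)

grades = {k: grey_grade(v) for k, v in norm.items()}
best = max(grades, key=grades.get)
print(best)  # prints 'uniform': the method closest to the ideal
```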
Coes, Alissa L.; Paretti, Nicholas V.; Foreman, William T.; Iverson, Jana L.; Alvarez, David A.
2014-01-01
A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler, that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all of the three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. Nontargeted analysis of pooled samples was performed with all tested methods, and 610 lipid species within 23 lipid classes were identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, although only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods were validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
Developpement d'une commande pour une hydrolienne de riviere et optimisation (Development of a Control System for a River Current Turbine, and Optimization)
NASA Astrophysics Data System (ADS)
Tetrault, Philippe
Following the development of renewable energy, the present study is intended as a theoretical foundation covering the fundamental principles required for the proper operation and implementation of a river current turbine. The problem behind this new type of device is presented first. The electrical machine used in the application, the permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing along the way the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations in order to understand the available modulation strategies. A brief history of these strategies is given before focusing on space-vector modulation, the strategy used in the present application. The different modules are assembled in a Matlab simulation to confirm their proper operation and to compare the simulation results with the theoretical calculations. Various algorithms for tracking and maintaining an optimal operating point are presented. The behavior of the river is studied in order to assess the magnitude of the disturbances the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.
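The dq-frame machine equations developed in the abstract above can be illustrated with a minimal forward-Euler simulation of the standard permanent-magnet synchronous machine model; all parameter values are illustrative assumptions, not those of the actual turbine drive:

```python
import math

# Forward-Euler integration of the textbook dq-frame model of a permanent-
# magnet synchronous machine. Parameter values are illustrative only.
R, Ld, Lq = 0.5, 8e-3, 8e-3        # stator resistance (ohm), inductances (H)
lam = 0.1                          # permanent-magnet flux linkage (Wb)
p = 4                              # pole pairs
we = 2 * math.pi * 50              # electrical speed (rad/s), held constant

id_, iq = 0.0, 0.0                 # dq currents (A)
vd, vq = 0.0, 40.0                 # dq voltages imposed by the inverter (V)
dt = 1e-5                          # time step (s)

for _ in range(200_000):           # 2 s of simulated time, past the transient
    did = (vd - R * id_ + we * Lq * iq) / Ld
    diq = (vq - R * iq - we * Ld * id_ - we * lam) / Lq
    id_ += did * dt
    iq += diq * dt

# Electromagnetic torque; the reluctance term vanishes here since Ld == Lq.
torque = 1.5 * p * (lam * iq + (Ld - Lq) * id_ * iq)
print(round(torque, 2))            # steady-state torque in N*m
```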
Raymond, Kateri; Levasseur, Mélanie; Chouinard, Maud-Christine; Mathieu, Jean; Gagnon, Cynthia
2016-06-01
Chronic disease self-management is a priority in health care. Personal and environmental barriers for populations with neuromuscular disorders might diminish the efficacy of self-management programs, although they have been shown to be an effective intervention in many populations. Owing to their occupational expertise, occupational therapists might optimize self-management program interventions. This study aimed to adapt the Stanford Chronic Disease Self-Management Program (CDSMP) for people with myotonic dystrophy type 1 (DM1) and assess its acceptability and feasibility in this population. Using an adapted version of the Stanford CDSMP, a descriptive pilot study was conducted with 10 participants (five adults with DM1 and their caregivers). A semi-structured interview and questionnaires were used. The Stanford CDSMP is acceptable and feasible for individuals with DM1. However, improvements are required, such as the involvement of occupational therapists to help foster concrete utilization of self-management strategies into day-to-day tasks using their expertise in enabling occupation. Although adaptations are needed, the Stanford CDSMP remains a relevant intervention with populations requiring the application of self-management strategies. © CAOT 2016.
Dry-vault storage of spent fuel at the CASCAD facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baillif, L.; Guay, M.
A new modular dry storage vault concept using vertical metallic wells cooled by natural convection has been developed by the Commissariat a l'Energie Atomique and Societe Generale pour les Techniques Nouvelles to accommodate special fuels for high-level wastes. Basic specifications and design criteria have been followed to guarantee a double containment system and cooling that maintains the fuel below an acceptable temperature. The double containment is provided by two static barriers: at the reactor, fuels are placed in containers that play the role of the first barrier; the storage wells constitute the second barrier. Spent fuel placed in the wells is cooled by natural convection: a boundary layer is created along the outer side of the well, and the heated air rises along the well, leading to a thermosiphon flow that extracts the heat released. For heat transfer, studies, computations, and experimental tests have been carried out to calculate and determine the container temperatures and the fuel rod temperatures in various situations. The CASCAD vault storage can be applied to light water reactor (LWR) fuels without any difficulty if two requirements are satisfied: (1) spent fuels have to be inserted in tight canisters, and (2) spent fuels have to be received only after a minimum decay time of 5 yr.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2014-12-01
Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling; diversity and autocorrelation; and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC, for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³, due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative: in the case studies evaluated here, it led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding the implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
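The Markov-chain idea behind the MCC diary method can be sketched as follows; the diary clusters and transition probabilities are hypothetical:

```python
import itertools
import random

random.seed(3)

# Sketch of Markov-chain sampling of longitudinal activity diaries: the next
# day's diary cluster depends on the current one, so multiday activity
# patterns persist (a simplification of the MCC method; labels hypothetical).
states = ["mostly_indoor", "mixed", "mostly_outdoor"]
transition = {
    "mostly_indoor":  {"mostly_indoor": 0.7, "mixed": 0.2, "mostly_outdoor": 0.1},
    "mixed":          {"mostly_indoor": 0.3, "mixed": 0.4, "mostly_outdoor": 0.3},
    "mostly_outdoor": {"mostly_indoor": 0.1, "mixed": 0.2, "mostly_outdoor": 0.7},
}

def sample_next(state):
    r, acc = random.random(), 0.0
    for nxt, prob in transition[state].items():
        acc += prob
        if r < acc:
            return nxt
    return state  # guard against floating-point round-off

def simulate_month(start="mixed", days=30):
    seq, s = [], start
    for _ in range(days):
        s = sample_next(s)
        seq.append(s)
    return seq

# Unlike independent resampling, the chain produces runs of similar days,
# which is what drives repeated exceedances of an exposure benchmark.
month = simulate_month()
longest_run = max(len(list(g)) for _, g in itertools.groupby(month))
print(longest_run >= 2)
```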
40 CFR 80.8 - Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 80.8 Sampling methods for gasoline, diesel fuel, fuel additives, and renewable fuels. The sampling methods specified in this section shall be used to collect samples of gasoline, diesel fuel...
Evaluation of respondent-driven sampling.
McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G
2012-01-01
Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required when interpreting findings based on the sampling method.
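The study above compares RDS estimates against known population proportions. As a rough illustration of what such an estimator does, the sketch below implements an inverse-degree-weighted proportion estimate in the style of the RDS-II (Volz-Heckathorn) estimator; the function name and toy data are illustrative, and the study's actual inference methods may differ.

```python
# Hedged sketch: RDS-II-style estimator. Because RDS recruits
# high-degree members more often, each recruit is weighted by the
# inverse of their reported network degree.

def rds_ii_estimate(traits, degrees):
    """Estimate a population proportion from an RDS sample.

    traits  : list of 0/1 indicators (e.g. 1 = HIV-positive) per recruit
    degrees : list of reported network sizes per recruit
    """
    weights = [1.0 / d for d in degrees]
    total = sum(weights)
    return sum(t * w for t, w in zip(traits, weights)) / total

# Toy example: the trait is concentrated in high-degree members, so the
# naive sample proportion (0.4) overstates it; the weighted estimate
# down-weights the over-sampled high-degree recruits.
traits = [1, 0, 1, 0, 0]
degrees = [10, 2, 10, 2, 2]
print(rds_ii_estimate(traits, degrees))
```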
NASA Technical Reports Server (NTRS)
Kim, Hyun Jung; Choi, Sang H.; Bae, Hyung-Bin; Lee, Tae Woo
2012-01-01
The National Aeronautics and Space Administration-invented X-ray diffraction (XRD) methods, including the total defect density measurement method and the spatial wafer mapping method, have confirmed super hetero epitaxy growth of rhombohedral single crystalline silicon germanium (Si1-xGex) on a c-plane sapphire substrate. However, the XRD method cannot observe the surface morphology or roughness because of the method's limited resolution. Therefore the authors used transmission electron microscopy (TEM), with samples prepared in two ways, the focused ion beam (FIB) method and the tripod method, to study the structure between Si1-xGex and the sapphire substrate and of Si1-xGex itself. The sample preparation for TEM should be as fast as possible so that the sample contains few or no artifacts induced by the preparation. The standard sample preparation method of mechanical polishing often requires a relatively long ion milling time (several hours), which increases the probability of inducing defects in the sample. TEM sampling of Si1-xGex on sapphire is also difficult because of sapphire's high hardness and mechanical instability. The FIB method and the tripod method eliminate both problems when performing a cross-section TEM sampling of Si1-xGex on c-plane sapphire, which reveals the surface morphology, the interface between film and substrate, and the crystal structure of the film. This paper explains the FIB sampling method and the tripod sampling method, and why TEM sampling of Si1-xGex on a sapphire substrate is necessary.
Conception et optimisation d'une peau en composite pour une aile adaptative (Design and Optimization of a Composite Skin for a Morphing Wing)
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. It is in this context that the MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was created. The objective of this project is to design an active morphing wing that improves laminar flow and thereby reduces the aircraft's fuel consumption and emissions. The research carried out led to the design and optimization of an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimization method was developed with the objective of minimizing the mass of the composite skin while ensuring that it conforms, through active control of the deformable surface, to the desired aerodynamic profiles. The optimization process also included strength, stability, and stiffness constraints on the composite skin. Following optimization, the optimized skin was simplified to ease manufacturing and to comply with the design rules of Bombardier Aeronautique. This optimization process produced a composite skin whose shape deviations, or errors, were greatly reduced so as to best match the optimized aerodynamic profiles. Aerodynamic analyses based on these shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was carried out to verify the structural integrity of the composite skin following the methods generally used by Bombardier Aeronautique. First, a comparative finite element analysis validated that the stiffness of the morphing wing was equivalent to that of the original wing section. 
The finite element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aeronautique. These analyses numerically validated the structural integrity of the composite skin for typical aerospace loads and material allowables.
[Sampling methods for PM2.5 from stationary sources: a review].
Jiang, Jing-Kun; Deng, Jian-Guo; Li, Zhen; Li, Xing-Hua; Duan, Lei; Hao, Ji-Ming
2014-05-01
The new China national ambient air quality standard was published in 2012 and will be implemented in 2016. To meet the requirements of this new standard, monitoring and controlling PM2.5 emissions from stationary sources are very important. However, so far there is no national standard method for sampling PM2.5 from stationary sources. Different sampling methods for PM2.5 from stationary sources and the relevant international standards were reviewed in this study, including methods for PM2.5 sampling in flue gas and methods for PM2.5 sampling after dilution. Both the advantages and disadvantages of these sampling methods were discussed. For environmental management, a method for PM2.5 sampling in flue gas, such as an impactor or virtual impactor, was suggested as the standard for determining filterable PM2.5. To evaluate the environmental and health effects of PM2.5 from stationary sources, a standard dilution method for sampling total PM2.5 should be established.
Comparison of methods for sampling plant bugs on cotton in South Texas (2010)
USDA-ARS?s Scientific Manuscript database
A total of 26 cotton fields were sampled by experienced and inexperienced samplers at 3 growth stages using 5 methods to compare the most efficient and accurate method for sampling plant bugs in cotton. Each of the 5 methods had its own distinct advantages and disadvantages as a sampling method (too...
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated sodium excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
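The 'simple mean method' described above multiplies each spot-urine sodium-creatinine ratio by a predicted 24-h creatinine excretion and averages across the available samples. A minimal sketch with wholly illustrative numbers; the paper's actual creatinine prediction equations are not reproduced here.

```python
# Hedged sketch of the 'simple mean method'. All values are mmol and
# purely illustrative; predicted_cr24h would come from a published
# creatinine prediction equation in practice.

def simple_mean_estimate(spot_na, spot_cr, predicted_cr24h):
    """Mean of (sodium/creatinine ratio x predicted 24-h creatinine)
    across the available spot urine samples."""
    ratios = [na / cr for na, cr in zip(spot_na, spot_cr)]
    return sum(ratios) / len(ratios) * predicted_cr24h

# Three spot samples from one hypothetical participant:
est = simple_mean_estimate(spot_na=[120.0, 140.0, 100.0],
                           spot_cr=[9.0, 10.0, 8.0],
                           predicted_cr24h=11.0)
print(round(est, 1))  # estimated 24-h sodium excretion, mmol/day
```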
Gutiérrez-Fonseca, Pablo E; Lorion, Christopher M
2014-04-01
The use of aquatic macroinvertebrates as bio-indicators in water quality studies has increased considerably over the last decade in Costa Rica, and standard biomonitoring methods have now been formulated at the national level. Nevertheless, questions remain about the effectiveness of different methods of sampling freshwater benthic assemblages, and how sampling intensity may influence biomonitoring results. In this study, we compared the results of qualitative sampling using commonly applied methods with a more intensive quantitative approach at 12 sites in small, lowland streams on the southern Caribbean slope of Costa Rica. Qualitative samples were collected following the official protocol using a strainer during a set time period and macroinvertebrates were field-picked. Quantitative sampling involved collecting ten replicate Surber samples and picking out macroinvertebrates in the laboratory with a stereomicroscope. The strainer sampling method consistently yielded fewer individuals and families than quantitative samples. As a result, site scores calculated using the Biological Monitoring Working Party-Costa Rica (BMWP-CR) biotic index often differed greatly depending on the sampling method. Site water quality classifications using the BMWP-CR index differed between the two sampling methods for 11 of the 12 sites in 2005, and for 9 of the 12 sites in 2006. Sampling intensity clearly had a strong influence on BMWP-CR index scores, as well as perceived differences between reference and impacted sites. Achieving reliable and consistent biomonitoring results for lowland Costa Rican streams may demand intensive sampling and requires careful consideration of sampling methods.
Maes, Sharon; Huu, Son Nguyen; Heyndrickx, Marc; Weyenberg, Stephanie van; Steenackers, Hans; Verplaetse, Alex; Vackier, Thijs; Sampers, Imca; Raes, Katleen; Reu, Koen De
2017-12-01
Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10^2 CFU/100 cm^2. Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.
Code of Federal Regulations, 2010 CFR
2010-07-01
... will consider a sample obtained using any of the applicable sampling methods specified in appendix I to... appendix I sampling methods are not being formally adopted by the Administrator, a person who desires to employ an alternative sampling method is not required to demonstrate the equivalency of his method under...
A new design of groundwater sampling device and its application.
Tsai, Yih-jin; Kuo, Ming-ching T
2005-01-01
Compounds in the atmosphere can contaminate samples of groundwater. An inexpensive and simple method for collecting groundwater samples was developed to prevent contamination when the background concentration of contaminants is high. This new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low-flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace volume remained in the sampling bottle. The sample bottle was then packed in a PVC bag to prevent the target component from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. Analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method greatly reduced the problems of contamination, preservation, and quantitative analysis of natural water.
Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.
Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby
2018-02-06
Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day, and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions between sampling method and decomposition day, and between time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
40 CFR Appendix I to Part 261 - Representative Sampling Methods
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 27 2013-07-01 2013-07-01 false Representative Sampling Methods I Appendix I to Part 261 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the...
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
This Paper covers the basics of passive sampler design, compares passive samplers to conventional methods of air sampling, and discusses considerations when implementing a passive sampling program. The Paper also discusses field sampling and sample analysis considerations to ensu...
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. 
The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was acquired when estimating PLC on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further as no significant difference in counts between the methods was found and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
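The required-sample-size question above depends on the mean contamination and its aggregation. The sketch below uses the classic negative binomial sample-size approximation n = (1/D^2)(1/m + 1/k), a standard textbook formula rather than the study's own simulation; it reproduces the qualitative finding that high-contamination, low-aggregation pastures need fewer plots for the same relative precision D.

```python
# Hedged sketch: plots needed for a target relative precision D
# (standard error / mean), assuming counts per plot follow a negative
# binomial with mean m and aggregation parameter k. Illustrative
# parameter values, not the study's estimates.
import math

def required_plots(m, k, D=0.3):
    """Classic negative binomial sample-size formula:
    n = (1/D^2) * (1/m + 1/k)."""
    return math.ceil((1.0 / D ** 2) * (1.0 / m + 1.0 / k))

# Low contamination, strongly aggregated: many plots needed.
print(required_plots(m=3.0, k=0.5))   # -> 26
# High contamination, weakly aggregated: far fewer plots.
print(required_plots(m=30.0, k=2.0))  # -> 6
```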
Comparison of preprocessing methods and storage times for touch DNA samples
Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia
2017-01-01
Aim To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples and to determine the effect of various storage times on the results of touch DNA sample analysis. Method Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with four different preprocessing methods. The best preprocessing protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results The amounts of DNA and the number of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performances of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion Different substrates require the use of different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for explorations of touch DNA samples and may be used as a reference when dealing with touch DNA samples in case work. PMID:28252870
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
7 CFR 58.812 - Methods of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Methods of sample analysis. 58.812 Section 58.812... Procedures § 58.812 Methods of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL, as issued by the USDA, Agricultural...
7 CFR 58.245 - Method of sample analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Method of sample analysis. 58.245 Section 58.245... Procedures § 58.245 Method of sample analysis. Samples shall be tested according to the applicable methods of laboratory analysis contained in either DA Instruction 918-RL as issued by the USDA, Agricultural Marketing...
This is a sampling and analysis method for the determination of asbestos in air. Samples are analyzed by transmission electron microscopy (TEM). Although a small subset of samples are to be prepared using a direct procedure, the majority of samples analyzed using this method wil...
An improved sampling method of complex network
NASA Astrophysics Data System (ADS)
Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing
2014-12-01
Sampling subnets is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate, and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
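A minimal sketch of the random-plus-snowball idea: randomly chosen seeds give global coverage, and wave-by-wave neighbour expansion recovers local structure. The function and toy graph are illustrative; the paper's exact RMSC re-seeding rule (the Cohen process) is not reproduced here.

```python
# Hedged sketch: random seeds followed by snowball waves over an
# adjacency-list graph. Illustrative only; not the paper's algorithm.
import random

def snowball_sample(adj, n_seeds=2, waves=2, seed=42):
    rng = random.Random(seed)
    sampled = set(rng.sample(sorted(adj), n_seeds))   # random stage
    frontier = set(sampled)
    for _ in range(waves):                            # snowball stage
        frontier = {v for u in frontier for v in adj[u]} - sampled
        sampled |= frontier
    return sampled

# Toy graph: two triangles joined by a bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(sorted(snowball_sample(adj)))
```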
Equilibrium Molecular Thermodynamics from Kirkwood Sampling
2015-01-01
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide. PMID:25915525
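The first variant above corrects state-independent Kirkwood proposals toward a Boltzmann target with a Metropolis-style acceptance step. A self-contained sketch of such an independence sampler, with a toy 1-D double-well potential standing in for a molecular energy function and a uniform proposal standing in for Kirkwood sampling; both are purely illustrative.

```python
# Hedged sketch: independence-sampler Monte Carlo. Trial states are
# drawn independently of the current state (as Kirkwood samples are),
# and accepted so that the chain targets exp(-E(x)) in kT units.
import math, random

def energy(x):
    """Toy 1-D double-well potential, minima at x = +/-1 (kT units)."""
    return (x * x - 1.0) ** 2

random.seed(0)

def propose():
    """Stand-in for a Kirkwood draw: independent of the current state."""
    return random.uniform(-2.0, 2.0)

x = propose()
samples = []
for _ in range(20000):
    y = propose()
    # Acceptance ratio for an independence sampler; the proposal
    # densities cancel because the proposal is uniform on [-2, 2].
    if random.random() < math.exp(min(0.0, energy(x) - energy(y))):
        x = y
    samples.append(x)

# The chain should concentrate near the wells at x = +/-1.
mean_x2 = sum(s * s for s in samples) / len(samples)
print(mean_x2)
```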
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can handle the multiple multiscale case, even when the different components are close to each other.
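For orientation, a factorization-method-style indicator can be written from the singular system of the far field operator; this is the standard construction from the literature, not necessarily the exact indicator proposed in the paper.

```latex
% Hedged sketch: (\sigma_j, \psi_j) is the singular system of the far
% field operator F, and \phi_z(\hat{x}) = e^{-ik\,\hat{x}\cdot z} is the
% test function for the sampling point z.
I(z) = \left[ \sum_j \frac{|\langle \phi_z, \psi_j \rangle|^2}{\sigma_j} \right]^{-1}
% The series converges iff z lies inside the scatterer, so I(z) > 0
% inside and I(z) \to 0 outside.
```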
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
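The two nuclear sampling schemes compared above can be sketched as follows; the sample size of 50 mirrors the abstract's setup, but the simulated nuclear areas are wholly synthetic.

```python
# Hedged sketch: 'at convenience' sampling (take the first n nuclei
# encountered) vs systematic random sampling (random start, then every
# k-th nucleus), with mean and SD of nuclear area from each sample.
import random, statistics

def at_convenience(areas, n):
    return areas[:n]                 # first n encountered

def systematic_random(areas, n, rng):
    k = len(areas) // n              # sampling interval
    start = rng.randrange(k)         # random start within first interval
    return areas[start::k][:n]

rng = random.Random(1)
# Synthetic nuclear areas (um^2), roughly log-normal as in tissue.
areas = [rng.lognormvariate(4.0, 0.4) for _ in range(500)]

acs = at_convenience(areas, 50)
srs = systematic_random(areas, 50, rng)
print(statistics.mean(srs), statistics.stdev(srs))
```

Systematic random sampling spreads the sample across the whole measurement sequence, which is why it avoids the bias toward "convenient" mid-sized nuclei described in the abstract.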
Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012
Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.
2014-01-01
The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program, targeting single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences were found only for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches discussed in this report are 1) hotspot sampling, to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples.
The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1) qualitative data (detect and non-detect) when the FNR = 0, or when using statistical sampling methods that account for FNR > 0; 2) qualitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0; 3) quantitative data (e.g., contaminant concentrations expressed as CFU/cm2) when the FNR = 0, or when using statistical sampling methods that account for FNR > 0; and 4) quantitative data when the FNR > 0 but statistical sampling methods are used that assume the FNR = 0. For Situation 2, the hotspot sampling approach provides for stating with Z% confidence that a hotspot of specified shape and size with detectable contamination will be found. Also for Situation 2, the CJR approach provides for stating with X% confidence that at least Y% of the decision area does not contain detectable contamination. Forms of these statements for the other three situations are discussed in Section 2.2. Statistical methods that account for FNR > 0 currently exist only for the hotspot sampling approach with qualitative data (or quantitative data converted to qualitative data). This report documents the current status of methods and formulas for the hotspot and CJR sampling approaches, and identifies their limitations. Extensions of the methods that are applicable when FNR = 0 to account for FNR > 0, or to address other limitations, will be documented in future revisions of this report if future funding supports their development. For quantitative data, this report also presents statistical methods and formulas for 1) quantifying the uncertainty in measured sample results; 2) estimating the true surface concentration corresponding to a surface sample; and 3) quantifying the uncertainty of that estimate. All of the methods and formulas discussed in the report were applied to example situations to illustrate application of the methods and interpretation of the results.
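The clearance-confidence statement of this kind can be illustrated with the standard acceptance-sampling formula (a general textbook result, not necessarily the exact CJR formula in the report): with all of n random samples negative and FNR = 0, one can state with confidence X that at least a fraction Y of the decision area is free of detectable contamination when n >= ln(1 - X) / ln(Y).

```python
import math

def n_random_samples(confidence, clean_fraction):
    """Number of all-negative random samples needed to state, with the given
    confidence, that at least `clean_fraction` of a decision area is free of
    detectable contamination (FNR assumed to be 0).

    Standard acceptance-sampling result n = ln(1 - X) / ln(Y); the report's
    CJR method further adjusts for judgment samples, which is not shown here.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

n = n_random_samples(0.95, 0.99)   # 95% confidence that >= 99% of area is clean
```

Note how quickly the burden grows with the clean-fraction target: tightening Y from 90% to 99% raises the required sample count by an order of magnitude.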
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely the extrema of the current metamodel and the minimum points of a density function; repeating this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
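A minimal one-dimensional sketch of the idea, assuming a Gaussian RBF metamodel and using the current model's minimum as the new infill point (the paper also adds metamodel extrema and density-function minima, which are omitted here); `expensive` stands in for a costly simulation:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting, for tiny dense systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def rbf_model(xs, ys, eps=2.0):
    """Interpolating Gaussian RBF metamodel through the points (xs, ys)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    w = solve([[phi(abs(a - b)) for b in xs] for a in xs], ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

def sequential_sample(f, xs, rounds=4, grid=101):
    """Sequentially add the metamodel's minimum point as a new sample."""
    xs, ys = list(xs), [f(x) for x in xs]
    for _ in range(rounds):
        model = rbf_model(xs, ys)
        cand = [i / (grid - 1) for i in range(grid)]
        x_new = min(cand, key=model)
        if min(abs(x_new - x) for x in xs) > 1e-6:   # avoid duplicate samples
            xs.append(x_new)
            ys.append(f(x_new))
    return xs, ys

expensive = lambda x: (x - 0.3) ** 2    # stand-in for a costly simulation
xs, ys = sequential_sample(expensive, [0.0, 0.5, 1.0])
```

Each round spends one expensive evaluation where the metamodel predicts the optimum, so accuracy improves fastest in the region that matters for optimization.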
Validated Test Method 5030C: Purge-and-Trap for Aqueous Samples
This method describes a purge-and-trap procedure for the analysis of volatile organic compounds in aqueous samples and water-miscible liquid samples. It also describes the analysis of high-concentration soil and waste sample extracts prepared in Method 5035.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.; Green, David
2005-03-29
Methods and apparatus for analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically coupled with the vessel body. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Popic, Tony J; Davila, Yvonne C; Wardle, Glenda M
2013-01-01
Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km2 area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
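The one-dimensional Chebyshev zeros and their tensor product are straightforward to generate; a small sketch (function names are illustrative, not from the article):

```python
import math
from itertools import product

def chebyshev_zeros(n):
    """Zeros of the degree-n Chebyshev polynomial T_n on [-1, 1]:
    x_k = cos((2k + 1) * pi / (2n)), k = 0 .. n-1."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]

def ctp_samples(n, dim):
    """Chebyshev tensor-product (CTP) sampling: the dim-fold Cartesian
    product of the 1-D Chebyshev zeros, n**dim points in total."""
    z = chebyshev_zeros(n)
    return list(product(z, repeat=dim))

pts = ctp_samples(4, 2)   # 16 samples in 2-D
```

Because the nodes cluster toward the interval ends, polynomial interpolation on them avoids the Runge oscillations that plague uniform grids; the CCM variant in the article then subsamples these points to keep higher dimensions tractable.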
Zhang, Heng; Lan, Fang; Shi, Yupeng; Wan, Zhi-Gang; Yue, Zhen-Feng; Fan, Fang; Lin, Yan-Kui; Tang, Mu-Jin; Lv, Jing-Zhang; Xiao, Tan; Yi, Changqing
2014-06-15
VitaFast(®) test kits designed for the microbiological assay in microtiter plate format can be applied to quantitative determination of B-group water-soluble vitamins such as vitamin B12, folic acid, and biotin. Compared to traditional microbiological methods, VitaFast(®) kits significantly reduce sample processing time and provide greater reliability, higher productivity and better accuracy. In practice, simultaneous determination of vitamin B12, folic acid and biotin in one sample is often required when evaluating the quality of infant formulae. However, the present sample preparation protocols, which were developed for individual test systems, are incompatible with simultaneous determination of several analytes. To solve this problem, a novel "three-in-one" sample preparation method is herein developed for simultaneous determination of B-group water-soluble vitamins using VitaFast(®) kits. The performance of this novel "three-in-one" sample preparation method was systematically evaluated through comparison with the individual sample preparation protocols. The experimental results of the assays which employed the "three-in-one" sample preparation method were in good agreement with those obtained from conventional VitaFast(®) extraction methods, indicating that the proposed "three-in-one" sample preparation method is applicable to the present three VitaFast(®) vitamin test systems, thus offering a promising alternative to the three independent sample preparation methods. The proposed new sample preparation method will significantly improve the efficiency of infant formulae inspection. Copyright © 2013 Elsevier Ltd. All rights reserved.
Surveying immigrants without sampling frames - evaluating the success of alternative field methods.
Reichel, David; Morales, Laura
2017-01-01
This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.
Chen, Meilian; Lee, Jong-Hyeon; Hur, Jin
2015-10-01
Despite literature evidence suggesting the importance of sampling methods on the properties of sediment pore waters, their effects on the dissolved organic matter (PW-DOM) have been unexplored to date. Here, we compared the effects of two commonly used sampling methods (i.e., centrifuge and Rhizon sampler) on the characteristics of PW-DOM for the first time. The bulk dissolved organic carbon (DOC), ultraviolet-visible (UV-Vis) absorption, and excitation-emission matrixes coupled with parallel factor analysis (EEM-PARAFAC) of the PW-DOM samples were compared for the two sampling methods with the sediments from minimal to severely contaminated sites. The centrifuged samples were found to have higher average values of DOC, UV absorption, and protein-like EEM-PARAFAC components. The samples collected with the Rhizon sampler, however, exhibited generally more humified characteristics than the centrifuged ones, implying a preferential collection of PW-DOM with respect to the sampling methods. Furthermore, the differences between the two sampling methods seem more pronounced in relatively more polluted sites. Our observations were possibly explained by either the filtration effect resulting from the smaller pore size of the Rhizon sampler or the desorption of DOM molecules loosely bound to minerals during centrifugation, or both. Our study suggests that consistent use of one sampling method is crucial for PW-DOM studies and also that caution should be taken in the comparison of data collected with different sampling methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 33rd Session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... the criteria appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... Alimentarius Commission: Meeting of the Codex Committee on Methods of Analysis and Sampling AGENCY: Office of... discussed at the 32nd session of the Codex Committee on Methods of Analysis and Sampling (CCMAS) of the... appropriate to Codex Methods of Analysis and Sampling; serving as a coordinating body for Codex with other...
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
7 CFR 29.110 - Method of sampling.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of sampling. 29.110 Section 29.110 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Inspectors, Samplers, and Weighers § 29.110 Method of sampling. In sampling tobacco...
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
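The kind of computer-simulated experiment described can be sketched as follows, assuming normal data, a pooled two-sample t-test, and the tabulated 5% critical value for 16 degrees of freedom (the authors' exact simulation setup may differ):

```python
import math
import random
import statistics

T_CRIT = 2.120   # two-sided 5% critical value of Student's t, df = 9 + 9 - 2

def t_stat(a, b):
    """Pooled two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def error_rates(n, effect, trials=2000, seed=7):
    """Monte-Carlo estimate of Type I (no true effect) and Type II (true
    effect of the given size, in units of SD) error rates for samples of size n."""
    rng = random.Random(seed)
    draw = lambda mu: [rng.gauss(mu, 1.0) for _ in range(n)]
    type1 = sum(abs(t_stat(draw(0.0), draw(0.0))) > T_CRIT
                for _ in range(trials)) / trials
    type2 = sum(abs(t_stat(draw(effect), draw(0.0))) <= T_CRIT
                for _ in range(trials)) / trials
    return type1, type2
```

Running `error_rates(9, 1.5)` shows the pattern the abstract reports: with n = 9 the nominal 5% Type I rate is respected and even a moderate effect is detected with low Type II error, while shrinking n rapidly inflates the Type II error.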
Comparison of individual and pooled sampling methods for detecting bacterial pathogens of fish
Mumford, Sonia; Patterson, Chris; Evered, J.; Brunson, Ray; Levine, J.; Winton, J.
2005-01-01
Examination of finfish populations for viral and bacterial pathogens is an important component of fish disease control programs worldwide. Two methods are commonly used for collecting tissue samples for bacteriological culture, the currently accepted standards for detection of bacterial fish pathogens. The method specified in the Office International des Epizooties Manual of Diagnostic Tests for Aquatic Animals permits combining renal and splenic tissues from as many as 5 fish into pooled samples. The American Fisheries Society (AFS) Blue Book/US Fish and Wildlife Service (USFWS) Inspection Manual specifies the use of a bacteriological loop for collecting samples from the kidney of individual fish. An alternative would be to more fully utilize the pooled samples taken for virology. If implemented, this approach would provide substantial savings in labor and materials. To compare the relative performance of the AFS/USFWS method and this alternative approach, cultures of Yersinia ruckeri were used to establish low-level infections in groups of rainbow trout (Oncorhynchus mykiss) that were sampled by both methods. Yersinia ruckeri was cultured from 22 of 37 groups by at least 1 method. The loop method yielded 18 positive groups, with 1 group positive in the loop samples but negative in the pooled samples. The pooled samples produced 21 positive groups, with 4 groups positive in the pooled samples but negative in the loop samples. There was statistically significant agreement (Spearman coefficient 0.80, P < 0.001) in the relative ability of the 2 sampling methods to permit detection of low-level bacterial infections of rainbow trout.
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for calculated cross sections evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the various constituent concentrations of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. This approach, called the absolute method, allows measurements as accurate as the relative method. The results obtained by the absolute method showed values as precise as those of the relative method, which requires a standard sample for each element to be quantified.
Application of work sampling technique to analyze logging operations.
Edwin S. Miyata; Helmuth M. Steinhilb; Sharon A. Winsauer
1981-01-01
Discusses the advantages and disadvantages of various time-study methods for determining efficiency and productivity in logging. The work sampling method is compared with the continuous time-study method. Gives the feasibility, capabilities, and limitations of the work sampling method.
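Work sampling rests on a binomial estimate of an activity's time proportion; a common companion formula (standard industrial-engineering practice, not taken from this report) gives the number of random observations needed for a target precision:

```python
import math

def work_sampling_n(p, error, z=1.96):
    """Observations needed to estimate an activity proportion p to within
    +/- error (absolute) at the confidence implied by z (1.96 ~ 95%):
    n = z^2 * p * (1 - p) / error^2, rounded up."""
    return math.ceil(z * z * p * (1 - p) / (error * error))

# e.g. machine delay occupies ~30% of cycle time, wanted to +/- 5 points
n = work_sampling_n(0.30, 0.05)
```

This is why work sampling can be far cheaper than continuous timing: a few hundred instantaneous observations, spread at random over the study period, pin down a proportion that would otherwise require watching every cycle.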
Methods of sampling airborne fungi in working environments of waste treatment facilities.
Černá, Kristýna; Wittlingerová, Zdeňka; Zimová, Magdaléna; Janovský, Zdeněk
2016-01-01
The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. The membrane filter (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m3 of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium (p < 0.001). Detected concentrations of airborne fungi ranged 2×102-1.7×106 CFU/m3 when using the MF method, and 3×102-6.4×104 CFU/m3 when using the SAS method. Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast indicative determination of the concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. Therefore we recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region was performed. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the minimal absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
Extending the solvent-free MALDI sample preparation method.
Hanton, Scott D; Parees, David M
2005-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is an important technique to characterize many different materials, including synthetic polymers. MALDI mass spectral data can be used to determine the polymer average molecular weights, repeat units, and end groups. One of the key issues in traditional MALDI sample preparation is making good solutions of the analyte and the matrix. Solvent-free sample preparation methods have been developed to address these issues. Previous results of solvent-free or dry prepared samples show some advantages over traditional wet sample preparation methods. Although the results of the published solvent-free sample preparation methods produced excellent mass spectra, we found the method to be very time-consuming, with extensive tool cleaning that presents a significant possibility of cross-contamination. To address these issues, we developed an extension of the solvent-free method that replaces the mortar-and-pestle grinding with ball milling the sample in a glass vial with two small steel balls. This new method generates mass spectra of equal quality to the previous methods, but has significant advantages in productivity, eliminates cross-contamination, and is applicable to liquid and soft or waxy analytes.
Configurations and calibration methods for passive sampling techniques.
Ouyang, Gangfeng; Pawliszyn, Janusz
2007-10-19
Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
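The review's calibration regimes can be summarized by the first-order uptake model M(t) = K * C * (1 - exp(-k t)): linear uptake applies at small t, equilibrium extraction at large t. A small sketch inverting both regimes for the ambient concentration (the symbols K, k, and the sampling rate are generic placeholders, not tied to any specific device):

```python
import math

def twa_concentration(mass, rate, t):
    """Linear-uptake calibration: time-weighted average concentration from
    the mass accumulated by a passive sampler with sampling rate `rate`
    (valid while uptake is still far from equilibrium, M ~ rate * C * t)."""
    return mass / (rate * t)

def kinetic_concentration(mass, K, k, t):
    """Kinetic calibration: invert the first-order uptake model
    M(t) = K * C * (1 - exp(-k t)) for the ambient concentration C,
    covering the whole curve from linear uptake to equilibrium."""
    return mass / (K * (1.0 - math.exp(-k * t)))
```

In the equilibrium limit (k t >> 1) the kinetic expression reduces to C = M / K, the equilibrium-extraction calibration, so the three regimes in the review are all special cases of one model.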
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy: one uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches. Adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
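A simplified sketch of adaptive sampling with triangle patches, in the spirit of (but not identical to) the recursive subdivision described above: a patch is refined when the surface departs from planarity at its edge midpoints, so sample points concentrate near edges and ridges:

```python
def adaptive_triangle_samples(f, tri, tol=0.01, depth=0, max_depth=6):
    """Recursively subdivide a triangle in the (x, y) parameter plane,
    refining wherever the surface height f deviates from the planar
    prediction at the edge midpoints; returns sampled (x, y, z) points.
    A simplified 2.5-D sketch, not the paper's exact algorithm."""
    a, b, c = tri
    edges = ((a, b), (b, c), (c, a))
    mids = [((p[0] + q[0]) / 2, (p[1] + q[1]) / 2) for p, q in edges]
    # deviation of the true surface from linear interpolation along each edge
    err = max(abs(f(*m) - (f(*p) + f(*q)) / 2)
              for m, (p, q) in zip(mids, edges))
    if depth >= max_depth or err < tol:
        return [(p[0], p[1], f(*p)) for p in tri]
    m_ab, m_bc, m_ca = mids
    pts = []
    for sub in ((a, m_ab, m_ca), (m_ab, b, m_bc),
                (m_ca, m_bc, c), (m_ab, m_bc, m_ca)):
        pts += adaptive_triangle_samples(f, sub, tol, depth + 1, max_depth)
    return pts

# surface with a sharp ridge along x = 0.5: samples cluster around the ridge
surf = lambda x, y: abs(x - 0.5)
pts = adaptive_triangle_samples(surf, ((0, 0), (1, 0), (0, 1)))
```

On a planar surface the planarity test passes immediately and only the three corner points are returned, while the ridged surface above triggers subdivision and yields a dense cloud of points near x = 0.5, matching the edge-seeking behaviour the paper reports.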
Methods of analyzing crude oil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooks, Robert Graham; Jjunju, Fred Paul Mark; Li, Anyin
The invention generally relates to methods of analyzing crude oil. In certain embodiments, methods of the invention involve obtaining a crude oil sample, and subjecting the crude oil sample to mass spectrometry analysis. In certain embodiments, the method is performed without any sample pre-purification steps.
Volatile organic compounds: sampling methods and their worldwide profile in ambient air.
Kumar, Anuj; Víden, Ivan
2007-08-01
The atmosphere is a particularly difficult analytical system because of the very low levels of the substances to be analysed, sharp variations in pollutant levels with time and location, and differences in wind, temperature and humidity. This makes the selection of an efficient sampling technique a key step towards reliable air analysis results. Generally, methods for sampling volatile organic compounds involve either collection of whole air or preconcentration of samples on adsorbents. The methods differ from one another in sampling technique, type of sorbent, method of extraction and identification technique. In this review paper we discuss various important aspects of sampling volatile organic compounds by the widely used and advanced sampling methods. Characteristics of the various adsorbents used for VOC sampling are also described. Furthermore, this paper makes an effort to comprehensively review the concentration levels of volatile organic compounds, along with the methodology used for analysis, in major cities of the world.
40 CFR 60.496 - Test methods and procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Surface Coating Industry § 60.496 Test methods and procedures. (a) The reference methods in appendix A to...) Method 4 for stack gas moisture. (b) For Method 24, the coating sample must be a 1-litre sample collected... volume must be 0.003 dscm except that shorter sampling times or smaller volumes, when necessitated by...
[Comparison of the Conventional Centrifuged and Filtrated Preparations in Urine Cytology].
Sekita, Nobuyuki; Shimosakai, Hirofumi; Nishikawa, Rika; Sato, Hiroaki; Kouno, Hiroyoshi; Fujimura, Masaaki; Mikami, Kazuo
2016-03-01
The urine cytology test is one of the most important tools for the diagnosis of malignant urinary tract tumors, and is also of great value for predicting malignancy. However, its sensitivity is not high enough to screen for malignant cells. In our laboratory, we were able to attain a high sensitivity in urine cytology tests after changing the preparation method of urine samples. The differences in cytodiagnosis between the two methods are discussed here. From January 2012 to June 2013, 2,031 urine samples were prepared using the conventional centrifuge method (C method), and from September 2013 to March 2015, 2,453 urine samples were prepared using the filtration method (F method) for the cytology test. When samples classified as category 4 or 5 were defined as cytologically positive, the sensitivity of the test with samples prepared using the F method was significantly higher than with samples prepared using the C method (72% vs 28%, p<0.001). The number of cells on the glass slides prepared by the F method was significantly higher than that of the samples prepared by the C method (p<0.001). After introduction of the F method, the number of false negative cases decreased because a larger number of cells was visible, making atypical or malignant epithelial cells easier to detect. Therefore, this method has a higher sensitivity than the conventional C method, as the sensitivity of urine cytology tests relies partially on the number of cells visualized in the prepared samples.
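A sensitivity comparison like the one above (72% vs 28%, p<0.001) is typically assessed with a two-proportion z-test. The sketch below uses hypothetical round case counts for illustration, not the study's actual numbers of malignant cases:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Normal-approximation z statistic for comparing two proportions
    (e.g. sensitivities), using the pooled proportion for the SE."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical: 72/100 detected with the F method vs 28/100 with the C method
z = two_proportion_z(72, 100, 28, 100)
# |z| > 3.29 corresponds to a two-sided p < 0.001
```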
Flow Cytometric Human Leukocyte Antigen-B27 Typing with Stored Samples for Batch Testing
Seo, Bo Young
2013-01-01
Background Flow cytometry (FC) HLA-B27 typing is still used extensively for the diagnosis of spondyloarthropathies. If patient blood samples are stored for a prolonged duration, this testing can be performed in batches, and in-house cellular controls can easily be procured. In this study, we investigated various methods of storing patient blood samples. Methods We compared four storage methods: three methods of analyzing lymphocytes (whole blood stored at room temperature, frozen mononuclear cells, and frozen white blood cells [WBCs] after lysing red blood cells [RBCs]), and one method using frozen platelets (FPLT). We used three ratios associated with mean fluorescence intensities (MFI) for HLA-B27 assignment: the B27 MFI ratio (sample/control) for HLA-B27 fluorescein-5-isothiocyanate (FITC); the B7 MFI ratio for HLA-B7 phycoerythrin (PE); and the ratio of these two ratios, the B7/B27 ratio. Results Comparing the B27 MFI ratios of each storage method for the HLA-B27+ samples and the B7/B27 ratios for the HLA-B7+ samples revealed that FPLT was the best of the four methods. FPLT had a sensitivity of 100% and a specificity of 99.3% for HLA-B27 assignment in DNA-typed samples (N=164) when the two criteria, namely, B27 MFI ratio >4.0 and B7/B27 ratio <1.5, were used. Conclusions The FPLT method was found to offer a simple, economical, and accurate method of FC HLA-B27 typing using stored patient samples. If stored samples are used, this method has the potential to replace the standard FC typing method when used in combination with a complementary DNA-based method. PMID:23667843
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
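Three of the probability designs named above can be sketched in a few lines; the patient list and seeds here are illustrative, not from the article:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Simple random sampling: each element has an equal,
    independent chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Systematic sampling: every k-th element, k = N // n."""
    k = len(population) // n
    return population[::k][:n]

def stratified_sample(strata, per_stratum, seed=0):
    """Stratified sampling: a fixed number of elements drawn at
    random from each stratum (e.g. age group or clinic)."""
    rng = random.Random(seed)
    return [x for s in strata for x in rng.sample(s, per_stratum)]

patients = list(range(100))  # toy sampling frame of patient IDs
chosen = simple_random_sample(patients, 10)
```

Cluster and multi-stage sampling compose these primitives: first sample clusters (e.g. hospitals), then sample elements within each chosen cluster.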
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, a high recognition rate can be obtained if a face has enough available training samples. However, in practical applications, only limited training samples are available. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One common approach directly uses the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging the original training samples and their corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by the various poses, facial expressions and illuminations of the original face images.
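A minimal sketch of the virtual-sample construction described above, assuming face images are 2-D pixel arrays (the toy image is illustrative): the virtual sample is the pixel-wise average of an original training image and its horizontal mirror, and is therefore exactly left-right symmetric, which is what helps when the test face is nearly symmetrical.

```python
import numpy as np

def mirror_sample(img):
    """Horizontally flipped (left-right mirrored) face image."""
    return img[:, ::-1]

def averaged_virtual_sample(img):
    """Virtual sample: pixel-wise mean of the original and its mirror."""
    return (img + mirror_sample(img)) / 2.0

face = np.arange(12, dtype=float).reshape(3, 4)  # toy 3x4 "image"
virtual = averaged_virtual_sample(face)
# the virtual sample is exactly symmetric about the vertical axis
assert np.allclose(virtual, virtual[:, ::-1])
```

The augmented training set is then the concatenation of the original images and their virtual counterparts.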
19 CFR 151.83 - Method of sampling.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling. 151.83 Section 151.83 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Cotton § 151.83 Method of sampling. For...
Comparing three sampling techniques for estimating fine woody down dead biomass
Robert E. Keane; Kathy Gray
2013-01-01
Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, with stratification and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take the sampling weights into account. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
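The role of the sampling weights can be illustrated with the basic survey-weighted (Hájek) prevalence estimator that design-aware methods build on; the outcome and weight values below are invented for illustration:

```python
import numpy as np

def weighted_prevalence(y, w):
    """Hajek estimator: weighted mean of a binary outcome, where
    w are the survey weights (inverse inclusion probabilities)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return float(np.sum(w * y) / np.sum(w))

# toy data: an over-sampled group (small weights) has higher prevalence,
# so the weighted estimate differs sharply from the naive mean
y = [1, 1, 0, 0, 0, 1]
w = [1, 1, 4, 4, 4, 1]
print(weighted_prevalence(y, w))  # 0.2, versus an unweighted mean of 0.5
```

Ignoring the weights here would overstate the prevalence, which is exactly the bias the paper's hierarchical Bayesian approach is designed to avoid.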
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
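A minimal sketch of the ratio-reweighting idea above, assuming the auxiliary variable is the gene-flow model output at each sampled location and that its field-wide mean is known (all numbers are invented):

```python
import numpy as np

def ratio_estimate(y, x_sample, x_field_mean):
    """Ratio reweighting: scale the sample mean of the transgene rate y
    by how the auxiliary variable x (modelled cross-pollination) at the
    sampled locations compares with its known field-wide mean."""
    y, x_sample = np.asarray(y, float), np.asarray(x_sample, float)
    return float(np.mean(y) * x_field_mean / np.mean(x_sample))

# toy data: grains happened to be sampled where modelled pollination is
# twice the field average, so the plain mean (0.02) is corrected to 0.01
est = ratio_estimate([0.02, 0.01, 0.03], [2.0, 1.0, 3.0], x_field_mean=1.0)
```

Regression reweighting replaces the simple ratio with a fitted linear relationship between y and x; both exploit the correlation between the model output and the observed rates to shrink the required sample size.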
NASA Astrophysics Data System (ADS)
Bonhivers, Jean-Christophe
The increase in the production of goods over the last decades has led to the need to improve the management of natural resources and the efficiency of processes. As a consequence, heat integration methods for industry have been developed. These have been successful for the design of new plants: the integration principles are widely employed, and energy intensity has decreased dramatically in many processes. Although progress has also been achieved in integration methods for retrofit, these methods still need further conceptual development. Furthermore, methodological difficulties increase when trying to retrofit heat exchange networks that are closely interrelated with water networks, as is the case in pulp and paper mills. The pulp and paper industry seeks to increase its profitability by reducing production costs and optimizing supply chains. Recent process developments in forestry biorefining give this industry the opportunity to diversify into bio-products, increasing potential profit margins while modernizing its energy systems. Identifying energy strategies for a mill in a changing environment, including the possibility of adding a biorefinery process on the industrial site, requires better integration methods for retrofit situations. The objective of this thesis is to develop an energy integration method for the retrofit of industrial systems and the transformation of pulp and paper mills, and to demonstrate the method in case studies. Energy is conserved and degraded in a process. Heat can be converted into electricity, stored as chemical energy, or rejected to the environment. A systematic analysis of the successive degradations of energy from the hot utilities down to the environment, through process operations and existing heat exchangers, is essential in order to reduce heat consumption. In this thesis, the "Bridge Method" for energy integration by heat exchanger network retrofit has been developed.
This method is the first to consider the analysis of these degradations. The fundamental mechanism for reducing heat consumption in an existing network has been made explicit; it is the basis of the developed method. The Bridge Method includes the definition of a "bridge", a set of modifications leading to heat reduction in a heat exchanger network. It is proven that, for a given set of streams, only bridges can lead to heat savings. The Bridge Method also includes (1) a global procedure for heat exchanger network retrofit, (2) a procedure to systematically enumerate bridges, (3) a "network table" to evaluate them easily, and (4) an "energy transfer diagram" showing the effect of the first two principles of thermodynamics (energy conservation and degradation) in industrial processes in order to identify energy-saving opportunities. The Bridge Method can be used for the analysis of networks involving several types of heat transfer, and for site-wide analysis. It has been applied in case studies to retrofit networks composed of indirect-contact heat exchangers, including the network of a kraft pulp mill, as well as networks of direct-contact heat exchangers, including the hot water production system of a pulp mill. The method has finally been applied to the evaluation of a biorefinery process, alone or hosted in a kraft pulp mill. Results show that the use of the method significantly reduces the search space and leads to the identification of the relevant solutions. The necessity of a bridge to reduce the inputs and outputs of a process is a consequence of the first two principles of thermodynamics: energy conservation and the increase in entropy. The concept of a bridge can also be used on its own as a tool for process analysis, and in numerical optimization-based approaches for energy integration.
Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M
2012-01-01
The efficiency of two commercial real-time PCR methods, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the feed, dust and boot swab matrices was comparable between the two PCR systems, whereas the results for feces differed markedly. The quality, especially the freshness, of the fecal samples influenced the sensitivity of the real-time PCR and the results of the culture methods. In fresh fecal samples, an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive compared to the BAX® system, but had a potentially false positive result in one case. In samples dried for 7 days, none of the methods was able to detect Salmonella, likely because of lethal cell damage. In general, the advantage of PCR analysis over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.
Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S
2015-12-01
Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, owing to the increased use of wastewater for crop production. The effective recovery of hookworm ova from wastewater matrices is difficult because of their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples using centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) methods were used. Among the four methods tested, the filtration method (Method C) consistently recovered higher concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. The recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a 1-2 orders of magnitude higher recovery rate than Method E (flotation). Based on our results, it can be concluded that the recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for the sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc.
All rights reserved.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe, and ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is one of the most popular methods for projecting the impact of climate change on ecosystems. An SDM is fundamentally based on the niche of a given species, which means that presence point data are essential for characterising that niche. When running an SDM for plants, certain characteristics of vegetation must be considered. Normally, remote sensing techniques are used to produce vegetation data over large areas. As a consequence, the exact locations of presence data carry high uncertainty, since presence points are selected from polygon and raster datasets. Thus, sampling methods for deriving vegetation presence data should be carefully selected. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty arising from modelling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods produced different ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. This study shows that the uncertainties arising from presence data sampling methods and SDMs can be quantified.
Rothrock, Michael J.; Hiett, Kelli L.; Gamble, John; Caudill, Andrew C.; Cicconi-Hogan, Kellie M.; Caporaso, J. Gregory
2014-01-01
The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences, and that many environmental samples within these disciplines can be physicochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. A two-step intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the "gold standard" enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strictly mechanical and strictly enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples. PMID:25548939
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they take the form of responder indices containing a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus using only the dichotomisations of the continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated that the method may have poorer statistical properties when the sample size is small. Here we investigate its small-sample properties and implement small-sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small-sample variance correction for the generalized estimating equations, applying the adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55%, compared with 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small-sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%, equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small-sample corrections therefore provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.
2013-01-01
Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms. PMID:23653045
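A spectrophotometric assay of this kind is typically quantified by fitting a linear calibration curve to standards and inverting it for unknowns. The sketch below illustrates that workflow; the absorbance values are invented, not the cobinamide assay's actual readings:

```python
import numpy as np

# Hypothetical calibration: absorbance change of cyanide-bound cobinamide
# versus known cyanide standards (concentrations in uM)
standards_uM = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
absorbance = np.array([0.02, 0.12, 0.22, 0.42, 0.82])

# least-squares line through the standards: A = slope * C + intercept
slope, intercept = np.polyfit(standards_uM, absorbance, 1)

def cyanide_uM(a):
    """Invert the calibration line to read concentration from absorbance."""
    return (a - intercept) / slope

print(round(cyanide_uM(0.32), 1))  # 15.0 uM for this toy calibration
```

In practice the lower limit of quantification (3.27 µM here) is set by where the calibration response becomes indistinguishable from blank variability.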
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
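A simplified sketch of the meta-sample coding idea (using SVD-derived meta-samples and plain least squares in place of the paper's regularized robust coding; the matrices and labels are illustrative): meta-samples are extracted from each class's training matrix, the test profile is encoded on them, and the class with the smallest coding residual wins.

```python
import numpy as np

def meta_samples(X, k):
    """Extract k meta-samples (leading left singular vectors) from a
    genes x samples training matrix of one class."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def classify(x, class_mats, k=2):
    """Encode the test profile x on each class's meta-samples and pick
    the class with the smallest l2 coding residual. (RRC would instead
    iteratively down-weight outlier genes in this residual.)"""
    best, best_res = None, np.inf
    for label, X in class_mats.items():
        M = meta_samples(X, k)
        coef, *_ = np.linalg.lstsq(M, x, rcond=None)
        res = np.linalg.norm(x - M @ coef)
        if res < best_res:
            best, best_res = label, res
    return best
```

With toy class matrices whose expression lives in different gene subspaces, a test profile is assigned to the class whose meta-samples reconstruct it best.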
Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci
2018-02-10
Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion were measured, with the single-sample and multi-sample methods applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under different pre-strain conditions, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concavity observed in measured yield surfaces, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are as follows: (1) for both the single-sample and multi-sample methods, the measured subsequent yield surfaces differ remarkably from the cylindrical yield surfaces proposed by classical plasticity theory; (2) there are apparent differences between the results of the two methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative effect of residual plastic strain from other test points, whereas these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave; for the single-sample method it can be made convex by changing the test sequence, while for the multi-sample method the concavity disappears when a larger offset strain is specified.
Arnold, Mark E; Mueller-Doblies, Doris; Gosling, Rebecca J; Martelli, Francesca; Davies, Robert H
2015-01-01
Reports of Salmonella in ducks in the UK currently rely upon voluntary submissions from the industry, and as there is no harmonized statutory monitoring and control programme, it is difficult to compare data from different years in order to evaluate trends in Salmonella prevalence in relation to sampling methodology. The aim of this project was therefore to assess the sensitivity of a selection of environmental sampling methods, including the sampling of faeces, dust and water troughs or bowls, for the detection of Salmonella in duck flocks; a range of sampling methods was applied to 67 duck flocks. Bayesian methods in the absence of a gold standard were used to estimate the sensitivity of each sampling method relative to the within-flock prevalence. The within-flock prevalence had a large influence on the sensitivity of all sample types, with sensitivity decreasing as the within-flock prevalence decreased. Boot swabs (individual and pools of four), swabs of faecally contaminated areas and whole-house hand-held fabric swabs showed the highest overall sensitivity for low-prevalence flocks and are recommended for detecting Salmonella in duck flocks. The sample type with the highest proportion of positives was a pool of four hair nets used as boot swabs, but this was not the most sensitive sample for low-prevalence flocks. All the environmental sample types (faeces swabs, litter pinches, drag swabs, water trough samples and dust) had higher sensitivity than individual faeces sampling. None of the methods consistently identified all the positive flocks, and at least 10 samples would be required for even the most sensitive method (a pool of four boot swabs) to detect a 5% prevalence. The sampling of dust had low sensitivity and is not recommended for ducks.
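The 10-sample figure is the kind of result a simple independence calculation produces: if each sample has per-sample sensitivity s, the chance that at least one of m samples detects the flock is 1 - (1 - s)^m. The sensitivity value below is hypothetical, not a figure from the study.

```python
from math import ceil, log

def detect_prob(sens, m):
    """Probability that at least one of m independent samples is
    positive, given per-sample sensitivity sens."""
    return 1.0 - (1.0 - sens) ** m

def samples_needed(sens, target=0.95):
    """Smallest m with detect_prob(sens, m) >= target."""
    return ceil(log(1.0 - target) / log(1.0 - sens))

# A hypothetical per-sample sensitivity of 0.26 at 5% within-flock
# prevalence would require 10 samples for 95% flock-level detection
print(samples_needed(0.26))
```

The independence assumption is optimistic for environmental samples taken from the same house, so real requirements may be higher.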
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Ha, Ji Won; Hahn, Jong Hoon
2017-02-01
Acupuncture sample injection is a simple method to deliver well-defined nanoliter-scale sample plugs into PDMS microfluidic channels. This acupuncture injection method in microchip CE has several advantages, including minimal sample consumption, the capability of serial injections of different sample solutions into the same microchannel, and the capability of injecting sample plugs at any desired position along a microchannel. Herein, we demonstrate that the simple and cost-effective acupuncture sample injection method can be used for PDMS microchip-based field-amplified sample stacking in a maximally simplified straight channel by applying a single potential, and we achieved increased electropherogram signals with sample stacking. Furthermore, we show that microchip CGE of a ΦX174 DNA-HaeIII digest can be performed with the acupuncture injection method on a glass microchip while minimizing sample loss and voltage-control hardware. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Method for chromium analysis and speciation
Aiken, Abigail M.; Peyton, Brent M.; Apel, William A.; Petersen, James N.
2004-11-02
A method of detecting a metal in a sample comprising a plurality of metals is disclosed. The method comprises providing the sample comprising a metal to be detected. The sample is added to a reagent solution comprising an enzyme and a substrate, where the enzyme is inhibited by the metal to be detected. An array of chelating agents is used to eliminate the inhibitory effects of additional metals in the sample. The enzymatic activity in the sample is determined and compared to the enzymatic activity in a control solution to detect the metal. A method of determining the concentration of the metal in the sample is also disclosed, as is a method of detecting the valence state of a metal.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples, and many statistical methods are developed largely in this IID world. Applying these methods to data from complex sample surveys without allowing for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), both of which use stratified, clustered, unequal-probability-of-selection sample designs.
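A heavily simplified sketch of the weighted Bayesian-bootstrap step conveys the idea: express posterior uncertainty about the population composition with a Dirichlet draw, tilt by the design weights, and resample a population that can then be treated as IID. This is illustrative only; the authors' algorithm handles stratification, clustering and unequal selection probabilities in full, which this sketch does not.

```python
import numpy as np

def synthetic_population(y, w, N, rng):
    """Generate one synthetic population of size N from sample values y
    with survey weights w, via a weighted Bayesian-bootstrap step.

    Simplified hypothetical sketch (not the authors' full algorithm):
    a Dirichlet draw expresses posterior uncertainty about population
    shares, the design weights tilt those shares, and N units are then
    resampled i.i.d. so the result can be analyzed as a simple random
    sample from the synthetic population.
    """
    g = rng.dirichlet(np.ones_like(y, dtype=float))  # Bayesian-bootstrap draw
    p = g * w
    p /= p.sum()                                     # weight-tilted shares
    return rng.choice(y, size=N, replace=True, p=p)

rng = np.random.default_rng(1)
y = np.array([10.0, 20.0, 30.0])   # observed values
w = np.array([1.0, 1.0, 8.0])      # unit 3 represents 8 population units
pop = synthetic_population(y, w, N=10_000, rng=rng)
print((pop == 30.0).mean())        # share of the heavily weighted value
```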
Myatt, Mark; Mai, Nguyen Phuong; Quynh, Nguyen Quang; Nga, Nguyen Huy; Tai, Ha Huy; Long, Nguyen Hung; Minh, Tran Hung; Limburg, Hans
2005-01-01
OBJECTIVE: To report on the use of lot quality-assurance sampling (LQAS) surveys undertaken within an area-sampling framework to identify priority areas for intervention with trachoma control activities in Viet Nam. METHODS: The LQAS survey method for the rapid assessment of the prevalence of active trachoma was adapted for use in Viet Nam with the aim of classifying individual communes by the prevalence of active trachoma among children in primary school. School-based sampling was used; school sites to be sampled were selected using an area-sampling approach. A total of 719 communes in 41 districts in 18 provinces were surveyed. FINDINGS: Survey staff found the LQAS survey method both simple and rapid to use after initial problems with area-sampling methods were identified and remedied. The method yielded a finer spatial resolution of prevalence than had been previously achieved in Viet Nam using semiquantitative rapid assessment surveys and multistage cluster-sampled surveys. CONCLUSION: When used with area-sampling techniques, the LQAS survey method has the potential to form the basis of survey instruments that can be used to efficiently target resources for interventions against active trachoma. With additional work, such methods could provide a generally applicable tool for effective programme planning and for the certification of the elimination of trachoma as a blinding disease. PMID:16283052
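The underlying LQAS classification rule can be illustrated with a binomial calculation. The sample size, decision threshold and prevalence cut-offs below are hypothetical, not those used in the Viet Nam surveys.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_errors(n, d, p_low, p_high):
    """Error rates of the rule 'classify a commune as high prevalence
    when the number of positives among n sampled children is >= d'.

    alpha: P(classify high | true prevalence p_low)
    beta:  P(classify low  | true prevalence p_high)
    """
    alpha = 1.0 - binom_cdf(d - 1, n, p_low)
    beta = binom_cdf(d - 1, n, p_high)
    return alpha, beta

# Hypothetical design: 50 children, threshold 10, 10% vs 30% prevalence
alpha, beta = lqas_errors(n=50, d=10, p_low=0.10, p_high=0.30)
print(f"alpha={alpha:.3f}, beta={beta:.3f}")
```

In practice n and d are chosen so that both error rates fall below agreed limits, which is what makes the survey both small and decisive per commune.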
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
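The bias argument can be checked with a small Monte Carlo sketch of a single-clinician clinic (a hypothetical toy model, not the paper's simulation setup): picking the next patient to exit yields a length-biased sample, while picking the next patient to enter does not.

```python
import random

def simulate(pick, n_patients=50_000, interview_time=12.0, seed=7):
    """Toy single-clinician clinic: consultations run back to back and
    last 5 or 15 minutes with equal probability. When the interviewer is
    free, 'exit' interviews the patient currently in consultation (the
    next one to exit); 'entry' interviews the next patient to enter.
    Returns the mean consultation length of interviewed patients.
    """
    rng = random.Random(seed)
    durations = [rng.choice([5.0, 15.0]) for _ in range(n_patients)]
    sampled, free_at, t = [], 0.0, 0.0
    for d in durations:
        start, end = t, t + d
        t = end
        if pick == "exit" and start <= free_at < end:
            sampled.append(d)   # patient in progress when interviewer frees up
            free_at = end + interview_time
        elif pick == "entry" and start >= free_at:
            sampled.append(d)   # first to enter after interviewer frees up
            free_at = end + interview_time
    return sum(sampled) / len(sampled)

print("exit :", simulate("exit"))    # length-biased: mean well above 10 min
print("entry:", simulate("entry"))   # close to the true mean of 10 min
```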
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant-washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant-washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
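The link between density and minimum sample size follows from Taylor's power law: with variance s^2 = a * m^b and desired precision D (standard error divided by the mean), the minimum sample size is n = a * m^(b - 2) / D^2. The coefficients below are hypothetical illustration values chosen only to reproduce the reported trend, not the study's fitted coefficients.

```python
from math import ceil

def min_sample_size(m, a, b, D=0.25):
    """n = a * m**(b - 2) / D**2 plants, from Taylor's power law
    s**2 = a * m**b, for mean density m and precision D (SE/mean)."""
    return ceil(a * m ** (b - 2) / D ** 2)

# Hypothetical Taylor coefficients (b > 1 indicates aggregation),
# chosen only to mirror the reported fleahopper trend
a, b = 1.0, 1.39
for m in (1, 10):
    print(f"density {m}/plant -> minimum sample size {min_sample_size(m, a, b)}")
```

Because b < 2 for these pests, the exponent b - 2 is negative and the required sample size falls as density rises, exactly the pattern reported.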
Performance of Traditional and Molecular Methods for Detecting Biological Agents in Drinking Water
Francy, Donna S.; Bushon, Rebecca N.; Brady, Amie M.G.; Bertke, Erin E.; Kephart, Christopher M.; Likirdopulos, Christina A.; Mailot, Brian E.; Schaefer, Frank W.; Lindquist, H.D. Alan
2009-01-01
To reduce the impact from a possible bioterrorist attack on drinking-water supplies, analytical methods are needed to rapidly detect the presence of biological agents in water. To this end, 13 drinking-water samples were collected at 9 water-treatment plants in Ohio to assess the performance of a molecular method in comparison to traditional analytical methods that take longer to perform. Two 100-liter samples were collected at each site during each sampling event; one was seeded in the laboratory with six biological agents: Bacillus anthracis (B. anthracis), Burkholderia cepacia (as a surrogate for Bu. pseudomallei), Francisella tularensis (F. tularensis), Salmonella Typhi (S. Typhi), Vibrio cholerae (V. cholerae), and Cryptosporidium parvum (C. parvum). The seeded and unseeded samples were processed by ultrafiltration and analyzed using quantitative polymerase chain reaction (qPCR), a molecular method, and culture methods for bacterial agents or the immunomagnetic separation/fluorescent antibody (IMS/FA) method for C. parvum as traditional methods. Six replicate seeded samples were also processed and analyzed. For traditional methods, recoveries were highly variable between samples and even between some replicate samples, ranging from below detection to greater than 100 percent. Recoveries were significantly related to water pH, specific conductance, and dissolved organic carbon (DOC) for all bacteria combined by culture methods, but none of the water-quality characteristics tested were related to recoveries of C. parvum by IMS/FA. Recoveries were not determined by qPCR because of problems in quantifying organisms by qPCR in the composite seed. Instead, qPCR results were reported as detected, not detected (no qPCR signal), or +/- detected (Cycle Threshold or 'Ct' values were greater than 40). Several sample results by qPCR were omitted from the dataset because of possible problems with qPCR reagents, primers, and probes.
For the remaining 14 qPCR results (including some replicate samples), F. tularensis and V. cholerae were detected in all samples after ultrafiltration, B. anthracis was detected in 13 and +/- detected in 1 sample, and C. parvum was detected in 9 and +/- detected in 4 samples. Bu. cepacia was detected in nine samples, +/- detected in two samples, and not detected in three samples (for two out of three samples not detected, a different strain was used). The qPCR assay for V. cholerae provided two false positive - but late - signals in one unseeded sample. Numbers found by qPCR after ultrafiltration were significantly or nearly significantly related to those found by traditional methods for B. anthracis, F. tularensis, and V. cholerae but not for Bu. cepacia and C. parvum. A qPCR assay for S. Typhi was not available. The qPCR method can be used to rapidly detect B. anthracis, F. tularensis, and V. cholerae with some certainty in drinking-water samples, but additional work would be needed to optimize and test qPCR for Bu. cepacia and C. parvum and establish relations to traditional methods. The specificity for the V. cholerae assay needs to be further investigated. Evidence is provided that ultrafiltration and qPCR are promising methods to rapidly detect biological agents in the Nation's drinking-water supplies and thus reduce the impact and consequences from intentional bioterrorist events. To our knowledge, this is the first study to compare the use of traditional and qPCR methods to detect biological agents in large-volume drinking-water samples.
Brady, Amie M.G.; Bushon, Rebecca N.; Bertke, Erin E.
2009-01-01
Water quality at beaches is monitored for fecal indicator bacteria by traditional, culture-based methods that can take 18 to 24 hours to obtain results. A rapid detection method that provides estimated concentrations of fecal indicator bacteria within 1 hour from the start of sample processing would allow beach managers to post advisories or close the beach when the conditions are actually considered unsafe instead of a day later, when conditions may have changed. A rapid method that couples immunomagnetic separation with adenosine triphosphate detection (IMS/ATP rapid method) was evaluated through monitoring of Escherichia coli (E. coli) at three Lake Erie beaches in Ohio (Edgewater and Villa Angela in Cleveland and Huntington in Bay Village). Beach water samples were collected 4 to 5 days per week during the recreational seasons (May through September) of 2006 and 2007. Composite samples were created in the lab from two point samples collected at each beach and were shown to be comparable substitutes for analysis of two individual samples. E. coli concentrations in composite samples, as determined by the culture-based method, ranged from 4 to 24,000 colony-forming units per 100 milliliters during this study across all beaches. Turbidity was also measured for each sample and ranged from 0.8 to 260 nephelometric turbidity ratio units. Environmental variables were noted at the time of sampling, including the number of birds at the beach and wave height. Rainfall amounts were measured at National Weather Service stations at local airports. Turbidity, rainfall, and wave height were significantly related to the culture-based method results each year and for both years combined at each beach. The number of birds at the beach was significantly related to the culture-based method results only at Edgewater during 2006 and during both years combined. Results of the IMS/ATP method were compared to results of the culture-based method for samples by year for each beach.
The IMS/ATP method underwent several changes and refinements during the first year, including changes in reagents and antibodies and alterations to the method protocol. Because of the changes in the method, results from the two years of study could not be combined. Kendall's tau correlation coefficients for relations between the IMS/ATP and culture-based methods were significant except for samples collected during 2006 at Edgewater and for samples collected during 2007 at Villa Angela. Further, relations were stronger for samples collected in 2006 than for those collected in 2007, except at Edgewater where the reverse was observed. The 2007 dataset was examined to identify possible reasons for the observed difference in significance of relations by year. By dividing the 2007 data set into groups as a function of sampling date, relations (Kendall's tau) between methods were observed to be stronger for samples collected earlier in the season than for those collected later in the season. At Edgewater and Villa Angela, there were more birds at the beach at time of sampling later in the season compared to earlier in the season. (The number of birds was not examined at Huntington.) Also, more wet days (when rainfall during the 24 hours prior to sampling was greater than 0.05 inch) were sampled later in the season compared to earlier in the season. Differences in the dominant fecal source may explain the change in the relations between the culture-based and IMS/ATP methods.
Method and apparatus for data sampling
Odell, Daniel M. C.
1994-01-01
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples. The method uses high speed sampling of the detector output, the conversion of the samples to digital values, and the discrimination of the digital values so that digital values representing detected events are determined. The high speed sampling and digital conversion is performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provides for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
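The discrimination step can be sketched as a simple threshold-and-run rule over the digitized samples. This is an illustrative rule only; the patent does not spell out this exact logic.

```python
def detect_events(samples, threshold, min_len=2):
    """Discriminate digitized detector samples: keep only runs of
    consecutive samples above threshold (candidate events) and discard
    the rest. Returns (start_index, samples) pairs for each event.
    (Hypothetical discrimination rule for illustration.)
    """
    events, run, start = [], [], None
    for i, v in enumerate(samples):
        if v > threshold:
            if start is None:
                start = i
            run.append(v)
        else:
            if start is not None and len(run) >= min_len:
                events.append((start, run))
            run, start = [], None
    if start is not None and len(run) >= min_len:
        events.append((start, run))
    return events

# Baseline noise around 0 with one pulse: samples 4..7 form the event
trace = [1, 0, 2, 1, 9, 14, 12, 6, 1, 0]
print(detect_events(trace, threshold=5))
```

Because the A/D sampler produces numerous samples per event, a run-length requirement like `min_len` suppresses isolated noise spikes while keeping genuine pulses.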
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
40 CFR 80.8 - Sampling methods for gasoline and diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
The sampling methods specified in this section shall be used to collect samples of gasoline and diesel fuel for purposes of determining compliance with the requirements of this part...
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review
Miao, Yinglong; McCammon, J. Andrew
2016-01-01
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
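As a minimal illustration of the replica-exchange idea reviewed above, the standard parallel-tempering swap criterion between two replicas can be written directly:

```python
import math

def swap_accept(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging configurations
    between two replicas at inverse temperatures beta_i, beta_j with
    potential energies E_i, E_j:
        p = min(1, exp((beta_i - beta_j) * (E_i - E_j)))
    Shown only as a sketch of the replica-exchange swap step, not as a
    full simulation.
    """
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# A favorable swap (the cold replica receives the lower-energy state)
# is accepted with certainty; the reverse is accepted only sometimes
print(swap_accept(beta_i=1.0, beta_j=0.2, E_i=5.0, E_j=-1.0))
print(swap_accept(beta_i=1.0, beta_j=0.2, E_i=-1.0, E_j=5.0))
```

Because swaps satisfy detailed balance, each replica still samples its own canonical ensemble while configurations migrate across temperatures, which is what lets the method cross barriers without predefined collective variables.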
Crepeau, Kathryn L.; Fram, Miranda S.; Bush, Noel
2004-01-01
An analytical method for the determination of the trihalomethane formation potential of water samples has been developed. The trihalomethane formation potential is measured by dosing samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine, and then analyzing the resulting trihalomethanes by purge and trap/gas chromatography equipped with an electron capture detector. Detailed explanations of the method and quality-control practices are provided. Method validation experiments showed that the trihalomethane formation potential varies as a function of time between sample collection and analysis, residual-free chlorine concentration, method of sample dilution, and the concentration of bromide in the sample.
Mechanisms of fracture of ring samples made of FCC metals on loading with magnetic-pulse method
NASA Astrophysics Data System (ADS)
Morozov, Viktor; Kats, Victor; Savenkov, Georgiy; Lukin, Anton
2018-05-01
Results of a study of the deformation and fracture of ring-shaped samples made of thin strips of copper, aluminum and steel over a wide range of loading velocities are presented. Three schemes of the magnetic-pulse method, developed by the authors, are used to load the samples. A method for fracturing samples with high electrical resistance (e.g. steel) is proposed. The crack velocity at sample fracture is estimated, and the fracture surfaces are inspected. Mechanisms of dynamic fracture of the samples are discussed.
Evaluating performance of stormwater sampling approaches using a dynamic watershed model.
Ackerman, Drew; Stein, Eric D; Ritter, Kerry J
2011-09-01
Accurate quantification of stormwater pollutant levels is essential for estimating overall contaminant discharge to receiving waters. Numerous sampling approaches exist that attempt to balance accuracy against the costs associated with the sampling method. This study employs a novel and practical approach to evaluating the accuracy of different stormwater monitoring methodologies, using stormflows and constituent concentrations produced by a fully validated continuous-simulation watershed model. A major advantage of using a watershed model to simulate pollutant concentrations is that a large number of storms representing a broad range of conditions can be applied in testing the various sampling approaches. Seventy-eight distinct methodologies were evaluated by "virtual samplings" of 166 simulated storms of varying size, intensity and duration, representing 14 years of storms in Ballona Creek near Los Angeles, California. The 78 methods can be grouped into four general strategies: volume-paced compositing, time-paced compositing, pollutograph sampling, and microsampling. The performance of each sampling strategy was evaluated by comparing (1) the median relative error between the virtually sampled and the true modeled event mean concentration (EMC) of each storm (accuracy), (2) the median absolute deviation about the median (MAD) of the relative error (precision), and (3) the percentage of storms for which sampling methods were within 10% of the true EMC (a combined measure of accuracy and precision). Finally, costs associated with site setup, sampling, and laboratory analysis were estimated for each method. Pollutograph sampling consistently outperformed the other three methods in terms of both accuracy and precision, but was the most costly method evaluated. Time-paced sampling consistently underestimated, while volume-paced sampling overestimated, the storm EMCs. Microsampling performance approached that of pollutograph sampling at a substantial cost savings.
The most efficient method for routine stormwater monitoring in terms of a balance between performance and cost was volume-paced microsampling, with variable sample pacing to ensure that the entirety of the storm was captured. Pollutograph sampling is recommended if the data are to be used for detailed analysis of runoff dynamics.
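The quantity being estimated, the event mean concentration (EMC), is the flow-weighted average concentration over the storm. A toy pollutograph (hypothetical numbers) shows why time-paced compositing, which ignores flow weighting, underestimates the EMC when concentration peaks with flow:

```python
def emc(flows, concs):
    """Event mean concentration: flow-weighted average of pollutant
    concentrations over a storm, EMC = sum(Q_i * C_i) / sum(Q_i)."""
    return sum(q * c for q, c in zip(flows, concs)) / sum(flows)

def time_paced_emc(concs):
    """Time-paced composite: equal aliquots at fixed time steps, so the
    composite concentration is the unweighted mean."""
    return sum(concs) / len(concs)

# Hypothetical storm in which concentration peaks with the flow
flows = [1.0, 4.0, 9.0, 4.0, 1.0]       # discharge per time step
concs = [10.0, 40.0, 80.0, 30.0, 10.0]  # concentration per time step
print(emc(flows, concs), time_paced_emc(concs))
```

Here the true EMC is about 53.7 while the time-paced composite gives 34.0, mirroring the direction of bias reported for time-paced sampling.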
DuPont Qualicon BAX System polymerase chain reaction assay. Performance Tested Method 100201.
Tice, George; Andaloro, Bridget; Fallon, Dawn; Wallace, F Morgan
2009-01-01
A recent outbreak of Salmonella in peanut butter has highlighted the need for validation of rapid detection methods. A multilaboratory study for detecting Salmonella in peanut butter was conducted as part of the AOAC Research Institute Emergency Response Validation program for methods that detect outbreak threats to food safety. Three sites tested spiked samples from the same master mix according to the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA-BAM) method and the BAX System method. Salmonella Typhimurium (ATCC 14028) was grown in brain heart infusion for 24 h at 37 degrees C, then diluted to appropriate levels for sample inoculation. Master samples of peanut butter were spiked at high and low target levels, mixed, and allowed to equilibrate at room temperature for 2 weeks. Spike levels were low [1.08 most probable number (MPN)/25 g] and high (11.5 MPN/25 g), with unspiked samples serving as negative controls. Each master sample was divided into 25 g portions and coded to blind the samples. Twenty portions of each spiked master sample and five portions of the unspiked sample were tested at each site. At each testing site, samples were blended in 25 g portions with 225 mL prewarmed lactose broth until thoroughly homogenized, then allowed to stand at room temperature for 55-65 min. Samples were adjusted to a pH of 6.8 +/- 0.2, if necessary, and incubated for 22-26 h at 35 degrees C. Across the three reporting laboratories, the BAX System detected Salmonella in 10/60 low-spike samples and 58/60 high-spike samples. The reference FDA-BAM method yielded positive results for 11/60 low-spike and 58/60 high-spike samples. Neither method yielded positive results for any of the 15 unspiked samples.
Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey
2018-04-01
Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
Duyvejonck, Hans; Cools, Piet; Decruyenaere, Johan; Roelens, Kristien; Noens, Lucien; Vermeulen, Stefan; Claeys, Geert; Decat, Ellen; Van Mechelen, Els; Vaneechoutte, Mario
2015-01-01
Candida species are known as opportunistic pathogens and a possible cause of invasive infections. Because of their species-specific antimycotic resistance patterns, reliable techniques for their detection, quantification and identification are needed. We validated a DNA amplification method for direct detection of Candida spp. from clinical samples, namely the ITS2-High Resolution Melting Analysis (direct method), by comparing it with a culture and MALDI-TOF Mass Spectrometry based method (indirect method) to establish the presence of Candida species in three different types of clinical samples. A total of 347 clinical samples, i.e. throat swabs, rectal swabs and vaginal swabs, were collected from the gynaecology/obstetrics, intensive care and haematology wards at the Ghent University Hospital, Belgium. For the direct method, ITS2-HRM was preceded by NucliSENS easyMAG DNA extraction, directly on the clinical samples. For the indirect method, clinical samples were cultured on Candida ID and individual colonies were identified by MALDI-TOF. For 83.9% of the samples there was complete concordance between both techniques, i.e. the same Candida species were detected in 31.1% of the samples or no Candida species were detected in 52.8% of the samples. In 16.1% of the clinical samples, discrepant results were obtained, of which only 6.01% were considered major discrepancies. Discrepancies occurred mostly when overall numbers of Candida cells in the samples were low and/or when multiple species were present in the sample. Most of the discrepancies could be resolved in favor of the direct method. This is due to samples in which no yeast could be cultured whereas low amounts could be detected by the direct method, and to samples in which high quantities of Candida robusta according to ITS2-HRM were missed by culture on Candida ID agar. It remains to be decided whether the diagnostic advantages of the direct method compensate for its disadvantages.
Methyl-CpG island-associated genome signature tags
Dunn, John J
2014-05-20
Disclosed is a method for analyzing the organismic complexity of a sample through analysis of the nucleic acid in the sample. In the disclosed method, through a series of steps, including digestion with a type II restriction enzyme, ligation of capture adapters and linkers and digestion with a type IIS restriction enzyme, genome signature tags are produced. The sequences of a statistically significant number of the signature tags are determined and the sequences are used to identify and quantify the organisms in the sample. Various embodiments of the invention described herein include methods for using single point genome signature tags to analyze the related families present in a sample, methods for analyzing sequences associated with hyper- and hypo-methylated CpG islands, methods for visualizing organismic complexity change in a sampling location over time and methods for generating the genome signature tag profile of a sample of fragmented DNA.
Methods for Determining Particle Size Distributions from Nuclear Detonations.
1987-03-01
[Only table-of-contents fragments survive from this report's extraction: summary of sample preparation methods, set parameters for PCS, analyses by vendors, Brookhaven analyses of samples R-3 and R-8 using the method of cumulants and the histogram method, and TEM particle results.]
NASA Astrophysics Data System (ADS)
Peselnick, L.
1982-08-01
An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R' (P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.
Luo, Yong; Wu, Dapeng; Zeng, Shaojiang; Gai, Hongwei; Long, Zhicheng; Shen, Zheng; Dai, Zhongpeng; Qin, Jianhua; Lin, Bingcheng
2006-09-01
A novel sample injection method for chip CE was presented. This injection method uses hydrostatic pressure, generated by emptying the sample waste reservoir, for sample loading and electrokinetic force for dispensing. The injection was performed on a double-cross microchip. One cross, created by the sample and separation channels, is used for formation of a sample plug. Another cross, formed by the sample and controlling channels, is used for plug control. By varying the electric field in the controlling channel, the sample plug volume can be linearly adjusted. Hydrostatic pressure takes advantage of its ease of generation on a microfluidic chip, without any electrode or external pressure pump, thus allowing a sample injection with a minimum number of electrodes. The potential of this injection method was demonstrated by a four-separation-channel chip CE system. In this system, parallel sample separation can be achieved with only two electrodes, which is otherwise impossible with conventional injection methods. Hydrostatic pressure maintains the sample composition during the sample loading, allowing the injection to be free of injection bias.
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high throughput data often involves identification of genes consistently up or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analyses approaches and facilitates integration of different types of HTP data. PMID:18818771
Flotemersch, Joseph E; North, Sheila; Blocksom, Karen A
2014-02-01
Benthic macroinvertebrates are sampled in streams and rivers as one of the assessment elements of the US Environmental Protection Agency's National Rivers and Streams Assessment. In a 2006 report, the recommendation was made that different yet comparable methods be evaluated for different types of streams (e.g., low gradient vs. high gradient). Consequently, a research element was added to the 2008-2009 National Rivers and Streams Assessment to conduct a side-by-side comparison of the standard macroinvertebrate sampling method with an alternate method specifically designed for low-gradient wadeable streams and rivers that focused more on stream edge habitat. Samples were collected using each method at 525 sites in five of nine aggregate ecoregions located in the conterminous USA. Methods were compared using the benthic macroinvertebrate multimetric index developed for the 2006 Wadeable Streams Assessment. Statistical analysis did not reveal any trends that would suggest the overall assessment of low-gradient streams on a regional or national scale would change if the alternate method was used rather than the standard sampling method, regardless of the gradient cutoff used to define low-gradient streams. Based on these results, the National Rivers and Streams Survey should continue to use the standard field method for sampling all streams.
Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat
2018-03-01
To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test as the number of patients needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
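The post hoc sample size arithmetic above follows the usual power calculation for a one-sample test. A minimal sketch using the normal approximation, assuming two-sided alpha = 0.05 and 80% power (the exact t-based calculation adds a small correction, and the effect size value below is an illustrative assumption, not a figure from the study):

```python
from math import ceil
from statistics import NormalDist

def one_sample_n(es, alpha=0.05, power=0.80):
    """Approximate n for a one-sample test of H0: ES = 0 against
    H1: ES = es (standardized effect size), two-sided alpha."""
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha / 2) + z(power)) / es) ** 2)

# An assumed effect size near 0.6 yields a sample size of about 22,
# the order of the post hoc value reported for the 12-joint existing method.
n = one_sample_n(0.6)
```

The wide 95% CIs reported (e.g., 10-245) follow directly from this formula's 1/ES^2 dependence: modest uncertainty in the estimated effect size inflates into large uncertainty in the required n.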
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building on the National Taiwan University campus with lead pipes still in service in part of the building. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In the 3 of the 7 sampling locations without lead pipes, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling method. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods, while the remaining location was identified as contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime sampling, although flush sampling had a high tendency to reduce total lead in samples at lead-contaminated sites, the extent of lead reduction was location-dependent and independent of flush duration between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
Phonologically driven variability: the case of determiners.
Bürki, Audrey; Laganaro, Marina; Alario, F Xavier
2014-09-01
Speakers usually produce words in connected speech. In such contexts, the form in which many words are uttered is influenced by the phonological properties of neighboring words. The current article examines the representations and processes underlying the production of phonologically constrained word form variations. For this purpose, we consider determiners whose form is sensitive to phonological context (e.g., in English: a car vs. an animal; in French: le chien 'the dog' vs. l'âne 'the donkey'). Two hypotheses have been proposed regarding how these words are processed. Determiners either are thought to have different representations for each of their surface forms, or they are thought to have only 1 representation while other forms are generated online after selection through a rule-based process. We tested the predictions derived from these 2 views in 3 picture naming experiments. Participants named pictures using determiner-adjective-noun phrases (e.g., la nouvelle table 'the new table'). Phonologically consistent or inconsistent conditions were contrasted, based on the phonological onsets of the adjective and the noun. Results revealed shorter naming latencies for consistent than for inconsistent sequences (i.e., a phonological consistency effect) for all the determiner types tested. Our interpretation of these findings converges on the assumption that determiners with varying surface forms are represented in memory with multiple phonological-lexical representations. This conclusion is discussed in relation to models of determiner processing and models of lexical variability.
Innovations in vaccinology: challenges and perspectives for Africa
Diop, Doudou; Sanicas, Melvin
2017-01-01
Vaccination is unquestionably one of the most effective and cost-effective public health interventions available. Vaccines continue to revolutionize our ability to prevent disease and improve health. With current technological advances, we are able to extend the benefits of vaccines to more people and provide better protection against deadly infectious diseases. However, with the continual emergence of new microbial strains around the world, vaccinology research must innovate constantly. Enormous progress has been made in improving vaccination coverage and introducing new vaccines in Africa. New types of vaccines combined with specific vectorization, administration and delivery tools, as well as adjuvants capable of finely modulating the immune response, are expected in the future. In Africa, a regional approach is needed to respond effectively to the many challenges. Better information, training of health personnel in vaccinology, and well-targeted research are the keys to future achievements in the field. PMID:28690749
Hermann Karsten, pioneer of geologic mapping in northwestern South America
NASA Astrophysics Data System (ADS)
Aalto, K. R.
2015-06-01
In the late 19th century, a regional map of Nueva Granada (present-day Colombia, Panama and parts of Venezuela and Ecuador) was published by German botanist and geologist Hermann Karsten (1817-1908). Karsten's work was incorporated by Agustín Codazzi (1793-1859), an Italian who emigrated to Venezuela and Colombia to serve as a government cartographer and geographer, in his popular Atlas geográfico e histórico de la Republica de Colombia (1889). Geologic mapping and most observations provided in this 1889 atlas were taken from Karsten's Géologie de l'ancienne Colombie bolivarienne: Vénézuela, Nouvelle-Grenade et Ecuador (1886), as cited by Manual Paz and/or Felipe Pérez, who edited this edition of the atlas. Karsten defined four epochs in Earth history: Primera - without life - primary crystalline rocks; Segunda - with only marine life - chiefly sedimentary rocks; Tercera - with terrestrial quadrupeds and freshwater life forms - chiefly sedimentary rocks; and Cuarta - mankind appears - includes diluvial (glacigenic) and post-diluvial terranes. He noted that Colombia is composed chiefly of Quaternary, Tertiary and Cretaceous plutonic, volcanic and sedimentary rocks, and that Earth's internal heat (calor central) accounted, by escape of inner gases, for volcanism, seismicity and uplift of mountains. Karsten's regional mapping and interpretation thus constitute a primary source and pioneering geologic research.
Severe pulmonary involvement in hypocomplementemic urticarial vasculitis
Raoufi, Mohammed; Laine, Mustapha; Amrani, Hicham Naji; Souhi, Hicham; Janah, Hicham; Elouazzani, Hanane; Rhorfi, Ismail Abderrahmane; Abid, Ahmed
2016-01-01
Pulmonary involvement in hypocomplementemic urticarial vasculitis (HUV), or McDuffie syndrome, is very rare and carries a poor prognosis. We report the case of a 55-year-old female patient followed for HUV for 20 years. The diagnosis was based on urticarial lesions, ocular inflammation, a positive anti-C1q test by immunodiffusion, and a low C1q level. The patient was treated with cycles of cyclophosphamide, corticosteroids and courses of rituximab, yet developed dyspnea, currently NYHA class III. Clinical, radiological and functional workup showed thoracic distension and severe obstructive lung disease not improved by systemic treatment. Aerosol therapy was started and the patient showed marked clinical improvement. Pulmonary involvement in McDuffie hypocomplementemic urticarial vasculitis determines the short-term vital prognosis. Knowledge of the different forms of this involvement opens new therapeutic perspectives. PMID:28154640
Relapsing lupus enteritis improved by azathioprine
Marzouk, Sameh; Garbaa, Saida; Cherif, Yosra; Jallouli, Moez; Bahri, Fathi; Bahloul, Zouhir
2015-01-01
Gastrointestinal manifestations of systemic lupus erythematosus are common and can affect any segment of the digestive tract. Lupus enteritis is one of the manifestations responsible for abdominal pain. Its treatment is based essentially on corticosteroids. Immunosuppressants are reserved for relapsing forms or cases of corticosteroid failure. We report a new case of relapsing lupus enteritis improved by azathioprine. The patient was a 30-year-old woman in whom the diagnosis of lupus was made in 2004. One year later, she presented abdominal pain, vomiting and diarrhea. Investigations concluded lupus enteritis after excluding all other causes, notably infectious. She was treated with high-dose corticosteroids; however, at each attempted taper she presented the same symptoms. In 2010 azathioprine was added, allowing control of the disease and reduction of corticosteroid therapy. PMID:26113946
The pollen of metaphor: Box, cage, and trap as containment in the eighteenth century.
Milne, Anne
2016-06-01
This article uses the concept of "the pollen of metaphor" to discuss three forms of non-human animal containment in the eighteenth century: François Huber's Leaf or Book Hive bee box first described in his Nouvelles Observations sur les Abeilles (1792, English translation 1806), Sarah Trimmer's bird cages in her didactic children's book, Fabulous Histories; Or, The Story of the Robins (1786), and a mouse trap in Anna Letitia Barbauld's 1773 poem, "The Mouse's Petition, found in the trap where he had been confined all night by Dr. Priestley, for the sake of making experiments with different kinds of air." All three works highlight the eighteenth-century art of observation. The inherent commitment to relationships in the observation process suggests that interpreting ocular evidence involves "plausible relations," metaphor and/or "productive analogy." The article teases out subtle differences between the ways that each author uses containments and concludes that while Huber seeks to circumscribe non-human animal behavior within the bounds of 'reasonable' animal husbandry to better serve human needs, Trimmer goes further to connect 'appropriate' non-human animal containment to moral strictures governing humans. Barbauld's intervention using a literate, speaking animal subject confronts such moral governance to argue for equal rights based on principles of true equality rather than what is observed to be 'reasonable' and/or 'moral.' Copyright © 2016 Elsevier Ltd. All rights reserved.
Patients lost to follow-up in radiotherapy: the experience of the Institut National d'Oncologie in Morocco
Mezouri, Imane; Chenna, Hanane; Bellefqih, Sara; Elkacemi, Hanan; Kebdani, Tayeb; Benjaafar, Noureddine
2014-01-01
Patients lost to follow-up (LTFU) are persons included in a program from whom no news has been received for six months. The objective of this study is to provide an objective description of the problem of LTFU patients in the radiotherapy department of the Institut National d'Oncologie (INO), and to study the impact of socioeconomic, demographic and disease-related factors leading patients to abandon treatment. We conducted a retrospective study of 77 LTFU patients among 2254 patients admitted to the INO from January 1 to December 31, 2011 for radiotherapy. The present analysis showed that abandonment rates are associated with disease-related factors, and that both patient and physician must be trained and aware of how advanced disease stage, poor performance status and the combination of other health problems can be sufficient to lead the patient to abandon treatment. PMID:25584129
Krempa, Heather M.
2015-10-29
Relative percent differences between methods were greater than 10 percent for most analyzed trace elements. Barium, cobalt, manganese, and boron had concentrations that were significantly different between sampling methods. Barium, molybdenum, boron, and uranium method concentrations indicate a close association between pump and grab samples based on bivariate plots and simple linear regressions. Grab sample concentrations were generally larger than pump concentrations for these elements and may be because of using a larger pore sized filter for grab samples. Analysis of zinc blank samples suggests zinc contamination in filtered grab samples. Variations of analyzed trace elements between pump and grab samples could reduce the ability to monitor temporal changes and potential groundwater contamination threats. The degree of precision necessary for monitoring potential groundwater threats and application objectives need to be considered when determining acceptable variation amounts.
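Relative percent difference, the comparison statistic used above, is conventionally computed as the absolute difference over the pair mean. A minimal sketch with the >10 percent criterion applied; the paired concentration values below are made up for illustration:

```python
def rpd(a, b):
    """Relative percent difference between two paired measurements."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical paired pump/grab concentrations (ug/L) for one trace element.
pairs = [(12.0, 14.5), (3.1, 3.2), (40.0, 55.0)]
flagged = [(p, g) for p, g in pairs if rpd(p, g) > 10.0]  # >10% criterion
```

Because the denominator is the pair mean rather than either single value, RPD treats pump and grab results symmetrically, which is why it suits a head-to-head method comparison.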
Representativeness of direct observations selected using a work-sampling equation.
Sharp, Rebecca A; Mudford, Oliver C; Elliffe, Douglas
2015-01-01
Deciding on appropriate sampling to obtain representative samples of behavior is important but not straightforward, because the relative duration of the target behavior may affect its observation in a given sampling interval. Work-sampling methods, which offer a way to adjust the frequency of sampling according to a priori or ongoing estimates of the behavior to achieve a preselected level of representativeness, may provide a solution. Full-week observations of 7 behaviors were conducted for 3 students with autism spectrum disorder and intellectual disabilities. Work-sampling methods were used to select momentary time samples from the full time-of-interest, which produced representative samples. However, work sampling required impractically high numbers of time samples to obtain representative samples. More practical momentary time samples produced less representative samples, particularly for low-duration behaviors. The utility and limits of work-sampling methods for applied behavior analysis are discussed. © Society for the Experimental Analysis of Behavior.
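A commonly used form of the work-sampling equation determines how many momentary time samples are needed for a target precision. This sketch assumes the standard absolute-precision version built on the normal approximation to a binomial proportion; the study's exact formula may differ:

```python
from math import ceil
from statistics import NormalDist

def work_samples_needed(p, precision, confidence=0.95):
    """Momentary time samples needed so the observed proportion of time
    spent in the target behavior (a priori estimate p) falls within
    +/- precision (absolute) of the true value at the stated confidence."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z * z * p * (1 - p) / precision ** 2)

# A behavior estimated at 50% of the time, +/-5 percentage points, 95% confidence:
n = work_samples_needed(0.5, 0.05)
```

When precision is instead specified relative to p, low-duration behaviors drive the required number of samples up sharply, which is consistent with the finding above that low-duration behaviors were the hardest to sample representatively.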
Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.
2016-01-01
Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. 
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999
SnagPRO: snag and tree sampling and analysis methods for wildlife
Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough
2008-01-01
We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved-solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified as having the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus, and suspended-sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias.
Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load-estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
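Beale's ratio estimator, one of the better-performing methods noted above, is simple enough to sketch. The implementation below follows the standard bias-corrected form of the estimator; the synthetic flow and concentration data are illustrative assumptions, not the authors' data or code.

```python
# Hedged sketch of Beale's bias-corrected ratio estimator for constituent load.
# Sampled daily loads l_i = c_i * q_i are extrapolated with the complete
# daily-flow record. All numbers below are synthetic.
import numpy as np

def beale_load(sample_conc, sample_flow, all_daily_flow):
    """Mean daily load estimate via Beale's ratio estimator."""
    q = np.asarray(sample_flow, dtype=float)
    l = np.asarray(sample_conc, dtype=float) * q   # sampled daily loads
    n = len(q)
    l_bar, q_bar = l.mean(), q.mean()
    s_lq = np.cov(l, q, ddof=1)[0, 1]              # load-flow covariance
    s_qq = np.var(q, ddof=1)                       # flow variance
    correction = (1 + s_lq / (n * l_bar * q_bar)) / (1 + s_qq / (n * q_bar ** 2))
    return np.mean(all_daily_flow) * (l_bar / q_bar) * correction

rng = np.random.default_rng(1)
daily_flow = rng.lognormal(3.0, 0.5, 365)          # a full year of daily flows
idx = rng.choice(365, 24, replace=False)           # ~ twice-monthly sampling
conc = 0.5 + 0.01 * daily_flow[idx]                # concentration rises with flow
est = beale_load(conc, daily_flow[idx], daily_flow)
print(f"estimated mean daily load: {est:.1f}")
```

The correction factor shrinks the naive ratio l_bar/q_bar when flow variance is large relative to the load-flow covariance, which is what gives the estimator its low bias under flow-dependent concentration.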
Alum, Absar; Rock, Channah; Abbaszadegan, Morteza
2014-01-01
For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids involve discrete steps: a separate sample must be processed independently to quantify each group of pathogens. The aim of this study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing the simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors, such as solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate biosolids samples collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required to process biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.
A STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES
A macroinvertebrate sampling method for large rivers based on desirable characteristics of existing nonwadeable methods was developed and tested. Six sites each were sampled on the Great Miami and Kentucky Rivers, reflecting a human disturbance gradient. Samples were collected ...
CTEPP STANDARD OPERATING PROCEDURE FOR PACKING AND SHIPPING STUDY SAMPLES (SOP-3.11)
This SOP describes the methods for packing and shipping study samples. These methods are for packing and shipping biological and environmental samples. The methods have been tested and used in the previous pilot studies.
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that are relevant to the branches of sexual science that advocate for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Analytic Methods and Sampling Procedures for the United States National Residue Program for Meat, Poultry... implementing several multi-residue methods for analyzing samples of meat, poultry, and egg products for animal.... These modern, high-efficiency methods will conserve resources and provide useful and reliable results...
USDA-ARS?s Scientific Manuscript database
A sample preparation method was evaluated for the determination of polybrominated diphenyl ethers (PBDEs) in mussel samples, by using colorimetric and electrochemical immunoassay-based screening methods. A simple sample preparation in conjunction with a rapid screening method possesses the desired c...
An Improved Manual Method for NOx Emission Measurement.
ERIC Educational Resources Information Center
Dee, L. A.; And Others
The current manual NOx sampling and analysis method was evaluated, and improved time-integrated sampling and rapid analysis methods were developed. In the new method, the sample gas is drawn through a heated bed of uniquely active, crystalline PbO2, where NOx is quantitatively absorbed. Nitrate ion is later extracted with water and the…
Comparing two sampling methods to engage hard-to-reach communities in research priority setting.
Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J
2016-10-28
Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods for engaging stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention, as well as the two communities' stakeholder priorities, based on mean ratings of their ideas by importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85%) consented, 52 (95%) attended the first meeting, and 36 (65%) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90%) consented, 36 (58%) attended the first meeting, and 26 (42%) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite the differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for the management of chronic pain.
Ratings of importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), rated higher by the purposive/convenience sampling group, and of city improvements/transportation services (P = 0.004), rated higher by the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers to implement a different sampling method to recruit stakeholders. The snowball sampling method achieved greater participation, recruiting more Hispanics and more individuals with disabilities than the purposive-convenience sampling method. However, priorities for research on chronic pain from both stakeholder groups were similar. Although the snowball sampling method appears superior, further research is needed on implementation costs and resources.
Ruple-Czerniak, A; Bolte, D S; Burgess, B A; Morley, P S
2014-07-01
Nosocomial salmonellosis is an important problem in veterinary hospitals that treat horses and other large animals. Detection and mitigation of outbreaks and prevention of healthcare-associated infections often require detection of Salmonella enterica in the hospital environment. The objective of this hospital-based study was to compare 2 previously published environmental sampling techniques for detecting contamination with S. enterica in a large animal veterinary teaching hospital. A total of 100 pairs of environmental samples were collected from stalls used to house large animal cases (horses, cows or New World camelids) that were confirmed to be shedding S. enterica by faecal culture. Stalls were cleaned and disinfected prior to sampling, and the same areas within each stall were sampled for the paired samples. One method of detection used sterile, premoistened sponges that were cultured using thioglycolate enrichment before plating on XLT-4 agar. The other method used electrostatic wipes that were cultured using buffered peptone water, tetrathionate and Rappaport-Vassiliadis R10 broths before plating on XLT-4 agar. Salmonella enterica was recovered from 14% of samples processed using the electrostatic wipe sampling and culture procedure, whereas S. enterica was recovered from only 4% of samples processed using the sponge sampling and culture procedure. There was test agreement for 85 pairs of culture-negative samples and 3 pairs of culture-positive samples. However, the remaining 12 pairs of samples with discordant results created significant disagreement between the 2 detection methods (P<0.01). Persistence of Salmonella in the environment of veterinary hospitals can occur even with rigorous cleaning and disinfection. Use of sensitive methods for detection of environmental contamination is critical when detecting and mitigating this problem in veterinary hospitals.
These results suggest that the electrostatic wipe sampling and culture method was more sensitive than the sponge sampling and culture method. © 2013 EVJ Ltd.
Filter forensics: microbiota recovery from residential HVAC filters.
Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A
2018-01-30
Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. 
More broadly, this study provides a better understanding of the microbial community variability attributable to sampling methodology and helps inform interpretation of data collected from other types of dust samples collected from indoor environments.
A cryopreservation method for Pasteurella multocida from wetland samples
Moore, Melody K.; Shadduck, D.J.; Goldberg, Diana R.; Samuel, M.D.
1998-01-01
A cryopreservation method and improved isolation techniques for detection of Pasteurella multocida from wetland samples were developed. Wetland water samples were collected in the field, diluted in dimethyl sulfoxide (DMSO, final concentration 10%), and frozen at -180 C in a liquid nitrogen vapor shipper. Frozen samples were transported to the laboratory where they were subsequently thawed and processed in Pasteurella multocida selective broth (PMSB) to isolate P. multocida. This method allowed for consistent isolation of 2 to 18 organisms/ml from water seeded with known concentrations of P. multocida. The method compared favorably with the standard mouse inoculation method and allowed for preservation of the samples until they could be processed in the laboratory.
Method and apparatus for data sampling
Odell, D.M.C.
1994-04-19
A method and apparatus for sampling radiation detector outputs and determining event data from the collected samples are described. The method uses high-speed sampling of the detector output, conversion of the samples to digital values, and discrimination of the digital values so that values representing detected events are identified. The high-speed sampling and digital conversion are performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provide for temporary or permanent storage, either serially or in parallel, to a digital storage medium.
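The sample-then-discriminate idea in this abstract can be sketched with a simulated digitized waveform. The pulse shape, noise level, and threshold below are invented for illustration; the patent itself describes hardware, not this software model.

```python
# Minimal sketch (synthetic waveform): digitize a detector output at high rate,
# then discard digital samples that do not belong to a detected event.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
signal = rng.normal(0.0, 0.02, n)          # digitized noise floor
for start in (300, 900, 1500):             # three simulated pulse events
    t = np.arange(60)
    signal[start:start + 60] += np.exp(-t / 15.0)  # exponential pulse decay

threshold = 0.1                            # discrimination level above noise
event_mask = signal > threshold            # keep only event-bearing samples
event_samples = signal[event_mask]
n_events = int(np.sum(np.diff(event_mask.astype(int)) == 1))  # rising edges
print(f"{event_samples.size} of {n} samples retained; {n_events} events found")
```

Discarding sub-threshold samples is what makes high-rate sampling practical: only the small fraction of samples carrying event information needs to be stored or transmitted.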
Meyer, M.T.; Lee, E.A.; Ferrell, G.M.; Bumgarner, J.E.; Varns, Jerry
2007-01-01
This report describes the performance of an offline tandem solid-phase extraction (SPE) method and an online SPE method that use liquid chromatography/mass spectrometry for the analysis of 23 and 35 antibiotics, respectively, as used in several water-quality surveys conducted since 1999. In the offline tandem SPE method, normalized concentrations for the quinolone, macrolide, and sulfonamide antibiotics in spiked environmental samples averaged from 81 to 139 percent of the expected spiked concentrations. A modified standard-addition technique was developed to improve the quantitation of the tetracycline antibiotics, which had 'apparent' concentrations that ranged from 185 to 1,200 percent of their expected spiked concentrations in matrix-spiked samples. In the online SPE method, normalized concentrations for the quinolone, macrolide, sulfonamide, and tetracycline antibiotics in matrix-spiked samples averaged from 51 to 142 percent of their expected spiked concentrations, and the beta-lactam antibiotics in matrix-spiked samples averaged from 22 to 76 percent of their expected spiked concentration. Comparison of 44 samples analyzed by both the offline tandem SPE and online SPE methods showed 50 to 100 percent agreement in sample detection for overlapping analytes and 68 to 100 percent agreement in a presence-absence comparison for all analytes. The offline tandem and online SPE methods were compared to an independent method that contains two overlapping antibiotic compounds, sulfamethoxazole and trimethoprim, for 96 and 44 environmental samples, respectively. The offline tandem SPE showed 86 and 92 percent agreement in sample detection and 96 and 98 percent agreement in a presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. The online SPE method showed 57 and 56 percent agreement in sample detection and 72 and 91 percent agreement in presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. 
A linear regression with an R2 of 0.91 was obtained for trimethoprim concentrations, and an R2 of 0.35 was obtained for sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE and online SPE methods. Linear regressions of trimethoprim and sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE method and the independent M3 pharmaceutical method yielded R2 of 0.95 and 0.87, respectively. Regressed comparison of the offline tandem SPE method to the online SPE and M3 methods showed that the online SPE method gave higher concentrations for sulfamethoxazole and trimethoprim than were obtained from the offline tandem SPE or M3 methods.
NASA Astrophysics Data System (ADS)
Roether, Wolfgang; Vogt, Martin; Vogel, Sandra; Sültenfuß, Jürgen
2013-06-01
We present a new method to obtain samples for the measurement of helium isotopes and neon in water, replacing the classical sampling procedure using clamped-off Cu tubing containers that we have used until now. The new method eliminates the gas extraction step prior to admission to the mass spectrometer that the classical method requires. Water is drawn into evacuated glass ampoules, which are then flame-sealed. Approximately 50% headspace is left, from which admission into the mass spectrometer occurs without further treatment. Extensive testing has shown that, with due care and with small corrections applied, the samples represent the gas concentrations in the water within ±0.07% (95% confidence level; ±0.05% with special handling). Fast evacuation is achieved by pumping on a small charge of water placed in the ampoule. The new method was successfully tested at sea in comparison with Cu-tubing sampling. We found that the ampoule samples were superior in data precision and that a lower percentage of samples was lost prior to measurement. Further measurements showed agreement between the two methods for helium, 3He, and neon within ±0.1%. The new method facilitates handling of large sample sets and minimizes the delay between sampling and measurement. It is also applicable to gases other than helium and neon.
Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity
Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...
2016-10-14
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
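The edge-sparsity idea can be illustrated with the standard soft-thresholding (l1 proximal) operator applied to an image's finite-difference edge map, the kind of sparsity-promoting step used inside such nonlinear optimization schemes. This toy example is an assumption-laden sketch, not the authors' actual update rules.

```python
# Toy sketch: promote sparsity of an image's edge (finite-difference)
# representation via soft-thresholding. Image and threshold are illustrative.
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks values toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                      # a piecewise-constant "sample"
img += np.random.default_rng(3).normal(0, 0.05, img.shape)  # measurement noise

gx = np.diff(img, axis=1)                  # horizontal edge map
sparse_gx = soft_threshold(gx, 0.1)        # keep only the strong edges

frac_nonzero_before = float(np.mean(gx != 0))
frac_nonzero_after = float(np.mean(np.abs(sparse_gx) > 0))
print(f"nonzero edge fraction: {frac_nonzero_before:.2f} -> {frac_nonzero_after:.2f}")
```

For a piecewise-constant object the true edge map is sparse, so the thresholding suppresses the noise-dominated differences while retaining the object boundary, which is the prior such single-view reconstructions exploit.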
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. All three analytical methods involve extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable differences among the sample preparation procedures used in the three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water-soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was tested. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods.
Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
[Determination of ethylene glycol in biological fluids--propylene glycol interferences].
Gomółka, Ewa; Cudzich-Czop, Sylwia; Sulka, Adrianna
2013-01-01
Many laboratories in Poland do not use the gas chromatography (GC) method for the determination of ethylene glycol (EG) and methanol in the blood of poisoned patients; they use non-specific spectrophotometry methods. One of the interfering substances is propylene glycol (PG), a compound present in many medical and cosmetic products: drops, air fresheners, disinfectants, electronic cigarettes, and others. In the Laboratory of Analytical Toxicology and Drug Monitoring in Krakow, EG is determined by the GC method, which can distinguish and resolve EG and PG in biological samples. In the years 2011-2012, several serum samples from diagnosed patients contained PG at concentrations from several mg/dL to over 100 mg/dL. The aim of the study was to estimate PG interference in serum EG determination by the spectrophotometry method. Serum samples containing PG and EG were used in the study and analyzed by two methods: GC and spectrophotometry. Results for serum samples spiked with PG but containing no EG were falsely positive when analyzed by the spectrophotometry method, and the false readings correlated with the PG concentration in the samples. The calculated cross-reactivity of PG in the method was 42%. Positive EG results obtained by the spectrophotometry method must therefore be confirmed by the reference GC method, and the spectrophotometry method should not be used for diagnosis and monitoring of patients poisoned by EG.
Bruce, James F.; Roberts, James J.; Zuellig, Robert E.
2018-05-24
The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.
Towards robust and repeatable sampling methods in eDNA based studies.
Dickie, Ian A; Boyer, Stephane; Buckley, Hannah; Duncan, Richard P; Gardner, Paul; Hogg, Ian D; Holdaway, Robert J; Lear, Gavin; Makiola, Andreas; Morales, Sergio E; Powell, Jeff R; Weaver, Louise
2018-05-26
DNA based techniques are increasingly used for measuring the biodiversity (species presence, identity, abundance and community composition) of terrestrial and aquatic ecosystems. While there are numerous reviews of molecular methods and bioinformatic steps, there has been little consideration of the methods used to collect samples upon which these later steps are based. This represents a critical knowledge gap, as methodologically sound field sampling is the foundation for subsequent analyses. We reviewed field sampling methods used for metabarcoding studies of both terrestrial and freshwater ecosystem biodiversity over a nearly three-year period (n = 75). We found that 95% (n = 71) of these studies used subjective sampling methods, inappropriate field methods, and/or failed to provide critical methodological information. It would be possible for researchers to replicate only 5% of the metabarcoding studies in our sample, a poorer level of reproducibility than for ecological studies in general. Our findings suggest greater attention to field sampling methods and reporting is necessary in eDNA-based studies of biodiversity to ensure robust outcomes and future reproducibility. Methods must be fully and accurately reported, and protocols developed that minimise subjectivity. Standardisation of sampling protocols would be one way to help improve reproducibility, and would have the additional benefit of allowing compilation and comparison of data across studies.
Comparison of extraction methods for quantifying vitamin E from animal tissues.
Xu, Zhimin
2008-12-01
Four extraction methods: (1) solvent (SOL), (2) ultrasound-assisted solvent (UA), (3) saponification and solvent (SP), and (4) saponification and ultrasound-assisted solvent (SP-UA), were used in sample preparation for quantifying vitamin E (tocopherols) in chicken liver and plasma samples. The extraction yields of the SOL, UA, SP, and SP-UA methods, obtained by adding delta-tocopherol as an internal reference, were 95%, 104%, 65%, and 62% for liver and 98%, 103%, 97%, and 94% for plasma, respectively. The methods with saponification significantly affected the stability of tocopherols in liver samples. The measured values of alpha- and gamma-tocopherols using the solvent-only extraction (SOL) method were much lower than those obtained using any of the other extraction methods, indicating that fewer of the tocopherols in those samples were in a form that could be extracted directly by solvent. The measured value of alpha-tocopherol in the liver sample using the ultrasound-assisted solvent (UA) method was 1.5-2.5 times that obtained from the saponification and solvent (SP) method. The differences in measured values of tocopherols in the plasma samples between the two methods were not significant. However, the measured value from the saponification and ultrasound-assisted solvent (SP-UA) method was lower than that from either the saponification and solvent (SP) or the ultrasound-assisted solvent (UA) method. Also, the reproducibility of the ultrasound-assisted solvent (UA) method was greater than that of any of the saponification methods. Compared with the traditional saponification method, the ultrasound-assisted solvent method could effectively extract tocopherols from the sample matrix without any chemical degradation reactions, especially for complex animal tissues such as liver.
Appearance-based representative samples refining method for palmprint recognition
NASA Astrophysics Data System (ADS)
Wen, Jiajun; Chen, Yan
2012-07-01
Sparse representation can mitigate the lack-of-samples problem because it utilizes all of the training samples. However, its discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
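The iterative refinement described above can be sketched with ordinary least squares: represent the test sample as a linear combination of the remaining training samples, then repeatedly drop the sample contributing least. This is an illustrative reconstruction of the idea, not the authors' exact algorithm; in particular, the contribution measure (coefficient magnitude times column norm) is an assumption.

```python
import numpy as np

def refine_training_samples(X, y, keep):
    """Iteratively exclude the training sample that contributes least
    to representing the test sample (sketch of the refinement idea).

    X    : (d, n) matrix whose columns are training samples
    y    : (d,) test sample
    keep : number of training samples to retain
    """
    idx = list(range(X.shape[1]))
    while len(idx) > keep:
        A = X[:, idx]
        # Represent y as a linear combination of the remaining samples.
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        # Contribution of each sample: size of its term in the combination.
        contrib = np.abs(coef) * np.linalg.norm(A, axis=0)
        # Exclude the sample with the least contribution.
        idx.pop(int(np.argmin(contrib)))
    return idx
```

With a test sample built from two of four training columns, the refinement retains exactly those two columns.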
The Screening and Ranking Algorithm for Change-Points Detection in Multiple Samples
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
Chromosome copy number variations (CNVs) are deviations of genomic regions from their normal copy number states and may be associated with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNVs and diseases. CNVs can be called by detecting change-points in the mean of sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of available CNV calling methods are single-sample based. Only a few multiple-sample methods have been proposed, using scan statistics that are computationally intensive and designed to detect either common or rare change-points. In this paper, we propose a novel multiple-sample method that adaptively combines the scan statistics of the screening and ranking algorithm (SaRa); it is computationally efficient and able to detect both common and rare change-points. We prove that asymptotically this method finds the true change-points with almost certainty, and we show in theory that multiple-sample methods are superior to single-sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies examining the performance of the proposed method. Finally, using our method and two competing approaches, we detect CNVs in data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster, requires less information, and detects CNVs comparably or better. PMID:28090239
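The screening step of SaRa can be illustrated with a simple local scan: a diagnostic compares the window means on either side of each position, and local maximizers of its absolute value above a threshold are kept as candidate change-points. A minimal single-sample sketch, assuming a fixed bandwidth h and a user-chosen threshold (the paper's multiple-sample combination and theoretical thresholds are not reproduced here):

```python
import numpy as np

def sara_screen(y, h, threshold):
    """Screening step of a SaRa-style scan (illustrative sketch):
    D(t) = mean(y[t:t+h]) - mean(y[t-h:t]); positions where |D| is a
    local maximum above `threshold` are candidate change-points."""
    n = len(y)
    D = np.zeros(n)
    for t in range(h, n - h + 1):
        D[t] = y[t:t + h].mean() - y[t - h:t].mean()
    candidates = []
    for t in range(h, n - h + 1):
        window = np.abs(D[t - h:t + h + 1])
        if abs(D[t]) > threshold and abs(D[t]) == window.max():
            candidates.append(t)
    return candidates
```

On a noisy sequence with a single mean shift at position 50, the scan flags a candidate at (or very near) that position.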
Efficient global biopolymer sampling with end-transfer configurational bias Monte Carlo
NASA Astrophysics Data System (ADS)
Arya, Gaurav; Schlick, Tamar
2007-01-01
We develop an "end-transfer configurational bias Monte Carlo" method for efficient thermodynamic sampling of complex biopolymers and assess its performance, relative to other Monte Carlo moves, on a mesoscale model of chromatin (the oligonucleosome) at different salt conditions. Our method extends traditional configurational bias by deleting a repeating motif (monomer) from one end of the biopolymer and regrowing it at the opposite end using the standard Rosenbluth scheme. The method's sampling efficiency compared to local moves, pivot rotations, and standard configurational bias is assessed by parameters relating to the translational, rotational, and internal degrees of freedom of the oligonucleosome. Our results show that the end-transfer method is superior to the other methods in sampling every degree of freedom of the oligonucleosome at high salt concentrations (weak electrostatics), but worse than pivot rotations at sampling internal and rotational degrees of freedom at low-to-moderate salt concentrations (strong electrostatics). Under all conditions investigated, however, the end-transfer method is several orders of magnitude more efficient than the standard configurational bias approach. This is because the characteristic sampling time of the innermost oligonucleosome motif scales quadratically with the length of the oligonucleosome for the end-transfer method, while it scales exponentially for the traditional configurational bias method. Thus, the proposed method can significantly improve performance in global biomolecular applications, especially in condensed systems with weak nonbonded interactions, and may be combined with local enhancements to improve local sampling.
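The core of an end-transfer move (delete a monomer from one end, regrow it at the other with Rosenbluth-weighted trial selection) can be sketched on a 2D lattice chain. This is a toy illustration under assumed conventions (a square lattice, an `energy(site, chain)` callback supplied by the caller), and it omits the acceptance test on the ratio of Rosenbluth weights that a full configurational-bias scheme requires.

```python
import math
import random

def end_transfer_move(chain, beta, energy, rng=None):
    """One end-transfer move (sketch): remove the first monomer and
    regrow it at the opposite end, choosing among the free neighbor
    sites with Boltzmann (Rosenbluth) weights.

    chain  : list of (x, y) lattice sites
    energy : callback scoring a trial site against the current chain
             (an assumed interface, not from the paper)
    """
    rng = rng or random.Random(0)
    new_chain = chain[1:]                      # delete from one end
    tail = new_chain[-1]
    sites, weights = [], []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        site = (tail[0] + dx, tail[1] + dy)
        if site in new_chain:                  # avoid self-overlap
            continue
        sites.append(site)
        weights.append(math.exp(-beta * energy(site, new_chain)))
    if not sites:
        return chain                           # move rejected: no free site
    grown = rng.choices(sites, weights=weights)[0]
    return new_chain + [grown]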
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin nearmore » Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for Fourier Amplitude Sensitivity Test (FAST) to identity parameter main effect. McKay method needs about 360 samples to evaluate the main effect, more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. 
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
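Several of the sampling techniques named above (LHS, OALH) build on Latin hypercube designs, which stratify each parameter dimension so that every stratum is sampled exactly once. A minimal sketch of plain Latin hypercube sampling on the unit hypercube, independent of PSUADE:

```python
import random

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample of n points in [0,1)^d: each dimension
    is split into n equal strata, each stratum used exactly once,
    with an independent random permutation per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        # One point per stratum, jittered uniformly within the stratum.
        cols.append([(p + rng.random()) / n for p in perm])
    # Transpose column-per-dimension lists into d-dimensional points.
    return [tuple(col[i] for col in cols) for i in range(n)]
```

By construction, projecting the n points onto any single dimension occupies all n strata, which is what gives LHS its space-filling property at low sample counts.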
Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok
2018-06-07
The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics has become increasingly complicated as the amount of data has grown. One technique used to enable analytics on large datasets is data sampling, in which a portion of the data is selected that preserves the data's characteristics for use in subsequent analysis. In this paper, we introduce a novel data sampling technique rooted in formal concept analysis theory. The technique creates samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied to classifying regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented by classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sydoff, Marie; Stenström, Kristina
2010-04-01
The Department of Physics at Lund University is participating in a European Union project called EUMAPP (European Union Microdose AMS Partnership Programme), in which sample preparation and accelerator mass spectrometry (AMS) measurements of biological samples from microdosing studies have been made. This paper describes a simplified method of converting biological samples to solid graphite for 14C analysis with AMS. The method is based on online combustion of the samples and reduction of CO2 in septa-sealed vials. Septa-sealed vials and disposable materials are used to eliminate sample cross-contamination. Measurements of ANU and Ox I standards show deviations of 2% and 3%, respectively, relative to reference values. This level of accuracy is sufficient for biological samples from microdosing studies. Since the method has very few handling steps from sample to graphite, the risk of failure during the sample preparation process is minimized, making the method easy to use in routine preparation of samples.
Zainathan, S C; Carson, J; Crane, M St J; Nowak, B F
2013-04-01
The use of swabs, relative to organs, as a sample collection method for the detection of Tasmanian salmon reovirus (TSRV) in farmed Tasmanian Atlantic salmon, Salmo salar L., was evaluated by RT-qPCR. Individual and pooled sample collection (organs vs swabs) were evaluated to determine the sensitivity of the collection methods and the effect of pooling samples on the detection of TSRV. For individual samples, detection of TSRV was as sensitive with organs as with swabs; for pooled samples, organs were more sensitive than swabs by one 10-fold dilution. Storage of swabs at 4 °C for t = 24 h gave results similar to those at t = 0. Advantages of using swabs as the preferred sample collection method for the detection of TSRV, compared to organ samples, are evident from these experimental trials. © 2012 Blackwell Publishing Ltd.
Apparatus and method for handheld sampling
Staab, Torsten A.
2005-09-20
The present invention includes an apparatus, and corresponding method, for taking a sample. The apparatus is built around a frame designed to be held in at least one hand. A sample media is used to secure the sample. A sample media adapter for securing the sample media is operated by a trigger mechanism connectively attached within the frame to the sample media adapter.
A METHODS COMPARISON FOR COLLECTING MACROINVERTEBRATES IN THE OHIO RIVER
Collection of representative benthic macroinvertebrate samples from large rivers has been challenging researchers for many years. The objective of our study was to develop an appropriate method(s) for sampling macroinvertebrates from the Ohio River. Four existing sampling metho...
Comparison of four sampling methods for the detection of Salmonella in broiler litter.
Buhr, R J; Richardson, L J; Cason, J A; Cox, N A; Fairchild, B D
2007-01-01
Experiments were conducted to compare litter sampling methods for the detection of Salmonella. In experiment 1, chicks were challenged orally with a suspension of nalidixic acid-resistant Salmonella and wing-banded, and additional nonchallenged chicks were placed into each of 2 challenge pens. Nonchallenged chicks were placed into each nonchallenge pen located adjacent to the challenge pens. At 7, 8, 10, and 11 wk of age the litter was sampled using 4 methods: fecal droppings, litter grab, drag swab, and sock. For the challenge pens, Salmonella-positive samples were detected in 3 of 16 fecal samples, 6 of 16 litter grab samples, 7 of 16 drag swab samples, and 7 of 16 sock samples. Samples from the nonchallenge pens were Salmonella positive in 2 of 16 litter grab samples, 9 of 16 drag swab samples, and 9 of 16 sock samples. In experiment 2, chicks were challenged with Salmonella, and the litter in the challenge and adjacent nonchallenge pens was sampled at 4, 6, and 8 wk of age with broilers remaining in all pens. For the challenge pens, Salmonella was detected in 10 of 36 fecal samples, 20 of 36 litter grab samples, 14 of 36 drag swab samples, and 26 of 36 sock samples. Samples from the adjacent nonchallenge pens were positive for Salmonella in 6 of 36 fecal droppings samples, 4 of 36 litter grab samples, 7 of 36 drag swab samples, and 19 of 36 sock samples. Sock samples had the highest rates of Salmonella detection. In experiment 3, the litter from a Salmonella-challenged flock was sampled at 7, 8, and 9 wk by socks and drag swabs. In addition, comparisons were made with drag swabs that were stepped on during sampling. Both socks (24 of 36, 67%) and drag swabs that were stepped on (25 of 36, 69%) yielded significantly more Salmonella-positive samples than the traditional drag swab method (16 of 36, 44%). Drag swabs that were stepped on had a Salmonella detection level comparable to that of socks.
Litter sampling methods that incorporate stepping on the sample material while in contact with the litter appear to detect Salmonella in greater incidence than traditional sampling methods of dragging swabs over the litter surface.
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT methods, which convert the parameters (mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off, to estimate acute malnutrition in children under five using mid-upper arm circumference (MUAC). We assessed the performance of PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys, and PROBIT Method II, with the mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for each of eight sample sizes. Overall, the methods were tested on 681 600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 performed better than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately at a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT methods have a clear advantage over the classic method in the assessment of acute malnutrition prevalence based on MUAC. Their use would require much lower sample sizes, enabling great time and resource savings and permitting timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
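The PROBIT conversion described above is simply the normal cumulative probability below a cut-off, computed from the sample mean and SD. A minimal sketch; the 125 mm MUAC cut-off is supplied as an assumed default (the conventional global acute malnutrition threshold), not a value taken from this study:

```python
import math

def probit_prevalence(mean_muac, sd_muac, cutoff=125.0):
    """PROBIT-style prevalence estimate: assuming MUAC (in mm) is
    normally distributed, prevalence is the cumulative probability
    below the cut-off, Phi((cutoff - mean) / sd)."""
    z = (cutoff - mean_muac) / sd_muac
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

For example, a survey with mean MUAC 145 mm and SD 12 mm yields an estimated prevalence of about 4.8% below 125 mm.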
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral, most based on numerical approximations valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three sampling methods were used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, were used. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate compared with LHS and OA-LH; its root mean square error (RMSE) was on the order of 1E-08. The method can be used with any argument value and can be applied to other integrals in hydrogeology, such as the leaky aquifer integral.
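The idea of approximating the well function by sampling can be sketched with a change of variables: substituting t = x + s in E1(x) = ∫_x^∞ e^(-t)/t dt gives E1(x) = e^(-x) · E[1/(x+S)] with S ~ Exp(1), so stratified samples of a uniform variable, mapped through the exponential inverse CDF, give a low-variance estimate. This is a sketch of the sampling idea, not the paper's exact LHS/OA schemes:

```python
import math

def expint_e1(x, n=20000):
    """Sampling-based approximation of the well function
    E1(x) = integral from x to infinity of exp(-t)/t dt, for x > 0,
    via E1(x) = exp(-x) * E[1/(x+S)], S ~ Exp(1)."""
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n          # stratified uniform sample in (0,1)
        s = -math.log(1.0 - u)     # inverse-CDF transform to Exp(1)
        total += 1.0 / (x + s)
    return math.exp(-x) * total / n
```

Even this simple stratification recovers tabulated values such as E1(1) ≈ 0.21938 to within about three decimal places.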
NASA Astrophysics Data System (ADS)
Mahjoub, Mehdi
Solving the Boltzmann equation remains an important step in predicting the behavior of a nuclear reactor. Unfortunately, solving this equation is still a challenge for a complex geometry (a reactor) just as for a simple geometry (a cell). Thus, to predict the behavior of a nuclear reactor, a two-step calculation scheme is needed. The first step consists of obtaining the nuclear parameters of a reactor cell after a homogenization and condensation step. The second step is a diffusion calculation for the whole reactor using the results of the first step, simplifying the reactor geometry to a set of homogeneous cells surrounded by reflector. During transients (accidents), these two steps are insufficient to predict the reactor's behavior. Since solving the time-dependent form of the Boltzmann equation remains a major challenge for all types of geometries, another calculation scheme is needed. To circumvent this difficulty, the adiabatic hypothesis is used. It takes the form of a four-step calculation scheme. The first and second steps remain the same, for nominal reactor conditions. The third step amounts to obtaining the new nuclear properties of the cell following the perturbation, which are used, in the fourth step, in a new reactor calculation to obtain the effect of the perturbation on the reactor. This project aims to verify this hypothesis. To that end, a new calculation scheme was defined. The first step of this project was to create new software capable of solving the time-dependent Boltzmann equation by the stochastic Monte Carlo method, in order to obtain cross sections that evolve in time.
This code was used to simulate a LOCA accident in a CANDU-6 nuclear reactor. The time-dependent cross sections were then used in a space-time diffusion calculation for a CANDU-6 reactor undergoing a LOCA affecting half the core, in order to observe its behavior during all phases of the perturbation. In the development phase, we chose to start from the OpenMC code, developed at MIT, as the initial development platform. Introducing and handling delayed neutrons during the simulation was a major challenge to overcome. It is important to note that the Monte Carlo code developed can be used at large scale for the simulation of all types of nuclear reactors, provided the computing resources are available.
Cox, Jennie; Indugula, Reshmi; Vesper, Stephen; Zhu, Zheng; Jandarov, Roman; Reponen, Tiina
2017-10-18
Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48-hour indoor air sample collected with a Button™ inhalable aerosol sampler and four types of dust samples: a vacuumed floor dust sample, newly settled dust collected for four weeks onto two types of electrostatic dust cloths (EDCs) in trays, and a wipe sample of dust from above-floor surfaces. The samples were obtained in the bedrooms of asthmatic children (n = 14). Quantitative polymerase chain reaction (qPCR) was used to analyze the dust and air samples for the 36 fungal species that make up the Environmental Relative Moldiness Index (ERMI). The results from the samples were compared by four matrices: total concentration of fungal cells, concentration of fungal species associated with indoor environments, concentration of fungal species associated with outdoor environments, and ERMI values (or ERMI-like values for air samples). The ERMI values for the dust samples and the ERMI-like values for the 48-hour air samples were not significantly different. The total cell concentrations of the 36 species obtained with the four dust collection methods correlated significantly (r = 0.64-0.79, p < 0.05), with the exception of the vacuumed floor dust and newly settled dust. In addition, fungal cell concentrations of indoor-associated species correlated well between all four dust sampling methods (r = 0.68-0.86, p < 0.01). No correlation was found between the fungal concentrations in the air and dust samples, primarily because of differences in concentrations of Cladosporium cladosporioides Type 1 and Epicoccum nigrum. A representative type of dust sample and a 48-hour air sample might both provide useful information about fungal exposures.
Combinatorial Screening Of Inorganic And Organometallic Materials
Li, Yi; Li, Jing; Britton, Ted W.
2002-06-25
A method for differentiating and enumerating nucleated red blood cells in a blood sample is described. The method includes the steps of lysing red blood cells of a blood sample with a lytic reagent, measuring nucleated blood cells by DC impedance measurement in a non-focused flow aperture, differentiating nucleated red blood cells from other cell types, and reporting nucleated red blood cells in the blood sample. The method further includes subtracting nucleated red blood cells and other interference materials from the count of remaining blood cells, and reporting a corrected white blood cell count of the blood sample. Additionally, the method further includes measuring spectrophotometric absorbance of the sample mixture at a predetermined wavelength of a hemoglobin chromogen formed upon lysing the blood sample, and reporting hemoglobin concentration of the blood sample.
Fabrication of conical parts for aerospace applications by flexible injection
NASA Astrophysics Data System (ADS)
Shebib Loiselle, Vincent
Composite materials have been used in spacecraft engine nozzles since the 1960s. Today, the advent of three-dimensional fabrics offers an innovative solution to the delamination problem that limited the mechanical properties of these composites. The use of these fabrics, however, requires the design of better-suited manufacturing processes. A new manufacturing method for composite parts for aerospace applications was studied throughout this work. It applies the principles of flexible injection (the Polyflex process) to the fabrication of thick conical parts. The validation part to be manufactured represents a scale model of a spacecraft engine nozzle component. It consists of a three-dimensional carbon-fiber reinforcement and a phenolic resin. The success of the project is defined by several criteria concerning the compaction and wrinkle formation of the reinforcement and the porosity of the manufactured part. A large number of steps were necessary before the fabrication of two validation parts. First, to meet the criterion on reinforcement compaction, a characterization tool was designed. The compaction study was carried out to obtain the information needed to understand the deformation of an axisymmetric 3D reinforcement. Next, the injection principle for the part was defined for this new process. To validate the proposed concepts, the permeability of the fibrous reinforcement and the viscosity of the resin had to be characterized. Using these data, a series of simulations of the flow during injection of the part were performed and an approximation of the filling time computed. After this step, the design of the nozzle mold was begun, supported by a mechanical simulation of its resistance to the manufacturing conditions.
Several tooling items required for the fabrication were also designed and installed in the new CGD (composites grandes dimensions, large-scale composites) laboratory. In parallel, several studies were carried out to understand the phenomena influencing the polymerization of the resin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Ashish; McNulty, Ian; Munson, Todd
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
Henderson, Gemma; Cox, Faith; Kittelmann, Sandra; Miri, Vahideh Heidarian; Zethof, Michael; Noel, Samantha J.; Waghorn, Garry C.; Janssen, Peter H.
2013-01-01
Molecular microbial ecology techniques are widely used to study the composition of the rumen microbiota and to increase understanding of the roles they play. Therefore, sampling and DNA extraction methods that result in adequate yields of microbial DNA that also accurately represents the microbial community are crucial. Fifteen different methods were used to extract DNA from cow and sheep rumen samples. The DNA yield and quality, and its suitability for downstream PCR amplifications varied considerably, depending on the DNA extraction method used. DNA extracts from nine extraction methods that passed these first quality criteria were evaluated further by quantitative PCR enumeration of microbial marker loci. Absolute microbial numbers, determined on the same rumen samples, differed by more than 100-fold, depending on the DNA extraction method used. The apparent compositions of the archaeal, bacterial, ciliate protozoal, and fungal communities in identical rumen samples were assessed using 454 Titanium pyrosequencing. Significant differences in microbial community composition were observed between extraction methods, for example in the relative abundances of members of the phyla Bacteroidetes and Firmicutes. Microbial communities in parallel samples collected from cows by oral stomach-tubing or through a rumen fistula, and in liquid and solid rumen digesta fractions, were compared using one of the DNA extraction methods. Community representations were generally similar, regardless of the rumen sampling technique used, but significant differences in the abundances of some microbial taxa such as the Clostridiales and the Methanobrevibacter ruminantium clade were observed. The apparent microbial community composition differed between rumen sample fractions, and Prevotellaceae were most abundant in the liquid fraction. DNA extraction methods that involved phenol-chloroform extraction and mechanical lysis steps tended to be more comparable. 
However, comparison of data from studies in which different sampling techniques, different rumen sample fractions or different DNA extraction methods were used should be avoided. PMID:24040342
Carter, Melissa D.; Crow, Brian S.; Pantazides, Brooke G.; Watson, Caroline M.; deCastro, B. Rey; Thomas, Jerry D.; Blake, Thomas A.; Johnson, Rudolph C.
2017-01-01
A high-throughput prioritization method was developed for use with a validated confirmatory method detecting organophosphorus nerve agent exposure by immunomagnetic separation-HPLC-MS/MS. A ballistic gradient was incorporated into this analytical method in order to profile unadducted butyrylcholinesterase (BChE) in clinical samples. With a Z′-factor (Zhang et al., 1999) of 0.88 ± 0.01 (SD) for control analytes and a Z-factor of 0.25 ± 0.06 (SD) for serum samples, the assay rates as an "excellent assay" for the synthetic peptide controls used and a "doable assay" when used to prioritize clinical samples. Hits, defined as samples containing BChE Ser-198 adducts or no BChE present, were analyzed with a confirmatory method for identification and quantitation of the BChE adduct, if present. The ability to prioritize samples by highest exposure for confirmatory analysis is of particular importance in exposures to cholinesterase inhibitors such as organophosphorus nerve agents, where a large number of clinical samples may be collected. In an initial blind screen, 67 out of 70 samples were accurately identified, giving an assay accuracy of 96% with no false negatives. The method is the first to provide a high-throughput prioritization assay for profiling adduction of Ser-198 BChE in clinical samples. PMID:23954929
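The assay-quality ratings cited follow the Z′-factor of Zhang et al. (1999), which is computed from the means and standard deviations of positive and negative controls; values above 0.5 indicate an excellent assay. A minimal sketch:

```python
import statistics

def z_factor(positives, negatives):
    """Z'-factor of Zhang et al. (1999) for screening-assay quality:
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    mp, mn = statistics.mean(positives), statistics.mean(negatives)
    sp, sn = statistics.stdev(positives), statistics.stdev(negatives)
    return 1.0 - 3.0 * (sp + sn) / abs(mp - mn)
```

Well-separated controls with small spread give a Z′ close to 1; overlapping distributions push it toward (or below) zero.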
Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L
2013-08-01
Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
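The reweighting idea can be sketched in a few lines: the validation sample (which carries the extra confounders) is reweighted to match the full sample's distribution of a shared covariate before estimating the exposure-outcome contrast. Everything below (age groups, records, proportions) is invented for illustration and is not the Group Health or Adult Changes in Thought data:

```python
from collections import Counter

# Validation-sample records: (age_group, vaccinated, outcome, frailty);
# frailty is the extra confounder observed only in the validation sample
# (it would enter a real confounder adjustment; unused in this toy contrast).
validation = [
    ("65-74", 1, 0, 0), ("65-74", 0, 1, 1), ("75+", 1, 0, 0),
    ("75+", 0, 1, 1), ("65-74", 1, 0, 0), ("75+", 0, 0, 0),
]
# Age distribution of the (much larger) full administrative sample:
full_age = {"65-74": 0.7, "75+": 0.3}

val_age = Counter(rec[0] for rec in validation)
n = len(validation)
# weight = target proportion / observed proportion, per age group
weights = {g: full_age[g] / (val_age[g] / n) for g in full_age}

def weighted_risk(records, exposed):
    num = sum(weights[a] for a, e, y, _ in records if e == exposed and y == 1)
    den = sum(weights[a] for a, e, y, _ in records if e == exposed)
    return num / den

risk_diff = weighted_risk(validation, 1) - weighted_risk(validation, 0)
print(risk_diff)
```

As the abstract notes, the success of this depends on the validation sample being large and representative enough for the weights to be stable.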
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficiently the sampling methods commonly used in biodiversity surveys estimate the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge of movement patterns and detectability for a species is important information to guide field studies aiming to understand sex-ratio-related patterns.
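A toy version of the virtual-ecologist experiment can illustrate the detectability effect and why more sampling days help when individuals are recognizable. All parameters below (population size, daily detection probabilities) are assumptions, not values from the study:

```python
import random

random.seed(1)
N_MALES = N_FEMALES = 500          # true sex ratio = 0.5
P_MALE, P_FEMALE = 0.30, 0.10      # males assumed more detectable per day

def estimated_sex_ratio(days):
    """Sex ratio from counts of unique (marked) individuals detected at
    least once over the survey period, as in mark-recapture studies."""
    detected_m = sum(1 for _ in range(N_MALES)
                     if any(random.random() < P_MALE for _ in range(days)))
    detected_f = sum(1 for _ in range(N_FEMALES)
                     if any(random.random() < P_FEMALE for _ in range(days)))
    return detected_m / (detected_m + detected_f)

short = estimated_sex_ratio(days=2)    # strongly male-biased estimate
long = estimated_sex_ratio(days=40)    # approaches the true 0.5
print(short, long)
```

With few survey days the estimate is male-biased because males saturate their detection probability first; with enough days nearly every individual of both sexes is detected at least once and the bias shrinks.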
[Free crystalline silica: a comparison of methods for its determination in total dust].
Maciejewska, Aleksandra; Szadkowska-Stańczyk, Irena; Kondratowicz, Grzegorz
2005-01-01
The major objective of the study was to compare and investigate the usefulness of quantitative analyses of free crystalline silica (FCS) in the assessment of dust exposure in samples of total dust of varied composition, using three methods: the chemical method in common use in Poland; infrared spectrometry (FT-IR); and X-ray powder diffraction (XRD). Mineral composition and FCS content were investigated in 9 laboratory samples of raw materials, materials, and industrial wastes, containing from about 2 to over 80% crystalline silica and reduced to particles of a size corresponding to that of total dust. Sample components were identified using the XRD and FT-IR methods. Ten independent determinations of FCS with each of the three methods were performed on the dust samples. Linear correlation analysis was applied to investigate the interrelationship between mean FCS determinations. In the analyzed dust samples, numerous minerals that interfere with silica during quantitative analysis were present alongside the silica dust. Comparison of mean FCS determinations showed that the results obtained using the FT-IR method were 12-13% lower than those obtained with the two other methods. However, the differences observed were within the limits of variability associated with the precision of the results and their dependence on the reference materials used. Assessment of occupational exposure to dusts containing crystalline silica can be performed on the basis of quantitative analysis of FCS in total dusts using any of the compared methods. The FT-IR method is most appropriate for FCS determination in samples with small amounts of silica or collected at low dust concentrations; the XRD method for the analysis of multicomponent samples; and the chemical method in the case of medium and high FCS content in samples or high concentrations of dusts in the work environment.
Feng, Shu; Gale, Michael J.; Fay, Jonathan D.; Faridi, Ambar; Titus, Hope E.; Garg, Anupam K.; Michaels, Keith V.; Erker, Laura R.; Peters, Dawn; Smith, Travis B.; Pennesi, Mark E.
2015-01-01
Purpose: To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Methods: Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method); the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method); and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Results: Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. Conclusions: We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population. PMID:26325414
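Converting a count in one 50 × 50-μm sampling window into a density in cones/mm² is a one-line calculation; the window size follows the protocol above, but the cone count below is made up:

```python
def cone_density(count, window_um=50):
    """Cones per mm^2 from a count in a square sampling window of the
    given side length in micrometers (50 um per the protocol above)."""
    area_mm2 = (window_um / 1000.0) ** 2   # 50 um -> 0.05 mm -> 0.0025 mm^2
    return count / area_mm2

# 30 cones counted in one 50 x 50-um window -> 30 / 0.0025 = 12,000 cones/mm^2
print(round(cone_density(30)))
```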
Comparing the NIOSH Method 5040 to a Diesel Particulate Matter Meter for Elemental Carbon
NASA Astrophysics Data System (ADS)
Ayers, David Matthew
Introduction: The sampling of elemental carbon has been associated with monitoring exposures in the trucking and mining industries. Recently, in the field of engineered nanomaterials, single-wall and multi-wall carbon nanotubes (MWCNTs) are being produced in ever increasing quantities. The only approved atmospheric sampling method for multi-wall carbon nanotubes is NIOSH Method 5040. Its results are accurate, but it can take up to 30 days for sample results to be received. Objectives: Compare the results of elemental carbon sampling from NIOSH Method 5040 to a Diesel Particulate Matter (DPM) meter. Methods: MWCNTs were transferred and weighed between several trays placed on a scale. The NIOSH Method 5040 and DPM sampling train was hung 6 inches above the receiving tray. The transferring and weighing of the MWCNTs created an aerosol containing elemental carbon. Twenty-one total samples were collected using both meter types. Results: The assumptions for a two-way ANOVA were violated; therefore, Mann-Whitney U tests and a Kruskal-Wallis test were performed. The null hypotheses for both research questions were rejected. There was a significant difference in the EC concentrations obtained by NIOSH Method 5040 and the DPM meter. There were also significant differences in elemental carbon concentrations when sampled using a DPM meter versus a sampling pump at each of the three concentration levels (low, medium, and high). Conclusions: The differences in the EC concentrations were statistically significant; therefore, the two methods (NIOSH Method 5040 and DPM) are not equivalent. NIOSH Method 5040 should continue to be the only authorized method of establishing an EC concentration for MWCNTs until a MWCNT-specific method or an instantaneous meter is developed.
Methods for estimating the amount of vernal pool habitat in the northeastern United States
Van Meter, R.; Bailey, L.L.; Grant, E.H.C.
2008-01-01
The loss of small, seasonal wetlands is a major concern for a variety of state, local, and federal organizations in the northeastern U.S. Identifying and estimating the number of vernal pools within a given region is critical to developing long-term conservation and management strategies for these unique habitats and their faunal communities. We use three probabilistic sampling methods (simple random sampling, adaptive cluster sampling, and the dual frame method) to estimate the number of vernal pools on protected, forested lands. Overall, these methods yielded similar values of vernal pool abundance for each study area, and suggest that photographic interpretation alone may grossly underestimate the number of vernal pools in forested habitats. We compare the relative efficiency of each method and discuss ways of improving precision. Acknowledging that the objectives of a study or monitoring program ultimately determine which sampling designs are most appropriate, we recommend that some type of probabilistic sampling method be applied. We view the dual-frame method as an especially useful way of combining incomplete remote sensing methods, such as aerial photograph interpretation, with a probabilistic sample of the entire area of interest to provide more robust estimates of the number of vernal pools and a more representative sample of existing vernal pool habitats.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, D.P.; Bielecki, A.; Zax, D.B.; Zilm, K.W.; Pines, A.
1987-12-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques. 5 figs.
Method and apparatus for measuring nuclear magnetic properties
Weitekamp, Daniel P.; Bielecki, Anthony; Zax, David B.; Zilm, Kurt W.; Pines, Alexander
1987-01-01
A method for studying the chemical and structural characteristics of materials is disclosed. The method includes placement of a sample material in a high strength polarizing magnetic field to order the sample nuclei. The condition used to order the sample is then removed abruptly and the ordering of the sample allowed to evolve for a time interval. At the end of the time interval, the ordering of the sample is measured by conventional nuclear magnetic resonance techniques.
A LITERATURE REVIEW OF WIPE SAMPLING METHODS ...
Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, wetting solvent, and determinative step to be used, depending upon the contaminant of concern. The objective of this report is to concisely summarize the findings of a literature review that was conducted to identify the state-of-the-art wipe sampling techniques for a target list of compounds. This report describes the methods used to perform the literature review; a brief review of wipe sampling techniques in general; an analysis of physical and chemical properties of each target analyte; an analysis of wipe sampling techniques for the target analyte list; and a summary of the wipe sampling techniques for the target analyte list, including existing data gaps. In general, no overwhelming consensus can be drawn from the current literature on how to collect a wipe sample for the chemical warfare agents, organophosphate pesticides, and other toxic industrial chemicals of interest to this study. Different methods, media, and wetting solvents have been recommended and used by various groups and different studies. For many of the compounds of interest, no specific wipe sampling methodology has been established for their collection. Before a wipe sampling method (or methods) can be established for the co
NASA Astrophysics Data System (ADS)
Aspinall, M. D.; Joyce, M. J.; Mackin, R. O.; Jarrah, Z.; Boston, A. J.; Nolan, P. J.; Peyton, A. J.; Hawkes, N. P.
2009-01-01
A unique, digital time pick-off method, known as sample-interpolation timing (SIT) is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa/s. Events arising from the 7Li(p, n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
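The abstract does not give the SIT algorithm itself, but the generic idea behind interpolated time pick-off, estimating a sub-sample crossing time between two samples that straddle a threshold, can be sketched as follows (waveform, threshold, and sample period are invented; a leading-edge trigger is used for simplicity):

```python
def interpolated_crossing(samples, dt, threshold):
    """Return the linearly interpolated time at which the digitized pulse
    first crosses the threshold, giving sub-sample timing resolution."""
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            frac = (threshold - lo) / (hi - lo)  # fractional position
            return (i - 1 + frac) * dt
    return None   # threshold never crossed

pulse = [0.0, 0.1, 0.4, 0.9, 1.0, 0.7, 0.3]   # made-up sampled leading edge
t = interpolated_crossing(pulse, dt=0.125, threshold=0.5)  # dt in ns at 8 GSa/s
print(t)
```

The crossing lands between samples 2 and 3, so the returned time falls between 0.25 and 0.375 ns rather than snapping to a sample boundary, which is the resolution gain SIT exploits at low sampling rates.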
Detection and monitoring of invasive exotic plants: a comparison of four sampling methods
Cynthia D. Huebner
2007-01-01
The ability to detect and monitor exotic invasive plants is likely to vary depending on the sampling method employed. Methods with strong qualitative thoroughness for species detection often lack the intensity necessary to monitor vegetation change. Four sampling methods (systematic plot, stratified-random plot, modified Whittaker, and timed meander) in hemlock and red...
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually process each RNA-seq sample separately, ignoring that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
Taylor, Vivien F; Toms, Andrew; Longerich, Henry P
2002-01-01
The application of open-vessel focused microwave acid digestion is described for the preparation of geological and environmental samples for analysis by inductively coupled plasma-mass spectrometry (ICP-MS). The method is compared to conventional closed-vessel high-pressure methods, which are limited in the use of HF to break down silicates. Open-vessel acid digestion more conveniently enables the use of HF to remove Si from geological and plant samples as volatile SiF4, as well as evaporation to dryness and sequential acid addition during the procedure. Rock reference materials (G-2 granite, MRG-1 gabbro, SY-2 syenite, JA-1 andesite, and JB-2 and SRM-688 basalts) and plant reference materials (BCR and IAEA lichens, peach leaves, apple leaves, durum wheat flour, and pine needles) were digested with results comparable to conventional hotplate digestion. The microwave digestion method gave poor results for granitic samples containing refractory minerals; for these samples, fusion was the preferred method of preparation. Sample preparation time was reduced from several days, using the conventional hotplate digestion method, to one hour per sample using our microwave method.
Ikebe, Jinzen; Umezawa, Koji; Higo, Junichi
2016-03-01
Molecular dynamics (MD) simulations using all-atom and explicit solvent models provide valuable information on the detailed behavior of protein-partner substrate binding at the atomic level. As the power of computational resources increases, MD simulations are being used more widely and easily. However, it is still difficult to investigate the thermodynamic properties of protein-partner substrate binding and protein folding with conventional MD simulations. Enhanced sampling methods have been developed to sample conformations that reflect equilibrium conditions more efficiently than conventional MD simulations, thereby allowing the construction of accurate free-energy landscapes. In this review, we discuss these enhanced sampling methods using a series of case-by-case examples. In particular, we review enhanced sampling methods based on trivial trajectory parallelization, virtual-system coupled multicanonical MD, and adaptive lambda square dynamics. These methods have been developed recently from the existing method of multicanonical MD simulation. Their applications are reviewed with an emphasis on their practical implementation. In our concluding remarks we explore extensions of the enhanced sampling methods that may allow for even more efficient sampling.
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence in which, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5%, with an occasional, minute trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of the initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when the second stage can be abandoned on sample size grounds, there is often an advantage in terms of power or sample size compared with the previously published methods.
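Underlying each stage of such designs is the standard bioequivalence criterion: the 90% confidence interval for the test/reference geometric mean ratio must lie within 0.80-1.25. A simplified sketch of that criterion only (a normal quantile stands in for the t-distribution, the log-ratios are made up, and this is not Potvin's full two-stage decision scheme):

```python
import math
import statistics

def tost_be(log_ratios, lo=0.80, hi=1.25):
    """Bioequivalence via the 90% CI on log(test/reference): equivalent
    to two one-sided tests (TOST) at alpha = 0.05 per side.
    Simplification: uses the normal 95th percentile instead of t(n-1)."""
    n = len(log_ratios)
    m = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(n)
    z = 1.6449  # ~95th percentile of the standard normal (assumed shortcut)
    ci = (math.exp(m - z * se), math.exp(m + z * se))
    return (lo <= ci[0] and ci[1] <= hi), ci

# Hypothetical within-subject log(test/reference) ratios from stage 1:
ok, ci = tost_be([0.02, -0.05, 0.10, 0.01, -0.03, 0.04, 0.00, 0.06])
print(ok, ci)
```

In a sequential design, a stage-1 result like this would also be checked against the adjusted alpha and power rules before deciding whether (and how large) stage 2 must be.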
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples-unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six of the methods usually used for anuran sampling. For each method, we recorded the total time spent (in min), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (the time necessary for a researcher to capture an individual frog) in order to make the data comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/individual [MSI]; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and pit-fall traps with drift-fence methods resulted in no frog captures. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
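The efficiency index can be computed directly from the three recorded quantities. The sketch below assumes the index is person-minutes of effort per captured individual (lower = more efficient); the survey numbers are invented, not the study's data:

```python
def capture_efficiency(total_minutes, n_searchers, n_captured):
    """Assumed definition: person-minutes of search effort per individual
    frog captured (min/searcher/individual); lower is more efficient."""
    return (total_minutes * n_searchers) / n_captured

# Hypothetical survey: 2 searchers, 120 min of searching, 22 frogs captured
print(capture_efficiency(120, 2, 22))
```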
Rapid method for sampling metals for materials identification
NASA Technical Reports Server (NTRS)
Higgins, L. E.
1971-01-01
Nondamaging process similar to electrochemical machining is useful in obtaining metal samples from places inaccessible to conventional sampling methods or where methods would be hazardous or contaminating to specimens. Process applies to industries where metals or metal alloys play a vital role.
Methods for making nucleotide probes for sequencing and synthesis
Church, George M; Zhang, Kun; Chou, Joseph
2014-07-08
Compositions and methods for making a plurality of probes for analyzing a plurality of nucleic acid samples are provided. Compositions and methods for analyzing a plurality of nucleic acid samples to obtain sequence information in each nucleic acid sample are also provided.
Systematic Evaluation of Aggressive Air Sampling for Bacillus ...
Report The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Evaluation of Surface Sampling for Bacillus Spores Using ...
Report The primary objectives of this project were to evaluate the Aggressive Air Sampling (AAS) method compared to currently used surface sampling methods and to determine if AAS is a viable option for sampling Bacillus anthracis spores.
Effects of Sample Preparation on the Infrared Reflectance Spectra of Powders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.
2015-05-22
While reflectance spectroscopy is a useful tool in identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-packed as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Effects of sample preparation on the infrared reflectance spectra of powders
NASA Astrophysics Data System (ADS)
Brauer, Carolyn S.; Johnson, Timothy J.; Myers, Tanya L.; Su, Yin-Fong; Blake, Thomas A.; Forland, Brenda M.
2015-05-01
While reflectance spectroscopy is a useful tool for identifying molecular compounds, laboratory measurement of solid (particularly powder) samples often is confounded by sample preparation methods. For example, both the packing density and surface roughness can have an effect on the quantitative reflectance spectra of powdered samples. Recent efforts in our group have focused on developing standard methods for measuring reflectance spectra that account for sample preparation, as well as other factors such as particle size and provenance. In this work, the effect of preparation method on sample reflectivity was investigated by measuring the directional-hemispherical spectra of samples that were hand-loaded as well as pressed into pellets, using an integrating sphere attached to a Fourier transform infrared spectrometer. The results show that the methods used to prepare the sample can have a substantial effect on the measured reflectance spectra, as do other factors such as particle size.
Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean
2014-01-01
MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, systematic sampling of records seems to be the most efficient method of correcting sampling bias and can be recommended in most cases.
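Systematic (spatial) sampling of records, the best-performing correction here, amounts to keeping at most one occurrence per grid cell so that densely surveyed areas do not dominate model training. A minimal sketch; grid origin, cell size, and coordinates are all arbitrary illustrations:

```python
def thin_records(records, cell_size):
    """records: (lon, lat) tuples; keep the first record seen in each
    grid cell of side cell_size (degrees), discarding the rest."""
    kept = {}
    for lon, lat in records:
        cell = (int(lon // cell_size), int(lat // cell_size))
        kept.setdefault(cell, (lon, lat))   # first record in each cell wins
    return list(kept.values())

occurrences = [(1.01, 4.02), (1.02, 4.03), (1.04, 4.01),  # one crowded cell
               (2.50, 4.90), (7.20, 3.10)]
print(thin_records(occurrences, cell_size=0.5))
```

The three clustered records collapse to one, leaving three spatially spread records for model training.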
Tulipan, Rachel J; Phillips, Heidi; Garrett, Laura D; Dirikolu, Levent; Mitchell, Mark A
2017-05-01
OBJECTIVE To characterize long-term elution of platinum from carboplatin-impregnated calcium sulfate hemihydrate (CI-CSH) beads in vitro by comparing 2 distinct sample collection methods designed to mimic 2 in vivo environments. SAMPLES 162 CI-CSH beads containing 4.6 mg of carboplatin (2.4 mg of platinum/bead). PROCEDURES For method 1, which mimicked an in vivo environment with rapid and complete fluid exchange, each of 3 plastic 10-mL conical tubes contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained by evacuation of all fluid at 1, 2, 3, 6, 9, and 12 hours and 1, 2, 3, 6, 9, 12, 15, 18, 22, 26, and 30 days. Five milliliters of fresh PBS solution was then added to each tube. For method 2, which mimicked an in vivo environment with no fluid exchange, each of 51 tubes (ie, 3 tubes/17 sample collection times) contained 3 CI-CSH beads and 5 mL of PBS solution. Eluent samples were obtained from the assigned tubes for each time point. All samples were analyzed for platinum content by inductively coupled plasma-mass spectrometry. RESULTS Platinum was released from CI-CSH beads for 22 to 30 days. Significant differences were found in platinum concentration and percentage of platinum eluted from CI-CSH beads over time for each method. Platinum concentrations and elution percentages in method 2 samples were significantly higher than those of method 1 samples, except for the first hour measurements. CONCLUSIONS AND CLINICAL RELEVANCE Sample collection methods 1 and 2 may provide estimates of the minimum and maximum platinum release, respectively, from CI-CSH beads in vivo.
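Under sample collection method 1 (complete fluid exchange), each aliquot removes everything eluted since the previous exchange, so the cumulative percentage of platinum released is a running sum of concentration times aliquot volume. A sketch of that arithmetic, using hypothetical concentrations rather than the study's measurements:

```python
def cumulative_elution(concs_mg_per_ml, volume_ml, total_platinum_mg):
    """Cumulative percent of platinum eluted under complete fluid exchange:
    total release by each time point is the running sum of
    (measured concentration x aliquot volume)."""
    released = 0.0
    percents = []
    for c in concs_mg_per_ml:
        released += c * volume_ml
        percents.append(100.0 * released / total_platinum_mg)
    return percents

# 3 beads x 2.4 mg platinum/bead = 7.2 mg total platinum per tube;
# the concentrations here are hypothetical, not values from the study
pcts = cumulative_elution([0.10, 0.05, 0.02], volume_ml=5.0, total_platinum_mg=7.2)
```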
Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks
NASA Astrophysics Data System (ADS)
Sun, Wei; Chang, K. C.
2005-05-01
Probabilistic inference for Bayesian networks is in general NP-hard, whether exact algorithms or approximate methods are used. For very complex networks, however, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available: logic sampling (the first stochastic method proposed for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these simulation methods, then propose an improved importance sampling algorithm called linear Gaussian importance sampling (LGIS) for general hybrid models. LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in the network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. Performance comparison with well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
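As a concrete baseline, the likelihood weighting algorithm reviewed above can be sketched on a toy two-node discrete network: evidence variables are clamped and each sample is weighted by the probability of the evidence given its sampled parents. The network and its probabilities are illustrative assumptions; LGIS itself, with its adaptive linear Gaussian importance function, is not reproduced here.

```python
import random

def likelihood_weighting(query, evidence, n_samples=20000, seed=1):
    """Likelihood weighting on a tiny network A -> B with
    P(A=1)=0.3, P(B=1|A=1)=0.9, P(B=1|A=0)=0.2 (illustrative values).
    Returns the weighted estimate of P(query=1 | evidence)."""
    rng = random.Random(seed)
    p_a = 0.3
    p_b_given_a = {1: 0.9, 0: 0.2}
    num = den = 0.0
    for _ in range(n_samples):
        w = 1.0
        if 'A' in evidence:                      # clamp and weight
            a = evidence['A']
            w *= p_a if a == 1 else 1 - p_a
        else:                                    # sample from the prior
            a = 1 if rng.random() < p_a else 0
        if 'B' in evidence:
            b = evidence['B']
            pb = p_b_given_a[a]
            w *= pb if b == 1 else 1 - pb
        else:
            b = 1 if rng.random() < p_b_given_a[a] else 0
        den += w
        if {'A': a, 'B': b}[query] == 1:
            num += w
    return num / den

# Exact answer by Bayes: P(A=1|B=1) = 0.27 / 0.41
est = likelihood_weighting('A', {'B': 1})
```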
Determination des Parametres Atmospheriques des Etoiles Naines Blanches de Type DB
NASA Astrophysics Data System (ADS)
Beauchamp, Alain
1995-01-01
White dwarf stars whose visible spectra are dominated by strong neutral-helium lines are subdivided into three classes: DB (neutral helium lines only), DBA (neutral helium and hydrogen lines) and DBZ (neutral helium and heavy-element lines). We analyze three samples of observed spectra of these types of white dwarfs. The samples consist, respectively, of 48 spectra in the visible range (3700-5100 Å), 24 in the ultraviolet (1200-3100 Å) and four in the red part of the visible (5100-6900 Å). Among the objects of the visible sample, we identify four new DBA stars as well as two new DBZ stars previously classified as DB. The analysis allows us to determine the atmospheric parameters spectroscopically, namely the effective temperature, the surface gravity and, in the case of the DBA stars, the relative hydrogen abundance N(H)/N(He). For objects hotter than ~15,000 K the derived surface gravity is reliable, and we obtain stellar masses from a theoretical mass-radius relation. The demands of analyzing these objects required major improvements in the modeling of their atmospheres and of the emitted radiative flux distributions. We have included in the model atmospheres, for the first time to our knowledge, the effects of the He2+ molecule, as well as the Hummer and Mihalas (1988) equation of state, which accounts for interparticle perturbations in the calculation of the populations of the various atomic levels. Convection is treated within the mixing-length theory. 
Three grids of LTE (local thermodynamic equilibrium) model atmospheres were produced for a set of effective temperatures, surface gravities and hydrogen abundances covering the properties of the stars in our samples; they are characterized by different parameterizations of the mixing-length theory, called ML1, ML2 and ML3 respectively. We computed a grid of synthetic spectra with the same parameterizations as the model-atmosphere grid. Our treatment of neutral-helium line broadening has been significantly improved over previous studies. On the one hand, we account for line broadening produced by interactions between the emitter and neutral particles (resonance and van der Waals broadening) in addition to that caused by charged particles (Stark broadening). On the other hand, we computed the Stark profiles ourselves with the best available broadening theories for most of the observed lines; these profiles surpass in quality anything published to date. We computed the mass distribution of DB stars hotter than 15,000 K. The DB mass distribution is very narrow, with about three quarters of the stars falling in the interval 0.55-0.65 M⊙. The mean mass of the DB stars is 0.58 M⊙ with sigma = 0.07. The main difference between the DB and DA mass distributions is the small proportion of DB stars in the wings of the distribution, which implies that DA stars less massive than ~0.4 M⊙ and more massive than ~0.8 M⊙ do not convert into DB stars. The most massive objects in our sample are of type DBA, suggesting that high mass favors the visibility of hydrogen. (Abstract shortened by UMI.)
Benefit of a topical slimming cream in conjunction with dietary advice
Escudier, B; Fanchon, C; Labrousse, E; Pellae, M
2011-01-01
Synopsis The aim of this study was to determine how worthwhile it would be to combine a newly developed topical slimming product with customized dietary advice not based on calorie restriction, so as to improve the cellulite appearance of the skin. At the beginning of the study, a nutritionist recorded the dietary habits of each participant and gave recommendations to each of them according to their food consumption. The chosen methodology was a right/left comparison, one thigh and hip being treated with the new topical slimming product and the other left untreated to serve as a randomized control. Objective evaluations were performed by blinded assessors. Control of food intake improved the cellulite score after 4 weeks when compared with the baseline value, but this reduction was significantly greater and earlier on the treated side than on the untreated side, indicating an objective additional benefit derived from the new slimming cream. This result corroborated the slimming effect assessed by measurement in centimetres of the circumference of the upper thighs and the reconstructed volume of the thigh between two fixed horizontal slices. Furthermore, skin tonicity, a major component of cellulite visibility, was also significantly improved on the treated side after only 2 weeks. This new topical cream thus enhances the benefit of dietetic control for the treatment of the visible aspect of cellulite on the skin. PMID:21284660
NASA Astrophysics Data System (ADS)
Goyette, Stephane
1995-11-01
The subject of this thesis is regional climate modeling. The main objective is to develop a regional climate model capable of simulating mesoscale phenomena. Our study area is the West Coast of North America, chosen because of the complexity of its relief and the control it exerts on climate. The motivations for this study are multiple: on the one hand, we cannot in practice increase the coarse spatial resolution of atmospheric general circulation models (GCMs) without prohibitively raising integration costs; on the other hand, environmental management increasingly demands regional climate data at higher spatial resolution. Until now, GCMs have been the models most valued for their ability to simulate climate and global climate change. However, fine-scale climate phenomena still elude GCMs because of their coarse spatial resolution, and the socio-economic repercussions of possible climate change are closely tied to phenomena imperceptible to current GCMs. To circumvent some of these resolution problems, a practical approach is to take a limited spatial domain of a GCM and nest within it another numerical model with a high-resolution grid. This nesting implies a new numerical simulation: the "retro-simulation" is guided within the restricted domain by information supplied by the GCM and forced by mechanisms handled solely by the nested model. Thus, to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR that yields regional climate information valid at fine spatial scale. 
This new class of nested "intelligent" model-interpolators belongs to the family of so-called "driven" models. The guiding hypothesis of our study is that fine-scale climate is often governed by surface forcings rather than by large-scale atmospheric transport. The technique we propose therefore guides FIZR with the sampled dynamics of a GCM and forces it with the GCM physics, as well as with a mesoscale orographic forcing, at each node of the fine computational grid. To validate the robustness and accuracy of our regional climate model, we chose the West Coast region of the North American continent, which is characterized by a geographical distribution of precipitation and temperature strongly influenced by the underlying relief. The results of a one-month January simulation with FIZR show that we can simulate precipitation and screen-level temperature fields much closer to climate observations than those simulated by a GCM. This performance is clearly attributable to the mesoscale orographic forcing and to the surface characteristics determined at fine scale. A model similar to FIZR can, in principle, be implemented on any GCM, so any research organization involved in large-scale global numerical modeling could adopt such a regionalization tool.
de Souza, Marjorie M A; Hartel, Gunter; Whiteman, David C; Antonsson, Annika
2018-04-01
Very little is known about the natural history of oral HPV infection. Several different methods exist to collect oral specimens and detect HPV, but their respective performance characteristics are unknown. We compared two different methods for oral specimen collection (oral saline rinse and commercial saliva kit) from 96 individuals and then analyzed the samples for HPV by two different PCR detection methods (single GP5+/6+ PCR and nested MY09/11 and GP5+/6+ PCR). For the oral rinse samples, the oral HPV prevalence was 10.4% (GP+ PCR; 10% repeatability) vs 11.5% (nested PCR method; 100% repeatability). For the commercial saliva kit samples, the prevalences were 3.1% vs 16.7% with the GP+ PCR vs the nested PCR method (repeatability 100% for both detection methods). Overall the agreement was fair or poor between samples and methods (kappa 0.06-0.36). Standardizing methods of oral sample collection and HPV detection would ensure comparability between future oral HPV studies. Copyright © 2017 Elsevier Inc. All rights reserved.
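Agreement between collection and detection methods is summarized above with the kappa statistic, which corrects observed agreement for the agreement expected by chance. A minimal implementation of Cohen's kappa for binary HPV calls (illustrative, not the study's code):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two binary call series (1 = positive, 0 = negative)
    on the same samples: (observed - expected) / (1 - expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a = sum(labels_a) / n
    p_b = sum(labels_b) / n
    # chance agreement: both positive or both negative, assuming independence
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)
```

A kappa near 0 (as for several method pairs above) means agreement barely exceeds chance, even when raw percent agreement looks high.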
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2016-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p ≤ 0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p > 0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. PMID:24184312
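The one-way ANOVA used above reduces to a ratio of between-group to within-group variance. A dependency-free sketch of the F statistic; the per-sample recovery values would come from the study's plate counts, which the abstract does not list:

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across recovery groups
    (e.g., extraction vs. vacuum sock vs. cassette filter)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-values reported above.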
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Abstract - Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
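Flow-stratified sampling yields unbiased load and variance estimates via the standard stratified estimator: each flow stratum contributes its size times its sample mean, with a finite-population correction on the variance. A sketch assuming simple random sampling within each stratum (stratum sizes and load values below are made up):

```python
def stratified_load_estimate(strata):
    """Unbiased total-load and variance estimates from stratified sampling.
    Each stratum is (N_h, samples): N_h sampling units in that flow class,
    and the list of loads measured on the randomly sampled units."""
    total = 0.0
    variance = 0.0
    for n_h, sample in strata:
        m = len(sample)
        mean = sum(sample) / m
        total += n_h * mean
        if m > 1:
            s2 = sum((x - mean) ** 2 for x in sample) / (m - 1)
            variance += n_h * (n_h - m) * s2 / m  # finite-population correction
    return total, variance

est_total, est_var = stratified_load_estimate([(10, [2.0, 4.0]), (5, [10.0])])
```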
Drummond, A; Rodrigo, A G
2000-12-01
Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
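Since sUPGMA is a variant of UPGMA, its clustering core is ordinary average-linkage agglomeration on a pairwise distance matrix. The sketch below shows only that shared core; the serial-sample time correction that distinguishes sUPGMA is not reproduced.

```python
def upgma(dist, names):
    """Plain UPGMA on a symmetric distance matrix.
    Returns a nested-tuple tree and the merge heights (half the
    merge distances, under the molecular-clock assumption)."""
    clusters = {i: (names[i], 1) for i in range(len(names))}
    d = {(i, j): dist[i][j]
         for i in range(len(names)) for j in range(i + 1, len(names))}
    heights = []
    nxt = len(names)
    while len(clusters) > 1:
        (i, j), dij = min(d.items(), key=lambda kv: kv[1])
        ti, ni = clusters.pop(i)
        tj, nj = clusters.pop(j)
        heights.append(dij / 2)
        for k in list(clusters):
            # average-linkage update, weighted by cluster sizes
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(min(nxt, k), max(nxt, k))] = (ni * dik + nj * djk) / (ni + nj)
        d = {k: v for k, v in d.items() if i not in k and j not in k}
        clusters[nxt] = ((ti, tj), ni + nj)
        nxt += 1
    return clusters.popitem()[1][0], heights

tree, heights = upgma([[0, 2, 6], [2, 0, 6], [6, 6, 0]], ['A', 'B', 'C'])
```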
A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases
Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.
2013-01-01
How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty of this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator in all scenarios of sampling exhaustiveness, and it can therefore be efficiently applied to most databases to enhance the reliability of biodiversity analyses. PMID:23326357
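Methods based on species accumulation curves, like those compared above, start from the mean curve over random orderings of sampling visits: a curve that is still climbing at its end suggests an under-sampled area. A generic sketch of that curve (the FIDEGAM scoring itself is not reproduced, and the visit data below are invented):

```python
import random

def accumulation_curve(visits, permutations=100, seed=0):
    """Mean species accumulation curve over random orderings of visits.
    visits: list of sets of species observed per sampling visit.
    Returns the expected cumulative species count after 1..n visits."""
    rng = random.Random(seed)
    curve = [0.0] * len(visits)
    for _ in range(permutations):
        order = visits[:]
        rng.shuffle(order)
        seen = set()
        for i, v in enumerate(order):
            seen.update(v)
            curve[i] += len(seen)
    return [c / permutations for c in curve]

visits = [{'sp1', 'sp2'}, {'sp2'}, {'sp1', 'sp3'}]
curve = accumulation_curve(visits)
```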
Viability qPCR, a new tool for Legionella risk management.
Lizana, X; López, A; Benito, S; Agustí, G; Ríos, M; Piqué, N; Marqués, A M; Codony, F
2017-11-01
Viability quantitative polymerase chain reaction (v-qPCR) is a recent analytical approach for detecting only live microorganisms with DNA amplification-based methods. It relies on a reagent that irreversibly binds the DNA of dead cells. In this study, we evaluated the utility of v-qPCR versus the culture method for Legionellosis risk management. The study was performed on 116 real samples. Water samples were analysed simultaneously by culture, v-qPCR and qPCR, and the results were compared by means of a non-parametric test. In 11.6% of samples both methods (culture and v-qPCR) gave positive results, and in 50.0% of samples both gave negative results. As expected, the methods were not equivalent in all cases: 32.1% of samples were positive by v-qPCR but negative by culture. Only in 6.3% of samples, with very low Legionella levels, was culture positive and v-qPCR negative. In 3.5% of samples, overgrowth of other bacteria prevented culture. Significant differences between culture and v-qPCR were found in the cooling towers-evaporative condensers group, where v-qPCR detected a greater presence and higher concentrations of Legionella spp. (p<0.001). No significant differences between the methods were found in the remaining groups. The v-qPCR method can be used as a quick tool to evaluate Legionellosis risk, especially in cooling towers and evaporative condensers, where it can detect higher levels than culture. The combined interpretation of PCR results along with the ratio of live cells is proposed as a tool for understanding the sample context and estimating the Legionellosis risk potential according to a four-level hierarchy. Copyright © 2017 Elsevier GmbH. All rights reserved.
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Sampling bee communities using pan traps: alternative methods increase sample size
USDA-ARS?s Scientific Manuscript database
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Universal nucleic acids sample preparation method for cells, spores and their mixture
Bavykin, Sergei [Darien, IL]
2011-01-18
The present invention relates to a method for extracting nucleic acids from biological samples. More specifically, the invention relates to a universal method for extracting nucleic acids from unidentified biological samples. An advantage of the presently invented method is its ability to effectively and efficiently extract nucleic acids from a variety of different cell types, including but not limited to prokaryotic or eukaryotic cells and/or recalcitrant organisms (i.e., spores). Unlike prior art methods, which focus on extracting nucleic acids from either vegetative cells or spores, the present invention effectively extracts nucleic acids from spores, multiple cell types, or mixtures thereof using a single method. Importantly, the invented method has demonstrated an ability to extract nucleic acids from spores and vegetative bacterial cells with similar levels of effectiveness. The invented method employs a multi-step protocol that erodes the cell structure of the biological sample; isolates, labels, and fragments the nucleic acids; and purifies the labeled samples from excess dye.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
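The three interval sampling methods simulated above can be expressed in a few lines. The toy event series below (an assumption for illustration, with a true occurrence proportion of 0.5) reproduces the well-known tendency of partial-interval recording to overestimate, and whole-interval recording to underestimate, the time a behavior occupies:

```python
def score_intervals(event, interval, method):
    """Score a 0/1 event series (one entry per time unit) with an interval
    sampling method; returns the estimated fraction of time occupied.
      'mts' - momentary time sampling: look only at the last instant
      'pir' - partial-interval: scored if the event occurred at all
      'wir' - whole-interval: scored only if the event filled the interval
    """
    scores = []
    for start in range(0, len(event), interval):
        chunk = event[start:start + interval]
        if method == 'mts':
            scores.append(chunk[-1])
        elif method == 'pir':
            scores.append(any(chunk))
        elif method == 'wir':
            scores.append(all(chunk))
    return sum(scores) / len(scores)

# 12-unit observation; behavior present for 6 units (true proportion 0.5)
ev = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0]
```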
Preparation of bone samples in the Gliwice Radiocarbon Laboratory for AMS radiocarbon dating.
Piotrowska, N; Goslar, T
2002-12-01
In the Gliwice Radiocarbon Laboratory, a system for preparation of samples for AMS dating has been built. At first it was used to produce graphite targets from plant macrofossils and sediments. In this study we extended its capabilities with the preparation of bones. We dealt with 3 methods; the first was the classical Longin method of collagen extraction, the second one included additional treatment of powdered bone in alkali solution, while in the third one carboxyl carbon was separated from amino acids obtained after hydrolysis of protein. The suitability of the methods was tested on 2 bone samples. Most of our samples gave ages > 40 kyr BP, suggesting good performance of the adapted methods, except for one sample prepared with simple Longin method. For routine preparation of bones we chose the Longin method with additional alkali treatment.
Sun, Yangbo; Chen, Long; Huang, Bisheng; Chen, Keli
2017-07-01
The traditional Chinese medicine calamine is a mineral similar in appearance to many other minerals. Investigations of commercially available calamine samples have shown that many fake and inferior calamine goods are sold on the market. The conventional identification method for calamine is complicated, so given the large scale of calamine sampling, a rapid identification method is needed. To establish a qualitative model using near-infrared (NIR) spectroscopy for rapid identification of various calamine samples, large quantities of calamine samples including crude products, counterfeits and processed products were collected and correctly identified using physicochemical and powder X-ray diffraction methods. The NIR spectroscopy method was used to analyze these samples by combining the multi-reference correlation coefficient (MRCC) method and the error back-propagation artificial neural network algorithm (BP-ANN), so as to realize the qualitative identification of calamine samples. The accuracy rate of the model based on the NIR and MRCC methods was 85%; in addition, the model, which takes multiple factors into consideration, can be used to identify crude calamine products, counterfeits and processed products. Furthermore, by inputting the correlation coefficients of multiple references as the spectral feature data of the samples into a BP-ANN, a qualitative identification model was established whose accuracy rate increased to 95%. The MRCC method can thus serve as an NIR-based feature extraction step in BP-ANN modeling.
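The MRCC feature step described above amounts to correlating each sample spectrum against a set of reference spectra and passing the resulting coefficient vector to the classifier. A minimal sketch; the vectors here are arbitrary stand-ins for NIR spectra, and the BP-ANN stage is not reproduced:

```python
def correlation_coefficient(x, y):
    """Pearson correlation between a sample spectrum and one reference."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mrcc_features(sample, references):
    """Correlation of one spectrum against several references; the vector
    of coefficients is the feature input for the downstream classifier."""
    return [correlation_coefficient(sample, r) for r in references]
```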
Application of a Permethrin Immunosorbent Assay Method to Residential Soil and Dust Samples
A low-cost, high throughput bioanalytical screening method was developed for monitoring cis/trans-permethrin in dust and soil samples. The method consisted of a simple sample preparation procedure [sonication with dichloromethane followed by a solvent exchange into methanol:wate...
Rapid fusion method for the determination of Pu, Np, and Am in large soil samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2015-02-14
A new rapid sodium hydroxide fusion method for the preparation of 10-20 g soil samples has been developed by the Savannah River National Laboratory (SRNL). The method enables lower detection limits for plutonium, neptunium, and americium in environmental soil samples. The method also significantly reduces sample processing time and acid fume generation compared to traditional soil digestion techniques using hydrofluoric acid. Ten gram soil aliquots can be ashed and fused using the new method in 1-2 hours, completely dissolving samples, including refractory particles. Pu, Np and Am are separated using stacked 2 mL cartridges of TEVA and DGA Resin and measured using alpha spectrometry. The method can be adapted for measurement by inductively-coupled plasma mass spectrometry (ICP-MS). Two 10 g soil aliquots of fused soil may be combined prior to chromatographic separations to further improve detection limits. Total sample preparation time, including chromatographic separations and alpha spectrometry source preparation, is less than 8 hours.
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate that the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition, we show several examples where localized feature selection produces better results than a global feature selection method.
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes built-in functions which lend themselves well to this purpose, including recommended interpolation procedures for estimating the 2.5th and 97.5th percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general, and to the use of Microsoft Excel® 2010 for estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
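The percentile-bootstrap procedure described above is straightforward to sketch outside Excel as well. The following is a minimal illustration in Python using only the standard library; the function names and the interpolation rule (the common linear-interpolation percentile) are our own choices for illustration, not the paper's exact Excel formulas:

```python
import random

def percentile(sorted_vals, p):
    """Linear-interpolation percentile of an already-sorted list, p in [0, 1]."""
    n = len(sorted_vals)
    k = (n - 1) * p
    lo, hi = int(k), min(int(k) + 1, n - 1)
    frac = k - int(k)
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def bootstrap_reference_interval(values, n_boot=1000, seed=1):
    """Percentile-bootstrap estimate of the 2.5th/97.5th reference limits:
    resample with replacement, take the limits of each resample, average."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = sorted(rng.choices(values, k=len(values)))  # with replacement
        lows.append(percentile(resample, 0.025))
        highs.append(percentile(resample, 0.975))
    return sum(lows) / n_boot, sum(highs) / n_boot
```

For example, feeding in the measured values of ~40 reference individuals returns the estimated lower and upper reference limits.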
Uran, Harun; Gokoglu, Nalan
2014-04-01
The aim of this study was to determine the nutritional and quality characteristics of anchovy after cooking. The fish were cooked by different methods (frying, baking and grilling) at two different temperatures (160 °C, 180 °C). Crude ash, crude protein and crude fat contents of cooked fish increased owing to the rise in dry matter content. While cooking method affected the mineral content of anchovy, cooking temperature did not. The highest values of monounsaturated fatty acids were found in baked samples. Polyunsaturated fatty acids in baked samples were also high, and similar to those in fried samples. Fried samples, which were the most preferred, lost their nutritional characteristics more than baked and grilled samples. Grilled and baked fish samples can be recommended for healthy consumption; however, grilled samples had a harder texture owing to greater moisture loss than with the other methods. It is therefore concluded that baking is the best cooking method for anchovy.
Cruz, Mutya; Wang, Miao; Frisch-Daiello, Jessica; Han, Xianlin
2016-07-01
Extraction of lipids from biological samples is a critical step in lipidomics, especially for shotgun lipidomics where lipid extracts are directly infused into a mass spectrometer. The butanol-methanol (BUME) extraction method was originally developed to extract lipids from plasma samples with 1% acetic acid. Considering some lipids are sensitive to acidic environments, we modified this protocol by replacing acetic acid with lithium chloride solution and extended the modified extraction to tissue samples. Although no significant reduction of plasmalogen levels in the acidic BUME extracts of rat heart samples was found, the modified method was established to extract various tissue samples, including rat liver, heart, and plasma. Essentially identical profiles of the majority of lipid classes were obtained from the extracts of the modified BUME and traditional Bligh-Dyer methods. However, it was found that neither the original nor the modified BUME method was suitable for 4-hydroxyalkenal species measurement in biological samples.
Jiang, Jia-Jia; Duan, Fa-Jie; Li, Yan-Chao; Hua, Xiang-Ning
2014-03-01
Synchronization sampling is very important in underwater towed array systems, where every acquisition node (AN) samples analog signals with its own analog-to-digital converter (ADC). In this paper, a simple and effective synchronization sampling method is proposed to ensure synchronized operation among the different ANs of an underwater towed array system. We first present a master-slave synchronization sampling model, and then design a high-accuracy phase-locked loop to synchronize all delta-sigma ADCs to a reference clock. However, when the master-slave synchronization sampling model is used, both the time delay (TD) of messages traveling along the wired transmission medium and the jitter of the clocks introduce synchronization sampling error (SSE). Therefore, a simple method is proposed to estimate and compensate the TD of message transmission, and another effective method is presented to overcome the SSE caused by clock jitter. An experimental system with three ANs is set up, and the related experimental results verify the validity of the proposed synchronization sampling method.
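The abstract does not give the delay-compensation formula, so the following is only a sketch of the standard round-trip timestamp exchange often used for this purpose (as in NTP-style synchronization), assuming a symmetric wired link; the function names are illustrative, not the authors':

```python
def estimate_one_way_delay(t1, t2, t3, t4):
    """Round-trip estimate of the one-way transmission delay (TD).
    Master sends at t1, slave receives at t2, slave replies at t3,
    master receives at t4; for a symmetric link,
    TD ~= ((t4 - t1) - (t3 - t2)) / 2."""
    return ((t4 - t1) - (t3 - t2)) / 2.0

def compensated_sample_time(slave_timestamp, td):
    """Shift a slave node's sample timestamp back into the master's
    timebase by subtracting the estimated transmission delay."""
    return slave_timestamp - td
```

With the TD removed, residual misalignment between ANs is due to clock jitter, which the paper addresses separately via the phase-locked loop.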
Rapid fusion method for the determination of refractory thorium and uranium isotopes in soil samples
Maxwell, Sherrod L.; Hutchison, Jay B.; McAlister, Daniel R.
2015-02-14
Recently, approximately 80% of participating laboratories failed to accurately determine uranium isotopes in soil samples in the U.S. Department of Energy Mixed Analyte Performance Evaluation Program (MAPEP) Session 30, due to incomplete dissolution of refractory particles in the samples. Failing laboratories employed acid dissolution methods, including hydrofluoric acid, to recover uranium from the soil matrix. The failures illustrate the importance of rugged soil dissolution methods for the accurate measurement of analytes in the sample matrix. A new rapid fusion method has been developed by the Savannah River National Laboratory (SRNL) to prepare 1-2 g soil sample aliquots very quickly, with total dissolution of refractory particles. Soil samples are fused with sodium hydroxide at 600 °C in zirconium crucibles to enable complete dissolution of the sample. Uranium and thorium are separated on stacked TEVA and TRU extraction chromatographic resin cartridges, prior to isotopic measurements by alpha spectrometry on cerium fluoride microprecipitation sources. Plutonium can also be separated and measured using this method. Batches of 12 samples can be prepared for measurement in <5 hours.
A Comparison of Two Sampling Strategies to Assess Discomycete Diversity in Wet Tropical Forests
Cantrell, Sharon A.
2004-01-01
Most fungal diversity studies that have used a systematic collecting scheme have not included the discomycetes, so optimal sampling methods are not available for this group. In this study, I tested two sampling methods at sites in the Caribbean National Forest, Puerto Rico, and the Ebano Verde Reserve, Dominican Republic. For a plot-based sampling method, 10 ...
Lungu, Bwalya; Waltman, W Douglas; Berghaus, Roy D; Hofacre, Charles L
2012-04-01
Conventional culture methods have traditionally been considered the "gold standard" for the isolation and identification of foodborne bacterial pathogens. However, culture methods are labor-intensive and time-consuming. A Salmonella enterica serotype Enteritidis-specific real-time PCR assay that recently received interim approval by the National Poultry Improvement Plan for the detection of Salmonella Enteritidis was evaluated against a culture method that had also received interim National Poultry Improvement Plan approval for the analysis of environmental samples from integrated poultry houses. The method was validated with 422 field samples collected by either the boot sock or the drag swab method. The samples were cultured by selective enrichment in tetrathionate broth, followed by transfer onto a modified semisolid Rappaport-Vassiliadis medium and then plating onto brilliant green with novobiocin and xylose lysine brilliant Tergitol 4 plates. One-milliliter aliquots of the selective enrichment broths from each sample were collected for DNA extraction by the commercial PrepSEQ nucleic acid extraction assay and analysis by the Salmonella Enteritidis-specific real-time PCR assay. The real-time PCR assay detected no significant differences between the boot sock and drag swab samples. In contrast, the culture method detected a significantly higher number of positive samples from boot socks. The diagnostic sensitivity of the real-time PCR assay for the field samples was significantly higher than that of the culture method. The kappa value obtained was 0.46, indicating moderate agreement between the real-time PCR assay and the culture method. In addition, the real-time PCR method had a turnaround time of 2 days, compared with 4 to 8 days for the culture method. The higher sensitivity as well as the reduction in time and labor makes this real-time PCR assay an excellent alternative to conventional culture methods for diagnostic purposes, surveillance, and research studies to improve food safety.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, S.; Jones, V.
2009-05-27
A new rapid separation method that allows separation and preconcentration of actinides in urine samples was developed for the measurement of longer-lived actinides by inductively coupled plasma mass spectrometry (ICP-MS) and short-lived actinides by alpha spectrometry; a hybrid approach. This method uses stacked extraction chromatography cartridges and vacuum box technology to facilitate rapid separations. Preconcentration, if required, is performed using a streamlined calcium phosphate precipitation. Similar technology has been applied to separate actinides prior to measurement by alpha spectrometry, but this new method has been developed with elution reagents now compatible with ICP-MS as well. Purified solutions are split between ICP-MS and alpha spectrometry so that long- and short-lived actinide isotopes can be measured successfully. The method allows for simultaneous extraction of 24 samples (including QC samples) in less than 3 h. Simultaneous sample preparation can offer significant time savings over sequential sample preparation. For example, sequential sample preparation of 24 samples taking just 15 min each requires 6 h to complete. The simplicity and speed of this new method make it attractive for radiological emergency response. If preconcentration is applied, the method is applicable to larger sample aliquots for occupational exposures as well. The chemical recoveries are typically greater than 90%, in contrast to other reported methods using flow injection separation techniques for urine samples, where plutonium yields were 70-80%. This method allows measurement of both long-lived and short-lived actinide isotopes. 239Pu, 242Pu, 237Np, 243Am, 234U, 235U and 238U were measured by ICP-MS, while 236Pu, 238Pu, 239Pu, 241Am, 243Am and 244Cm were measured by alpha spectrometry. The method can also be adapted so that the separation of uranium isotopes for assay is not required, if uranium assay by direct dilution of the urine sample is preferred instead. Multiple vacuum box locations may be set up to supply several ICP-MS units with purified sample fractions, such that a high sample throughput may be achieved while still allowing for rapid measurement of short-lived actinides by alpha spectrometry.
Subrandom methods for multidimensional nonuniform sampling.
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
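A seed-free subrandom schedule of the kind discussed can be illustrated with the van der Corput sequence, one of the simplest low-discrepancy sequences. This sketch selects points from an unweighted grid; the paper's grid weighting and performance metrics are omitted, and the function names are ours:

```python
def van_der_corput(n, base=2):
    """n-th element of the base-`base` van der Corput low-discrepancy
    sequence, in [0, 1): the digits of n reflected about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def subrandom_schedule(grid_size, n_points):
    """Seed-independent nonuniform sampling schedule: pick `n_points`
    distinct indices from a Nyquist grid by walking the van der Corput
    sequence instead of drawing from a seeded pseudorandom generator."""
    picked, seen, i = [], set(), 1
    while len(picked) < n_points:
        idx = int(van_der_corput(i) * grid_size)
        if idx not in seen:
            seen.add(idx)
            picked.append(idx)
        i += 1
    return sorted(picked)
```

Because the sequence is deterministic, the same schedule is produced every time, which is exactly the seed-dependence problem the paper aims to remove.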
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and of the unknown samples; the Euclidean distance between the net analyte signal of a sample and those of the calibration samples was then calculated and used as a similarity index. According to this similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
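The similarity-index selection step can be sketched as follows. This is a simplified illustration, not the authors' code: the query is compared by plain Euclidean distance (the paper computes distances between net analyte signals), and, for brevity, a distance-weighted average stands in for the local PLS regression:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length spectra (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def local_calibration_set(query, calib_spectra, k):
    """Rank calibration samples by distance to the query and keep the
    k most similar ones as the local calibration set."""
    ranked = sorted(range(len(calib_spectra)),
                    key=lambda i: euclidean(query, calib_spectra[i]))
    return ranked[:k]

def local_predict(query, calib_spectra, calib_y, k=3):
    """Stand-in for the local PLS step: distance-weighted average of the
    selected neighbours' property values (the PLS fit itself is omitted)."""
    idx = local_calibration_set(query, calib_spectra, k)
    weights = [1.0 / (euclidean(query, calib_spectra[i]) + 1e-12) for i in idx]
    return sum(w * calib_y[i] for w, i in zip(weights, idx)) / sum(weights)
```

In the full method, a separate PLS model would be fit on each local calibration set rather than averaging neighbour values.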
Mavridou, A; Smeti, E; Mandilara, G; Boufa, P; Vagiona-Arvanitidou, M; Vantarakis, A; Vassilandonopoulou, G; Pappa, O; Roussia, V; Tzouanopoulos, A; Livadara, M; Aisopou, I; Maraka, V; Nikolaou, E
2010-01-01
In this study, ten laboratories in Greece compared the performance of the reference method, TTC Tergitol 7 Agar (with the additional test of beta-glucuronidase production), with five alternative methods for detecting E. coli in water, in line with European Water Directive recommendations. The samples were prepared by spiking drinking water with sewage effluent following a standard protocol. Chlorinated and non-chlorinated samples were used. The statistical analysis was based on the mean relative difference of confirmed counts and was performed in line with ISO 17994. The results showed that, in total, three of the alternative methods (Chromocult Coliform agar, Membrane Lauryl Sulfate agar and Tryptone Bile X-glucuronidase (TBX) medium) were not different from TTC Tergitol 7 agar (TTC Tergitol 7 agar vs Chromocult Coliform agar, 294 samples, mean RD% 5.55; vs MLSA, 302 samples, mean RD% 1; vs TBX, 297 samples, mean RD% -2.78). The other two alternative methods (Membrane Faecal coliform medium and Colilert-18/Quantitray) gave significantly higher counts than TTC Tergitol 7 agar (TTC Tergitol 7 agar vs MFc, 303 samples, mean RD% 8.81; vs Colilert-18/Quantitray, 76 samples, mean RD% 18.91). In other words, the alternative methods performed as reliably as, or even better than, the reference method. This study will help laboratories in Greece overcome culture and counting problems deriving from the EU reference method for E. coli counts in water samples.
Evaluation on determination of iodine in coal by energy dispersive X-ray fluorescence
Wang, B.; Jackson, J.C.; Palmer, C.; Zheng, B.; Finkelman, R.B.
2005-01-01
A quick and inexpensive method for determining relatively high iodine concentrations in coal samples was evaluated. Energy dispersive X-ray fluorescence (EDXRF) provided a detection limit of about 14 ppm (3 times the standard deviation of the blank sample), without any complex sample preparation. An analytical relative standard deviation of 16% was readily attainable for coal samples. Under optimum conditions, coal samples with iodine concentrations higher than 5 ppm can be determined using this EDXRF method. For the time being, because the iodine concentrations of most coal samples are lower than 5 ppm, except for some high-iodine coals, this method cannot be used effectively for iodine determination. More work is needed for this method to meet the requirements of iodine determination in coal samples.
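The 3-sigma detection-limit criterion cited above amounts to a one-line calculation. The sketch below assumes a calibration slope converting blank signal counts to ppm; the slope and blank values are illustrative, not data from the study:

```python
import math

def detection_limit_3sigma(blank_counts, slope):
    """Detection limit by the 3-sigma-of-the-blank criterion:
    LOD = 3 * SD(blank signal) / calibration slope (counts per ppm)."""
    n = len(blank_counts)
    mean = sum(blank_counts) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in blank_counts) / (n - 1))
    return 3.0 * sd / slope
```

A noisier blank or a shallower calibration slope both raise the LOD, which is why EDXRF's ~14 ppm limit is too high for most coals.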
Galea, Karen S; McGonagle, Carolyn; Sleeuwenhoek, Anne; Todd, David; Jiménez, Araceli Sánchez
2014-06-01
Dermal exposure to drilling fluids and crude oil is an exposure route of concern. However, there have been no published studies describing sampling methods or reporting dermal exposure measurements. We describe a study that aimed to evaluate a wipe sampling method to assess dermal exposure to an oil-based drilling fluid and crude oil, as well as to investigate the feasibility of using an interception cotton glove sampler for exposure on the hands/wrists. A direct comparison of the wipe and interception methods was also completed using pigs' trotters as a surrogate for human skin and a direct surface contact exposure scenario. Overall, acceptable recovery and sampling efficiencies were reported for both methods, and both methods had satisfactory storage stability at 1 and 7 days, although there appeared to be some loss over 14 days. The methods' comparison study revealed significantly higher removal of both fluids from the metal surface with the glove samples compared with the wipe samples (on average 2.5 times higher). Both evaluated sampling methods were found to be suitable for assessing dermal exposure to oil-based drilling fluids and crude oil; however, the comparison study clearly illustrates that glove samplers may overestimate the amount of fluid transferred to the skin. Further comparison of the two dermal sampling methods using additional exposure situations such as immersion or deposition, as well as a field evaluation, is warranted to confirm their appropriateness and suitability in the working environment.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences for RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best-performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs.
Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin
2017-08-15
Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced: comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a strategy that decouples quantitation and identification was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy has greater accuracy and reliability than previously used comparison methods based on transferred comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated according to retention time and accurate mass in order to identify differentially expressed proteins. This strategy makes protein identification possible for all samples using a single pooled sample, and therefore gives good reproducibility in protein identification across multiple samples, and allows peptide identification to be optimized separately so as to identify more proteins.
Shear Strength of Remoulding Clay Samples Using Different Methods of Moulding
NASA Astrophysics Data System (ADS)
Norhaliza, W.; Ismail, B.; Azhar, A. T. S.; Nurul, N. J.
2016-07-01
Shear strength of clay soil is required to determine soil stability. Clay is known as a soil with complex natural formations, and it is very difficult to obtain undisturbed samples at the site. The aim of this paper was to determine the unconfined shear strength of remoulded clay for different methods of moulding samples, namely proctor compaction, a hand-operated soil compacter and a miniature mould. All the samples were remoulded with the same optimum moisture content (OMC) and density, 18% and 1880 kg/m3 respectively. The unconfined shear strength of the remoulded clay soils was 289.56 kPa at 4.8% strain for the proctor compaction method, 261.66 kPa at 4.4% strain for the hand-operated method, and 247.52 kPa at 3.9% strain for the miniature mould method. Relative to the proctor compaction method, the reduction in unconfined shear strength of remoulded clay soil was 9.66% for the hand-operated method and 14.52% for the miniature mould method. Because there was no significant difference in unconfined shear strength between the three methods, it can be concluded that remoulding clay by the hand-operated and miniature mould methods is acceptable, and these methods can be suggested to future researchers for preparing remoulded clay samples. For comparison, the hand-operated method was more suitable for forming remoulded clay samples in terms of ease, time saving and lower energy for unconfined shear strength determination purposes.
NASA Astrophysics Data System (ADS)
Nasir, N. F.; Mirus, M. F.; Ismail, M.
2017-09-01
Crude glycerol produced from the transesterification reaction has limited usage if it does not undergo a purification process; it also contains excess methanol, catalyst and soap. Conventionally, purification of crude glycerol involves high cost and complex processes. This study aimed to determine the effects of using different purification methods: a direct method (comprising ion exchange and methanol removal steps) and a multistep method (comprising neutralization, filtration, ion exchange and methanol removal steps). Two crude glycerol samples were investigated: a self-produced sample from the transesterification of palm oil, and a sample obtained from a biodiesel plant. Samples were analysed using Fourier transform infrared spectroscopy, gas chromatography and high-performance liquid chromatography. The results for both samples after purification showed that pure glycerol was successfully produced and fatty acid salts were eliminated. The results also indicated the absence of methanol in both samples after the purification process. In short, the combination of four purification steps contributed to a higher quality of glycerol. The multistep purification method gave a better result than the direct method, as the neutralization and filtration steps helped remove most of the excess salt, fatty acid and catalyst.
Novel methodology to isolate microplastics from vegetal-rich samples.
Herrera, Alicia; Garrido-Amador, Paloma; Martínez, Ico; Samper, María Dolores; López-Martínez, Juan; Gómez, May; Packard, Theodore T
2018-04-01
Microplastics are small plastic particles, globally distributed throughout the oceans. To properly study them, all the methodologies for their sampling, extraction, and measurement should be standardized. For heterogeneous samples containing sediments, animal tissues and zooplankton, several procedures have been described. However, definitive methodologies for samples rich in algae and plant material have not yet been developed. The aim of this study was to find the best extraction protocol for vegetal-rich samples by comparing the efficacies of five previously described digestion methods, and a novel density separation method. A protocol using 96% ethanol for density separation was better than the five digestion methods tested, even better than using H2O2 digestion. As it was the most efficient, simple, safe and inexpensive method for isolating microplastics from vegetal-rich samples, we recommend it as a standard separation method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Method and system for laser-based formation of micro-shapes in surfaces of optical elements
Bass, Isaac Louis; Guss, Gabriel Mark
2013-03-05
A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.
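The claimed loop, steps a) through e), can be sketched as plain control flow. This is an illustrative sketch only; the position grid, the pulse count per position, and the `form_surface_feature` name are hypothetical placeholders, not values or identifiers from the patent.

```python
# Sketch of the patented ablation loop (steps a-e); all numbers here are
# illustrative placeholders, not parameters of the actual method.

def form_surface_feature(positions, pulses_per_position=10):
    """Visit each position i, deliver a portion of the pulse train there,
    and record the ablation step, mirroring steps a) through e)."""
    log = []
    i = 0                                       # counter i, incremented in d)
    while i < len(positions):                   # e) repeat steps a) through d)
        x, y = positions[i]                     # a) direct pulse train to position i
        for _ in range(pulses_per_position):    # b) focus a portion of the pulses
            pass                                # (the laser would fire here in hardware)
        log.append((i, x, y, pulses_per_position))  # c) ablate at position i
        i += 1                                  # d) increment counter i
    return log

# Trace the loop over a hypothetical 2x2 grid of positions
trace = form_surface_feature([(0, 0), (0, 1), (1, 0), (1, 1)])
```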
Fortes, Esther D; David, John; Koeritzer, Bob; Wiedmann, Martin
2013-05-01
There is a continued need to develop improved rapid methods for detection of foodborne pathogens. The aim of this project was to evaluate the 3M Molecular Detection System (3M MDS), which uses isothermal DNA amplification, and the 3M Molecular Detection Assay Listeria using environmental samples obtained from retail delicatessens and meat, seafood, and dairy processing plants. Environmental sponge samples were tested for Listeria with the 3M MDS after 22 and 48 h of enrichment in 3M Modified Listeria Recovery Broth (3M mLRB); enrichments were also used for cultural detection of Listeria spp. Among 391 samples tested for Listeria, 74 were positive by both the 3M MDS and the cultural method, 310 were negative by both methods, 2 were positive by the 3M MDS and negative by the cultural method, and one sample was negative by the 3M MDS and positive by the cultural method. Four samples were removed from the sample set, prior to statistical analyses, due to potential cross-contamination during testing. Listeria isolates from positive samples represented L. monocytogenes, L. innocua, L. welshimeri, and L. seeligeri. Overall, the 3M MDS and culture-based detection after enrichment in 3M mLRB did not differ significantly (P < 0.05) with regard to the number of positive samples, when chi-square analyses were performed for (i) number of positive samples after 22 h, (ii) number of positive samples after 48 h, and (iii) number of positive samples after 22 and/or 48 h of enrichment in 3M mLRB. Among 288 sampling sites that were tested with duplicate sponges, 67 each tested positive with the 3M MDS and the traditional U.S. Food and Drug Administration Bacteriological Analytical Manual method, further supporting that the 3M MDS performs equivalently to traditional methods when used with environmental sponge samples.
Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.
Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time-intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single-medium single-pass composite: a single cellulose sponge samples multiple coupons with one pass per coupon; 2) single-medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes per coupon; and 3) multi-medium post-sample composite: each cellulose sponge samples a single surface, and multiple sponges are then combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2, respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime-coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method compared to the single-medium composites, from both clean and grime-coated materials. RE with the PSC method for the concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wallboard, and stainless steel among clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile.
These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event on clean or dirty surfaces.
Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E
2015-09-01
To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
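The difference between window-based sampling strategies can be illustrated on synthetic cone coordinates. The window size matches the 50 × 50-μm sampling window in the abstract, but the cone positions, the region, and the grid of candidate windows are hypothetical; this is a sketch of the counting arithmetic, not of the imaging pipeline.

```python
import random

WINDOW = 50.0  # μm, as in the 50 x 50-μm sampling window of the protocol

def density(cones, cx, cy, w=WINDOW):
    """Cone density (cones/mm^2) in a w x w μm window centred at (cx, cy)."""
    half = w / 2.0
    n = sum(1 for x, y in cones if abs(x - cx) <= half and abs(y - cy) <= half)
    return n / ((w / 1000.0) ** 2)  # convert the μm^2 window area to mm^2

# Hypothetical cone mosaic: 2000 cones scattered over a 500 x 500 μm patch
random.seed(0)
cones = [(random.uniform(0, 500), random.uniform(0, 500)) for _ in range(2000)]

# Candidate window centres on a grid over the same region
centres = [(cx, cy) for cx in range(50, 500, 100) for cy in range(50, 500, 100)]
samples = [density(cones, cx, cy) for cx, cy in centres]

mean_density = sum(samples) / len(samples)   # cf. fixed-interval / arcuate mean
peak_density = max(samples)                  # cf. the peak density method
```

By construction the maximum over a set of windows is at least their mean, which is consistent with the report that the peak density method tends to give higher estimates than the averaging methods.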
Yong, Dongeun; Ki, Chang-Seok; Kim, Jae-Seok; Seong, Moon-Woo; Lee, Hyukmin
2016-01-01
Background Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. Methods We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). Results While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1–35.4 with the PK-DNase method, 34.7–39.0 with the PBS method, and 33.9–38.6 with the NALC method. Compared with the controls, which were prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). Conclusions The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction. PMID:27374711
GROUND WATER PURGING AND SAMPLING METHODS: HISTORY VS. HYSTERIA
It has been over 10 years since the low-flow ground water purging and sampling method was initially reported in the literature. The method grew from the recognition that well purging was necessary to collect representative samples, bailers could not achieve well purging, and high...
THE INFLUENCE OF PHYSICAL FACTORS ON COMPARATIVE PERFORMANCE OF SAMPLING METHODS IN LARGE RIVERS
In 1999, we compared five existing benthic macroinvertebrate sampling methods used in boatable rivers. Each sampling protocol was performed at each of 60 sites distributed among four rivers in the Ohio River drainage basin. Initial comparison of methods using key macroinvertebr...
Herrington, Jason S; Fan, Zhi-Hua Tina; Lioy, Paul J; Zhang, Junfeng Jim
2007-01-15
Airborne aldehyde and ketone (carbonyl) sampling methodologies based on derivatization with 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents could unequivocally be considered the "gold" standard. Originally developed in the late 1970s, these methods have been extensively evaluated and developed up to the present day. However, these methods have been inadequately evaluated for the long-term (i.e., 24 h or greater) sampling collection efficiency (CE) of carbonyls other than formaldehyde. The current body of literature fails to demonstrate that DNPH-coated solid sorbent sampling methods have acceptable CEs for the long-term sampling of carbonyls other than formaldehyde. Despite this, such methods are widely used to report the concentrations of multiple carbonyls from long-term sampling, assuming approximately 100% CEs. Laboratory experiments were conducted in this study to evaluate the long-term formaldehyde and acetaldehyde sampling CEs for several commonly used DNPH-coated solid sorbents. Results from sampling known concentrations of formaldehyde and acetaldehyde generated in a dynamic atmosphere generation system demonstrate that the 24-hour formaldehyde sampling CEs ranged from 83 to 133%, confirming the findings made in previous studies. However, the 24-hour acetaldehyde sampling CEs ranged from 1 to 62%. Attempts to increase the acetaldehyde CEs by adding acid to the samples post-sampling were unsuccessful. These results indicate that assuming approximately 100% CEs for 24-hour acetaldehyde sampling, as commonly done with DNPH-coated solid sorbent methods, would substantially underestimate acetaldehyde concentrations.
[Respondent-Driven Sampling: a new sampling method to study visible and hidden populations].
Mantecón, Alejandro; Juan, Montse; Calafat, Amador; Becoña, Elisardo; Román, Encarna
2008-01-01
The paper introduces a variant of chain-referral sampling: respondent-driven sampling (RDS). This sampling method shows that methods based on network analysis can be combined with the statistical validity of standard probability sampling methods. In this sense, RDS appears to be a mathematical improvement of snowball sampling oriented to the study of hidden populations. However, we try to prove its validity with populations that are not within a sampling frame but can nonetheless be contacted without difficulty. The basics of RDS are explained through our research on young people (aged 14 to 25) who go clubbing, consume alcohol and other drugs, and have sex. Fieldwork was carried out between May and July 2007 in three Spanish regions: Baleares, Galicia and Comunidad Valenciana. The presentation of the study shows the utility of this type of sampling when the population is accessible but there is a difficulty deriving from the lack of a sampling frame. However, the sample obtained is not statistically representative of the target population. It must be acknowledged that the final sample is representative of a 'pseudo-population' that approximates the target population but is not identical to it.
Systems and methods for separating particles and/or substances from a sample fluid
Mariella, Jr., Raymond P.; Dougherty, George M.; Dzenitis, John M.; Miles, Robin R.; Clague, David S.
2016-11-01
Systems and methods for separating particles and/or toxins from a sample fluid. A method according to one embodiment comprises simultaneously passing a sample fluid and a buffer fluid through a chamber such that a fluidic interface is formed between the sample fluid and the buffer fluid as the fluids pass through the chamber, the sample fluid having particles of interest therein; applying a force to the fluids for urging the particles of interest to pass through the interface into the buffer fluid; and substantially separating the buffer fluid from the sample fluid.
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate is shown to differ from the conventional rate.
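A minimal sketch of the idea under simplifying assumptions: the diffusion is taken to be an Ornstein-Uhlenbeck process with known dynamics, each candidate sampling step implies a Gaussian transition density, and the step minimizing the average Kullback-Leibler divergence to the empirically fitted transition is selected. All parameter values, and the choice of process, are illustrative, not the paper's setup.

```python
import math, random

random.seed(42)
theta, sigma, dt_true = 1.0, 1.0, 0.5   # hypothetical OU parameters and true step

def trans(dt):
    """AR(1) coefficient and noise variance implied by a sampling step dt."""
    a = math.exp(-theta * dt)
    return a, sigma ** 2 * (1 - a * a) / (2 * theta)

# Simulate the discretely sampled diffusion at the (unknown to us) true step
a0, v0 = trans(dt_true)
x, xs = 0.0, []
for _ in range(5000):
    x = a0 * x + math.sqrt(v0) * random.gauss(0, 1)
    xs.append(x)

# Fit the empirical Gaussian transition density of x[n+1] given x[n]
num = sum(xs[i] * xs[i + 1] for i in range(len(xs) - 1))
den = sum(xi * xi for xi in xs[:-1])
a_hat = num / den
v_hat = sum((xs[i + 1] - a_hat * xs[i]) ** 2 for i in range(len(xs) - 1)) / (len(xs) - 1)
ex2 = sum(xi * xi for xi in xs) / len(xs)   # stationary second moment

def kl(dt):
    """KL divergence from the fitted transition to the dt-implied transition,
    averaged over the stationary distribution of the process."""
    a, v = trans(dt)
    return 0.5 * (math.log(v / v_hat) + v_hat / v + (a_hat - a) ** 2 * ex2 / v - 1)

grid = [0.1 + 0.05 * k for k in range(19)]   # candidate sampling steps 0.1 .. 1.0
dt_best = min(grid, key=kl)                  # detected sampling step
```

The detected step is the grid point whose implied transition density is closest, in KL divergence, to the one estimated from the data.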
Probe Heating Method for the Analysis of Solid Samples Using a Portable Mass Spectrometer
Kumano, Shun; Sugiyama, Masuyuki; Yamada, Masuyoshi; Nishimura, Kazushige; Hasegawa, Hideki; Morokuma, Hidetoshi; Inoue, Hiroyuki; Hashimoto, Yuichiro
2015-01-01
We previously reported on the development of a portable mass spectrometer for the onsite screening of illicit drugs, but our previous sampling system could only be used for liquid samples. In this study, we report on an attempt to develop a probe heating method that also permits solid samples to be analyzed using a portable mass spectrometer. An aluminum rod is used as the sampling probe. The powdered sample is affixed to the sampling probe or a droplet of sample solution is placed on the tip of the probe and dried. The probe is then placed on a heater to vaporize the sample. The vapor is then introduced into the portable mass spectrometer and analyzed. With the heater temperature set to 130°C, the developed system detected 1 ng of methamphetamine, 1 ng of amphetamine, 3 ng of 3,4-methylenedioxymethamphetamine, 1 ng of 3,4-methylenedioxyamphetamine, and 0.3 ng of cocaine. Even from mixtures consisting of clove powder and methamphetamine powder, methamphetamine ions were detected by tandem mass spectrometry. The developed probe heating method provides a simple method for the analysis of solid samples. A portable mass spectrometer incorporating this method would thus be useful for the onsite screening of illicit drugs. PMID:26819909
Filla, Robert T; Schrell, Adrian M; Coulton, John B; Edwards, James L; Roper, Michael G
2018-02-20
A method for multiplexed sample analysis by mass spectrometry without the need for chemical tagging is presented. In this new method, each sample is pulsed at unique frequencies, mixed, and delivered to the mass spectrometer while maintaining a constant total flow rate. Reconstructed ion currents are then a time-dependent signal consisting of the sum of the ion currents from the various samples. Spectral deconvolution of each reconstructed ion current reveals the identity of each sample, encoded by its unique frequency, and its concentration encoded by the peak height in the frequency domain. This technique is different from other approaches that have been described, which have used modulation techniques to increase the signal-to-noise ratio of a single sample. As proof of concept of this new method, two samples containing up to 9 analytes were multiplexed. The linear dynamic range of the calibration curve was increased with extended acquisition times of the experiment and longer oscillation periods of the samples. Because of the combination of the samples, salt had little effect on the ability of this method to achieve relative quantitation. Continued development of this method is expected to allow for increased numbers of samples that can be multiplexed.
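The encoding can be illustrated with a toy discrete Fourier transform: two hypothetical samples are pulsed at distinct frequencies (3 and 5 cycles per record) with relative "concentrations" 2.0 and 1.0; after summing, each sample's identity reappears as a frequency bin and its concentration as the peak height. The raised-cosine modulation and all numbers are illustrative, not the published instrument's scheme.

```python
import math, cmath

N = 64                      # points in the reconstructed ion current
f1, f2 = 3, 5               # unique pulsing frequencies (cycles per record)
c1, c2 = 2.0, 1.0           # hypothetical relative concentrations

# Each sample's flow is modulated between 0 and its concentration, so the
# combined ion current is the sum of two raised-cosine pulse trains.
signal = [c1 * 0.5 * (1 + math.cos(2 * math.pi * f1 * n / N))
          + c2 * 0.5 * (1 + math.cos(2 * math.pi * f2 * n / N))
          for n in range(N)]

def dft_mag(s, k):
    """Magnitude of the k-th DFT bin of signal s."""
    return abs(sum(s[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))

# Peak heights at the pulsing frequencies encode the concentrations:
# a cosine of amplitude A contributes A * N / 2 to its own bin.
h1, h2 = dft_mag(signal, f1), dft_mag(signal, f2)
ratio = h1 / h2   # recovers c1 / c2
```

Because the two modulation frequencies occupy orthogonal DFT bins, mixing the samples does not blur their individual quantitation, which is the core of the multiplexing claim.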
Zheng, Lu; Gao, Naiyun; Deng, Yang
2012-01-01
It is difficult to isolate DNA from biological activated carbon (BAC) samples used in water treatment plants, owing to the scarcity of microorganisms in BAC samples. The aim of this study was to identify DNA extraction methods suitable for long-term, comprehensive ecological analysis of BAC microbial communities. To identify a procedure that produces high-molecular-weight DNA, maximizes detectable diversity and is relatively free of contaminants, the microwave extraction method, the cetyltrimethylammonium bromide (CTAB) extraction method, a commercial DNA extraction kit, and the ultrasonic extraction method were used to extract DNA from BAC samples. Spectrophotometry, agarose gel electrophoresis and polymerase chain reaction (PCR)-restriction fragment length polymorphism (RFLP) analysis were conducted to compare the yield and quality of DNA obtained using these methods. The results showed that the CTAB method produced the highest yield and genetic diversity of DNA from BAC samples, but DNA purity was slightly lower than that obtained with the DNA extraction kit. This study provides a theoretical basis for establishing and selecting DNA extraction methods for BAC samples.
NASA Astrophysics Data System (ADS)
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated by the maximum likelihood estimation (MLE) method. However, the MLE method has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. As a result, the MLE estimators fail to converge and cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to compare the chance of separation occurring in a binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons are performed by simulation under different sample sizes. The results showed that, for small sample sizes, the chance of separation occurring with the MLE method is higher than with Firth's approach. For larger sample sizes, the probability decreases and is similar for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
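Separation and Firth's remedy can be seen in a one-parameter example. For algebraic simplicity this sketch uses a logistic rather than probit link, and maximizes the log-likelihood, with and without the Jeffreys-prior penalty 0.5 log I(beta), over a grid; the data set and the grid are hypothetical.

```python
import math

# Completely separated data: x < 0 always gives y = 0, x > 0 always gives y = 1.
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loglik(beta):
    """Ordinary binary-regression log-likelihood (logit link for simplicity)."""
    return sum(math.log(sigmoid(x * beta) if y else 1 - sigmoid(x * beta))
               for x, y in zip(xs, ys))

def fisher_info(beta):
    """Fisher information I(beta) for the single slope parameter."""
    return sum(x * x * sigmoid(x * beta) * (1 - sigmoid(x * beta)) for x in xs)

def penalized(beta):
    """Firth's penalized log-likelihood: l(beta) + 0.5 * log I(beta)."""
    return loglik(beta) + 0.5 * math.log(fisher_info(beta))

grid = [0.01 * k for k in range(1, 1001)]      # beta in (0, 10]
mle = max(grid, key=loglik)                    # runs off to the grid edge
firth = max(grid, key=penalized)               # finite interior maximum
```

The plain likelihood keeps increasing as beta grows, so under separation the MLE runs off to infinity, while the penalized likelihood has a finite interior maximum.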
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data, under the assumption of missingness at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in the control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
Krämer, Nadine; Löfström, Charlotta; Vigre, Håkan; Hoorfar, Jeffrey; Bunge, Cornelia; Malorny, Burkhard
2011-03-01
Salmonella is a major zoonotic pathogen which causes outbreaks and sporadic cases of gastroenteritis in humans worldwide. The primary sources of Salmonella are food-producing animals such as pigs and poultry. For risk assessment and hazard analysis and critical control point (HACCP) concepts, it is essential to produce large amounts of quantitative data, which is currently not achievable with the standard culture-based methods for enumeration of Salmonella. This study presents the development of a novel strategy to enumerate low numbers of Salmonella in cork borer samples taken from pig carcasses, as a first concept and proof of principle for a new sensitive and rapid quantification method based on combined enrichment and real-time PCR. The novelty of the approach is the short pre-enrichment step, during which most bacteria are in the log growth phase. The method consists of an 8 h pre-enrichment of the cork borer sample diluted 1:10 in non-selective buffered peptone water, followed by DNA extraction and Salmonella detection and quantification by real-time PCR. The limit of quantification was 1.4 colony forming units (CFU)/20 cm(2) (approximately 10 g) of artificially contaminated sample, with a 95% confidence interval of ±0.7 log CFU/sample. The precision was similar to that of the standard reference most probable number (MPN) method. A screening of 200 potentially naturally contaminated cork borer samples obtained over seven weeks in a slaughterhouse resulted in 25 Salmonella-positive samples. The analysis of salmonellae within these samples showed that the PCR method had a higher sensitivity for samples with a low contamination level (<6.7 CFU/sample): 15 of the samples negative by the MPN method were detected by the PCR method, and 5 were found to be negative by both methods. For the samples with a higher contamination level (6.7-310 CFU/sample), good agreement between the results obtained with the PCR and MPN methods was observed.
The quantitative real-time PCR method can easily be applied to other food and environmental matrices by adaptation of the pre-enrichment time and media. Copyright © 2010 Elsevier B.V. All rights reserved.
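Quantification by real-time PCR conventionally rests on a log-linear standard curve relating threshold cycle (Ct) to starting quantity; inverting that curve enumerates the sample. The slope and intercept below are hypothetical placeholders, not values fitted in the study.

```python
import math

# Hypothetical standard curve: Ct = INTERCEPT - SLOPE * log10(CFU per sample).
# A slope near 3.32 corresponds to roughly 100% PCR amplification efficiency.
SLOPE, INTERCEPT = 3.32, 38.0

def ct_from_cfu(cfu):
    """Threshold cycle predicted by the standard curve for a starting quantity."""
    return INTERCEPT - SLOPE * math.log10(cfu)

def cfu_from_ct(ct):
    """Invert the standard curve to enumerate the sample from an observed Ct."""
    return 10 ** ((INTERCEPT - ct) / SLOPE)

# Round trip for a hypothetical sample with ~140 CFU
ct = ct_from_cfu(140.0)
estimate = cfu_from_ct(ct)
```

In practice the enrichment time shifts the effective intercept, which is why the abstract notes that the method is adapted to other matrices by adjusting the pre-enrichment time and media.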
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies over a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% of the true concentration and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events.
However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
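The modeling comparison can be reproduced in miniature: a pulse decaying first-order with a 0.5-day water half-life over a 96-h window, a two-point grab-sample average versus an integrative sampler that accumulates the time integral. The grab-sample times and pulse shape are hypothetical, not the paper's scenarios.

```python
import math

C0 = 1.0                         # peak concentration at the start of the pulse
HALF_LIFE_H = 12.0               # 0.5-day water half-life, in hours
K = math.log(2) / HALF_LIFE_H    # first-order decay rate constant
T = 96.0                         # 96-h exposure window

def conc(t):
    """Dissolved concentration t hours after the pulse."""
    return C0 * math.exp(-K * t)

# True 96-h time-weighted average: (1/T) * integral of C(t) dt over [0, T]
true_twa = C0 * (1 - math.exp(-K * T)) / (K * T)

# Integrative sampler: continuously accumulates the integral (fine Riemann sum)
steps = 10000
integ_twa = sum(conc(T * i / steps) for i in range(steps)) / steps

# Two discrete grab samples, taken after the pulse has partly decayed
discrete_twa = (conc(24.0) + conc(72.0)) / 2.0
```

With this short half-life the two grab samples miss the peak and underestimate the true 96-h TWA by roughly a quarter, while the integrative estimate tracks it closely; with an 8-d half-life the same two grab samples would do far better, matching the paper's conclusion.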
Inoue, Hiroaki; Takama, Tomoko; Yoshizaki, Miwa; Agata, Kunio
2015-01-01
We detected Legionella species in 111 bath water samples and 95 cooling tower water samples by using a combination of conventional plate culture, quantitative polymerase chain reaction (qPCR) and qPCR combined with ethidium monoazide treatment (EMA-qPCR) methods. In the case of bath water samples, Legionella spp. were detected in 30 samples by plate culture, in 85 samples by qPCR, and in 49 samples by EMA-qPCR. Of 81 samples determined to be Legionella-negative by plate culture, 56 and 23 samples were positive by qPCR and EMA-qPCR, respectively. Therefore, EMA treatment decreased the number of Legionella-positive bath water samples detected by qPCR. In contrast, EMA treatment had no effect on cooling tower water samples. We therefore expect that EMA-qPCR is a useful method for the rapid detection of viable Legionella spp. from bath water samples.
NASA Astrophysics Data System (ADS)
Miquel, Benjamin
The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential to assure the protection of human lives. These analyses also aim at limiting structural damage caused by an earthquake, to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that internal displacements are caused not only by the earthquake but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods for performing such dynamic analyses of hydraulic structures. For the complex existing methods, attention is placed on the difficulties arising from their use. In particular, this work examines the use of transmitting boundary conditions to simulate the semi-infinite extent of reservoirs. A procedure has been developed to estimate the error that these boundary conditions can introduce in finite element dynamic analysis. We showed that, depending on their formulation and location, they can considerably affect the response of such fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that these methods are based on numerous simplifications that can affect the prediction of the dynamic behavior of such systems. One of the main objectives of this thesis has been to develop new simplified methods that are more accurate than those existing. First, a new spectral analysis method has been proposed. Expressions for the fundamental frequency of fluid-structure systems, the key parameter of spectral analysis, have been developed. We show that this new technique can easily be implemented in a spreadsheet or program, and that its calculation time is nearly instantaneous.
When compared to more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such dynamic analyses show that some areas of the studied structure undergo excessive stresses, time history analyses allow a better estimate of the extent of these zones as well as of the timing of these excessive stresses. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this can generally be sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods have been developed for fluid-structure systems to account for these observations as well as for the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. This new technique includes the flexibility of structures and foundations as well as the dynamic effect of higher vibration modes and variations of the levels of the water domains. This method was then extended to study beam structures in contact with fluids. These new developments have also allowed extending existing analytical formulations of the dynamic properties of a dry beam to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems or beam-like structures in contact with fluid. Finally, a Modified Accelerogram Method (MAM) has been developed to modify the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can therefore be applied directly to the dry structure (i.e., without water) in order to calculate the dynamic response of the fluid-structure system.
This original technique can include numerous parameters that influence the dynamic response of such systems, and makes it possible to treat the fluid-structure interaction analytically while keeping the advantages of finite element modeling.
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
NASA Astrophysics Data System (ADS)
Gagne, Jocelyn
Usually, flight optimization and planning take place before flight, on the ground. However, such optimization is not always feasible, and sometimes unpredictable events force pilots to change the flight path. In those circumstances, the pilots can only rely on charts or their Flight Management System (FMS) in order to maintain an economical flight. However, those FMSs often rely on those same charts, which do not take into consideration parameters such as the cost index, the length of the flight or the weather. Even if some FMSs take the weather into consideration, they may rely only on manually entered or limited data that could be outdated, insufficient or incomplete. To alleviate these problems, the main function of the program that was developed is to determine the optimum flight profile for an aircraft, or more precisely, the profile with the lowest overall cost, given a take-off weight and weather conditions. The total cost is based on the value of time as well as the cost of fuel, combined in a ratio called the cost index. This index allows either time or fuel consumption to be prioritized according to the costs related to a specific flight and/or airline. Thus, from a weight, the weather (wind, temperature, pressure), and the cost index, the program calculates, from the Performance DataBase (PDB) of a specific airplane, an optimal flight profile over a given distance. The algorithm is based on linear interpolations in the performance tables using the Lagrange method. Moreover, in order to fully optimize the flight, the current program can, according to the departure date and coordinates, download the latest available forecast from the Environment Canada website and calculate the optimum flight accordingly. The forecast data used by the program take the form of a 0.6 × 0.6 degree grid in which the effects of wind, pressure and temperature are interpolated according to the aircraft's geographical position and time.
Using these tables, performances and forecasts, the program is therefore able to calculate the optimum profile on the ground, but also in flight, should any change occur on the path. Because all data are tabulated rather than calculated, the required computing power remains low, resulting in a short calculation time. Keywords: optimization, algorithm, simulation, cost.
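The table-lookup approach described above can be sketched as a Lagrange interpolation over a performance table. This is a minimal illustration, not the thesis implementation: the PDB slice (weights and fuel flows) is invented for the example.

```python
def lagrange_interp(xs, ys, x):
    """Lagrange polynomial interpolation through the points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial factor
        total += term
    return total

# Hypothetical PDB slice: fuel flow (kg/h) vs. gross weight (t) at one altitude.
weights = [50.0, 55.0, 60.0]
fuel_flow = [2100.0, 2250.0, 2410.0]

# Interpolate fuel flow for a 57 t aircraft.
ff = lagrange_interp(weights, fuel_flow, 57.0)
```

Interpolating successively along each table axis (weight, altitude, temperature) in this way gives the multi-dimensional lookup the abstract describes.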
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
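The simple design-effect approach mentioned in the Background can be sketched as follows. The two-proportion formula is the standard normal-approximation one (not anything specific to this review), and the proportions, cluster size and intra-cluster correlation are hypothetical.

```python
import math
from statistics import NormalDist

def sample_size_individual(p1, p2, alpha=0.05, power=0.8):
    """Per-arm n to compare two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def design_effect(m, icc):
    """Inflation factor for equal clusters of size m with intra-cluster
    correlation coefficient icc: DE = 1 + (m - 1) * icc."""
    return 1 + (m - 1) * icc

n_ind = sample_size_individual(0.3, 0.4)        # approx. 356 per arm
n_crt = n_ind * design_effect(m=20, icc=0.05)   # inflate for clustering
```

With 20 patients per cluster and an ICC of 0.05, the design effect is 1.95, so the clustered trial needs nearly twice the individually randomized sample size.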
Improved Sampling Method Reduces Isokinetic Sampling Errors.
ERIC Educational Resources Information Center
Karels, Gale G.
The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
40 CFR 53.59 - Aerosol transport test for Class I equivalent method samplers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... sample collection filter) differs significantly from that specified for reference method samplers as... transport is the percentage of a laboratory challenge aerosol which penetrates to the active sample filter of the candidate equivalent method sampler. (2) The active sample filter is the exclusive filter...
Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that ari...
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
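Since the susceptibility is a function of the area under the displacement-versus-distance curve, the numerical step reduces to integrating sampled readings. A minimal sketch with invented readings follows; the proportionality constant relating area to susceptibility is omitted.

```python
def trapezoid_area(x, y):
    """Area under the sampled curve y(x) by the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0
               for i in range(len(x) - 1))

# Hypothetical readings: sample displacement (mm) vs. magnet distance (cm).
distance_cm = [0.0, 0.5, 1.0, 1.5, 2.0]
displacement_mm = [0.00, 0.35, 0.52, 0.31, 0.08]
area = trapezoid_area(distance_cm, displacement_mm)
```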
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
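For reference, one common form of the asymptotic unconditional McNemar sample-size calculation (number of pairs) can be sketched as follows. The discordant-cell proportions are hypothetical, and this is a generic textbook formula rather than the exact implementation compared in the paper.

```python
import math
from statistics import NormalDist

def mcnemar_pairs(p12, p21, alpha=0.05, power=0.8):
    """Number of pairs via the asymptotic unconditional McNemar formula.

    p12, p21: hypothesized discordant-cell proportions of the paired 2x2 table.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pd, diff = p12 + p21, p12 - p21           # total discordance, difference
    n = (z_a * math.sqrt(pd) + z_b * math.sqrt(pd - diff ** 2)) ** 2 / diff ** 2
    return math.ceil(n)

n_pairs = mcnemar_pairs(0.2, 0.1)
```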
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (the sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.
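The Random Under-Sampling step mentioned above can be sketched as follows for a binary labeling; this is a generic illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

def random_undersample(X, y, rng=random):
    """Balance a dataset by randomly discarding majority-class samples
    until every class has the minority-class size."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    n_min = min(len(items) for items in by_class.values())
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n_min):   # keep n_min per class
            Xb.append(xi)
            yb.append(label)
    return Xb, yb
```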
[Recent advances in sample preparation methods of plant hormones].
Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng
2014-04-01
Plant hormones are a group of naturally occurring trace substances which play a crucial role in controlling the plant development, growth and environment response. With the development of the chromatography and mass spectroscopy technique, chromatographic analytical method has become a widely used way for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for accurate identification and quantification of phytohormones. For the three major kinds of plant hormones including acidic plant hormones & basic plant hormones, brassinosteroids and plant polypeptides, the sample preparation methods are reviewed in sequence especially the recently developed methods. The review includes novel methods, devices, extractive materials and derivative reagents for sample preparation of phytohormones analysis. Especially, some related works of our group are included. At last, the future developments in this field are also prospected.
[Standard sample preparation method for quick determination of trace elements in plastic].
Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa
2011-08-01
A reference sample containing heavy metals at known concentrations in electronic information products (plastic) was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence spectroscopy (XRF) was used to determine the repeatability and uncertainty in the analysis of heavy metals and bromine in the sample. The working curve and the measurement methods for the reference sample were established. The results showed that the method exhibits a very good linear relationship in the 200-2000 mg·kg⁻¹ concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg·kg⁻¹ range for Cd, and the repeatability of the analysis over six replicates is good. In testing the circuit boards ICB288G and ICB288 from the Mitsubishi Heavy Industry Company, the results agreed with the recommended values.
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and passive learning, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% of annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC.
To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
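A pool-based uncertainty-sampling loop of the kind evaluated above can be sketched with the least-confidence score. The model interface (`predict_proba`) and the pool items are placeholders, not the study's actual NER system.

```python
def least_confidence(probs):
    """Uncertainty score: 1 minus the top predicted class probability."""
    return 1.0 - max(probs)

def select_batch(pool, predict_proba, k):
    """Pick the k pool items the current model is least certain about;
    these are sent to the annotator, then the model is retrained."""
    ranked = sorted(pool, key=lambda x: least_confidence(predict_proba(x)),
                    reverse=True)
    return ranked[:k]
```

In a full AL loop, `select_batch` alternates with annotation and model retraining until the labeling budget is exhausted.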
Observational studies of patients in the emergency department: a comparison of 4 sampling methods.
Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R
2012-08-01
We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
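The sample-versus-population comparison can be illustrated with a Pearson χ² statistic over one categorical variable. The category names and proportions below are invented, and no critical-value lookup is shown.

```python
import random
from collections import Counter

def chi2_stat(sample, pop_props):
    """Pearson chi-square statistic: sample category counts vs. the counts
    expected under known population proportions."""
    n = len(sample)
    counts = Counter(sample)
    return sum((counts.get(cat, 0) - n * p) ** 2 / (n * p)
               for cat, p in pop_props.items())

# Hypothetical categorical variable (triage acuity) and its true proportions.
pop = {"low": 0.5, "medium": 0.3, "high": 0.2}
population = [cat for cat, p in pop.items() for _ in range(int(1000 * p))]
sample = random.sample(population, 200)   # "true random" sampling
x2 = chi2_stat(sample, pop)
```

Repeating this draw many times per sampling method and counting how often the statistic exceeds its critical value reproduces the study's comparison design.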
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods are different in theoretical bases, means of measurement, and causes of measurement errors. Here, we present a set of porosities measured in Berea Sandstone samples by the multiple methods, in particular with adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlativeness among different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when the accompanied merits such as experimental conveniences are taken into account.
An Overview of Conventional and Emerging Analytical Methods for the Determination of Mycotoxins
Cigić, Irena Kralj; Prosen, Helena
2009-01-01
Mycotoxins are a group of compounds produced by various fungi and excreted into the matrices on which they grow, often food intended for human consumption or animal feed. The high toxicity and carcinogenicity of these compounds and their ability to cause various pathological conditions has led to widespread screening of foods and feeds potentially polluted with them. Maximum permissible levels in different matrices have also been established for some toxins. As these are quite low, analytical methods for determination of mycotoxins have to be both sensitive and specific. In addition, an appropriate sample preparation and pre-concentration method is needed to isolate analytes from rather complicated samples. In this article, an overview of methods for analysis and sample preparation published in the last ten years is given for the most often encountered mycotoxins in different samples, mainly in food. Special emphasis is on liquid chromatography with fluorescence and mass spectrometric detection, while in the field of sample preparation various solid-phase extraction approaches are discussed. However, an overview of other analytical and sample preparation methods less often used is also given. Finally, different matrices where mycotoxins have to be determined are discussed with the emphasis on their specific characteristics important for the analysis (human food and beverages, animal feed, biological samples, environmental samples). Various issues important for accurate qualitative and quantitative analyses are critically discussed: sampling and choice of representative sample, sample preparation and possible bias associated with it, specificity of the analytical method and critical evaluation of results. PMID:19333436
Usefulness of in-house PCR methods for hepatitis B virus DNA detection.
Portilho, Moyra Machado; Baptista, Marcia Leite; da Silva, Messias; de Sousa, Paulo Sérgio Fonseca; Lewis-Ximenez, Lia Laura; Lampe, Elisabeth; Villar, Livia Melo
2015-10-01
The aim of the present study was to evaluate the performance of three in-house PCR techniques for HBV DNA detection and compare it with commercial quantitative methods to evaluate the usefulness of in-house methods for HBV diagnosis. Three panels of HBsAg reactive sera samples were evaluated: (i) 50 samples were examined using three methods for in-house qualitative PCR and the Cobas Amplicor HBV Monitor Assay; (ii) 87 samples were assayed using in-house semi-nested PCR and the Cobas TaqMan HBV test; (iii) 11 serial samples obtained from 2 HBV-infected individuals were assayed using the Cobas Amplicor HBV test and semi-nested PCR. In panel I, HBV DNA was detected in 44 samples using the Cobas Amplicor HBV test, 42 samples using semi-nested PCR (90% concordance with Cobas Amplicor), 22 samples using PCR for the core gene (63.6% concordance) and 29 samples using single-round PCR for the pre-S/S gene (75% concordance). In panel II, HBV DNA was quantified in 78 of the 87 HBsAg reactive samples using Cobas TaqMan but 52 samples using semi-nested PCR (67.8% concordance). HBV DNA was detected in serial samples until the 17th and 26th week after first donation using in-house semi-nested PCR and the Cobas Amplicor HBV test, respectively. In-house semi-nested PCR presented adequate concordance with commercial methods as an alternative method for HBV molecular diagnosis in low-resource settings. Copyright © 2015 Elsevier B.V. All rights reserved.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
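A Bernstein-type interval of the kind described can be sketched by solving the tail bound for its half-width. This is a generic illustration (assuming a known variance proxy and range bound), not the authors' refined narrower construction.

```python
import math

def bernstein_halfwidth(n, variance, bound, alpha=0.05):
    """Half-width t with P(|mean - mu| >= t) <= alpha under Bernstein's
    inequality, for i.i.d. variables with |X - mu| <= bound:

        2 * exp(-n t^2 / (2*variance + (2/3)*bound*t)) = alpha,

    solved for t as a quadratic in t."""
    L = math.log(2.0 / alpha)
    b = -(2.0 / 3.0) * bound * L
    c = -2.0 * variance * L
    return (-b + math.sqrt(b * b - 4.0 * n * c)) / (2.0 * n)

# Bernoulli-like data: n = 100, variance <= 0.25, range bound 1.
t = bernstein_halfwidth(100, 0.25, 1.0)   # CI is sample mean +/- t
```

For comparison, the usual normal-approximation half-width in this setting is about 0.098, which illustrates the paper's point that the guaranteed-coverage intervals are noticeably wider.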
NASA Astrophysics Data System (ADS)
Šantić, Branko; Gracin, Davor
2017-12-01
A new, simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the commonly used Markov chain approach to sample generation. Each sample is pristine and has no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
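The sample-growing idea can be caricatured as follows: pairs of oppositely charged ions are added only while each new ion's interaction energy stays within a bound, and every configuration is built independently from scratch. The toy pair energy, the 2-D box and the acceptance rule are all illustrative assumptions, not the authors' actual model.

```python
import math
import random

def pair_energy(p, q_pos):
    """Toy Coulomb-like interaction magnitude for two 2-D points."""
    r = math.dist(p, q_pos)
    return 1.0 / r if r > 0 else float("inf")

def ion_energy(pos, charge, others):
    """Interaction energy of one ion with a list of (position, charge) ions."""
    return sum(charge * q * pair_energy(pos, p) for p, q in others)

def build_sample(n_pairs, e_max, size=10.0):
    """Grow one independent configuration: +/- ion pairs are added one at a
    time and kept only if each new ion's energy magnitude stays below e_max."""
    ions = []
    while len(ions) < 2 * n_pairs:
        plus = ((random.uniform(0, size), random.uniform(0, size)), +1)
        minus = ((random.uniform(0, size), random.uniform(0, size)), -1)
        if (abs(ion_energy(plus[0], +1, ions + [minus])) <= e_max
                and abs(ion_energy(minus[0], -1, ions + [plus])) <= e_max):
            ions.extend([plus, minus])
    return ions
```

Because each call to `build_sample` starts from an empty configuration, successive samples are statistically independent, unlike states drawn from a Markov chain.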
Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.
Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël
2016-09-01
A major cross-match gel tube test is available for use in dogs but has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine donor-recipient pairings cross-matched with the RapidVet-H gel tube test, with results compared to the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess the sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.
NASA Astrophysics Data System (ADS)
Kuusimäki, Leea; Peltonen, Kimmo; Vainiotalo, Sinikka
A previously introduced method for monitoring environmental tobacco smoke (ETS) was further validated. The method is based on diffusive sampling of a vapour-phase marker, 3-ethenylpyridine (3-EP), with 3M passive monitors (type 3500). Experiments were done in a dynamic chamber to assess diffusive sampling in comparison with active sampling in charcoal tubes or XAD-4 tubes. The sampling rate for 3-EP collected on the diffusive sampler was 23.1 ± 0.6 mL min⁻¹. The relative standard deviation for parallel samples (n = 6) ranged from 4% to 14% among experiments (n = 9). No marked reverse diffusion of 3-EP was detected, nor any significant effect of relative humidity at 20%, 50% or 80%. The diffusive sampling of 3-EP was validated in field measurements in 15 restaurants in comparison with 3-EP and nicotine measurements using active sampling. The 3-EP concentration in restaurants ranged from 0.01 to 9.8 μg m⁻³, and the uptake rate for 3-EP based on 92 parallel samples was 24.0 ± 0.4 mL min⁻¹. A linear correlation (r = 0.98) was observed between 3-EP and nicotine concentrations, the average ratio of 3-EP to nicotine being 1:8. Active sampling of 3-EP and nicotine in charcoal tubes provided more reliable results than sampling in XAD-4 tubes. All samples were analysed using gas chromatography-mass spectrometry after elution with a 15% solution of pyridine in toluene. For nicotine, the limit of quantification of the charcoal tube method was 4 ng per sample, corresponding to 0.04 μg m⁻³ for an air sample of 96 L. For 3-EP, the limit of quantification of the diffusive method was 0.5-1.0 ng per sample, corresponding to 0.04-0.09 μg m⁻³ for 8 h sampling. The diffusive method proved suitable for ETS monitoring, even at low levels of ETS.
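Converting a diffusive sampler's collected mass to a time-weighted air concentration uses only the uptake rate and sampling time; a minimal sketch with an invented collected mass follows (only the 23.1 mL min⁻¹ uptake rate comes from the abstract).

```python
def air_concentration(mass_ng, uptake_rate_ml_min, minutes):
    """Time-weighted average air concentration (ug/m3) from a diffusive
    sampler: collected analyte mass divided by the equivalent air volume."""
    volume_m3 = uptake_rate_ml_min * minutes * 1e-6   # mL -> m3
    return (mass_ng * 1e-3) / volume_m3               # ng -> ug

# Hypothetical 8-h sample at the reported 3-EP uptake rate.
c_3ep = air_concentration(mass_ng=25.0, uptake_rate_ml_min=23.1, minutes=480)
```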
Feasibility of zero tolerance for Salmonella on raw poultry
USDA-ARS?s Scientific Manuscript database
Ideally, poultry producing countries around the globe should use internationally standardized sampling methods for Salmonella. It is difficult to compare prevalence data from country-to-country when sample plan, sample type, sample frequency and laboratory media along with methods differ. The Europe...
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to these problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. In particular, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
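The MHRW baseline compared above can be sketched as follows: a plain random walk is re-weighted with a Metropolis-Hastings acceptance step so that its stationary distribution over nodes is uniform rather than degree-biased. The graph and walk length are toy values.

```python
import random

def mhrw_sample(neighbors, start, n_steps, rng=random):
    """Metropolis-Hastings random walk over an undirected graph.

    neighbors: dict mapping node -> list of adjacent nodes."""
    samples, u = [], start
    for _ in range(n_steps):
        v = rng.choice(neighbors[u])
        # Accept the move with probability min(1, deg(u)/deg(v)), which
        # makes the stationary node distribution uniform.
        if rng.random() <= len(neighbors[u]) / len(neighbors[v]):
            u = v
        samples.append(u)
    return samples
```

On a star graph, a plain random walk visits the hub half the time, whereas MHRW visits each of the four nodes about equally often.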
Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel
2017-10-20
The methodology of the solvent-based dissolution method used to sample gas-phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists of pulling air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can be treated in the same way as groundwater samples for routine CSIA, by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity for TCE and benzene, efficiently dissolving the compounds during their transit through the solvent. The method detection limit for TCE (5 ± 1 µg/m³) and benzene (1.7 ± 0.5 µg/m³) is lower when using TGDE than methanol, which was used previously (385 µg/m³ for TCE and 130 µg/m³ for benzene) [2]. The method detection limit refers to the minimal gas-phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Due to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156 ± 6 µg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas-phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined.
Moreover, the possibility of analyzing the TCE concentration in the solvent after sampling (or that of other targeted VOCs) allows field deployment of the sampling method without the need to determine the initial gas-phase TCE concentration. This simplified field deployment approach, combined with the conventional analytical procedure used for groundwater samples, substantially facilitates the application of CSIA to gas-phase studies. Copyright © 2017 Elsevier B.V. All rights reserved.
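The back-calculation of the gas-phase concentration from the final solvent concentration can be illustrated with a simplified mass balance. This sketch assumes near-complete trapping and omits the air-solvent partition-coefficient correction the authors apply; the function name, units, and numbers are illustrative:

```python
def gas_phase_concentration(c_solvent_ug_per_ml, v_solvent_ml,
                            flow_rate_l_per_min, duration_min):
    """Mean gas-phase VOC concentration (ug/m^3) back-calculated from the
    mass accumulated in the trapping solvent, assuming the solvent
    captured essentially all VOC pulled through it."""
    mass_ug = c_solvent_ug_per_ml * v_solvent_ml            # total mass trapped
    air_volume_m3 = flow_rate_l_per_min * duration_min / 1000.0  # L -> m^3
    return mass_ug / air_volume_m3
```

For example, 10 mL of solvent ending at 0.01 µg/mL after sampling 0.5 L/min for 200 min implies a mean gas-phase concentration of 1.0 µg/m³.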
Eggenkamp, H G M; Louvat, P
2018-04-30
In natural samples, bromine is present in trace amounts, and measurement of stable Br isotopes necessitates its separation from the matrix. Most previously described methods need large samples or samples with high Br/Cl ratios. The use of metals as reagents, proposed in previous Br distillation methods, must be avoided for multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) analyses because of the risk of cross-contamination, since the instrument is also used to measure stable isotopes of metals. Dedicated to water and evaporite samples with low Br/Cl ratios, the proposed method is a simple distillation that separates bromide from chloride for isotopic analysis by MC-ICP-MS. It is based on the difference in oxidation potential between chloride and bromide in the presence of nitric acid. The sample is mixed with dilute (1:5) nitric acid in a distillation flask and heated over a candle flame for 10 min. The distillate (bromine) is trapped in an ammonia solution and reduced to bromide. Chloride is distilled only to a very small extent. The resulting solution can be measured directly by MC-ICP-MS for stable Br isotopes. The method was tested for a variety of volumes, ammonia concentrations, pH values and distillation times, and compared with the classic ion-exchange chromatography method. It separates Br from Cl more efficiently, so that samples with lower Br/Cl ratios can be analysed, yielding Br isotope data in agreement with those obtained by previous methods. Unlike other Br extraction methods based on oxidation, the distillation method presented here does not use any metallic ion for redox reactions that could contaminate the mass spectrometer. It ensures reproducible recovery yields and a long-term reproducibility of ±0.11‰ (1 standard deviation), and was successfully applied to samples with low Br/Cl ratios and low Br amounts (down to 20 μg).
Copyright © 2018 John Wiley & Sons, Ltd.
Jha, Virendra K.; Wydoski, Duane S.
2002-01-01
A method for the isolation of 20 parent organophosphate pesticides and 5 pesticide degradates from filtered natural-water samples is described. Seven of these compounds are permanently reported with estimated concentrations because of performance issues. Water samples are filtered to remove suspended particulate matter, and then 1 liter of filtrate is pumped through disposable solid-phase extraction columns that contain octadecyl-bonded porous silica to extract the compounds. The C-18 columns are dried with nitrogen gas, and method compounds are eluted from the columns with ethyl acetate. The extract is analyzed by dual capillary-column gas chromatography with flame photometric detection. Single-operator method detection limits in all three water-matrix samples ranged from 0.004 to 0.012 microgram per liter. Method performance was validated by spiking all compounds into three different matrices at three different concentrations. Eight replicates were analyzed at each concentration level in each matrix. Mean recoveries of method compounds spiked in surface-water samples ranged from 39 to 149 percent and those in ground-water samples ranged from 40 to 124 percent for all pesticides except dimethoate. Mean recoveries of method compounds spiked in reagent-water samples ranged from 41 to 119 percent for all pesticides except dimethoate. Dimethoate exhibited reduced recoveries (mean of 43 percent in low- and medium-concentration level spiked samples and 20 percent in high-concentration level spiked samples) in all matrices because of incomplete collection on the C-18 column. As a result, concentrations of dimethoate and six other compounds (based on performance issues) in samples are reported in this method with an estimated remark code.
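Single-operator method detection limits like those quoted above are conventionally derived from replicate low-level spikes. A minimal sketch of the standard EPA-style MDL calculation; the function name is illustrative, and the default t-value 2.998 is the one-tailed 99% Student's t for 7 degrees of freedom, matching the eight replicates per level used here:

```python
import statistics

def method_detection_limit(replicates, t_99=2.998):
    """EPA-style method detection limit: MDL = t * s, where s is the
    standard deviation of replicate low-level spike results and t is the
    one-tailed 99% Student's t for n-1 degrees of freedom."""
    return t_99 * statistics.stdev(replicates)
```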
A passive guard for low thermal conductivity measurement of small samples by the hot plate method
NASA Astrophysics Data System (ADS)
Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine
2017-01-01
Hot plate methods under steady-state conditions rely on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 on the two sides of the sample and of the heat flux crossing it. To be consistent with the 1D heat-flux hypothesis, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. The latter method can be used only if the thickness/width ratio of the sample is sufficiently low, while the guarded hot plate method requires large-width samples (typical cross-section of 0.6 × 0.6 m²); neither method can therefore be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centred hot plate method by applying the 1D heat-flux model. It is shown that these optimal values do not depend on the size or thermal conductivity of the samples (in the range 0.015-0.2 W m⁻¹ K⁻¹), but only on Ta. The experimental results validate the method for several reference samples with thickness/width ratios up to 0.3, thus enabling the measurement of the thermal conductivity of samples with a small cross-section, down to 0.045 × 0.045 m².
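The 1D steady-state model underlying all hot plate methods reduces to lambda = phi * e / (T0 - T1), with phi the heat flux density and e the sample thickness. A minimal sketch (names and the sample values below are illustrative):

```python
def thermal_conductivity(heat_flux, thickness, t0, t1):
    """Steady-state 1D hot-plate estimate: lambda = phi * e / (T0 - T1),
    with phi in W/m^2, e in m, and temperatures in consistent units.
    Valid only when the flux through the sample is truly 1D -- the
    condition the paper secures by choosing T0 and T1 around ambient."""
    return heat_flux * thickness / (t0 - t1)
```

For instance, a 20 mm sample carrying 40 W/m² across a 10 K difference gives lambda = 0.08 W m⁻¹ K⁻¹, inside the 0.015-0.2 range studied.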
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
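Multiple importance sampling of the kind the paper integrates with SSAO is typically implemented with Veach's balance heuristic. The sketch below is the generic two-strategy MIS estimator, not the paper's exact SSAO estimator; function names and the uniform test densities in the usage note are assumptions:

```python
def balance_heuristic(pdf_self, pdf_other):
    """Veach's balance heuristic: weight for a sample drawn from one
    strategy when a second strategy could also have produced it."""
    return pdf_self / (pdf_self + pdf_other)

def mis_estimate(samples_a, samples_b, f, pdf_a, pdf_b):
    """Two-strategy MIS estimator of the integral of f, combining samples
    from strategies A and B with balance-heuristic weights."""
    est = 0.0
    for x in samples_a:
        est += balance_heuristic(pdf_a(x), pdf_b(x)) * f(x) / (pdf_a(x) * len(samples_a))
    for x in samples_b:
        est += balance_heuristic(pdf_b(x), pdf_a(x)) * f(x) / (pdf_b(x) * len(samples_b))
    return est
```

With f = 1 and both densities uniform on [0, 1], the estimator returns 1.0 exactly, a quick sanity check that the weights of the two strategies sum to one.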
Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu
2018-01-01
The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method are determined using the same data collected from screening experiments. As an example, an HPLC-ELSD method for the separation and quantification of sugars in Codonopsis Radix extract (CRE) and Astragali Radix extract (ARE) samples was developed with this approach. Potential CMAs and potential CMPs were identified from the Analytical Target Profile. After the screening experiments, the retention time of the D-glucose peak in CRE samples, the signal-to-noise ratio of the D-glucose peak in CRE samples, and the retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, the flow rate, and the column temperature were found to be CMPs using a standardized partial regression coefficient method. The probability-based design space was calculated using Monte-Carlo simulation and verified by experiments. The optimized method was validated as accurate and precise, and was then applied to the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analysis objects with complex compositions.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-22
High-speed blades are often prone to fatigue caused by severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to a blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes have been used, which results in a non-uniform sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented, based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency and bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signal can be reconstructed from non-uniform BTT data acquired from only two probes.
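The first step of the reconstruction, viewing the periodically non-uniform BTT sequence as a sum of uniform sample streams (one per probe), can be sketched as below; the full anti-aliasing interpolation derived in the paper is not reproduced, and the function name is illustrative:

```python
def split_into_uniform_streams(timestamps, values, n_probes):
    """Decompose a periodically non-uniform BTT sequence into uniform
    sub-streams: with n_probes probes per revolution, every n_probes-th
    sample belongs to the same probe and is therefore uniformly spaced
    at the revolution period."""
    return [(timestamps[k::n_probes], values[k::n_probes])
            for k in range(n_probes)]
```

Each sub-stream is then band-limited-interpolated, and the per-stream results are combined to cancel the spectral replicas that plain uniform treatment would leave in place.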
Sampling Methods for Detection and Monitoring of the Asian Citrus Psyllid (Hemiptera: Psyllidae).
Monzo, C; Arevalo, H A; Jones, M M; Vanaclocha, P; Croxton, S D; Qureshi, J A; Stansly, P A
2015-06-01
The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama, is a key pest of citrus due to its role as vector of citrus greening disease, or "huanglongbing." ACP monitoring is considered an indispensable tool for management of vector and disease. In the present study, datasets collected between 2009 and 2013 from 245 citrus blocks were used to evaluate the precision, detection sensitivity, and efficiency of five sampling methods. The number of samples needed to reach a 0.25 standard error-mean ratio was estimated using Taylor's power law and used to compare precision among sampling methods. Detection sensitivity and time expenditure (cost) of stem-tap versus other sampling methodologies conducted consecutively at the same location were also assessed. Stem-tap sampling was the most efficient sampling method when ACP densities were moderate to high and served as the basis for comparison with all other methods. Protocols that grouped trees near randomly selected locations across the block were more efficient than sampling trees at random across the block. Sweep net sampling was similar to stem-taps in number of captures per sampled unit, but less precise at any ACP density. Yellow sticky traps were 14 times more sensitive than stem-taps but much more time consuming and thus less efficient, except at very low population densities. Visual sampling was efficient for detecting and monitoring ACP at low densities. Suction sampling was time consuming and taxing but the most sensitive of all methods for detection of sparse populations. This information can be used to optimize ACP monitoring efforts. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
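The precision criterion above follows directly from Taylor's power law (variance = a * mean^b): fixing the SE/mean ratio at D = 0.25 gives a required sample count of n = a * m^(b-2) / D^2. A minimal sketch with illustrative coefficients (the paper fits a and b separately for each sampling method):

```python
import math

def samples_needed(mean_density, a, b, precision=0.25):
    """Samples required for a target SE/mean ratio D under Taylor's
    power law (variance = a * mean**b):
        D = sqrt(a * m**b / n) / m   =>   n = a * m**(b - 2) / D**2."""
    return math.ceil(a * mean_density ** (b - 2) / precision ** 2)
```

Note that for aggregated insects (b > 1 but below 2) the required n falls as density rises, which is why stem-taps become efficient at moderate-to-high ACP densities.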
Capillary microextraction: A new method for sampling methamphetamine vapour.
Nair, M V; Miskelly, G M
2016-11-01
Clandestine laboratories pose a serious health risk to first responders, investigators, decontamination companies, and the public who may be inadvertently exposed to methamphetamine and other chemicals used in its manufacture. Therefore there is an urgent need for reliable methods to detect and measure methamphetamine at such sites. The most common method for determining methamphetamine contamination at former clandestine laboratory sites is selected surface wipe sampling, followed by analysis with gas chromatography-mass spectrometry (GC-MS). We are investigating the use of sampling for methamphetamine vapour to complement such wipe sampling. In this study, we report the use of capillary microextraction (CME) devices for sampling airborne methamphetamine, and compare their sampling efficiency with a previously reported dynamic SPME method. The CME devices consisted of PDMS-coated glass filter strips inside a glass tube. The devices were used to dynamically sample methamphetamine vapour in the range of 0.42-4.2 µg m⁻³, generated by a custom-built vapour dosing system, for 1-15 min, and methamphetamine was analysed using a GC-MS fitted with a ChromatoProbe thermal desorption unit. The devices showed good reproducibility (RSD < 15%), and a curvilinear pre-equilibrium relationship between sampling times and peak area, which can be utilised for calibration. Under identical sampling conditions, the CME devices were approximately 30 times more sensitive than the dynamic SPME method. The CME devices could be stored for up to 3 days after sampling prior to analysis. Consecutive sampling of methamphetamine and its isotopic substitute, d-9 methamphetamine, showed no competitive displacement. This suggests that CME devices, pre-loaded with an internal standard, could be a feasible method for sampling airborne methamphetamine at former clandestine laboratories. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Recommendations for level-determined sampling in wells
NASA Astrophysics Data System (ADS)
Lerner, David N.; Teutsch, Georg
1995-10-01
Level-determined samples of groundwater are increasingly important for hydrogeological studies. The techniques for collecting them range from purpose-drilled wells, sometimes with sophisticated dedicated multi-level samplers, to a variety of methods used in open wells. Open, often pre-existing, wells are frequently used on cost grounds, but there is a risk of obtaining poor and unrepresentative samples. Alternative approaches to level-determined sampling fall under seven concepts: depth sampling, packer systems, individual wells, dedicated multi-level systems, separation pumping, baffle systems, and multi-port sock samplers. These are outlined and evaluated in terms of the environment to be sampled and the features and performance of the methods. Recommendations are offered to match methods to sampling problems.
Compendium of selected methods for sampling and analysis at geothermal facilities
NASA Astrophysics Data System (ADS)
Kindle, C. H.; Pool, K. H.; Ludwick, J. D.; Robertson, D. E.
1984-06-01
An independent study of the field has resulted in a compilation of the best methods for sampling, preservation and analysis of potential pollutants from geothermally fueled electric power plants. These methods were selected as the most usable over the range of applications commonly encountered at the various geothermal plant sampling locations. In addition to plant and well piping, techniques for sampling cooling towers, ambient gases, solids, and surface and subsurface waters are described. Emphasis is placed on the use of sampling probes to extract samples from heterogeneous flows. Certain sampling points, constituents and phases of plant operation are more amenable than others to quality-assurance improvement in the emission measurements, and these are identified.
Tack, Pieter; Vekemans, Bart; Laforce, Brecht; Rudloff-Grund, Jennifer; Hernández, Willinton Y; Garrevoet, Jan; Falkenberg, Gerald; Brenker, Frank; Van Der Voort, Pascal; Vincze, Laszlo
2017-02-07
Using X-ray absorption near edge structure (XANES) spectroscopy, information on the local chemical structure and oxidation state of an element of interest can be acquired. Conventionally, this information is obtained in a spatially resolved manner by scanning a sample through a focused X-ray beam. Recently, full-field methods have been developed to obtain direct 2D chemical state information by imaging a large sample area. These methods usually operate in transmission mode, restricting their use to thin, transmitting samples. Here, a fluorescence-mode method is presented using an energy-dispersive pnCCD detector, the SLcam, characterized by measurement times far superior to those generally attainable. Additionally, this method operates in confocal mode, thus providing direct 3D spatially resolved chemical state information from a selected subvolume of a sample, without the need to rotate the sample. The method is applied to two samples: a gold-supported magnesia catalyst (Au/MgO) and a natural diamond containing Fe-rich inclusions. Both samples provide XANES spectra that can be matched against reference XANES spectra, allowing this method to be used for fingerprinting and linear combination analysis of known XANES reference compounds.
Rapid method to determine 226Ra in steel samples
Maxwell, Sherrod L.; Culligan, Brian; Hutchison, Jay B.; ...
2017-09-22
The rapid measurement of 226Ra in steel samples is very important in the event of a radiological emergency. 226Ra (T1/2 = 1600 y) is a natural radionuclide present in the environment and a highly toxic alpha-emitter. Due to its long half-life and tendency to concentrate in bones, 226Ra ingestion or inhalation can lead to a significant committed dose to individuals. A new method for the determination of 226Ra in steel samples has been developed at the Savannah River Environmental Laboratory. The new method employs a rugged acid digestion that includes hydrofluoric acid, followed by a single precipitation step to rapidly preconcentrate the radium and remove most of the dissolved steel sample matrix. Radium is then separated using a combination of cation exchange and extraction chromatography, and 226Ra is measured by alpha spectrometry. This approach has a sample preparation time of ~8 h for steel samples, has a very high tracer yield (>88%), and removes interferences effectively. A 133Ba yield tracer is used so that samples can be counted immediately following the separation, avoiding the lengthy ingrowth times required by other methods.
Oral sampling methods are associated with differences in immune marker concentrations.
Fakhry, Carole; Qeadan, Fares; Gilman, Robert H; Yori, Pablo; Kosek, Margaret; Patterson, Nicole; Eisele, David W; Gourin, Christine G; Chitguppi, Chandala; Marks, Morgan; Gravitt, Patti
2018-06-01
To determine whether the concentrations and distributions of immune markers in paired oral samples are similar. Clinical research. Cross-sectional study. Paired saliva and oral secretion (OS) samples were collected. The concentration of immune markers was estimated using a Luminex multiplex assay (Thermo Fisher Scientific, Waltham, MA). For each sample, the concentration of each immune marker was normalized to the total protein present and log-transformed. Median concentrations of immune markers were compared between the two sample types. Inter-marker correlation within each sampling method and across sampling methods was evaluated. There were 90 study participants. Concentrations of immune markers in saliva samples were significantly different from those in OS samples. OS samples showed higher concentrations of immunoregulatory markers, whereas saliva samples contained proinflammatory markers in higher concentrations. The immune marker profile in saliva samples is thus distinct from that in paired OS samples. Level of evidence: 2b. Laryngoscope, 128:E214-E221, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Almirall, Jose R.; Trejos, Tatiana; Hobbs, Andria; Furton, Kenneth G.
2003-09-01
The importance of small amounts of glass and paint evidence as a means to associate a crime event to a suspect or a suspect to another individual has been demonstrated in many cases. Glass is a fragile material that is often found at the scenes of crimes such as burglaries, hit-and-run accidents and violent crime offenses. Previous work has demonstrated the utility of elemental analysis by solution ICP-MS of small amounts of glass for the comparison between a fragment found at a crime scene to a possible source of the glass. The multi-element capability and the sensitivity of ICP-MS combined with the simplified sample introduction of laser ablation prior to ion detection provides for an excellent and relatively non-destructive technique for elemental analysis of glass fragments. The direct solid sample introduction technique of laser ablation (LA) is reported as an alternative to the solution method. Direct solid sampling provides several advantages over solution methods and shows great potential for a number of solid sample analyses in forensic science. The advantages of laser ablation include the simplification of sample preparation, thereby reducing the time and complexity of the analysis, the elimination of handling acid dissolution reagents such as HF and the reduction of sources of interferences in the ionization plasma. Direct sampling also provides for essentially "non-destructive" sampling due to the removal of very small amounts of sample needed for analysis. The discrimination potential of LA-ICP-MS is compared with previously reported solution ICP-MS methods using external calibration with internal standardization and a newly reported solution isotope dilution (ID) method. A total of ninety-one different glass samples were used for the comparison study using the techniques mentioned. One set consisted of forty-five headlamps taken from a variety of automobiles representing a range of twenty years of manufacturing dates. 
A second set, consisting of forty-six automotive glasses (side windows and windshields) representing casework glass from different vehicle manufacturers over several years, was also characterized by RI and elemental composition analysis. The solution sample introduction techniques (external calibration and isotope dilution) provide excellent sensitivity and precision but have the disadvantages of destroying the sample and requiring complex sample preparation. The laser ablation method was simpler and faster, and produced discrimination comparable to EC-ICP-MS and ID-ICP-MS. LA-ICP-MS can thus provide an excellent alternative to solution analysis of glass in forensic casework samples. Paints and coatings are frequently encountered as trace evidence samples submitted to forensic science laboratories. A LA-ICP-MS method has been developed to complement the techniques commonly used in forensic laboratories in order to better characterize these samples for forensic purposes. Time-resolved plots of each sample can be compared to associate samples with each other or to discriminate between them. Additionally, the concentration of lead and the ratios of other elements have been determined in various automotive paints by the reported method. A set of eighteen (18) survey automotive paint samples was analyzed with the developed method in order to determine the utility of LA-ICP-MS and to compare it with the more commonly used scanning electron microscopy (SEM) method for elemental characterization of paint layers in forensic casework.
Martínez-Mier, E. Angeles; Soto-Rojas, Armando E.; Buckley, Christine M.; Margineda, Jorge; Zero, Domenick T.
2010-01-01
Objective: The aim of this study was to assess methods currently used for analyzing fluoridated salt in order to identify the most useful method for this type of analysis. Basic research design: Seventy-five fluoridated salt samples were obtained. Samples were analyzed for fluoride content, with and without pretreatment, using direct and diffusion methods. Element analysis was also conducted on selected samples. Fluoride was added to ultra-pure NaCl and non-fluoridated commercial salt samples, and Ca and Mg were added to fluoride samples, in order to assess fluoride recoveries using modifications to the methods. Results: Larger amounts of fluoride were found and recovered using diffusion than direct methods (96%-100% for diffusion vs. 67%-90% for direct). Statistically significant differences were obtained between direct and diffusion methods using different ionic strength adjusters. Pretreatment methods reduced the amount of recovered fluoride. Determination of fluoride content was influenced both by the presence of NaCl and by other ions in the salt. Conclusion: Direct and diffusion techniques are both suitable for fluoride analysis of fluoridated salt; the choice of method should depend on the purpose of the analysis. PMID:20088217
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling (RDS) surveys, specific guidance for making sample size decisions is lacking. Our objective is to guide the design of multiplier-method population size estimation studies using RDS surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We developed an approach to sample size calculation by interpreting methods for estimating the variance around estimates obtained with multiplier methods in conjunction with research into design effects and respondent-driven sampling, and we describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in the estimates. Random error around the size estimate reflects uncertainty from both M and P, particularly when the estimate of P in the RDS survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service-attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the RDS survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
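The multiplier estimator N = M / P, together with a delta-method standard error of the kind the sample size guidance rests on, can be sketched as follows; the design-effect default and the numbers in the usage note are illustrative, not the Harare estimates:

```python
def multiplier_estimate(m_objects, p_hat, n_survey, design_effect=2.0):
    """Multiplier-method size estimate N = M / P with a delta-method
    standard error. The binomial variance of P is inflated by an assumed
    RDS design effect; M is treated as known without error."""
    n_hat = m_objects / p_hat
    var_p = design_effect * p_hat * (1.0 - p_hat) / n_survey
    se_n = n_hat * var_p ** 0.5 / p_hat
    return n_hat, se_n
```

With M = 500 objects, P = 0.25 from a survey of 400, the point estimate is 2,000; note how the standard error grows sharply as P shrinks, which is exactly why the authors recommend designs that raise P.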
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD on three systems with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work points to further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
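The integrated tempering idea can be illustrated on a toy one-dimensional double well. The sketch below (temperature ladder, barrier height, step size and uniform weights are all arbitrary choices, not taken from the paper) runs a Metropolis walker on the ITS effective potential, which flattens the barrier enough for routine crossing:

```python
import math, random

def U(x):
    # Double-well potential with a barrier of height 5 at x = 0
    return 5.0 * (x * x - 1.0) ** 2

betas = [1.0 / t for t in (1.0, 2.0, 4.0, 8.0)]   # temperature ladder

def U_eff(x, beta0=1.0):
    # ITS effective potential with uniform weights:
    # U_eff = -(1/beta0) * ln( sum_k exp(-beta_k * U(x)) )
    return -math.log(sum(math.exp(-b * U(x)) for b in betas)) / beta0

random.seed(0)
x, samples = -1.0, []
for _ in range(20000):
    xp = x + random.gauss(0.0, 0.3)
    # Metropolis acceptance on the flattened effective surface
    if random.random() < math.exp(min(0.0, U_eff(x) - U_eff(xp))):
        x = xp
    samples.append(x)

crossed = any(s > 0.5 for s in samples)   # walker reaches the other well
```

To recover canonical averages at beta0, each sample x would carry the weight exp(beta0 * (U_eff(x) - U(x))); the full ITS-TAMD method would additionally drive any explicitly chosen RCs with TAMD.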
NASA Astrophysics Data System (ADS)
Yang, Linlin; Sun, Hai; Fu, Xudong; Wang, Suli; Jiang, Luhua; Sun, Gongquan
2014-07-01
A novel method for measuring the effective diffusion coefficients of porous materials is developed. The oxygen concentration gradient is established by an air-breathing proton exchange membrane fuel cell (PEMFC). The porous sample is set in a sample holder located in the cathode plate of the PEMFC. At a given oxygen flux, the effective diffusion coefficient is related to the difference in oxygen concentration across the sample, which can be correlated with the difference in the output voltage of the PEMFC with and without the sample inserted in the cathode plate. Compared to the conventional electrical conductivity method, this method is more reliable for measuring non-wetting samples.
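The underlying relation is Fick's first law at steady state; a minimal sketch with hypothetical numbers (the Faraday-law step assumes the oxygen flux is fixed by the cell current, consistent with the fuel-cell setup described, while the current density, thickness and concentration difference are illustrative):

```python
F = 96485.0  # Faraday constant, C/mol

def o2_flux_from_current(current_density):
    # Four electrons are transferred per O2 molecule reduced at the cathode
    return current_density / (4.0 * F)          # mol m^-2 s^-1

def effective_diffusivity(flux, thickness, delta_c):
    # Fick's first law at steady state: J = D_eff * dC / L
    return flux * thickness / delta_c           # m^2 s^-1

j = o2_flux_from_current(1000.0)                # 1000 A m^-2 cell current
d_eff = effective_diffusivity(j, 2e-4, 3.0)     # 200 um sample, dC = 3 mol m^-3
```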
Rapid method for the determination of 226Ra in hydraulic fracturing wastewater samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Sherrod L.; Culligan, Brian K.; Warren, Richard A.
2016-03-24
A new method that rapidly preconcentrates and measures 226Ra from hydraulic fracturing wastewater samples was developed in the Savannah River Environmental Laboratory. The method improves the quality of 226Ra measurements using gamma spectrometry by providing up to 100x preconcentration of 226Ra from this difficult sample matrix, which contains very high levels of calcium, barium, strontium, magnesium and sodium. The high chemical yield, typically 80-90%, facilitates a low detection limit, important for lower level samples, and indicates method ruggedness. Ba-133 tracer is used to determine chemical yield and correct for geometry-related counting issues. The 226Ra sample preparation takes < 2 hours.
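The yield correction works as in standard gamma spectrometry; the sketch below uses the generic counting equation with entirely hypothetical values (the emission probability shown is illustrative, and none of the numbers come from the paper):

```python
def chemical_yield(tracer_recovered, tracer_added):
    # Ba-133 tracer recovery fraction, used to correct the 226Ra result
    return tracer_recovered / tracer_added

def ra226_concentration(net_counts, live_time_s, efficiency,
                        gamma_intensity, yield_frac, volume_l):
    # Generic gamma-counting equation: activity per litre of sample
    return net_counts / (live_time_s * efficiency * gamma_intensity
                         * yield_frac * volume_l)

y = chemical_yield(17.0, 20.0)       # 85% yield, in the 80-90% range reported
bq_per_l = ra226_concentration(
    net_counts=5000, live_time_s=3600, efficiency=0.25,
    gamma_intensity=0.0359,          # illustrative emission probability
    yield_frac=y, volume_l=0.5)
```

A high, well-measured chemical yield keeps the denominator large and stable, which is what drives the low detection limit the authors report.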
Carter, James L.; Resh, Vincent H.
2001-01-01
A survey of methods used by US state agencies for collecting and processing benthic macroinvertebrate samples from streams was conducted by questionnaire; 90 responses were received and used to describe trends in methods. The responses represented an estimated 13,000-15,000 samples collected and processed per year. Kicknet devices were used in 64.5% of the methods; other sampling devices included fixed-area samplers (Surber and Hess), artificial substrates (Hester-Dendy and rock baskets), grabs, and dipnets. Regional differences existed, e.g., the 1-m kicknet was used more often in the eastern US than in the western US. Mesh sizes varied among programs, but 80.2% of the methods used a mesh size between 500 and 600 µm. Mesh size variations within US Environmental Protection Agency regions were large, with size differences ranging from 100 to 700 µm. Most samples collected were composites; the mean area sampled was 1.7 m². Samples rarely were collected using a random method (4.7%); most samples (70.6%) were collected using "expert opinion", which may make the resulting data operator-specific. Only 26.3% of the methods sorted all the organisms from a sample; the remainder subsampled in the laboratory. The most common method of subsampling was to remove 100 organisms (range = 100-550). The magnification used for sorting ranged from 1x (sorting by eye) to 30x, which results in inconsistent separation of macroinvertebrates from detritus. In addition to subsampling, 53% of the methods sorted large/rare organisms from a sample. The taxonomic level used for identifying organisms varied among taxa; Ephemeroptera, Plecoptera, and Trichoptera were generally identified to a finer taxonomic resolution (genus and species) than other taxa. Because there currently exists a large range of field and laboratory methods used by state programs, calibration among all programs to increase data comparability would be exceptionally challenging. However, because many techniques are shared among methods, limited testing could be designed to evaluate whether procedural differences affect the ability to determine levels of environmental impairment using benthic macroinvertebrate communities.
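The fixed-count subsampling the survey describes (removing 100 organisms and extrapolating) can be sketched in a few lines; the sample composition below is hypothetical:

```python
import random

def fixed_count_subsample(organisms, target=100, seed=1):
    """Random fixed-count subsample; returns the subsample and the factor
    used to extrapolate its counts back to the whole sample."""
    if len(organisms) <= target:
        return list(organisms), 1.0
    rng = random.Random(seed)
    return rng.sample(organisms, target), len(organisms) / target

# Hypothetical composite-sample composition (1,000 organisms total)
sample = ["Ephemeroptera"] * 300 + ["Plecoptera"] * 100 + ["Diptera"] * 600
sub, factor = fixed_count_subsample(sample)
est_ephemeroptera = sub.count("Ephemeroptera") * factor   # extrapolated count
```

Note that rare taxa can be missed entirely in a 100-organism subsample, which is why many programs add a separate large/rare-organism search.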
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
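A one-dimensional analogue of the SPDE sampling idea fits in a few lines; this sketch (grid size, kappa, and the Dirichlet boundary treatment are arbitrary, it uses finite differences rather than the paper's mixed finite elements, and a dense solve where the paper uses scalable multilevel solvers) draws one approximate Matérn field sample:

```python
import numpy as np

def spde_field_sample(n=200, kappa=5.0, seed=0):
    """One sample of an (approximately Matern) Gaussian field on a 1-D unit
    grid, obtained by solving (kappa^2 - Laplacian) u = W, W white noise."""
    h = 1.0 / n
    rng = np.random.default_rng(seed)
    # Finite-difference reaction-diffusion operator with Dirichlet ends
    A = (np.diag(np.full(n, kappa**2 + 2.0 / h**2))
         + np.diag(np.full(n - 1, -1.0 / h**2), 1)
         + np.diag(np.full(n - 1, -1.0 / h**2), -1))
    w = rng.standard_normal(n) / np.sqrt(h)   # discretised white noise
    return np.linalg.solve(A, w)

u = spde_field_sample()
```

Because each sample only requires a sparse elliptic solve, this route avoids the dense eigenvalue problem of the KL expansion.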
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODEs are employed and/or the oversampling ratio is low.
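A minimal least-squares PCE fit with plain Monte Carlo sampling can illustrate the basic machinery (a uniform input, a Legendre basis and a toy exponential model; every choice here is illustrative, and none of the advanced sampling strategies reviewed in the paper are used):

```python
import numpy as np

def legendre_basis(x, order):
    # Legendre polynomials P_0..P_order via the three-term recurrence
    P = [np.ones_like(x), x]
    for k in range(1, order):
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return np.column_stack(P[: order + 1])

rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, 200)     # plain Monte Carlo input samples
ys = np.exp(xs)                      # toy "model" evaluations
V = legendre_basis(xs, 5)            # oversampling ratio 200/6
coeffs, *_ = np.linalg.lstsq(V, ys, rcond=None)

approx = float(legendre_basis(np.array([0.3]), 5) @ coeffs)
err = abs(approx - np.exp(0.3))
```

With a low oversampling ratio (say, 8 samples for 6 coefficients) the least-squares system becomes ill-conditioned, which is exactly the regime where the sampling strategy matters most.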
Densitometry By Acoustic Levitation
NASA Technical Reports Server (NTRS)
Trinh, Eugene H.
1989-01-01
"Static" and "dynamic" methods developed for measuring mass density of acoustically levitated solid particle or liquid drop. "Static" method, unknown density of sample found by comparison with another sample of known density. "Dynamic" method practiced with or without gravitational field. Advantages over conventional density-measuring techniques: sample does not have to make contact with container or other solid surface, size and shape of samples do not affect measurement significantly, sound field does not have to be know in detail, and sample can be smaller than microliter. Detailed knowledge of acoustic field not necessary.
Mixture and method for simulating soiling and weathering of surfaces
Sleiman, Mohamad; Kirchstetter, Thomas; Destaillats, Hugo; Levinson, Ronnen; Berdahl, Paul; Akbari, Hashem
2018-01-02
This disclosure provides systems, methods, and apparatus related to simulated soiling and weathering of materials. In one aspect, a soiling mixture may include an aqueous suspension of various amounts of salt, soot, dust, and humic acid. In another aspect, a method may include weathering a sample of material in a first exposure of the sample to ultraviolet light, water vapor, and elevated temperatures, depositing a soiling mixture on the sample, and weathering the sample in a second exposure of the sample to ultraviolet light, water vapor, and elevated temperatures.
NASA Astrophysics Data System (ADS)
Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.
2018-04-01
In China's First National Geographic Conditions Census, millions of sample-data records were collected across the country for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on National Geographic Conditions Monitoring project. Storing such big data in a relational database such as Oracle is currently the most effective approach, but a suitable method is still needed for managing and applying the sample data. This paper studies a database construction method that combines a relational database with a distributed file system: the vector data and the file data are stored in different physical locations. The key issues and their solutions are discussed. On this basis, we study how the sample data can be applied and analyze several use cases, laying a foundation for its application. Sample data from Shaanxi province are selected to verify the method. Taking the 10 first-level classes defined in the land cover classification system as an example, we analyze the spatial distribution and density characteristics of each kind of sample data. The results verify that the database construction method based on a relational database with a distributed file system is practical and effective for searching, analyzing, and applying sample data. Furthermore, the sample data collected in China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.
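The split the paper describes, relational metadata with file payloads left in a distributed file system, can be sketched with a minimal schema (SQLite stands in for the production database here, and the table, columns and path are hypothetical):

```python
import sqlite3

# SQLite stands in for the production relational database; file_path points
# into the distributed file system instead of storing the file as a blob.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE land_cover_sample (
        sample_id  INTEGER PRIMARY KEY,
        class_code TEXT NOT NULL,   -- first-level land cover class
        lon        REAL,
        lat        REAL,
        file_path  TEXT NOT NULL    -- location of the photo in the DFS
    )""")
con.execute("INSERT INTO land_cover_sample VALUES "
            "(1, 'forest', 108.9, 34.3, '/dfs/shaanxi/0001.jpg')")
rows = con.execute("SELECT class_code, file_path FROM land_cover_sample "
                   "WHERE lon BETWEEN 105 AND 111").fetchall()
```

Spatial and class queries run entirely in the database, while the bulky image files are fetched from the file system only when needed.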
LeBouf, Ryan F; Virji, Mohammed Abbas; Ranpara, Anand; Stefaniak, Aleksandr B
2017-07-01
This method was designed for sampling select quaternary ammonium (quat) compounds in air or on surfaces followed by analysis using ultraperformance liquid chromatography tandem mass spectrometry. Target quats were benzethonium chloride, didecyldimethylammonium bromide, benzyldimethyldodecylammonium chloride, benzyldimethyltetradecylammonium chloride, and benzyldimethylhexadecylammonium chloride. For air sampling, polytetrafluoroethylene (PTFE) filters are recommended for 15-min to 24-hour sampling. For surface sampling, Pro-wipe® 880 (PW) media was chosen. Samples were extracted in 60:40 acetonitrile:0.1% formic acid for 1 hour on an orbital shaker. Method detection limits range from 0.3 to 2 ng/ml depending on media and analyte. Matrix effects of media are minimized through the use of multiple reaction monitoring versus selected ion recording. Upper confidence limits on accuracy meet the National Institute for Occupational Safety and Health 25% criterion for PTFE and PW media for all analytes. Using PTFE and PW analyzed with multiple reaction monitoring, the method quantifies levels among the different quats compounds with high precision (<10% relative standard deviation) and low bias (<11%). The method is sensitive enough with very low method detection limits to capture quats on air sampling filters with only a 15-min sample duration with a maximum assessed storage time of 103 days before sample extraction. This method will support future exposure assessment and quantitative epidemiologic studies to explore exposure-response relationships and establish levels of quats exposures associated with adverse health effects. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
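Method detection limits of the kind reported here are conventionally derived from replicate low-level spikes; the sketch below uses the standard EPA-style formula MDL = t(n-1, 0.99) * s, which is not taken from this paper, with hypothetical replicate values:

```python
import statistics

def method_detection_limit(replicate_results, t_99=3.143):
    """Replicate-spike MDL: t_(n-1, 0.99) * s.  The default t value
    assumes n = 7 replicates (6 degrees of freedom)."""
    return t_99 * statistics.stdev(replicate_results)

spikes_ng_ml = [0.95, 1.10, 1.02, 0.98, 1.05, 0.99, 1.04]  # hypothetical
mdl = method_detection_limit(spikes_ng_ml)
```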
Wen, Tingxi; Zhang, Zhongnan; Qiu, Ming; Zeng, Ming; Luo, Weizhen
2017-01-01
The computer mouse is an important human-computer interaction device, but patients with physical finger disabilities are unable to operate it. Surface EMG (sEMG) can be monitored by electrodes on the skin surface and reflects neuromuscular activity; limb auxiliary equipment can therefore be controlled through sEMG classification to help physically disabled patients operate the mouse. The objective was to develop a new method to extract sEMG generated by finger motion and to apply novel features to classify it. A window-based data acquisition method was presented to extract signal samples from sEMG electrodes. Afterwards, a two-dimensional matrix image based feature extraction method, which differs from the classical methods based on the time or frequency domain, was employed to transform signal samples into feature maps used for classification. In the experiments, sEMG data samples produced by the index and middle fingers at the click of a mouse button were acquired separately. Then, characteristics of the samples were analyzed to generate a feature map for each sample. Finally, machine learning classification algorithms (SVM, KNN, RBF-NN) were employed to classify these feature maps on a GPU. The study demonstrated that all classifiers can identify and classify sEMG samples effectively; in particular, the accuracy of the SVM classifier reached up to 100%. The signal separation method is a convenient, efficient and quick method that can effectively extract the sEMG samples produced by fingers. In addition, unlike the classical methods, the new method makes it possible to extract features by appropriately enlarging the sample signals' energy. The classical machine learning classifiers all performed well using these features.
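The window-then-feature-map pipeline can be sketched end to end on synthetic signals (the sinusoids stand in for real sEMG, the feature map here is a phase-invariant spectral variant of the paper's 2-D matrix image, and a nearest-centroid classifier stands in for the SVM/KNN/RBF-NN classifiers the paper actually uses):

```python
import numpy as np

def windows(signal, width=256, step=128):
    # Window-based acquisition: slice the stream into overlapping windows
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, step)]

def feature_map(window, rows=8):
    # 2-D matrix "image" feature; this variant reshapes the magnitude
    # spectrum so the map is invariant to window phase
    spec = np.abs(np.fft.rfft(window))[:-1]     # 128 bins for width 256
    m = spec.reshape(rows, -1)
    return m / (np.linalg.norm(m) + 1e-12)      # energy-normalised

# Synthetic stand-ins for index/middle finger sEMG (distinct dominant bands)
rng = np.random.default_rng(0)
t = np.arange(4000) / 1000.0
signals = {0: np.sin(2 * np.pi * 60 * t), 1: np.sin(2 * np.pi * 120 * t)}

X, y = [], []
for label, sig in signals.items():
    for w in windows(sig + 0.1 * rng.standard_normal(t.size)):
        X.append(feature_map(w).ravel())
        y.append(label)
X, y = np.array(X), np.array(y)

# Nearest-centroid classifier as a lightweight stand-in for SVM/KNN/RBF-NN
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = float((pred == y).mean())
```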
Sung, Heungsup; Yong, Dongeun; Ki, Chang Seok; Kim, Jae Seok; Seong, Moon Woo; Lee, Hyukmin; Kim, Mi Na
2016-09-01
Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1-35.4 with the PK-DNase method, 34.7-39.0 with the PBS method, and 33.9-38.6 with the NALC method. Compared with the control, which were prepared by adding a one-tenth volume of 1:1,000 diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P<0.0001). The PK-DNase method is suitable for homogenizing sputum samples prior to RNA extraction.
Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim
2015-01-01
Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. 
Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033
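The two performance measures used throughout the evaluation reduce to simple count ratios; a minimal sketch with hypothetical counts for one serotyping method:

```python
def sensitivity_ppv(tp, fn, fp):
    # Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts: 230 serotypes correctly called, 10 missed, 15 spurious
sens, ppv = sensitivity_ppv(tp=230, fn=10, fp=15)
```

High sensitivity with low PPV indicates a method that calls minor serotypes aggressively but also produces spurious calls; the microarray's strength was scoring well on both.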