Modern Data Analysis techniques in Noise and Vibration Problems
1981-11-01
...Hilbert transforms of one another. This property reappears in the study of causality: it is what defines a practical criterion characterising a signal, thus, by... the interference between the direct field and the reflected field is characterised locally by the existence of frequencies for which the interference is total
1988-10-01
TURBISTAN). Concerning turbine-blade materials, the question remains open. The characterisation of materials for... mechanical characterisation. Given the cost of characterising and qualifying materials for turbomachine disks, it is important to try... to concentrate one's own forces: one possible solution is to establish standard characterisation plans. To federate, within a group of
Land Operations in the Year 2020 (LO2020) (Operations terrestres a l’horizon 2020 (LO2020)).
1999-03-01
CAPABILITIES Technologies ... APPENDIX 4 to ANNEX V: SHORT-LISTED TECHNOLOGIES CHARACTERISED REGARDING CC 1. top... CHARACTERISATION MATRIX. Legend: no relevance / weak relevance / good relevance / strong relevance ... KEY TECHNOLOGIES CHARACTERISED REGARDING COST (34
1979-10-01
AD-A095 392, Defence Research Establishment Ottawa (Ontario), Oct 79 ... Caracterisation physique des sols, Camp militaire de Petawawa (Physical characterisation of the soils, Petawawa military camp) ... Défense nationale, Ottawa, Ontario ... Caracterisation physique des sols, Base des Forces canadiennes Petawawa (Physical characterisation of the soils, Canadian Forces Base Petawawa)
1994-02-01
was intended to serve as a forum for an exchange of information on this important subject. In this case, characterisation refers to the analysis of the behaviour of the... development and scale-up phases, but also to component characterisation and demonstration activities. In the case of materials
Environmental Life Cycle Techniques for New Weapons Acquisition Systems
2004-09-01
Amount) / Total grenade / War. Characterisation factors (MJ/nn), Unit (Impact indicator), Total grenade / War: Total of all compartments, MJ, 59200 / 82700 ... Characterisation factors (kg CFC-11 eq/nn), Unit (Impact indicator), Total grenade / War: Total of all compartments, kg CFC-11 eq, 0.000429 / 0.00046. Remaining... Substance / Compartment / Unit (Amount) / Total grenade / War. Characterisation factors (kg C2H2/nn), Unit (Impact indicator), Total grenade / War: Total of all
Airframe/Propulsion Interference
1975-03-01
slender body, at incidence, is presented in figure 14. The lower surface is characterised by a divergence of the streamlines and by a reduction of the ... numbers... of the dissipative zone in R and A/S as a function of the Mach number at reattachment Mq and of a shape parameter Hiq characterising the... taken from an experimental study presented in ref. [13]. It is characterised by a fairly high slenderness and a truncation ratio ...
2003-09-26
on the environmental characterisation of their major training areas, in order to improve knowledge of the impacts of all types... and geographic. In 2001, the first phase of this study consisted of a partial hydrogeological characterisation of the northern portion of the... training area. This first phase involved drilling 42 wells in order to characterise the dynamics and quality of the groundwater. In
1991-07-01
example, characterised by the existence of a significant overspeed peak; it is the Reynolds number that governs the "transitional character" of the... characterisation of a three-dimensional sheared flow around a swept wing. A previous study had been carried out whose aim was to qualify the... depending on the configurations studied. 4. CHARACTERISATION OF VORTEX BREAKDOWN. It is accepted, following [22], that vortex breakdown is characterised
Stochastic Pseudo-Boolean Optimization
2011-07-31
Right-Hand Side," 2009 INFORMS Annual Meeting, San Diego, CA, October 11-14, 2009. References: [1] A. Ghouila-Houri. Caracterisation des matrices... Optimization, 10:7-21, 2005. [30] P. Camion. Caracterisation des matrices unimodulaires. Cahiers Centre Etudes Rech., 5(4), 1963. [31] P. Camion
A New Approach to Electrical Characterization of Exploding Foil Initiators
1998-12-01
processed to illustrate the methodology. ABSTRACT: In a previous study of the electrical characterisation of exploding foil initiators (EFIs), we... applicable to the electrical characterisation of EFIs, and describes the appropriate experimental methodology. This methodology is illustrated by the
1993-11-01
are characterised by continuous schlieren imaging of the part of the mixing layer located beneath the jet (figure 3), as well as by tomoscopy of the... characterise the waves). These waves seem to come from the region of the ejector, just... a Mach disk. In figure 4, one observes the trace of the
Thermal characterisation of cooling modules for concentrated photovoltaics (Caracterisation thermique de modules de refroidissement pour la photovoltaique concentree)
NASA Astrophysics Data System (ADS)
Collin, Louis-Michel
To make solar-cell technology profitable, operating and manufacturing costs must be reduced. The photovoltaic material used has an appreciable impact on the final price per unit of energy produced. One technology under development consists in concentrating light onto the solar cells in order to reduce the quantity of material required. Concentrating the light, however, raises the cell temperature and thereby lowers its efficiency, so the cell must be cooled effectively. The thermal load to be evacuated from the cell passes through the receiver, the component that physically supports the cell. The receiver transmits the heat flux from the cell to a cooling system; together, the receiver and cooling system form the cooling module. The receiver surface is usually larger than that of the cell, so heat spreads laterally in the receiver as it passes through it. This spreading provides a larger effective area, reducing the apparent thermal resistance of the thermal interfaces and of the downstream cooling system. At present, no facility or method appears to exist for characterising the thermal performance of receivers. This project presents a new characterisation technique for quantifying the thermal spreading of the receiver within a cooling module. Performance indices are derived from thermal resistances measured experimentally on the modules. A characterisation platform was built to measure these performance criteria experimentally: it injects a controlled heat flux onto a localised zone of the upper surface of the receiver, replacing the heat flux normally supplied by the cell.
A cooling system is installed on the opposite surface of the receiver to evacuate the injected heat. The results also highlight the importance of the thermal interfaces and the benefit of spreading the heat in the metallic layers before conducting it through the dielectric layers of the receiver. Receivers of multiple compositions were characterised, demonstrating that the tools developed can quantify thermal spreading capacity. The repeatability of the platform was evaluated by analysing the spread of repeated measurements on selected samples; the platform demonstrates a precision and reproducibility of +/- 0.14 °C/W. This work provides design tools for receivers by proposing a measurement that makes it possible to compare and evaluate the thermal impact of receivers integrated into a cooling module. Keywords: solar cell, photovoltaics, heat transfer, concentration, thermal resistance, characterisation platform, cooling.
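The performance indices mentioned are thermal resistances. As a minimal sketch, assuming the usual steady-state definition (the function name and all numeric values below are illustrative, not measurements from this work):

```python
# Sketch of the figure of merit: an apparent thermal resistance
# R = dT / Q between the heated zone on top of the receiver and the
# coolant.  All values are made up for illustration.
def thermal_resistance(t_source_c, t_coolant_c, power_w):
    """Apparent thermal resistance (degC/W) of the cooling module."""
    return (t_source_c - t_coolant_c) / power_w

# Example: 30 W injected, heated zone at 65 degC, coolant at 25 degC.
r = thermal_resistance(65.0, 25.0, 30.0)
print(f"{r:.2f} degC/W")
```

Better lateral spreading in the receiver lowers this apparent resistance, which is exactly what the reported +/- 0.14 °C/W repeatability is measured against.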
NASA Astrophysics Data System (ADS)
Salissou, Yacoubou
The overall objective of this thesis is to improve the characterisation of the macroscopic properties of rigid- or limp-frame porous materials through inverse and indirect approaches based on acoustic measurements made in an impedance tube. The accuracy of the inverse and indirect approaches used today is mainly limited by the quality of the acoustic measurements obtained in the impedance tube. Consequently, this thesis addresses four problems that contribute to this overall objective. The first concerns an accurate characterisation of the open porosity of porous materials; this property links the measured dynamic acoustic properties of a porous material to the effective properties of its fluid phase as described by semi-phenomenological models. The second problem deals with the assumption that porous materials are symmetric through their thickness, and proposes an index and a criterion to quantify the asymmetry of a material. This assumption is often a source of inaccuracy in inverse and indirect impedance-tube characterisation methods, and the proposed asymmetry criterion makes it possible to verify the applicability and accuracy of these methods for a given material. The third problem aims at a better understanding of the sound-transmission problem in an impedance tube, presenting for the first time an exact wave-decomposition treatment of the problem. This development clearly establishes the limits of the many existing methods based on 2-, 3- or 4-microphone transmission tubes. A better understanding of the transmission problem matters because such measurements are what allow methods to extract, successively, the transfer matrix of a porous material and its intrinsic dynamic properties, such as its characteristic impedance and complex wavenumber.
Finally, the fourth problem concerns the development of a new exact 3-microphone transmission method applicable to symmetric and non-symmetric materials or systems. In the symmetric case, this approach is shown to yield a marked improvement in the characterisation of the intrinsic dynamic properties of a material. Keywords: porous materials, impedance tube, sound transmission, sound absorption, acoustic impedance, symmetry, porosity, transfer matrix.
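For a symmetric layer, the final extraction step the abstract describes has a closed form. A hedged sketch, assuming the standard transfer matrix of a symmetric porous layer of thickness d (T11 = cos(kd), T12 = j·Zc·sin(kd), T21 = j·sin(kd)/Zc); the numeric values are invented purely for a round-trip check:

```python
import cmath

def intrinsic_properties(T11, T12, T21, d):
    """Characteristic impedance Zc and complex wavenumber k of a
    symmetric layer of thickness d, from its measured transfer matrix."""
    zc = cmath.sqrt(T12 / T21)   # Zc^2 = T12 / T21
    k = cmath.acos(T11) / d      # T11 = cos(k d)
    return zc, k

# Round-trip check with made-up values Zc = 600 - 200j, k = 40 - 5j, d = 0.05 m.
zc0, k0, d = 600 - 200j, 40 - 5j, 0.05
T11 = cmath.cos(k0 * d)
T12 = 1j * zc0 * cmath.sin(k0 * d)
T21 = 1j * cmath.sin(k0 * d) / zc0
zc, k = intrinsic_properties(T11, T12, T21, d)
print(abs(zc - zc0) < 1e-6 and abs(k - k0) < 1e-6)
```

The principal branches of `sqrt` and `acos` recover Zc and k directly when Re(Zc) > 0 and 0 < Re(kd) < pi; outside those ranges a branch correction would be needed.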
NASA Astrophysics Data System (ADS)
Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.
2014-12-01
Streamflow uncertainty estimation has recently received considerable attention in the literature. A dynamic rating-curve assessment method has been introduced (Morlot et al., 2014). This dynamic method computes a rating curve for each gauging and a continuous streamflow time series, while quantifying streamflow uncertainty. The uncertainty estimate accounts for many sources of error (water level, rating-curve interpolation and extrapolation, gauging aging, etc.) and produces an estimated streamflow distribution for each day. To characterise streamflow uncertainty, a probabilistic framework was applied to a large sample of hydrometric stations (>250) of the Division Technique Générale (DTG) of the Électricité de France (EDF) hydrometric network in France. A reliability diagram (Wilks, 1995) was constructed for selected stations, based on the streamflow distribution estimated for a given day compared with a real streamflow observation estimated via a gauging. To build a reliability diagram, we computed the probability of each observed streamflow (gauging) under the estimated streamflow distribution. The reliability diagram then checks that the distribution of non-exceedance probabilities of the gaugings follows a uniform law (i.e., the quantiles should be equiprobable). From the shape of the reliability diagram, the probabilistic calibration is characterised (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies in which reliability diagrams have different statistical properties over different periods. Compared with our knowledge of the river-bed morphology dynamics of these hydrometric stations, we show how the reliability diagram provides invaluable information on river-bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes.
Hence, careful analysis of reliability diagrams reconciles statistics with long-term river-bed morphology processes. This knowledge improves the real-time management of hydrometric stations through a better characterisation of erosion/sedimentation processes and of the stability of each station's hydraulic control.
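The uniformity check behind the reliability diagram can be sketched as follows; the Gaussian daily distributions and all numbers are illustrative assumptions, not EDF data or the authors' operational model:

```python
import numpy as np
from math import erf, sqrt

def pit_values(gaugings, means, sds):
    """Non-exceedance probability of each gauging under the estimated
    daily streamflow distribution (assumed Gaussian here purely for
    illustration)."""
    z = (np.asarray(gaugings) - means) / (sds * sqrt(2.0))
    return np.array([0.5 * (1.0 + erf(v)) for v in z])

def reliability_curve(pit, n_bins=10):
    """Observed frequency of PIT values below each nominal quantile.
    A calibrated estimate lies on the 1:1 diagonal (uniform PIT)."""
    nominal = np.linspace(0.1, 1.0, n_bins)
    observed = np.array([(pit <= q).mean() for q in nominal])
    return nominal, observed

# Synthetic check: when the estimated distributions match the truth,
# the reliability curve hugs the diagonal (near-uniform PIT).
rng = np.random.default_rng(0)
means = rng.uniform(10.0, 100.0, 5000)   # estimated daily mean flow (m3/s)
sds = 0.1 * means                        # estimated daily spread
gaugings = rng.normal(means, sds)        # "observations" from the truth
nominal, observed = reliability_curve(pit_values(gaugings, means, sds))
print(np.abs(observed - nominal).max() < 0.05)
```

Systematic departure of `observed` from `nominal` would signal the underdispersion, overdispersion, or bias discussed above.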
Normal-incidence C/Si multilayer mirrors for the 25-40 nm spectral region (Miroirs multicouches C/Si a incidence normale pour la region spectrale 25-40 nanometres)
NASA Astrophysics Data System (ADS)
Grigonis, Marius
We have proposed a new material combination, C/Si, for fabricating normal-incidence multilayer mirrors in the 25-40 nm spectral region. Experimental results show that this combination achieves a reflectivity of about 25% in the 25-33 nm region and about 23% in the 33-40 nm region, the highest values obtained to date in the 25-40 nm region. The multilayer mirrors were subsequently characterised by transmission electron microscopy, by various X-ray diffraction techniques, and by AES and ESCA electron spectroscopies. The resistance of the mirrors to elevated temperatures was also studied. The results of these characterisation methods indicate that this combination has very promising characteristics for application as a soft X-ray mirror.
Massive-star formation in spiral galaxies (Formation des etoiles massives dans les galaxies spirales)
NASA Astrophysics Data System (ADS)
Lelievre, Mario
The aim of this thesis is to describe the formation of massive stars in spiral galaxies of various morphological types. Deep Hα imaging combined with a robust HII-region identification method allowed the properties (position, size, luminosity, star-formation rate) of numerous HII regions to be detected and measured in the inner disk (R < R25) of ten galaxies, but also at their periphery (R ≥ R25). In general, the distribution of HII regions shows no evidence of morphological structure at R < R25 (spiral arms, ring, bar) unless the analysis is restricted to the largest or most luminous HII regions. The distribution of HII regions, as well as their size and luminosity, is however subject to strong selection effects that depend on galaxy distance and must be corrected by bringing the sample to a common spatial resolution. The luminosity functions show that the brightest HII regions tend to form in the inner portion of the disk. Moreover, analysis of the slopes reveals a strong linear correlation with morphological type. No peak is observed in the luminosity functions at log L ≈ 37 that would reveal the transition between ionization-bounded and density-bounded HII regions. A cubic relation is obtained between the size and luminosity of the HII regions, although this relation varies significantly between the inner disk and the periphery of a given galaxy. The density and dynamics of the gas and stars could significantly influence the stability of molecular clouds against gravitational collapse. On the one hand, the extent of the HII-region disk for five galaxies of the sample coincides with that of the atomic hydrogen.
On the other hand, analysis of the stability of the galactic disks shows that including the density of the old stellar population better constrains the radius beyond which no star formation should occur in these galaxies.
Development and applications of a single-frequency erbium-doped fibre laser (Realisation et Applications D'un Laser a Fibre a L'erbium Monofrequence)
NASA Astrophysics Data System (ADS)
Larose, Robert
The incorporation of rare-earth ions into the glass matrix of an optical fibre has enabled the emergence of all-fibre amplifying components. The goal of this thesis is, on the one hand, to analyse and model such a device and, on the other, to fabricate and then characterise a fibre amplifier and oscillator. Using a custom-made, highly erbium-doped fibre, a tunable fibre laser is built that operates in a multiple-longitudinal-mode regime with a 1.5 GHz linewidth, and also as a single-frequency source with a 70 kHz linewidth. The laser is then used to characterise a Bragg grating written by photosensitivity in an optical fibre. The tuning technique also allows locking to the bottom of an acetylene resonance; the laser then holds the central position of the line to within 1 MHz, thereby correcting the mechanical drifts of the cavity.
Structural and morphological characterisation of airless-spray CuInS2 and In-S thin films (Caractérisations structurale et morphologique des couches minces de CuInS2 et d'In-S "airless spray")
NASA Astrophysics Data System (ADS)
Kamoun, N.; Belgacem, S.; Amlouk, M.; Bennaceur, R.; Abdelmoula, K.; Belhadj Amara, A.
1994-03-01
We have prepared CuInS2 thin layers by airless spray ("S.P.A.") for use as the absorber in a photovoltaic cell. X-ray diffraction analysis has shown that these layers are well crystallized, with a preferred (112) principal orientation for a concentration ratio x = [Cu(I)]/[In(III)] = 1.1 in the sprayed solution. After heat treatment under vacuum the crystallization is clearly improved. Structural analysis of the thin CuInS2 layers has revealed that secondary phases of In2S3 and In6S7 are present. We have therefore prepared, by the same technique, thin In-S layers whose structural and morphological properties have been studied. This analysis has shown that the In-S layers are well crystallized for a ratio y = [In(3+)]/[S(2-)] = 0.6 in the spray solution, and that they then consist essentially of β-In2S3, whereas the In6S7 phase appears at the expense of the β-In2S3 phase for y = 0.75.
Adnet, J J; Pinteaux, A; Pousse, G; Caulet, T
1976-04-01
Three simple methods (adapted from optical techniques) are proposed for characterising normal and pathological elastic tissue in electron microscopy on thin and ultrathin sections. Two of these methods (orcein and resorcin-fuchsin) appear specific for arterial and breast-cancer elastic tissue. Weigert's method gives the best contrast.
Practical characterisation of quantum systems, and 2D self-correcting quantum memories (Caracterisation pratique des systemes quantiques et memoires quantiques auto-correctrices 2D)
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier
This thesis tackles two major problems of quantum information: how can a quantum system be characterised efficiently, and how can quantum information be stored? It is therefore divided into two distinct parts linked by common technical elements; each is of independent interest and self-contained. Practical characterisation of quantum systems. Quantum computing demands very fine control of quantum systems composed of several particles, for example atoms confined in an electromagnetic trap or electrons in a semiconductor device. Characterising such a quantum system consists in obtaining information about its state through experimental measurements. Each measurement, however, perturbs the system, and must therefore be performed after the system has been re-prepared identically. The desired information is then reconstructed numerically from the ensemble of experimental data. Experiments to date have aimed at reconstructing the complete quantum state of the system, in particular to demonstrate the ability to prepare entangled states, in which the particles exhibit non-local correlations. The tomography procedure currently used, however, is tractable only for systems composed of a small number of particles, so characterisation methods for large systems are urgently needed. In this thesis, we propose two more targeted theoretical approaches for characterising a quantum system with only a reasonable experimental and numerical effort. The first consists in estimating the distance between the state produced in the laboratory and the target state the experimenter intended to prepare. We present a protocol, called certification, that requires fewer resources than tomography and is very efficient for several classes of states important for quantum information processing.
The second approach, variational tomography, proposes to reconstruct the state by restricting the search to a variational class rather than the immense space of all possible states. Since a variational state is described by a small number of parameters, a small number of experiments can suffice to identify the variational parameters of the experimental state. We show that this is the case for two widely used variational classes, matrix product states (MPS) and the multi-scale entanglement renormalization ansatz (MERA). 2D self-correcting quantum memories. A self-correcting quantum memory is a physical system that preserves quantum information for a macroscopic time; it would be the quantum equivalent of the hard drive or flash memory in today's computers, and such a device would be of great interest for quantum computing. A self-correcting quantum memory is initialised by preparing a ground state, that is, a stationary state of lowest energy. To store quantum information, several distinct ground states are needed, each corresponding to a different value of the memory; more precisely, the ground space must be degenerate. In this thesis we consider systems of particles arranged on a two-dimensional (2D) lattice, like pieces on a chessboard, which are easier to build than 3D systems. We identify two criteria for self-correction. First, the quantum memory must be stable against perturbations from the environment, such as an applied external magnetic field; this leads us to consider 2D topological systems, whose degrees of freedom are intrinsically robust to local environmental perturbations. Second, the quantum memory must be robust against a thermal environment.
One must ensure that thermal excitations do not carry two distinct ground states to the same excited state, otherwise the information is lost. Our main result shows that no 2D topological system is self-correcting: the environment can change the ground state by randomly moving small packets of energy, a mechanism consistent with the intuition that every topological system admits localised excitations, or quasiparticles. The interest of this result is twofold. On the one hand, it directs the search for a self-correcting system by showing that it must either (i) be three-dimensional, which is experimentally difficult to realise, or (ii) rest on new protection mechanisms going beyond energetic considerations. On the other hand, this result constitutes a first step towards a formal proof of the existence of quasiparticles in any topological system.
Characterising the Ionosphere (La caracterisation de l’ionosphere)
2009-01-01
and these emissions are characteristic of proton precipitation. The hydrogen produced by charge-exchange collisions has ... the same kinetic energy as the original proton, but does not gyrate around the magnetic field. The precipitation therefore spreads horizontally... latitudinal extent of the D-region ionization [Rodger et al., 2006]. Depending on their energy, these energetic protons also penetrate into the middle
A poorly tolerated wide-QRS tachycardia in an infant (Une tachycardie à QRS large mal tolérée chez un nourrisson)
Affangla, Désiré Alain; Leye, Mohamed; Simo, Angèle Wabo; D’Almeida, Franck; Sarr, Thérèse Yandé; Phiri, Adamson; Kane, Adama
2017-01-01
Poorly tolerated wide-QRS tachycardias in infants pose diagnostic and emergency-management problems. We report a case of wide-QRS tachycardia in a 35-day-old infant admitted for cardio-circulatory distress. The heart was morphologically normal on Doppler echocardiography. A loading dose of amiodarone failed to terminate the tachycardia; sinus rhythm was restored after cardioversion with a Lifeline semi-automatic external defibrillator. Maintenance treatment with oral amiodarone was instituted, and the patient remained in sinus rhythm at 3 months. PMID:28904685
Optimization of Laminated Composite Plates
1989-09-01
plane loads has already been studied, and a number of technical publications and software packages can be found. In the present report, an optimization of... described above. There is no difficulty in any case, and commercial software, from personal computers to macro-systems, is available. In the chapter... Reforzado y su Aplicacion a los Medios de Transporte", Ph.D. thesis, University of Zaragoza, Spain, 1984. 77. Miravete A., "Caracterisation et mise au point d'un
NASA Astrophysics Data System (ADS)
Lebel, Larry
An experimental procedure was developed to characterise the degradation mechanisms and durability of ceramic-matrix composite (CMC) materials in a gas-turbine static-component application. While most published characterisation tests on CMC materials have been performed under controlled loading conditions, the present research attempts to reproduce the stress relaxation that normally occurs in a static component at high temperature. In the proposed experiment, a planar dumbbell-shaped specimen is cyclically heated on one face and cooled on the other while its displacements are constrained. The resulting bending stress at the centre of the specimen, measured by a load cell, corresponds to the bending stress previously predicted at the centre of the panels of a generic combustion chamber. A multilayer CMC material composed of a porous alumina matrix and Nextel(TM) 720 fibres was used to develop the experiment. Calibration tests were first performed using an infrared-lamp heating system, reaching up to 1160 °C at the specimen surface. A CO2 laser system was then used to perform high-power degradation tests, reaching end-of-test surface temperatures exceeding the material's 1200 °C limit and through-thickness temperature differences of more than 1000 °C. Under the imposed constant-amplitude heating power, damage accumulation raised the surface temperature and the temperature gradients through the material.
A reduction of stress over time was observed owing to creep, cracking and delamination of the material under the constrained-displacement condition, leading to stabilisation of the damage level at a certain depth depending on the initial thermal stress. The characterisation procedure developed proves to be a promising tool for developing new types of materials, as well as for comparing the durability of existing materials under conditions representative of gas-turbine static components.
Assessment of Infrared Sounder Radiometric Noise from Analysis of Spectral Residuals
NASA Astrophysics Data System (ADS)
Dufour, E.; Klonecki, A.; Standfuss, C.; Tournier, B.; Serio, C.; Masiello, G.; Tjemkes, S.; Stuhlmann, R.
2016-08-01
For the preparation and performance monitoring of the future generation of hyperspectral infrared sounders dedicated to precise vertical profiling of the atmospheric state, such as the Meteosat Third Generation hyperspectral InfraRed Sounder (MTG-IRS), a reliable assessment of the instrument radiometric error covariance matrix is needed. Ideally, an in-flight estimation of the radiometric noise is recommended, since certain noise sources can be driven by the spectral signature of the observed Earth/atmosphere radiance. Unknown correlated noise sources, generally related to incomplete knowledge of the instrument state, can also be present, so a characterisation of the noise spectral correlation is also needed. A methodology relying on the analysis of post-retrieval spectral residuals was designed and implemented to derive the covariance matrix in flight from Earth-scene measurements. The methodology was successfully demonstrated using IASI observations as MTG-IRS proxy data, and made it possible to highlight anticipated correlation structures explained by apodization and micro-vibration (ghost) effects. This analysis is corroborated by a parallel estimation based on an IASI black-body measurement dataset and by the results of an independent micro-vibration model.
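The residual-based covariance estimate can be sketched as follows; array shapes, the synthetic adjacent-channel mixing, and all numbers are illustrative assumptions, not the actual IASI/MTG-IRS processing:

```python
import numpy as np

def residual_covariance(residuals):
    """residuals: (n_soundings, n_channels) array of post-retrieval
    spectral residuals, one residual spectrum per Earth-scene sounding.
    Returns the empirical covariance and correlation matrices."""
    r = residuals - residuals.mean(axis=0)   # remove any systematic bias
    cov = r.T @ r / (r.shape[0] - 1)
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    return cov, corr

# Synthetic check: white noise plus mixing between neighbouring channels,
# mimicking the off-diagonal structure that apodization leaves behind.
rng = np.random.default_rng(1)
white = rng.normal(size=(20000, 50))
resid = white + 0.5 * np.roll(white, 1, axis=1)  # adjacent-channel mixing
cov, corr = residual_covariance(resid)
print(round(corr[10, 11], 2))  # clear first-off-diagonal correlation
```

The recovered first-off-diagonal correlation (analytically 0.5/1.25 = 0.4 for this mixing) is the kind of structure the paper attributes to apodization and micro-vibration ghosts.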
NASA Astrophysics Data System (ADS)
Amouriq, Yves; Guedon, Jeanpierre; Normand, Nicolas; Arlicot, Aurore; Benhdech, Yassine; Weiss, Pierre
2011-03-01
Bone microarchitecture is a predictor of bone quality and bone disease. It can only be measured on a bone biopsy, which is invasive and not available in all clinical situations. Texture analysis on radiographs is a common way to investigate bone microarchitecture, but the relationship between three-dimensional histomorphometric parameters and two-dimensional texture parameters is not well established, with generally poor results. The aim of this study is to acquire angulated radiographs of the same region of interest and determine whether a better relationship can be established between texture analysis on several radiographs and histomorphometric parameters. Computed-radiography images of dog (Beagle) mandible sections in the molar regions were compared with high-resolution micro-CT (computed tomography) volumes. Four radiographs at a 27° angle (up, down, left, right, using a Rinn ring and a customized arm-positioning system) were acquired from the initial radiograph position. Bone texture parameters were calculated on all images, and also on new images obtained by differencing the angulated images. Fractal values in different trabecular areas provide some characterisation of bone microarchitecture.
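One fractal texture descriptor of the kind mentioned can be sketched as a box-counting dimension estimate on a binarized region of interest; this is an illustrative choice, not the authors' exact pipeline:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a 2D boolean image: count the
    boxes of each size that touch foreground, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square is 2-dimensional under this estimator.
img = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(img), 1))  # -> 2.0
```

On a binarized trabecular region the estimate falls between 1 and 2, and changes in it are what such texture studies correlate with the 3D histomorphometric parameters.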
Towards graphene-based quantum dots
NASA Astrophysics Data System (ADS)
Branchaud, Simon
Graphene is a carbon-based material that has been studied extensively since 2004. A great many articles have been published on the electronic, optical, and mechanical properties of this material. This work deals with the study of conductance fluctuations in graphene, and with the fabrication and characterisation of nanostructures etched in sheets of this 2D crystal. Low-temperature magnetoresistance measurements were made near the charge neutrality point (CNP) as well as at high electronic density. Two origins are found for the conductance fluctuations near the CNP: mesoscopic oscillations arising from quantum interference, and so-called quantum Hall fluctuations appearing at higher field (>0.5 T), which seem to follow the filling factors associated with graphene monolayers. The latter fluctuations are attributed to the charging of localized states and reveal a precursor of the quantum Hall effect, which itself does not appear below 2 T. The parameters characterising the sample can be extracted from these data. At the end of this work, transport measurements are performed in graphene constrictions and islands in which quantum dots are formed. From these measurements, the important parameters of the quantum dots, such as their size and charging energy, are extracted.
NASA Astrophysics Data System (ADS)
Amrani, Salah
Aluminium is produced in an electrolysis cell, an operation that uses carbon anodes. Assessing the quality of these anodes is essential before they are used. The presence of cracks in the anodes disturbs the electrolysis process and reduces its performance. This project was undertaken to determine the impact of the various anode-manufacturing process parameters on the cracking of dense anodes. These parameters include those of green-anode forming, the properties of the raw materials, and baking. A literature review covering all aspects of carbon-anode cracking was carried out to compile previous work. A detailed methodology was developed to guide the work and reach the stated objectives. Most of this document is devoted to discussing the results obtained in the UQAC laboratory and at the industrial level. Regarding the studies carried out at UQAC, part of the experimental work is devoted to investigating the various cracking mechanisms in the dense anodes used in the aluminium industry. The approach was first based on a qualitative characterisation of the cracking mechanism at the surface and at depth. A quantitative characterisation was then performed to determine the distribution of crack width along the crack's entire length, as well as the percentage of its area relative to the total area of the sample. This study was carried out using image-analysis techniques applied to the cracking of a baked-anode sample. Surface and depth analysis of this sample clearly showed crack formation over a large part of the analysed surface.
The other part of the work is based on the characterisation of defects in industrially manufactured green-anode samples. This approach consisted of determining the profile of various physical properties. Measuring the distribution of electrical resistivity over the entire sample was the technique used to locate cracks and macro-pores. Optical microscopy and image analysis, for their part, made it possible to characterise the cracked zones while determining the structure of the analysed samples at the microscopic scale. Other tests were conducted on cylindrical anode cores 50 mm in diameter and 130 mm long. These were baked in a furnace at UQAC at different heating rates in order to determine the influence of baking parameters on crack formation in such cores. The baked-anode samples were characterised using scanning electron microscopy and ultrasound. The last part of the work carried out at UQAC is a study of the characterisation of anodes made in the laboratory under different operating conditions. The evolution of the quality of these anodes was followed using several techniques. The cooling-temperature evolution of green laboratory anodes was measured, and a mathematical model was developed and validated against the experimental data, with the aim of estimating the cooling rate as well as the thermal stress. All the anodes produced were characterised before baking by determining certain physical properties (electrical resistivity, apparent density, optical density, and defect percentage).
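The idea of locating cracks and macro-pores from a resistivity distribution can be illustrated with a simple anomaly threshold: cells whose resistivity deviates strongly from the sample mean are flagged as likely defects. This is a hypothetical sketch on synthetic data, not the study's actual measurement procedure.

```python
import numpy as np

def flag_defects(resistivity_map, n_sigma=3.0):
    """Flag likely cracks/macro-pores as map cells whose electrical
    resistivity deviates from the sample mean by more than n_sigma
    standard deviations (a simple outlier threshold)."""
    mu = resistivity_map.mean()
    sigma = resistivity_map.std()
    return np.argwhere(np.abs(resistivity_map - mu) > n_sigma * sigma)

# Synthetic 20x20 map: uniform baseline with one high-resistivity cell
rng = np.random.default_rng(1)
rho = 50.0 + 0.5 * rng.standard_normal((20, 20))
rho[7, 12] = 80.0  # simulated crack location
defects = flag_defects(rho)
```

A crack interrupts the conduction path, so it shows up as a local resistivity spike against the bulk value; the threshold separates it from normal material scatter.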
Tomography and electrical-resistivity distribution, both non-destructive techniques, were used to evaluate the internal defects of the anodes. During baking of the laboratory anodes, the evolution of electrical resistivity was monitored and the devolatilisation stage was identified. Some anodes were baked at different heating rates (low, medium, high, and a combined rate) with the objective of finding the best baking conditions to minimise cracking. Other anodes were baked to different baking levels in order to identify at which stage of the baking operation cracking begins to develop. After baking, the anodes were recovered and characterised again using the same techniques as before. The main objective of this part was to reveal the impact on the cracking problem of the various parameters distributed along the entire anode production chain. Butt percentage, pitch quantity, and particle-size distribution are important factors to consider when studying the effect of raw materials on cracking. Regarding the effect of forming-process parameters on the same problem, vibration time, compaction pressure, and the cooling process formed the basis of this study. Finally, the influence of the baking phase on the appearance of cracking was taken into account through the heating rate and the baking level. The industrial work was done during a measurement campaign aimed at evaluating carbon-anode quality in general and investigating the cracking problem in particular, and then at revealing the effects of the various parameters on cracking. Twenty-four baked anodes were used.
They were made with different raw materials (pitch, coke, butts) and under various conditions (pressure, vibration time). A crack-density parameter was computed from visual inspection of the cracking of the cores. This makes it possible to classify the cracks into several categories based on criteria such as crack type (horizontal, vertical, and inclined), longitudinal location (bottom, middle, and top of the anode), and transverse location (left, centre, and right). The effects on cracking of the raw materials, the green-anode forming parameters, and the baking conditions were studied. The cracking of dense carbon anodes is a serious problem for the primary aluminium industry. This project revealed various cracking mechanisms, classified cracking by several criteria (position, type, location), and evaluated the impact of different parameters on cracking. The studies in the baking area offered the possibility of improving the operation and reducing anode cracking. The work also identified techniques capable of evaluating anode quality (ultrasound, tomography, and electrical-resistivity distribution). Carbon-anode cracking is considered a complex problem because its appearance depends on several parameters distributed along the entire production chain. Several new studies were carried out in this project, which lend originality to the research done on carbon-anode cracking for the primary aluminium industry.
The studies carried out in this project add, on the one hand, scientific value toward a better understanding of the anode-cracking problem and, on the other hand, attempt to propose methods that can reduce this problem at the industrial scale.
Evaluation of bone quality by ultrasonic guided waves
NASA Astrophysics Data System (ADS)
Abid, Alexandre
The characterisation of the mechanical properties of cortical bone is an area of interest for orthopaedic research. Such characterisation can provide crucial information for determining fracture risk, detecting the presence of microfractures, or screening for osteoporosis. The two main current techniques for characterising these properties are dual-energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT). These techniques are not optimal and have certain limitations: the effectiveness of DXA is questioned in the orthopaedic community, while QCT requires radiation levels that are problematic for a screening tool. Ultrasonic guided waves have been used for many years to detect cracks, geometry, and mechanical properties of cylinders, pipes, and other structures in industrial settings. Moreover, they are more affordable than DXA and involve no radiation, which makes them promising for probing the mechanical properties of bone. For less than ten years, many research laboratories have been attempting to transpose these techniques to the medical world by propagating ultrasonic guided waves in bone. The work presented here aims to demonstrate the potential of ultrasonic guided waves for tracking the evolution of the mechanical properties of cortical bone. It begins with a general introduction to ultrasonic guided waves and a literature review of the various techniques for applying them to bone. The article written during my master's degree is then presented. The objective of that article is to excite and detect certain guided-wave modes that are sensitive to the deterioration of the mechanical properties of cortical bone.
This work is done by finite-element modelling of the propagation of these waves in two cylindrical bone models. Both models consist of a peripheral layer of cortical bone filled with either trabecular bone or bone marrow. The two models provide two geometries, each suited to circumferential or longitudinal propagation of the guided waves. The results, in which three different modes could be identified, are compared with experimental data obtained on bone phantoms and with theory. The sensitivity of each mode to the various mechanical-property parameters is then studied, which allows conclusions on the potential of each mode for predicting fracture risk or the presence of microfractures.
NASA Astrophysics Data System (ADS)
Boussaboun, Zakariae
Clay minerals are possible catalysts for the formation of graphene from organic precursors such as sucrose. Clays are abundant, safe, and economical for graphene formation. The main objective of this thesis is to demonstrate that it is possible to synthesise a hybrid material containing clay and graphene. These carbon materials based on clay (bentonite and Cloisite) and sucrose were prepared by two methods. The first method has three steps: 1) a contact period between the clay and the carbon source in a humid environment; 2) infiltration of the carbonaceous matter and transformation in a microwave oven; 3) heating at 750°C under nitrogen to obtain the carbon materials. The second method, by contrast, has two steps, without microwaving, and with an increased quantity of carbon source (sucrose and alginate). Characterisation of the material made it possible to follow the reactions transforming the carbon source into graphene. This characterisation was done by FTIR and Raman spectroscopy, thermogravimetric analysis (TGA), specific surface area (BET method), and scanning electron microscopy (SEM). Electrical conductivity was measured with a dielectric spectrometer and, as a function of applied pressure, with a multimeter. The resulting material was incorporated into a low-density polyethylene matrix to obtain a polymer with specific characteristics. Thermal conductivity was then measured according to the ASTM E1530 standard. The sample made with the second method, with one part bentonite to five parts sucrose (M2 B1:S5), indicates the possibility of producing graphene materials from natural resources. The specific surface area increased considerably, from 75.88 m2/g for untreated bentonite to 139.76 m2/g for the M2 B1:S5 sample.
A significant increase in conductivity under pressure (95.3 S/m at 6.5 MPa, compared with 1.45×10-3 S/m for bentonite) and in thermal conductivity in low-density polyethylene at a 10% additive concentration (0.332 W/m·K versus 0.279 W/m·K) was observed for the same M2 B1:S5 sample relative to untreated bentonite. Possible applications include, for example, pressure sensors and actuators.
NASA Astrophysics Data System (ADS)
Fournier, Marie-Claude
A characterisation of atmospheric emissions from operating stationary sources fired with gas and light oil was conducted at the targeted facilities of sites no. 1 and no. 2. The characterisation and the theoretical calculations of atmospheric emissions at the facilities of sites no. 1 and no. 2 show results below regulatory values for normal operating conditions in winter, hence at higher energy demand. Accordingly, at lower energy demand, the contaminant levels in the atmospheric emissions should also be below the applicable municipal and provincial regulations. In view of a new provincial regulation, whose terms have been under discussion since 2005, it would be desirable for the owner of the targeted infrastructure to take part in exchanges with Quebec's Ministère du Développement Durable, de l'Environnement et des Parcs (MDDEP). Indeed, even if the grandfathering principle would make it possible to avoid being subject to the new regulation, applying such a principle is not consistent with sustainable development. The advanced age of the facilities studied calls for planning rigorous maintenance to ensure optimal combustion conditions for the fuel type used. Combustion tests on a regular basis are therefore recommended. To support the process of monitoring and evaluating the environmental performance of the stationary sources, a tool to assist with environmental-information management was developed. In this context, further development of such a tool would facilitate not only the work of the people assigned to the annual inventories but also the communication process among the various stakeholders, both within and between establishments.
This tool would also be a good way to make staff aware of their energy consumption and of their role in the fight against polluting emissions and greenhouse gases. Moreover, the main function of this type of tool is to generate dynamic reports that can be adapted to specific needs. The coherent partitioning of information, combined with module-based development, opens the prospect of applying the tool to other types of activities. In that case, the part shared with the existing modules must be defined and the specific development activities planned following the same approach as presented in this document.
Stromal tumour of the mesentery: an unusual cause of an abdominal mass
Tarchouli, Mohamed; Bounaim, Ahmed; Boudhas, Adil; Ratbi, Moulay Brahim; Ndjota, Bobby Nguele; Ali, Abdelmounaim Ait; Sair, Khalid
2015-01-01
Gastrointestinal stromal tumours (GISTs) are the most common mesenchymal tumours of the digestive tract. They have been recognised as a distinct nosological entity since the discovery of the near-constant expression of the c-Kit protein, detected by immunohistochemical staining of the CD117 antigen. Tumours with the same morphological and immunophenotypic characteristics can rarely appear outside the gastrointestinal tract. We report the case of a 34-year-old woman presenting with a mesenteric tumour mass that turned out to be stromal in nature, with no contact with the intestinal wall. This is a very rare location of stromal tumours, which must be considered preoperatively in order to plan an appropriate and effective therapeutic course. PMID:26327998
NASA Astrophysics Data System (ADS)
Fareh, Fouad
Low-pressure powder injection moulding (LPIM) of metallic powders is a manufacturing technique that produces parts with the complexity of castings but the mechanical properties of wrought parts. However, the debinding and sintering steps have so far been optimised using feedstocks whose optimal mouldability has not yet been demonstrated. Understanding of the rheological properties and segregation of the feedstocks is thus very limited, which is the weak point of the LPIM process. The objective of this research project was to characterise the influence of binders on the rheological behaviour of feedstocks by measuring the viscosity and segregation of the low-viscosity feedstocks used in the LPIM process. To reach this objective, rheological and thermogravimetric tests were conducted on 12 feedstocks. These feedstocks were prepared from spherical Inconel 718 powder (solids loading held constant at 60%) and from waxes, surfactants, or thickening agents. The rheological tests were used, among other things, to compute the mouldability index αSTV of the feedstocks, while the thermogravimetric tests made it possible to evaluate precisely the segregation of the powders in the feedstocks. It was shown that the three feedstocks containing paraffin wax and stearic acid exhibit higher αSTV indices, which is advantageous for metal injection moulding (MIM), but segregate far too much for the moulded part to have good mechanical characteristics. Conversely, the feedstock containing paraffin wax and ethylene-vinyl acetate, as well as the feedstock containing only carnauba wax, segregate little or not at all but have very low αSTV indices: they are therefore difficult to inject.
The best compromise therefore seems to be the feedstocks containing wax (paraffin, beeswax, and carnauba) with a low content of stearic acid and ethylene-vinyl acetate. Furthermore, pre-existing physical laws made it possible to confirm the results of the rheological and thermogravimetric tests, and also to bring out the influence of segregation on the rheological properties of the feedstocks. These tests also showed the effect of the binder constituents, and of the time spent in the molten state, on the intensity of segregation in the feedstocks. Feedstocks containing stearic acid segregate quickly. Characterisation of feedstocks developed for low-pressure powder injection moulding must therefore use a short-duration method, both to avoid segregation and to measure precisely the flowability of these feedstocks.
Granulomatous spondylodiscitis: mostly tuberculosis, but do not overlook lymphoma
Zinebi, Ali; Rkiouak, Adil; Akhouad, Yousef; Reggad, Ahmed; Kasmy, Zohour; Boudlal, Mostafa; Lho, Abdelhamid Nait; Rabhi, Moncef; Sinaa, Mohamed; Ennibi, Khalid; Chaari, Jilali
2016-01-01
Low-back pain has multiple etiologies whose diagnosis can be very difficult. Primary spinal lymphoma is rare, and its diagnosis requires a biopsy, often CT-guided. A 30-year-old man was hospitalised for inflammatory low-back pain evolving in a context of general deterioration, with, on examination, pain on palpation of the L2-L3 spinous processes and no peripheral tumour syndrome. Laboratory tests showed an inflammatory syndrome. Imaging suggested spondylodiscitis. A first biopsy showed granulomatous osteitis. Clinical and radiological worsening under antitubercular therapy led to reconsidering the diagnosis, and a second biopsy confirmed the diagnosis of lymphoma. The diagnosis of bone tuberculosis, in particular vertebral tuberculosis, requires bacteriological and/or histological confirmation so as not to miss a primary bone lymphoma. PMID:28292061
NASA Astrophysics Data System (ADS)
Danouj, Boujemaa
An important issue affecting the sustainability of power transformers is the systematic and progressive deterioration of the insulation system by the action of partial discharges. Ideally, on-line, non-destructive techniques should be used for the detection and diagnosis of insulation-system failures, in order to determine whether preventive maintenance is required. Huge material losses can thus be avoided, while improving reliability and system availability. Building on a new generation of piezoelectric sensors (high-temperature ultrasonic transducers, HTUTs), recently developed by the Industrial Materials Institute (IMI) in Boucherville (QC, Canada) and offering very attractive features (broadband frequency response, flexible, miniature, economical, etc.), this thesis investigates the applicability of this technology to the partial-discharge problem. The work presents an analysis of the metrological performance of these sensors and demonstrates empirically the consistency of their measurements. It outlines validation results from a comparative study against the measurements of a standard detection circuit. It also presents the potential of these sensors to locate the position of a partial-discharge source by acoustic emission.
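Locating an acoustic-emission source from several sensors typically relies on time differences of arrival (TDOA). A minimal grid-search sketch on synthetic data follows; the sensor layout, wave speed, and grid are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def locate_source(sensors, arrival_times, speed, grid):
    """Grid-search acoustic-emission source localisation: pick the
    grid point whose predicted arrival-time differences (relative
    to the first sensor) best match the measured ones."""
    sensors = np.asarray(sensors, dtype=float)
    measured = np.asarray(arrival_times) - arrival_times[0]
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(sensors - p, axis=1)
        predicted = (d - d[0]) / speed
        err = np.sum((predicted - measured) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Synthetic check: source at (0.3, 0.4) m, assumed speed 1480 m/s
sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
true_pos = np.array([0.3, 0.4])
times = [np.linalg.norm(np.array(s) - true_pos) / 1480.0 for s in sensors]
grid = [np.array([x, y]) for x in np.linspace(0, 1, 51)
                         for y in np.linspace(0, 1, 51)]
est = locate_source(sensors, times, 1480.0, grid)
```

Using differences rather than absolute times removes the unknown discharge instant; in practice the grid search would be replaced by a least-squares solver, but the principle is the same.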
Lithium niobate at high temperature for ultrasonic applications
NASA Astrophysics Data System (ADS)
De Castilla, Hector
The objective of this master's work in applied sciences is to find and then study a piezoelectric material potentially usable in high-temperature ultrasonic transducers. These transducers are currently limited to operating temperatures below 300°C because of the piezoelectric element they contain. Overcoming this limitation would enable non-destructive ultrasonic testing at high temperature. With good electromechanical properties and a high Curie temperature (1200°C), lithium niobate (LiNbO3) is a good candidate. But some studies claim that chemical processes, such as the onset of ionic conductivity or the emergence of a new phase, prevent its use in ultrasonic transducers above 600°C. However, other more recent studies have shown that it can generate ultrasound up to 1000°C and that no conductivity was visible. A hypothesis therefore emerged: ionic conductivity is present in lithium niobate at high temperature (>500°C), but it only weakly affects its properties at high frequencies (>100 kHz). A characterisation of lithium niobate at high temperature is therefore needed to verify this hypothesis. To this end, the resonance method was used. It allows characterisation of most of the electromechanical coefficients with a simple electrochemical impedance spectroscopy measurement and a model explicitly relating the properties to the impedance spectrum. The task is to find the model coefficients that best superimpose the model on the experimental measurements. An experimental bench was built to control the temperature of the samples and measure their electrochemical impedance. Unfortunately, the models currently used for the resonance method are imprecise in the presence of coupling between vibration modes.
This means several samples of different shapes are needed in order to isolate each principal vibration mode. Moreover, these models do not properly account for harmonics and shear modes. A new analytical model covering the whole frequency spectrum was therefore developed to predict shear resonances, harmonics, and couplings between modes. Nevertheless, some resonance modes and some couplings are still not modelled. The characterisation of square samples could be carried out up to 750°C. The results confirm the promise of lithium niobate: the piezoelectric coefficients are stable with temperature, and the elasticity and permittivity behave as expected. A thermoelectric effect resembling ionic conductivity was observed, which prevents quantifying the impact of the latter. Although complementary studies are needed, the intensity of the resonances at 750°C seems to indicate that lithium niobate can be used for high-frequency (>100 kHz) ultrasonic applications.
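The resonance method's core idea, extracting electromechanical coefficients from an impedance spectrum, can be illustrated with the classic Butterworth-Van Dyke (BVD) single-mode equivalent circuit. This is a generic textbook sketch with arbitrary circuit values, not the new multi-mode analytical model developed in the thesis.

```python
import numpy as np

def bvd_impedance(f, C0, C1, L1, R1):
    """Electrical impedance of the Butterworth-Van Dyke equivalent
    circuit: static capacitance C0 in parallel with a motional
    R1-L1-C1 branch representing one vibration mode."""
    w = 2 * np.pi * f
    z_motional = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)
    z_static = 1.0 / (1j * w * C0)
    return 1.0 / (1.0 / z_motional + 1.0 / z_static)

def coupling_from_spectrum(f, z):
    """Extract an effective electromechanical coupling factor from
    the resonance (|Z| minimum) and antiresonance (|Z| maximum)
    frequencies: k_eff^2 = (fa^2 - fr^2) / fa^2."""
    fr = f[np.argmin(np.abs(z))]
    fa = f[np.argmax(np.abs(z))]
    return (fa**2 - fr**2) / fa**2, fr, fa

# Synthetic spectrum: series resonance designed to sit at 1 MHz
f = np.linspace(0.5e6, 1.5e6, 20001)
z = bvd_impedance(f, C0=1e-9, C1=1e-10, L1=2.533e-4, R1=10.0)
k2_eff, fr, fa = coupling_from_spectrum(f, z)
```

Fitting such a model to a measured spectrum (rather than reading off two frequencies) is what lets the method recover the full coefficient set, and is where mode coupling makes simple models break down.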
Femoroacetabular impingement and hip osteoarthritis
Zhang, Charlie; Li, Linda; Forster, Bruce B.; Kopec, Jacek A.; Ratzlaff, Charles; Halai, Lalji; Cibere, Jolanda; Esdaile, John M.
2015-01-01
Objective: To explain the clinical presentation, physical-examination findings, diagnostic criteria, and management options for femoroacetabular impingement (FAI). Sources of information: A literature search was performed in PubMed for relevant articles on the pathogenesis, diagnosis, treatment, and prognosis of FAI. Main message: In recent years, FAI has been increasingly recognised as a potential precursor and an important etiology of hip pain in the adult population and of idiopathic hip osteoarthritis later in life. Femoroacetabular impingement refers to a set of bony morphological abnormalities of the hip joint that result in abnormal contact during movement. Cam-type FAI involves a non-spherical bony prominence of the proximal femoral neck or the head-neck junction. Pincer-type FAI refers to excessive acetabular coverage over the femoral head, which can occur through various morphological variants. Patients with FAI present with chronic, deep or aching anterior groin pain, most often in the sitting position or during or after activity. Patients may also experience occasional sharp pain during activity. A thorough history should be taken, covering in particular any trauma and exercise frequency. A complete physical examination of the hips, lumbar region, and abdomen should also be performed to assess other causes of anterior groin pain. The diagnosis of FAI is confirmed by radiography. Femoroacetabular impingement can be managed conservatively with rest, activity modification, medication, and physiotherapy, or treated surgically.
Conclusion: Femoroacetabular impingement is an important cause of anterior groin pain. Early detection and prompt intervention by a primary-care physician are essential to reduce morbidity and prevent the progression of FAI.
NASA Astrophysics Data System (ADS)
Buat, V.; Heinis, S.; Boquien, M.
2013-11-01
We report on our recent work on UV-to-IR SED fitting of a sample of distant (z>1) galaxies observed by Herschel in the CDFS as part of the GOODS-Herschel project. Combining stellar and dust emission in galaxies proves powerful for constraining their dust attenuation as well as their star-formation activity. We focus on the characterisation of dust attenuation and on the uncertainties in the derivation of star-formation rates and stellar masses, as a function of the wavelength range sampled by the data and of the assumptions made about the star-formation histories.
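At its simplest, SED fitting compares observed fluxes against a library of model templates by chi-square, with the overall normalisation (tracking stellar mass) solvable in closed form. A minimal sketch on toy numbers, not the actual fitting code or templates used in this work:

```python
import numpy as np

def fit_template_scaling(obs_flux, obs_err, templates):
    """For each model template, the best-fit scaling minimising
    chi^2 has a closed form; return the template index, scaling,
    and chi^2 of the best overall match."""
    best = (None, None, np.inf)
    w = 1.0 / obs_err**2
    for i, t in enumerate(templates):
        a = np.sum(w * obs_flux * t) / np.sum(w * t**2)  # closed-form scale
        chi2 = np.sum(w * (obs_flux - a * t) ** 2)
        if chi2 < best[2]:
            best = (i, a, chi2)
    return best

# Toy example: the second template, scaled by 2, is the truth
templates = [np.array([1.0, 2.0, 3.0, 1.0]),
             np.array([1.0, 1.0, 2.0, 4.0])]
obs = 2.0 * templates[1]
idx, scale, chi2 = fit_template_scaling(obs, np.full(4, 0.1), templates)
```

The sensitivity to wavelength coverage discussed in the abstract enters here directly: dropping bands from `obs_flux` changes which templates (star-formation histories) remain distinguishable.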
Quantum fluctuations and structural instabilities in low-dimensional conductors
NASA Astrophysics Data System (ADS)
Dikande, Alain Moise
Strongly correlated electronic systems have attracted particular enthusiasm in recent years, owing to the immense richness of their physical properties. In general, these properties are induced by the presence of electron-electron interactions which, combined with the structure of the molecular lattice, sometimes give rise to a very wide variety of electronic and structural phases with direct consequences for transport phenomena in these materials. Electronic systems coupled to a molecular lattice, known as electron-phonon systems, belong to this class of materials that have recently captured attention, notably because of the competition between several energy scales in an environment characterised by strong crystalline anisotropy and substantial molecular dynamics. Indeed, beyond their particular electronic and structural properties, the dimensionality of these systems also contributes to their richness. A very strong structural anisotropy can considerably enhance the importance of the interactions between electrons and between the molecules forming the lattice, to the point where the physics of the system is governed by very strong fluctuations. This latter context has become a field of strongly correlated physics in its own right, namely that of quantum critical phenomena. Among electron-phonon systems are the inorganic compound KCP and the organic compound TTF-TCNQ, discovered during the 1970s and explored in depth because of their tendency toward a charge-density-wave instability at low temperature.
Ces composes, en general designes systemes de Peierls en reference a l'instabilite de leurs structures electroniques regie par le reseau moleculaire, ont recemment connu un regain d'interet a la lumiere des nouveaux developpements dans les techniques de caracterisation des structures electroniques ainsi que sur le plan de concepts tel le Liquide de Luttinger, propres aux systemes electroniques a une dimension. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Floquet, Jimmy
In aluminium electrolysis cells, the highly corrosive reaction bath attacks the cell walls, shortening their service life and raising production costs. The ledge, which forms under the effect of the heat losses that maintain the thermal balance of the cell, acts as its natural protection; its thickness must be controlled to maximise this effect. If the ledge were to shrink unintentionally, the resulting damage could amount to several hundred thousand dollars per cell. The objective is therefore to develop an ultrasonic measurement of ledge thickness, since such a measurement would be non-intrusive and non-destructive. The expected precision is of the order of one centimetre for thickness measurements through two materials, over a range of 5 to 20 cm. This precision is the key factor enabling operators to control the ledge thickness effectively (maximising wall protection while maximising the energy efficiency of the process) by adding a heat flux. However, the feasibility of an ultrasonic measurement in this hostile environment remains to be demonstrated. Preliminary work led to the selection of a contact ultrasonic transducer able to withstand the measurement conditions (high temperatures, uncharacterised materials, etc.). Various cold measurements, processed by time-frequency analysis, made it possible to evaluate the wave propagation velocity in the graphite cell material and in cryolite, demonstrating that the relevant ledge-thickness information can ultimately be extracted. Building on this phase of characterising the acoustic response of the materials, the next phase of work was carried out on a reduced-scale model of the cell.
The experimental set-up, a furnace operating at 1050 °C instrumented with numerous thermal sensors, will allow the intrusive LVDT measurement to be compared with the transducer measurement under conditions close to those of the industrial measurement. Keywords: ultrasound, NDT, high temperature, aluminium, electrolysis cell.
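The pulse-echo principle behind such a ledge-thickness measurement can be sketched in a few lines; the velocities and thicknesses below are illustrative assumptions, not the study's measured values:

```python
# Hypothetical sketch of pulse-echo thickness estimation from time of flight.
# The two-layer values (graphite wall + ledge) and velocities are assumed,
# for illustration only.

def thickness_from_tof(tof_s, velocity_m_s):
    """Pulse-echo: the wave travels to the interface and back, so
    thickness = velocity * time_of_flight / 2."""
    return velocity_m_s * tof_s / 2.0

def two_layer_tof(d_wall, d_ledge, v_wall, v_ledge):
    """Total echo delay is the sum of the per-layer round trips."""
    return 2.0 * (d_wall / v_wall + d_ledge / v_ledge)

# Example: 5 cm of graphite at 2400 m/s plus 10 cm of ledge at 3000 m/s.
# The ledge alone contributes 2*0.10/3000 ~ 66.7 us of round-trip delay.
tof = two_layer_tof(0.05, 0.10, 2400.0, 3000.0)
print(round(tof * 1e6, 1), "us total round trip")  # ~108.3 us
```

With the wall delay calibrated once, the ledge thickness follows from the residual delay and the ledge velocity, which is why the cold-measurement velocity characterisation described above matters.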
Fabrication of non-volatile single-electron memory using a floating-nanogate approach
NASA Astrophysics Data System (ADS)
Guilmain, Marc
Single-electron transistors (SETs) are nanometre-scale devices that control one electron at a time and therefore consume very little energy. One complementary SET application attracting attention is their use in memory circuits. A non-volatile single-electron memory (SEM) has the potential to operate at gigahertz frequencies, which would allow it to replace both FLASH-type non-volatile memory and DRAM-type volatile memory; a SEM chip would thus ultimately unify the two major types of computer memory. This thesis concerns the fabrication of non-volatile single-electron memories. The proposed fabrication process is based on the nanodamascene process developed by C. Dubuc et al. at the Université de Sherbrooke. One of the advantages of this process is its compatibility with the back-end-of-line (BEOL) of CMOS circuits; it has the potential to build several layers of very dense memory circuits on top of CMOS wafers. This document presents, among other things, a single-electron memory simulator and simulation results for various structures. The optimisation of the single-electron device fabrication process and the realisation of several simple SEM architectures are covered. Optimisations were made at several levels: electron-beam lithography, oxide etching, titanium lift-off, metallisation and CMP planarisation. Electrical characterisation allowed an in-depth study of devices based on Ti/TiO2 junctions and showed that these materials are not suitable. In contrast, a SET based on TiN/Al2O3 junctions was successfully fabricated and characterised at low temperature.
This demonstration establishes the potential of the fabrication process and of atomic layer deposition (ALD) for fabricating single-electron memories. Keywords: single-electron transistor (SET), single-electron memory (SEM), tunnel junction, retention time, nanofabrication, electron-beam lithography, chemical-mechanical planarisation.
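The reason such devices must be characterised at low temperature follows from the standard Coulomb-blockade condition, E_C = e²/(2C) ≫ k_B·T. A quick order-of-magnitude sketch (the 1 aF total capacitance is an assumed value, not one from the thesis):

```python
# Illustrative estimate of the Coulomb-blockade condition for a SET:
# charging energy E_C = e^2 / (2 * C_total) must greatly exceed k_B * T.
E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def charging_energy_eV(c_total_farad):
    """e^2/(2C) expressed in eV (divide the joule value by e)."""
    return E / (2.0 * c_total_farad)

def blockade_temperature_K(c_total_farad, margin=10.0):
    """Temperature below which E_C exceeds margin * k_B * T."""
    return E**2 / (2.0 * c_total_farad * margin * KB)

ec = charging_energy_eV(1e-18)          # assumed 1 aF total capacitance
print(round(ec * 1000, 1), "meV")       # ~80.1 meV
print(round(blockade_temperature_K(1e-18), 1), "K")
```

Smaller junction capacitance raises E_C, which is why nanometre-scale junctions are what make room-temperature operation even conceivable.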
NASA Astrophysics Data System (ADS)
Mebarki, Fouzia
The aim of this study is to examine the possibility of using thermoplastic composite materials for electrical applications such as supports for automotive engine ignition systems. We are particularly interested in composites based on recycled polyethylene terephthalate (PET). Conventional insulating materials such as PET cannot meet the new prescriptive requirements. The introduction of reinforcement materials such as glass fibres and mica can improve the mechanical characteristics of these materials; however, this enhancement may also degrade electrical properties, especially since these composites have to be used under severe thermal and electrical stresses. In order to estimate PET composite insulation lifetimes, accelerated aging tests were carried out at temperatures ranging from room temperature to 140°C and at a frequency of 300 Hz. Studies at high temperature will help to identify the service temperature of candidate materials. Dielectric breakdown tests were performed on a large number of samples according to ASTM D-149, the standard for dielectric strength testing of solid insulation. These tests serve to identify problematic samples and to check solid insulation quality. The knowledge gained from this analysis was used to predict material performance. This will give the company the possibility of improving existing formulations and subsequently developing a material with electrical and thermal properties suitable for this application.
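Breakdown-strength data from ASTM D-149 style tests are commonly analysed with a two-parameter Weibull distribution. The sketch below uses median-rank regression on synthetic values; neither the data nor the fitting choice comes from the study itself:

```python
# Illustrative two-parameter Weibull fit of breakdown-strength data
# (alpha = scale, beta = shape). The kV/mm values are synthetic.
import math

def weibull_fit(samples):
    """Median-rank regression: ln(-ln(1-F)) = beta*ln(x) - beta*ln(alpha)."""
    xs = sorted(samples)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs, start=1):
        f = (i - 0.3) / (n + 0.4)          # Benard's median-rank estimate
        pts.append((math.log(x), math.log(-math.log(1.0 - f))))
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    beta = (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))
    alpha = math.exp(mx - my / beta)       # from intercept = -beta*ln(alpha)
    return alpha, beta

kv_per_mm = [18.2, 19.5, 20.1, 21.0, 21.8, 22.4, 23.0, 24.1]  # synthetic
alpha, beta = weibull_fit(kv_per_mm)
print(f"scale alpha = {alpha:.1f} kV/mm, shape beta = {beta:.1f}")
```

The scale alpha is the 63.2nd-percentile breakdown strength; a high shape beta indicates low scatter, i.e. consistent sample quality.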
Characterisation of Titanium Nitride Layers Deposited by Reactive Plasma Spraying
NASA Astrophysics Data System (ADS)
Roşu, Radu Alexandru; Şerban, Viorel-Aurel; Bucur, Alexandra Ioana; Popescu, Mihaela; Uţu, Dragoş
2011-01-01
Forming and cutting tools are subjected to intense wear. Usually, they either receive superficial heat treatments or are coated with various materials with high mechanical properties. In recent years, thermal spraying has been used increasingly in engineering because of the large range of materials that can be deposited as coatings. Titanium nitride is a ceramic material with high hardness which is used to coat cutting tools, increasing their lifetime. The paper presents the results obtained after deposition of titanium nitride layers by reactive plasma spraying (RPS). Titanium powder was used as the deposition material and a titanium alloy (Ti6Al4V) as the substrate. Macroscopic and microscopic (scanning electron microscopy) images of the deposited layers and X-ray diffraction patterns of the coatings are presented. Layers with thicknesses between 68.5 and 81.4 μm were achieved.
Modelling and Characterisation of sea salt aerosols during the ChArMEx-ADRIMED campaign in Ersa
NASA Astrophysics Data System (ADS)
Claeys, Marine; Roberts, Greg; Mallet, Marc; Sciare, Jean; Arndt, Jovanna; Mihalopoulos, Nikos
2015-04-01
During the ChArMEx-ADRIMED campaign (June and July 2013), aerosol particle measurements were conducted in Ersa (600 m asl), Cap Corsica. The in-situ instrumentation made it possible to characterise sea salt aerosols (SSA) through their physico-chemical and optical properties and their size distribution. This study concentrates particularly on a period of a few days when the concentration of sea salt aerosols was higher. The chemistry results indicate that the SSA measured during this period were mostly aged. Comparing the number size distributions of the air masses makes it possible to determine the SSA size mode. These data are used to evaluate the sea salt aerosol emission scheme implemented in the regional-scale Meso-NH model. A new emission scheme based on available source functions is tested for different sea-state conditions to evaluate the direct radiative impact of sea salt aerosols over the Mediterranean basin.
Frequency-domain modelling of concrete permittivity for non-destructive testing by ground-penetrating radar
NASA Astrophysics Data System (ADS)
Bourdi, Taoufik
Ground-penetrating radar (GPR) is an attractive non-destructive testing (NDT) technique for measuring the thickness of concrete slabs and for characterising fractures, owing to its resolution and penetration depth. GPR equipment is becoming easier and easier to use, and interpretation software is becoming more readily accessible. However, several conferences and workshops on GPR applications in civil engineering have concluded that further research is needed, in particular on the modelling and measurement techniques for the electrical properties of concrete. With better information on the electrical properties of concrete at GPR frequencies, instrumentation and interpretation techniques could be improved more effectively. The Jonscher model has proven its effectiveness in geophysics; its use in civil engineering is presented here for the first time. First, we validated the application of the Jonscher model to the characterisation of the dielectric permittivity of concrete. The results clearly showed that this model can faithfully reproduce the variation of the permittivity of different types of concrete over the GPR frequency band (100 MHz-2 GHz). Second, we demonstrated the value of the Jonscher model by comparing it with other models (Debye and extended Debye) already used in civil engineering. We also showed how the Jonscher model can help predict shielding effectiveness and aid the interpretation of GPR waves. The Jonscher model was found to give a good representation of the variation of concrete permittivity over the GPR frequency range considered.
Moreover, this modelling holds for different types of concrete and at different water contents. In a final part, we presented the use of the Jonscher model for estimating the thickness of a concrete slab by the GPR technique in the frequency domain. Keywords: NDT, concrete, GPR, permittivity, Jonscher.
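The Jonscher parameterisation referred to above is commonly written, in GPR applications, as eps(w) = eps_inf + chi_r·(w/w_r)^(n-1)·(1 - i·cot(n·pi/2)). A minimal sketch of its dispersion over the GPR band, with illustrative (not fitted) parameter values:

```python
# Hedged sketch of the Jonscher model of complex effective permittivity.
# Parameter values (n, chi_r, eps_inf) are illustrative, not fitted to concrete.
import math

def jonscher_permittivity(f_hz, n, chi_r, eps_inf, f_r=100e6):
    """eps(w) = eps_inf + chi_r*(w/w_r)^(n-1)*(1 - 1j/tan(n*pi/2)),
    with the convention eps = eps' - 1j*eps''."""
    w_ratio = f_hz / f_r                       # w/w_r equals f/f_r
    susceptibility = (chi_r * w_ratio ** (n - 1)
                      * (1 - 1j / math.tan(n * math.pi / 2)))
    return eps_inf + susceptibility

# Dispersion over the GPR band (100 MHz - 2 GHz):
for f in (100e6, 500e6, 2e9):
    eps = jonscher_permittivity(f, n=0.8, chi_r=2.0, eps_inf=4.5)
    print(f"{f/1e6:6.0f} MHz  eps' = {eps.real:.2f}  eps'' = {-eps.imag:.2f}")
```

Only three parameters (n, chi_r, eps_inf) control the whole band, which is what makes the model convenient for fitting concrete permittivity measurements.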
Frantz tumour: two new cases
Bellarbi, Salma; Sina, Mohamed; Jahid, Ahmed; Zouaidia, Fouad; Bernoussi, Zakia; Mahassini, Najat
2013-01-01
In this article we describe the clinico-pathological features and discuss the histogenesis of Frantz tumour, based on two patients operated on for this tumour. Both received surgical treatment alone. The morphological study was coupled with an immunohistochemical (IHC) examination using antibodies against CD10, vimentin, neuron-specific enolase (NSE), synaptophysin, chromogranin A and cytokeratin. Anti-oestrogen and anti-progesterone immunostaining was performed in one case. The patients were a 45-year-old woman and a 12-year-old boy. The ultrasound and CT findings were non-specific. Complete surgical excision was performed in both cases. Histological analysis suggested a Frantz tumour, and the diagnosis was confirmed by immunohistochemistry. The outcome was favourable, without recurrence, with follow-up of 18 and 16 months respectively. Frantz tumour is a rare entity. Its diagnosis rests on pathological examination supplemented by immunohistochemistry. Its prognosis after surgical resection is excellent. PMID:23503717
Beaucournu, J.-C.; Meheretu, Y.; Welegerima, K.; Mergey, T.; Laudisoit, A.
2012-01-01
We describe a new Nosopsyllus s. str. from northern Ethiopia, N. atsbi, showing phyletic resemblances to N. incisus (Jordan & Rothschild, 1913), a species confined to the eastern part of the Afrotropical region. This led us to re-examine the populations classified as incisus on the sole criterion of the chaetotaxy of the telomere (three strong marginal setae, instead of the two classically observed in this genus and subgenus). It appears that N. incisus s. str. is known from the north-east of the Democratic Republic of the Congo, Kenya, Burundi and Tanzania. North and south of this region (central Ethiopia on the one hand, Zambia and Malawi on the other), two taxa are morphologically distinct, and we raise them to subspecies rank: Nosopsyllus (N.) incisus traubi n. ssp. and N. (N.) incisus lewisi n. ssp. At present the "incisus complex" comprises four taxa, namely, from north to south, N. atsbi n. sp., N. incisus traubi n. ssp., N. incisus incisus (Jordan & Rothschild, 1913) and N. incisus lewisi n. ssp. PMID:22314238
Modelling micron- and nanometre-sized particle emissions in machining
NASA Astrophysics Data System (ADS)
Khettabi, Riad
Machining emits particles of microscopic and nanometric sizes that can be hazardous to health. The purpose of this work is to study these particle emissions with a view to prevention and reduction at the source. The approach adopted is both experimental and theoretical, at the microscopic and macroscopic scales. The work begins with tests to determine the influence of the material, the tool and the machining parameters on particle emissions. A new parameter characterising the emissions, called the Dust Unit, is then developed and a predictive model is proposed. This model is based on a new hybrid theory integrating energetic, tribological and plastic-deformation approaches, and includes the tool geometry, the material properties, the cutting conditions and chip segmentation. It was validated in turning on four materials: Al6061-T6, AISI 1018, AISI 4140 and grey cast iron.
MICROROC: MICRO-mesh gaseous structure Read-Out Chip
NASA Astrophysics Data System (ADS)
Adloff, C.; Blaha, J.; Chefdeville, M.; Dalmaz, A.; Drancourt, C.; Dulucq, F.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Jacquemier, J.; Karyotakis, Y.; Martin-Chassard, G.; Prast, J.; Seguin-Moreau, N.; de La Taille, Ch; Vouters, G.
2012-01-01
MICRO MEsh GAseous Structure (MICROMEGAS) and Gas Electron Multiplier (GEM) detectors are two candidates for the active medium of a Digital Hadronic CALorimeter (DHCAL) as part of a high energy physics experiment at a future linear collider (ILC/CLIC). Physics requirements lead to a highly granular hadronic calorimeter with up to thirty million channels, probably carrying only hit information (digital readout calorimeter). To validate the concept of digital hadronic calorimetry with such a small cell size, the construction and test of a cubic metre technological prototype, made of 40 planes of one square metre each, is necessary. This technological prototype would contain about 400,000 electronic channels, thus requiring the development of a front-end ASIC. Based on the experience gained with previous ASICs that were mounted on detectors and tested in particle beams, a new ASIC called MICROROC has been developed. This paper summarises the characterisation campaign conducted on this new chip, as well as its integration into a large-area Micromegas chamber of one square metre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadene, M.
Thin films of Cd(1-y)Zn(y)S (0 < y < 0.2) have been prepared either by thermal evaporation of the powdered solids from a single crucible, or by rapid evaporation from two crucibles. Different methods were used to characterise the films according to their structural, electrical and electron-optical properties as a function of the amount of Zn in the film. Both liquid-phase and solid-phase ion exchange processes have been used to deposit a thin film of Cu2S on the Cd(1-y)Zn(y)S film to produce a p-n heterojunction. A study of the growth of the Cu2S layer has been carried out. Photocurrents and voltages have been determined for these Cu2S-CdZnS cells.
Towards a complete characterisation of Ganymede's environment
NASA Astrophysics Data System (ADS)
Cessateur, Gaël; Barthélémy, Mathieu; Lilensten, Jean; Dudok de Wit, Thierry; Kretzschmar, Matthieu; Mbemba Kabuiku, Lydie
2013-04-01
In the framework of the JUICE mission to the Jovian system, a complete picture of the interaction between Ganymede's atmosphere and external forcing is needed. This will allow us to constrain instrument performance according to the mission objectives. The main source of information about the upper atmosphere is the non-LTE UV-visible-near-IR emissions. These emissions are induced both by the incident solar UV flux and by particle precipitation. This work aims at characterising the impact of these external forcings, and then at deriving some key physical parameters measurable by an orbiter, for example the oxygen red line at 630 nm or the resonant oxygen line at 130 nm. We will also present the 4S4J instrument, a proposed EUV radiometer which will provide the local solar EUV flux, an invaluable parameter for the JUICE mission. Based on new technologies and a new design, only two passbands are considered for reconstructing the whole EUV spectrum.
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aerospatiale du Quebec) projects aimed at developing embedded approaches for damage detection in aeronautical structures. Its originality lies in the development and validation of a new method for detecting, quantifying and locating a notch in a lap-joint structure using propagating vibration waves. The first part reviews the state of knowledge on damage identification in the context of Structural Health Monitoring (SHM) and on lap-joint modelling. Chapter 3 develops the wave-propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission-line model (TLM) is built to represent a one-dimensional (1D) joint. This 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane wavefront incident perpendicular to the joint. A parametric identification method is then developed that allows calibration of the model of the healthy lap joint, followed by detection and characterisation of the notch in the joint. This method is coupled with an algorithm performing an exhaustive search of the whole parameter space, which makes it possible to extract an uncertainty zone associated with the parameters of the optimal model. A sensitivity study of the identification is also carried out. Numerous measurements on 1D and 2D lap joints were performed, allowing study of the repeatability of the results and of the variability of different damage cases. The results of this study first show that the proposed detection method is very effective and can track damage progression.
Very good notch quantification and localisation results were obtained in the various joints tested (1D and 2D). The use of Lamb waves is expected to extend the validity range of the method to smaller damage. This work is primarily aimed at in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and more complex structures can also be envisaged. Keywords: lap joint, in-situ monitoring, damage localisation and characterisation.
Study of anode quality improvement through modification of pitch properties
NASA Astrophysics Data System (ADS)
Bureau, Julie
The quality of the anodes produced must be good in order to obtain primary aluminium while reducing the production cost of the metal, energy consumption and environmental emissions. Achieving the final properties of the anode requires a satisfactory bond between the coke and the pitch, yet the current raw materials do not necessarily ensure coke-pitch compatibility. One of the most promising solutions for improving the cohesion between these two materials is to modify the properties of the pitch. The objective of this work is to modify the pitch properties by adding chemical additives, so as to improve the wettability of the coke by the modified pitch and thereby produce better-quality anodes. The chemical composition of the pitch is modified using surfactants or surface-modification agents chosen to enrich the functional groups likely to improve wettability. Economic aspects, environmental footprint and impact on production are considered in the selection of the chemical additives. The methodology consists first of characterising the unmodified pitches, the chemical additives and the cokes by Fourier-transform infrared spectroscopy (FTIR) in order to identify the chemical groups present. The pitches are then modified by adding a chemical additive in different amounts, so as to examine the effect of concentration on the properties of the modified pitch. FTIR is used to assess the chemical composition of the modified pitches and determine whether increasing the additive concentration enriches the functional groups promoting coke/pitch adhesion. The wettability of the coke by the pitch is then observed by the sessile-drop method.
An improvement in wettability after modification with a chemical additive indicates a possible improvement in the interaction between the coke and the modified pitch. To complete the evaluation of the collected data, the FTIR and wettability results are analysed with an artificial neural network in order to better understand the underlying mechanisms. In the light of the results obtained, the most promising chemical additives are selected to verify the effect of their use on anode quality. To this end, laboratory anodes are produced using unmodified pitches and pitches modified with the selected chemical additives. The anodes are then cored and characterised by determining some of their physical and chemical properties. Finally, the results for anode samples made from the same pitch, unmodified and modified, are compared to evaluate the improvement in anode quality, and the possible impact of using a chemical pitch additive on energy and carbon consumption and on the quantity of aluminium produced is examined. To modify the pitch, three different chemical additives were selected: one surfactant and two surface-modification agents. FTIR analysis of the experiments on the modified pitches shows that two of the additives modified the chemical composition of the pitches tested. Analysis of the sessile-drop test results suggests that a pitch modified by these two additives may improve the interaction with the cokes used in this study. Artificial-neural-network analysis of the collected data provides a better understanding of the link between the chemical composition of a pitch and its ability to wet a coke.
Characterisation of the anode samples produced confirms that these two additives can improve some anodic properties compared with the standard samples. Analysis of the results shows that one of the two additives appears to give more promising results. Overall, the work carried out in this project demonstrates that it is possible to improve anode quality by modifying the properties of the pitch, and the analysis of the results provides a better understanding of the mechanisms between a pitch and a chemical additive.
NASA Astrophysics Data System (ADS)
Daran-Daneau, Cyril
In order to meet future energy needs, insulation, the central component of high-voltage equipment, has to be reinvented, and nanodielectrics seem to promise a major technological breakthrough. Working with nanocomposites made of a linear low-density polyethylene matrix reinforced by nanoclays and manufactured from a commercial masterbatch, the present thesis aims to characterise both the accuracy of measurement techniques applied to nanodielectrics and the dielectric properties of these materials. Dielectric spectroscopy accuracy in both the frequency and time domains is analysed, with specific emphasis on the impact of gold sputtering of the samples and on transposing measurements from the time domain to the frequency domain. When measuring dielectric strength, the significant role of the surrounding medium and of sample thickness in the variation of the alpha scale factor is shown and analysed in relation to the presence of surface partial discharges. Taking these limits into account, complex permittivity as a function of frequency, and linearity and conductivity as a function of applied electric field, are studied for different nanoparticle contents with respect to the role that nanometric interfaces seem to play. Similarly, the variation of dielectric strength as a function of nanoclay content is investigated with respect to the improvement in partial discharge resistance that seems to be induced by nanoparticle addition. Finally, an opening towards nanostructuring of underground cable insulation is proposed, considering on the one hand the dielectric characterisation of polyethylene-matrix nanodielectrics reinforced by nanoclays or nanosilica, and on the other a succinct cost analysis. Keywords: nanodielectric, linear low density polyethylene, nanoclays, dielectric spectroscopy, dielectric breakdown
Carbon nanostructures: Production and characterization
NASA Astrophysics Data System (ADS)
Beig Agha, Rosa
The objective of this thesis is to prepare and characterise carbon nanostructures (CNS, under licence to the Hydrogen Research Institute, Quebec, Canada), a carbon with a higher degree of graphitisation and better porosity. Chapter 1 is a general description of polymer electrolyte membrane fuel cells (PEMFCs) and, more specifically, of CNS as catalyst supports, their synthesis and purification. Chapter 2 describes in more detail the CNS synthesis and purification method, the theory of nanostructure formation, and the different characterisation techniques used: X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman spectroscopy, nitrogen adsorption isotherms at 77 K (BET, t-plot and DFT analyses), mercury intrusion, and thermogravimetric analysis (TGA). Chapter 3 presents the results obtained at each stage of CNS synthesis, with samples produced using a SPEX-type mill (SPEX/CertiPrep 8000D) and a planetary mill (Fritsch Pulverisette 5). The essential difference between the two mills is the way the materials are milled: the SPEX mill shakes the crucible containing the materials and steel balls along three axes, producing very high-energy impacts, whereas the planetary mill rotates and translates the crucible along two axes (in a plane). The materials are thus milled differently, and the objective is to see whether the resulting CNS have the same structures and properties. During this work we encountered a major problem.
We could not reproduce the CNS whose synthesis method had originally been developed in the laboratories of the Hydrogen Research Institute (IRH): our samples always contained a large quantity of iron carbide at the expense of carbon nanostructure formation. After several months of investigation we found that the base metals, iron and cobalt, were contaminated. Nevertheless, this investigation taught us a great deal, and the results are presented in Appendices I to III. The starting carbon is commercial activated carbon (CNS201), previously heated at 1,000°C under vacuum for 90 minutes to remove all moisture and other impurities. In a first step, pretreated CNS201 was mixed in a hardened-steel crucible with given quantities of Fe and Co (99.9% pure); typical proportions are 50 wt.%, 44 wt.% and 6 wt.% for C, Fe and Co respectively. For the samples prepared with the SPEX mill, three to six hardened-steel balls were used, with a ball-to-powder mass ratio of 35 to 1. For the samples prepared with the planetary mill, thirty-six hardened-steel balls were used, with a ball-to-powder mass ratio of 10 to 1. Hydrogen was then introduced into the crucible, for both mills, at a pressure of 1.4 MPa, and the sample was milled for 12 h in the SPEX mill and 24 h in the planetary mill. The SPEX mill transfers mechanical energy more efficiently than a planetary mill, but has the disadvantage of further contaminating the sample with Fe by attrition; this can be neglected, since Fe was one of the metal catalysts added to the crucible. In a second step, the milled sample is transferred under inert gas (argon) into a quartz tube, which is then heated at 700°C for 90 minutes.
Powder X-ray diffraction patterns were measured to characterise the structural changes of the CNS at each synthesis stage. These measurements were taken with a Bruker D8 FOCUS diffractometer using Cu Kα radiation (λ = 1.54054 Å) and a Θ/2Θ geometry. Figure 3.1 shows the X-ray diffraction pattern of the activated carbon used as precursor to produce the CNS; the activated carbon is preheated at high temperature (1,000°C) for 1 h to remove moisture. Figure 3.2 shows the X-ray diffraction patterns of the SPEX and planetary samples after milling for 12 h and 24 h respectively. The carbon structures are not yet well defined, but a peak at 2θ ≈ 20°-30° corresponds to small turbostratic crystallites, and a peak corresponding to iron and iron carbide appears at 2θ ≈ 45°. (Abstract shortened by UMI.)
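The 2θ peak positions quoted above can be converted to interplanar d-spacings with Bragg's law, using the stated Cu Kα wavelength; a minimal sketch:

```python
# Bragg's law sketch: converting 2-theta peak positions (Cu K-alpha,
# lambda = 1.54054 A, as quoted in the text) into d-spacings.
import math

WAVELENGTH_A = 1.54054  # Cu K-alpha wavelength, in angstroms

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_A):
    """n*lambda = 2*d*sin(theta), taken at first order (n = 1)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Turbostratic-carbon peak near 2theta ~ 25 deg, Fe/Fe3C peak near 45 deg:
for two_theta in (25.0, 45.0):
    print(f"2theta = {two_theta:.0f} deg -> d = {d_spacing(two_theta):.2f} A")
```

The ~25° carbon peak corresponds to a d-spacing near the graphite (002) interlayer distance, which is why it signals turbostratic stacking.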
NASA Astrophysics Data System (ADS)
Demers, Vincent
The objective of this project is to determine the rolling conditions and the heat-treatment temperature that maximize the functional properties of the Ti-Ni shape memory alloy. The specimens are characterized by calorimetry measurements, optical microscopy, stress generation, recoverable strain and mechanical testing. For a single cycle, using a cold-work level of e = 1.5, obtained by applying a tensile force FT = 0.1σy and a mineral oil, yields a straight, crack-free sample which, after annealing at 400°C, produces a nanostructured material exhibiting functional properties twice as high as those of the same material with a polygonized structure. For repeated cycles, the same rolling conditions remain valid, but the optimal deformation level lies between e = 0.75 and 2 and depends in particular on the loading mode, the stabilization level and the number of cycles to failure required by the application.
Study of the weakening of the mechanical behaviour of permafrost due to climate warming
NASA Astrophysics Data System (ADS)
Buteau, Sylvie
The climate warming predicted for the coming decades will have major impacts on permafrost that are so far very poorly documented. The purpose of this study is to assess these impacts on the mechanical properties of permafrost and on its long-term stability. A new cone penetration test with a controlled strain rate was developed to characterize permafrost in situ. These geotechnical tests, along with measurements of various physical properties, were carried out on a permafrost mound during the spring of 2000. The development and use of a 1D geothermal model accounting for the temperature dependence of the mechanical behaviour showed that areas of warm permafrost would become unstable following a warming of about 5°C over one hundred years. Indeed, the mechanical strength of the permafrost would then drop rapidly to 11.6 MPa, corresponding to a relative loss of 98% of the strength compared with a no-warming scenario.
Methods for characterizing the thermomechanical properties of a martensitic steel
NASA Astrophysics Data System (ADS)
Ausseil, Lucas
The aim of the study is to develop methods for measuring the thermomechanical properties of a martensitic steel during rapid heating. These data feed existing finite-element models with experimental values. For this purpose, 4340 steel is used. This steel, commonly found in gears, has very attractive mechanical properties, and these properties can be modified through heat treatments. The Gleeble 3800 thermomechanical simulator is used; in principle, it can reproduce all the conditions present in manufacturing processes. With the dilatometry tests carried out in this project, the exact austenitic and martensitic phase-change temperatures are obtained. Tensile tests also made it possible to deduce the yield strength of the material in the austenitic domain from 850°C to 1100°C. The effect of deformation on the transformation start temperature is shown qualitatively. A numerical simulation is also performed to understand the phenomena occurring during the tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tesseyre, Y.
The study allowed the development of an original mobility-measurement system involving simultaneously a repulsive electric field and a continuous gas flow. It made it possible to define a model for calculating the ionic transparency of grids, taking into account the electric fields below and above them, the ion mobility, the gas flow speed and the geometric transparency. The electric field was calculated in a plane-plane system, accounting for space charge and diffusion; a graphical method was developed to determine the field, thus avoiding numerical integration of the diffusion equation. The mobility spectra recorded in different gases made it possible to determine characteristic discrete mobility values comparable to those observed with other, more sophisticated mobility-measurement systems, such as time-of-flight systems. Detection of pollutants at low concentration in dry air was demonstrated. However, water vapour in the air forms clusters around the ions, reducing the resolution of the system and making it less applicable under normal atmospheric conditions.
NASA Astrophysics Data System (ADS)
Paradis, Alexandre
The principal objective of the present thesis is to elaborate a computational model describing the mechanical properties of NiTi under different loading conditions. A secondary objective is to build an experimental database of NiTi under stress, strain and temperature in order to validate the versatility of the new model proposed here. The simulation model currently used at the Laboratoire sur les Alliages a Memoire et les Systemes Intelligents (LAMSI) of ETS behaves well in quasi-static loading; under dynamic loading, however, the same model does not allow one to include degradation. The goal of the present thesis is therefore to build a model capable of describing such degradation in a relatively accurate manner. Experimental tests and results are presented; in particular, new results on the behaviour of NiTi paused during cycling are presented in Chapter 2. A model based on Likhachev's micromechanical model is developed in Chapter 3, and good agreement is found with experimental data. Finally, an adaptation of the model is presented in Chapter 4, allowing it to be eventually implemented into commercial finite-element software.
Hydrodynamic characterisation of a heterogeneous aquifer system under a semi-arid climate
NASA Astrophysics Data System (ADS)
Drias, T.; Toubal, A. Ch
2009-04-01
The study area is part of the Mellegne region (north-east Algeria) and is characterised by its semi-arid climate. The water-bearing system is formed by Plio-Quaternary alluvium resting on a marly substratum of Eocene age. A geostatistical treatment of the hydrodynamic parameters (hydraulic head, transmissivity) allowed their spatial distribution to be mapped by block kriging and zones of high water-bearing potential to be identified. In this respect, the Ain Chabro zone, located in the south of the plain, shows the best transmissivity values. The use of a two-dimensional finite-difference model in steady state allowed us to establish the overall water balance of the aquifer and to refine the transmissivity field, whose values range between 10-4 and 10-2 m²/s. Combining the probabilistic kriging approach with the deterministic model facilitated the calibration of the model and clarified the infiltration value. Keywords: hydrodynamics, geostatistics, modelling, Chabro, Tébessa.
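Kriging, as used above, interpolates a field from scattered measurements while honouring their spatial correlation. A minimal ordinary-kriging sketch for a single target point; the exponential variogram and the well data are made-up illustrative assumptions, not values from the study:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=2000.0):
    """Ordinary-kriging estimate at one target point.

    xy: (n, 2) sample coordinates (m); z: (n,) measured values, e.g.
    log10 transmissivity; xy0: (2,) target location. The exponential
    variogram gamma(h) = sill * (1 - exp(-h/rng)) is an assumed model.
    """
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    gamma = sill * (1.0 - np.exp(-h / rng))
    # Ordinary-kriging system with a Lagrange multiplier (weights sum to 1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    h0 = np.linalg.norm(xy - xy0, axis=1)
    b = np.append(sill * (1.0 - np.exp(-h0 / rng)), 1.0)
    w = np.linalg.solve(A, b)[:n]   # kriging weights
    return float(w @ z)

# Hypothetical log10(T) values at four wells on a 1 km square
xy = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
z = np.array([-3.5, -2.8, -3.1, -2.2])
est = ordinary_kriging(xy, z, np.array([500.0, 500.0]))   # centre of the square
```

Block kriging as cited in the abstract averages such point estimates over a cell; the point version above shows the core linear system.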
NASA Astrophysics Data System (ADS)
Croteau, Etienne
The objective of this doctoral project is to develop quantitative tools for monitoring breast-cancer chemotherapy treatments and their cardiotoxic effects using dynamic PET imaging. Kinetic analysis in dynamic PET allows the evaluation of biological parameters in vivo. This analysis can be used to characterize the tumour response to chemotherapy and the harmful side effects that may result from it. The first article of this thesis describes the development of kinetic-analysis techniques that use a radiotracer input function derived from the dynamic image. Corrections for external radioactive contamination (spillover) and for the partial-volume effect were necessary to standardize the kinetic analysis and make it quantitative. The second article concerns the evaluation of a new myocardial radiotracer. 11C-acetoacetate, a new radiotracer based on a ketone body, was compared with 11C-acetate, which is commonly used in cardiac PET imaging. The use of 3H-acetate and 14C-acetoacetate made it possible to elucidate the kinetics of these tracers, from the input function and the uptake by cardiac mitochondria, which reflects oxygen consumption, to the release of their respective main metabolites (3H2O and 14CO2). The third and last article of this thesis presents the integration of a model that evaluates the cardiac reserve of perfusion and oxygen consumption. A cardiomyopathy model was established using doxorubicin, a chemotherapeutic agent against breast cancer known to be cardiotoxic. A rest/stress protocol made it possible to evaluate the heart's capacity to increase perfusion and oxygen consumption. The demonstration of a reduced cardiac reserve characterizes cardiotoxicity.
The last contribution of this thesis concerns the development of minimally invasive methods for measuring the input function in an animal model, using the tail artery and a microvolumetric counter, dynamic PET/MRI bimodality with Gd-DTPA, and the establishment of a model for the simultaneous evaluation of cardiotoxicity and tumour response in mice. The development of PET analysis tools for evaluating cardiotoxicity during breast-cancer treatment provides a better understanding of the relationship between mitochondrial damage and the decrease in ejection fraction. Keywords: positron emission tomography (PET), kinetic analysis, 11C-acetate, 11C-acetoacetate, cardiotoxicity.
Characterisation of the anthropogenic contribution to coastal fluorescent organic matter
NASA Astrophysics Data System (ADS)
El Nahhal, Ibrahim; Nouhi, Ayoub; Mounier, Stéphane
2015-04-01
It is known that most coastal fluorescent organic matter is of terrestrial origin (Parlanti, 2000; Tedetti, Guigue, & Goutx, 2010). However, the contribution of anthropogenic organic matter to this pool is not well defined or evaluated. In this work, the little bay of Toulon (Toulon Bay, France) was monitored in order to determine the fluorescent organic response during a winter period. The sampling campaign covered several days in December 2014 (the 12th, 15th, 17th and 19th) at 21 different sampling sites for the fluorescence measurements (without any filtering of the samples), and the whole month of December for the bacterial and turbidity measurements. Excitation-emission matrices (EEMs) of fluorescence (200 to 400 nm excitation and 220 to 420 nm emission range) were treated by parallel factor analysis (PARAFAC). The PARAFAC analysis of the EEM datasets was conducted using the PROGMEEF software in the Matlab language. At the same time, the turbidity and bacterial measurements (particularly the E. coli concentration) were determined. The results give, over a short time range, information on the contribution of anthropogenic inputs to the coastal fluorescent organic matter. In addition, the effect of salinity on the photochemical degradation of anthropogenic organic matter (especially that from wastewater treatment plants) will be studied through laboratory experiments to investigate its fate in the water end member. Parlanti, E. (2000). Dissolved organic matter fluorescence spectroscopy as a tool to estimate biological activity in a coastal zone submitted to anthropogenic inputs. Organic Geochemistry, 31(12), 1765-1781. doi:10.1016/S0146-6380(00)00124-8. Tedetti, M., Guigue, C., & Goutx, M. (2010). Utilization of a submersible UV fluorometer for monitoring anthropogenic inputs in the Mediterranean coastal waters. Marine Pollution Bulletin, 60(3), 350-362. doi:10.1016/j.marpolbul.2009.10.018.
NASA Astrophysics Data System (ADS)
Varlet, P.; Beuvon, F.; Cervera, P.; Averbeck, D.; Daumas-Duport, C.
1998-04-01
Poly(ADP-ribose) polymerase (PARP) is a nuclear enzyme encompassing two zinc-finger motifs which binds specifically to radiation-induced DNA strand breaks. We developed a new immunolabelling of poly(ADP-ribose) which, coupled with the immunodetection of cycling cells with MIB1, makes it possible to detect and quantify the DNA fragmentation induced by radiation (caesium-137). Applied to organotypic cultures of human oligodendroglioma submitted to radiation, this method revealed a dose-dependent nuclear signal, which increased significantly in the presence of a radiosensitizer such as iododeoxyuridine (IUdR, 5 µg/ml). This poly(ADP-ribose) immunodetection could prove useful for assessing the individual radiosensitivity of human gliomas. The "ICE-like" proteases, or caspases, are the human homologues of the product of the ced-3 gene of the worm Caenorhabditis elegans and are activated during the early stages of apoptosis. The objective of this work is to determine to what extent the inhibition of one of them, caspase-3, can modify the sensitivity of cells to radiation-induced apoptosis. Murine splenic lymphocytes irradiated in the presence of Ac-DEVD-CHO, a specific caspase-3 inhibitor, show a rate of radiation-induced hypodiploid particles much lower than that of controls and a drastic decrease in internucleosomal DNA fragmentation. However, neither the externalization of anionic phospholipids, another specific marker of apoptosis, nor viability is affected.
High-resolution measurement of the radial distribution function of pure amorphous silicon
NASA Astrophysics Data System (ADS)
Laaziri, Khalid
1999-11-01
This thesis deals with the structure of amorphous silicon prepared by ion irradiation. It presents X-ray diffraction measurements on crystalline silicon powder and on relaxed and unrelaxed amorphous silicon, together with all the mathematical and physical developments needed to extract the radial distribution function corresponding to each sample. In Chapter I, we present a method for fabricating thin membranes of pure amorphous silicon. There are two major steps in the fabrication process: ion implantation, to create an amorphous layer several microns thick, and chemical etching, to remove the remaining crystalline material. We first characterized the amorphous silicon membranes by Raman spectroscopy to verify that no trace of crystalline material remained in the amorphous films. A second characterization by elastic recoil detection (ERD-TOF) on the same membranes showed that they contain less than 0.1 atomic % of contaminants such as oxygen, carbon and hydrogen. In Chapter II, we propose a new method for correcting the inelastic "Compton" contribution of the total scattering spectra in order to extract the elastic scattering peaks responsible for Bragg diffraction. The article first presents a simplified description of a theory of inelastic scattering known as the Impulse Approximation (IA), which makes it possible to calculate Compton profiles as a function of energy and of the scattering angle 2θ. These profiles are used as fitting functions for the experimental Compton scattering. To fit the elastic scattering peaks, we used an asymmetric peak function. In Chapter III, we present in detail the results of the X-ray diffraction experiments on the amorphous silicon membranes and the crystalline silicon powder that we prepared.
We also discuss the various experimental and analysis steps, as well as the methods for determining and filtering the Fourier transforms of the diffraction data. A comparison of the radial distribution functions of relaxed and unrelaxed amorphous silicon indicates that structural relaxation in amorphous silicon is probably due largely to defect annihilation rather than to a global atomic reorganization of the amorphous silicon network. The coordination numbers deduced from Gaussian fits of the first-neighbour peaks are 3.88 for relaxed amorphous silicon and 3.79 for the unrelaxed material, while the reference measurement on crystalline silicon powder gives a value of 4, as expected. This under-coordination of amorphous silicon would explain why its density is lower than that of crystalline silicon. (Abstract shortened by UMI.)
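The Fourier-transform step described above is, in textbook form, a truncated sine transform of the structure factor S(q). A sketch of that transform on a synthetic single-shell S(q); the shell construction is an illustration, not the thesis data:

```python
import numpy as np

def reduced_rdf(q, s_q, r):
    """Reduced RDF: G(r) = (2/pi) * integral of q*[S(q)-1]*sin(q*r) dq,
    truncated at q.max(). Trapezoidal integration along q."""
    integrand = (q * (s_q - 1.0)) * np.sin(np.outer(r, q))
    return (2.0 / np.pi) * np.sum(
        0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(q), axis=1)

# Synthetic structure factor for a single atomic shell at r0 = 2.35 A (the
# Si-Si bond length): q*[S(q)-1] = sin(q*r0), so G(r) should show a
# sinc-broadened peak at r0, with the broadening set by the q_max cutoff.
r0 = 2.35
q = np.linspace(1e-6, 40.0, 4000)
s_q = 1.0 + np.sin(q * r0) / q
r = np.linspace(0.5, 6.0, 200)
g = reduced_rdf(q, s_q, r)
peak_r = r[np.argmax(g)]   # close to 2.35 A
```

The truncation at q_max is precisely why the thesis emphasizes high-resolution (large-q) data and careful filtering: a small q_max broadens every real-space peak.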
Epitaxial growth of GaAs on Ge substrates by chemical beam epitaxy
NASA Astrophysics Data System (ADS)
Belanger, Simon
The energy situation and the environmental issues facing society are driving growing interest in producing electricity from solar energy. Among the currently available technologies, concentrator photovoltaics (CPV) offers superior efficiency and interesting potential, provided its production costs are competitive. Chemical beam epitaxy (CBE) has several characteristics that make it attractive for the large-scale production of multi-junction photovoltaic cells based on III-V semiconductors. This type of cell has the best efficiency achieved to date and is used on satellites and in the most efficient concentrator photovoltaic (CPV) systems. One of the main strengths of the CBE technique is its potential source-material utilization efficiency, which is higher than that of the epitaxy technique commonly used for the large-scale production of these cells. This master's thesis presents work carried out to evaluate the potential of the CBE technique for growing GaAs layers on Ge substrates. This growth is the first fabrication step of many of the high-performance solar-cell designs described above. Completing this project required developing a surface-preparation process for the germanium substrates, carrying out numerous epitaxial growth runs, and characterizing the resulting materials by optical microscopy, atomic force microscopy (AFM), high-resolution X-ray diffraction (HRXRD), transmission electron microscopy (TEM), low-temperature photoluminescence (LTPL) and secondary-ion mass spectrometry (SIMS).
The experiments confirmed the effectiveness of the surface-preparation process and identified the optimal growth conditions. The characterization results indicate that the materials obtained exhibit very low surface roughness, good crystalline quality and a relatively high residual doping. Moreover, the GaAs/Ge interface has a low defect density. Finally, the diffusion of arsenic into the germanium substrate is comparable to the values found in the literature for low-temperature growth with the other common epitaxy processes. These results confirm that chemical beam epitaxy (CBE) can produce GaAs-on-Ge layers of adequate quality for the fabrication of high-performance solar cells. The contribution to the scientific community was maximized through an article submitted to the Journal of Crystal Growth and a presentation of this work at the Photovoltaics Canada 2010 conference. Keywords: Epitaxie par jets chimiques, Chemical beam epitaxy, CBE, MOMBE, Germanium, GaAs, Ge
Dynamic mechanical characterisation of poro-viscoelastic materials
NASA Astrophysics Data System (ADS)
Renault, Amelie
Poro-viscoelastic materials are well modelled by the Biot-Allard equations. This model requires a number of geometrical parameters to describe the macroscopic geometry of the material, and elastic parameters to describe the elastic properties of the material's skeleton. Several characterisation methods for the viscoelastic parameters of porous materials are studied in this thesis. First, quasi-static and resonant characterisation methods are described and analysed. Second, a new inverse dynamic characterisation of the same moduli is developed. The latter involves a two-layer metal-porous beam excited at its centre, and the input mobility is measured; the set-up is simpler than in previous methods. The parameters are obtained via an inversion procedure based on minimising a cost function comparing the measured and calculated frequency response functions (FRFs). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code that does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared with the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about their use are given. Keywords: elastic parameters, porous materials, anisotropy, vibration.
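The inversion step described above (minimising a cost function between measured and computed FRFs) can be illustrated on a toy single-degree-of-freedom mobility model; the model, the grid search and all numerical values below are illustrative assumptions, not the laminate code used in the thesis:

```python
import numpy as np

def mobility(omega, m, k, c):
    """Input mobility Y = v/F of a 1-DOF oscillator: i*w / (k - m*w^2 + i*w*c)."""
    return 1j * omega / (k - m * omega**2 + 1j * omega * c)

def invert_frf(omega, y_meas, m, k_grid, c_grid):
    """Grid-search inversion: pick (k, c) minimising the squared FRF misfit."""
    best, best_cost = None, np.inf
    for k in k_grid:
        for c in c_grid:
            cost = np.sum(np.abs(mobility(omega, m, k, c) - y_meas) ** 2)
            if cost < best_cost:
                best, best_cost = (k, c), cost
    return best

# Synthetic "measurement" from known stiffness and damping, then recovery
omega = np.linspace(10.0, 2000.0, 400)      # rad/s
m_true, k_true, c_true = 0.5, 2.0e5, 20.0   # kg, N/m, N*s/m (made up)
y_meas = mobility(omega, m_true, k_true, c_true)
k_est, c_est = invert_frf(omega, y_meas, m_true,
                          np.linspace(1.0e5, 3.0e5, 81),
                          np.linspace(5.0, 40.0, 36))
```

In practice a gradient-based minimiser replaces the grid search and the forward model is the laminate code, but the structure of the inversion is the same.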
Lahyani, Mounir; Karmouni, Tarik; Elkhader, Khalid; Koutani, Abdellatif; Andaloussi, Ahmed Ibn Attya
2014-01-01
This study analyses the results of nephrectomy with atrio-caval thrombectomy under cardiopulmonary bypass (CPB) in seven patients with renal cancer and supra-diaphragmatic caval extension, and discusses the operative indications. Seven patients, six men and one woman aged between 46 and 65 years, underwent surgery for renal cancer with atrio-caval extension. Doppler ultrasound always revealed the venous extension, but the upper limit of the thrombus was formally identified by CT in four cases and by MRI in all cases. All patients were operated on under beating-heart normothermic CPB. A single postoperative death occurred. The length of stay in intensive care was 4.5 days. Five patients later developed metastatic dissemination. Five patients had a median survival of 11.5 months (range 7 to 16). One patient underwent pulmonary metastasectomy 6 months after nephrectomy. Excision of the atrio-caval thrombi was facilitated by CPB, with acceptable postoperative mortality and morbidity, but the long-term results were disappointing. This operation can only be offered to patients with no detectable locoregional or general extension, which underlines the importance of preoperative morphological examinations. PMID:25995777
Cutaneous adnexal tumours: an anatomopathological study of 96 cases
El Ochi, Mohamed Réda; Boudhas, Adil; Allaoui, Mohammed; Rharrassi, Issam; Chahdi, Hafsa; Bouzidi, Abderrahman Al; Oukabli, Mohammed
2015-01-01
Cutaneous adnexal tumours are primary skin tumours that are both rare and heterogeneous. They are most often benign and rarely malignant. Morphologically, they are marked by their lesional polymorphism. The aim of this study was to establish the epidemiological profile and the various anatomopathological features of this group of tumours in a Moroccan cohort and to compare them with the literature. This is a retrospective study of 96 cases of cutaneous adnexal tumours collected at the pathology department of the Mohammed V Military Teaching Hospital in Rabat over a six-year period, from January 2009 to December 2014. The peak frequency lies between 31 and 40 years of age. The mean age is 36 years, with a male predominance (63.5%). The preferred site is the head and neck region (47.9%). Benign tumours (97.9%) are more frequent than malignant ones. Differentiation is follicular in 51% of cases, eccrine/apocrine in 44.8% and sebaceous in 4.2%. The most frequent histological type is pilomatricoma (33.4%), followed by hidradenoma (12.5%) and eccrine spiradenoma (11.5%). Cutaneous adnexal tumours are rare and highly varied. The epidemiological profile and anatomopathological features found here are broadly consistent with those reported in the literature. The tumours are mostly benign, show a male predominance, and are dominated by pilomatricoma and nodular hidradenoma. Malignant tumours are rare, aggressive, and occur at a more advanced age. PMID:26185579
Védy, S; Garnotel, E; Koeck, J-L; Simon, F; Molinier, S; Puidupin, A
2007-11-01
To determine the origin of S. aureus acquired by hospitalised patients and to evaluate the transmission of strains between health-care workers and hospitalised patients, a prospective study was conducted in high-risk clinical wards. Nasal swabs of patients and health-care workers were taken to isolate bacterial samples. Characterisation and comparison of the bacterial strains were performed using their antibiotic resistance profile and a recent molecular genotyping technique called MLVA (multiple-locus variable-number tandem-repeat analysis), which had never been used in this context. One hundred and fifty-seven strains were isolated and compared by performing 1900 PCRs and agarose gel electrophoreses in 10 days. Fifteen clones were identified. One of them predominates among both nasal-carriage and acquired strains. As far as the antibiotype and agr type are concerned, it is similar to the hospital-acquired clone described in Europe with other techniques (MRSA, gentamicin-susceptible, agr 1). This clone also appears to be transmitted between health-care workers and patients; although such transmission exists, its intensity cannot be assessed. These results do not justify systematic screening for nasal carriage among our health-care workers. This study shows that MLVA could be a reliable molecular typing method suitable for everyday practice. In our experience, it performs as well as PFGE while being more didactic, faster and easier.
NASA Astrophysics Data System (ADS)
Filali, Bilai
Graphene, as an advanced carbon nanostructure, has recently attracted a great deal of scholarly interest because of its outstanding mechanical, electrical and thermal properties. There are several practical ways to synthesize graphene, such as mechanical exfoliation, chemical vapor deposition (CVD) and anodic arc discharge. This thesis discusses a plasma-based graphene synthesis method driven by erosion of the anode material; it is one of the most practical methods and can provide a high production rate. High-purity graphene flakes were synthesized with the anodic arc method at a pressure of about 500 torr. Raman spectroscopy, scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM) were used to characterize the synthesis products. Arc-produced graphene and commercially available graphene were compared with these instruments; the differences lie in the number of layers, the thickness of each layer and the shape of the structure itself. The temperature dependence of the synthesis procedure was studied. It was found that graphene can be produced on a copper-foil substrate at temperatures near the melting point of copper, whereas decreasing the substrate temperature transforms the synthesized graphene into amorphous carbon. A glow discharge was used to functionalize the graphene; SEM and EDS observations indicated an increase in the oxygen content of the graphene after its exposure to the glow discharge.
NASA Astrophysics Data System (ADS)
Kodjo, Apedovi
The aim of this thesis is to contribute to the non-destructive characterization of concrete damaged by the alkali-silica reaction (ASR). For this purpose, several nonlinear characterization techniques were developed, together with a nonlinear resonance test device. To optimize the sensitivity of the device, the excitation module and the signal processing were improved. The nonlinear tests were conducted on seven concrete samples damaged by ASR, three damaged by heat, three damaged mechanically and three sound concrete samples. Since the nonlinear behaviour of a material is often attributed to the hysteretic behaviour of its micro-defects, it was first shown that concrete damaged by ASR exhibits hysteretic behaviour. To conduct this study, an acoustoelastic test was set up, and the nonlinear resonance device was then used to characterize sound concrete and concrete damaged by ASR. It was shown that the nonlinear technique can characterize the material without knowledge of its initial state, and can also detect early damage in the reactive material. Studies were also carried out on the effect of moisture on the nonlinear parameters; they explain the low values of the nonlinear parameters measured on concrete samples kept in high-moisture conditions. In order to find a characteristic specific to ASR damage, the viscosity of the ASR gel was exploited: a static creep analysis was performed on the material while applying the nonlinear resonance technique, and Maxwell's spring-damper model was used to interpret the results. The creep time was then analysed on samples damaged by ASR; it appears that the ASR gel increases the creep time. Finally, the limitations of the nonlinear resonance technique for in situ application are explained, and a new, field-applicable nonlinear technique is introduced. This technique uses an external source, such as a mass, to induce nonlinear behaviour in the material while an ultrasound wave probes the medium. Keywords: concrete, alkali-silica reaction, nonlinear acoustics, nonlinearity, hysteresis, damage diagnostics.
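In nonlinear resonance testing of the kind described above, hysteretic materials commonly show a resonance-frequency shift that grows linearly with strain amplitude, (f - f0)/f0 = -α·ε, and α is extracted by regression. A minimal sketch with synthetic, hypothetical numbers (not the thesis measurements):

```python
import numpy as np

# Hysteretic (non-classical) nonlinearity: the resonance frequency f shifts
# linearly with the driving strain amplitude eps: (f - f0)/f0 = -alpha * eps.
f0 = 5000.0                        # low-amplitude resonance frequency, Hz (made up)
alpha_true = 120.0                 # assumed nonlinearity parameter
eps = np.linspace(1e-7, 1e-5, 10)  # strain amplitudes swept during the test
f = f0 * (1.0 - alpha_true * eps)  # "measured" resonance frequencies

# alpha is recovered as minus the slope of the relative frequency shift
slope, _ = np.polyfit(eps, (f - f0) / f0, 1)
alpha_est = -slope
```

Damaged concrete shows a larger α than sound concrete, which is why this slope serves as a damage indicator.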
NASA Astrophysics Data System (ADS)
Mebarki, Fouzia
This study investigates the possibility of using thermoplastic-matrix composite materials for electrical applications such as ignition-system supports in automobile engines. We are particularly interested in composites based on recycled polyethylene terephthalate (PET). Conventional insulators such as PET cannot satisfy all the requirements. Introducing reinforcements such as glass fibres and mica can improve the mechanical characteristics of these materials. However, this improvement may be accompanied by a decrease in electrical properties, especially since these materials must operate under very severe thermal and electrical stresses. In order to estimate the service life of these insulators, accelerated ageing tests were carried out at a frequency of 300 Hz over a temperature range from room temperature to 140°C. The high-temperature study will determine the service temperature of the candidate materials. Dielectric breakdown tests were performed on a large number of samples according to ASTM D-149, the standard for measuring the dielectric strength of solid insulators. These tests identified problematic samples and verified the quality of these solid insulators. The knowledge gained from this analysis was used to predict the in-service performance of the materials and will allow the Groupe Lavergne company to improve existing formulations and subsequently develop a material with electrical and thermal properties adequate for this type of application.
NASA Astrophysics Data System (ADS)
Blanchard, Yann
An important goal, within the context of improving climate change modelling, is to enhance our understanding of aerosols and their radiative effects (notably their indirect impact as cloud condensation nuclei). The cloud optical depth (COD) and average ice particle size of thin ice clouds (TICs) are two key parameters whose variations could strongly influence radiative effects and climate in the Arctic environment. Our objective was to assess the potential of multi-band thermal measurements of zenith sky radiance for retrieving the COD and effective particle diameter (Deff) of TICs in the Arctic. We analyzed and quantified the sensitivity of the thermal radiance to several parameters, such as COD, Deff, water vapor content, cloud bottom altitude and thickness, size distribution and particle shape. Exploiting the sensitivity of the infrared radiances to COD and Deff, we developed a retrieval technique and applied it to ground-based thermal infrared data acquired for 100 TICs at the high-Arctic PEARL observatory in Eureka, Nunavut, Canada; the retrievals were validated using AHSRL LIDAR and MMCR RADAR data. The retrieval method successfully extracted COD up to values of 3 and separated TICs into two types: TIC1, characterized by small crystals (Deff < 30 μm), and TIC2, by large ice crystals (Deff > 30 μm, up to 300 μm). Inversions were performed across two polar winters. Finally, we propose several alternatives for applying our methodology in the Arctic. Keywords: remote sensing; ice clouds; thermal infrared multi-band radiometry; Arctic.
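Retrievals of this kind typically compare observed multi-band radiances with values simulated by a radiative-transfer model over a grid of candidate (COD, Deff) pairs. The sketch below is a generic least-squares lookup-table inversion, not the authors' code; the forward model is passed in as a function, and the toy model used in the test stands in for a real radiative-transfer code.

```python
import itertools

def retrieve(obs, forward, cods, deffs):
    """Brute-force lookup-table inversion: return the (COD, Deff) pair
    whose simulated multi-band radiances best match the observed
    radiances in the least-squares sense."""
    best = None
    for cod, deff in itertools.product(cods, deffs):
        sim = forward(cod, deff)
        cost = sum((s - o) ** 2 for s, o in zip(sim, obs))
        if best is None or cost < best[0]:
            best = (cost, cod, deff)
    return best[1], best[2]
```

In practice the forward simulations would be precomputed once into a table, and the grid resolution sets the retrieval precision.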
Evidence of Attenuation Zones under Vesuvius Volcano from Local and Regional Seismicity
NASA Astrophysics Data System (ADS)
Cubellis, E.; Marturano, A.
The seismicity at Vesuvius is characterised by moderate-energy events concentrated in the caldera area. The foci are shallow, with depths of less than 6 km below sea level. Periods of greater activity were recorded in 1989, 1990 and, more recently, in 1995 and 1996. On October 9, 1999, an earthquake (Ml=3.6), felt outside the Vesuvian area, took place at the Vesuvius crater. It was not only the most energetic event since the last eruption of 1944 but also one of the most energetic to occur in the Vesuvian area since Roman times, as shown by an analysis of historical seismicity. Following the 9 October 1999 event, questionnaires were sent to all middle schools in the Vesuvian area and surrounding towns in order to define the extent to which the earthquake had been felt. The felt index was thus obtained, which represents the percentage response to the question "Did you feel the earthquake?" and was used in later data processing. The felt index is a continuous parameter; this feature makes it possible, among other things, to relate it to ground-motion parameters and to overcome the limits involved in using integer values of intensity. In particular, the Q quality factor was determined by assuming direct proportionality between energy and felt index. The values obtained were Q=60-90 and Qa=100-150, in reasonable agreement with the P-wave quality factor of 70 to 100 reported below active volcanoes, consistent with high temperatures and generally associated with the presence of magmatic bodies. The nearby Southern Apennine seismogenic zone, 50-100 km from Vesuvius, is characterised by prevalent normal faulting and large historical earthquakes. The most recent, the Irpinia earthquake of November 23, 1980 (Ms=6.9), developed on at least three fault sources with Apenninic trend (NW-SE) and was characterised by elevated attenuation in both epicentral and external areas.
In particular, the macroseismic field showed a 25 km wide circular attenuation zone corresponding to the Vesuvian area, testifying to the presence of a probable large shallow structure characterised by ductile behaviour. The quality factor obtained from local seismicity, together with the extent of the circular attenuation zone observed for the regional earthquake, characterises the attenuation source under Vesuvius volcano.
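The abstract's Q estimate rests on assuming the felt index proportional to radiated seismic energy. Under a standard attenuation model, energy at hypocentral distance r decays as geometrical spreading times anelastic loss, E(r) ∝ exp(-2πfr/Qv)/r², so Q can be recovered from the decay of felt index with distance. The sketch below is purely illustrative: the functional form, default frequency and velocity are my assumptions, not the authors' procedure.

```python
import math

def estimate_q(dists_km, felt_index, freq_hz=3.0, vel_km_s=3.0):
    """Least-squares estimate of the quality factor Q, assuming the felt
    index F(r) is proportional to exp(-2*pi*f*r/(Q*v)) / r**2.
    Linearize: ln(F*r^2) = const - (2*pi*f/(Q*v)) * r, then fit the slope."""
    xs = list(dists_km)
    ys = [math.log(f * r * r) for f, r in zip(felt_index, dists_km)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -2.0 * math.pi * freq_hz / (slope * vel_km_s)
```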
NASA Astrophysics Data System (ADS)
Allen, Steve
2000-10-01
In this thesis we present a new non-perturbative method for computing the properties of a system of fermions. Our method generalizes the two-particle self-consistent approximation proposed by Vilk and Tremblay for the repulsive Hubbard model. It can be applied to the study of pre-critical behaviour when the symmetry of the order parameter is sufficiently high. We apply the method to the pseudogap problem in the attractive Hubbard model. Our results show excellent agreement with Monte Carlo data for small systems. We observe that the regime in which the pseudogap appears in the single-particle spectral weight is a renormalized classical regime, characterized by a characteristic frequency of the superconducting fluctuations that is lower than the temperature. Another characteristic is the low superfluid density of this phase, demonstrating that we are not in the presence of preformed pairs. The results obtained seem to show that the high symmetry of the order parameter and the two-dimensionality of the system studied widen the temperature range over which the pseudogap regime is observed. We argue that this result carries over to high-critical-temperature superconductors, where the pseudogap appears at temperatures much higher than the critical temperature. The strong symmetry in these systems could be related to Zhang's SO(5) theory. In an appendix, we prove a very recent result that would ensure self-consistency between the one- and two-particle properties through the addition of dynamics to the irreducible vertex. This addition suggests the possibility of extending the method to the strong-interaction case.
Radar response to crop residue cover and tillage application on postharvest agricultural surfaces
NASA Astrophysics Data System (ADS)
McNairn, Heather
Information on soil-conservation practices such as tillage and crop-residue management is required to estimate soil-erosion risks accurately. Although microwaves are sensitive to moisture conditions and to the geometric properties of surfaces, little is yet known about the sensitivity of linearly polarized microwaves or of SAR polarimetric parameters to residue characteristics. Using data acquired with a truck-mounted scatterometer in 1996 and during a 1994 SIR-C mission, this research demonstrated that microwaves are sensitive to both the amount and the type of residue cover, as well as to residue water content. The response of the linear cross-polarizations and of several polarimetric parameters, including pedestal height, showed that significant volume scattering occurred in the presence of standing senescent vegetation and for untilled fields. Surface scattering dominated, however, for fields with small amounts of residue and finer residue. The research also showed that complex surface conditions were created by different combinations of residue and tillage practices. Consequently, full characterization of post-harvest fields will have to await the multi-polarized or polarimetric data to be acquired by the sensors planned aboard the Canadian RADARSAT-2 satellite and the European Space Agency's ENVISAT satellite.
NASA Astrophysics Data System (ADS)
Tutashkonko, Sergii
This thesis concerns the elaboration of a new nanomaterial by bipolar electrochemical etching (BEE), mesoporous Ge, and the analysis of its physico-chemical properties with a view to its use in photovoltaic applications. The formation of mesoporous Ge by electrochemical etching has been reported previously in the literature. However, a major technological obstacle of the existing fabrication processes was obtaining thick layers (above 500 nm) of mesoporous Ge with perfectly controlled morphology; indeed, the physico-chemical characterization of thin films is much more complicated, and the number of their possible applications is strongly limited. We developed an electrochemical model that describes the main pore-formation mechanisms, which allowed us to produce thick mesoporous Ge structures (up to 10 μm) with porosity adjustable over a wide range, from 15% to 60%. Moreover, the formation of porous nanostructures with variable and well-controlled morphologies has now become possible. Finally, mastery of all these parameters has opened an extremely promising route toward Ge-based porous multilayer structures for numerous innovative and multidisciplinary applications, thanks to the technological flexibility now achieved. In particular, within this thesis, the mesoporous Ge layers were optimized in order to implement a thin-film layer-transfer process for a triple-junction solar cell via a sacrificial porous Ge layer. Keywords: mesoporous germanium, bipolar electrochemical etching, semiconductor electrochemistry, thin-film layer transfer, photovoltaic cell
Study of carrier dynamics in silicon nanowires by terahertz spectroscopy
NASA Astrophysics Data System (ADS)
Beaudoin, Alexandre
This master's thesis presents a study of the electrical-conduction properties and the temporal dynamics of charge carriers in silicon nanowires probed by terahertz radiation. Unintentionally doped and n-type doped silicon nanowires are compared for different configurations of the experimental setup. Terahertz transmission spectroscopy measurements show that it is possible to detect the presence of dopants in the nanowires via their absorption of terahertz radiation (~1-12 meV). The difficulties of modelling the transmission of an electromagnetic pulse through a nanowire system are also discussed. Differential detection, a modification of the terahertz spectroscopy system, is tested and its performance is compared with the standard characterization setup; instructions and recommendations for implementing this type of measurement are included. The results of an optical-pump/terahertz-probe experiment are also presented. In this experiment, the charge carriers temporarily created by absorption of the optical pump (λ ~ 800 nm) in the nanowires (the photocarriers) add to the carriers initially present and therefore increase the absorption of the terahertz radiation. First, the anisotropy of the absorption of the terahertz radiation and of the optical pump by the nanowires is demonstrated. Second, the photocarrier recombination time is studied as a function of the number of injected photocarriers, and a hypothesis explaining the behaviours observed for the undoped and n-doped nanowires is presented. Third, the photoconductivity is extracted for the undoped and n-doped nanowires over a range of 0.5 to 2 THz; a fit to the photoconductivity provides an estimate of the number of dopants in the n-doped nanowires. Keywords: nanowire, silicon, terahertz, conductivity, spectroscopy, photoconductivity.
NASA Astrophysics Data System (ADS)
Ayari-Kanoun, Asma
This thesis concerns the development of a new approach for the localization and organization of silicon nanocrystals produced by electrochemical etching, a simple and inexpensive technique compared with the other techniques commonly used for fabricating silicon nanocrystals. The idea of this work was to study the nanostructuring of thin silicon nitride layers, about 30 nm thick, to subsequently allow a periodic arrangement of the silicon nanocrystals. This pre-structuring is obtained artificially by imposing a periodic pattern via electron-beam lithography combined with plasma etching. Optimization of the lithography and plasma-etching conditions yielded arrays of 30 nm diameter holes opening onto the silicon, with good control of their morphology (size, depth and shape). By adjusting the electrochemical etching conditions (acid concentration, etching time and current density), we obtained ordered 2D arrays of 10 nm diameter silicon nanocrystals through these nanohole masks, with perfect control of their localization, the distance between nanocrystals, and their crystalline orientation. Preliminary electrical studies of these nanocrystals revealed charging effects. These very promising results confirm the future interest of silicon nanocrystals produced by electrochemical etching for the large-scale fabrication of nanoelectronic devices. Keywords: localization, organization, silicon nanocrystals, electrochemical etching, electron-beam lithography, plasma etching, silicon nitride.
NASA Astrophysics Data System (ADS)
Ammar Khodja, L'Hady
The rehabilitation and strengthening of concrete structures in shear using composite materials, either externally bonded (EB) or near-surface mounted (NSM) rebar, are well-established techniques. However, debonding of these strengthening materials is still present and constitutes the principal cause of shear failure of beams strengthened with composite materials. A new method called ETS (Embedded Through-Section) was recently developed to avoid premature failures due to debonding of composite materials. The objective of this study is to highlight the influence of important parameters on the behaviour of CFRP bar anchorages subjected to pullout forces. These parameters are: concrete strength, anchorage length of the CFRP bars, hole diameter in the concrete, bar diameter, and CFRP surface type (smooth versus sanded). Understanding the influence of these parameters on the relationship between pullout force and slip is paramount, as it allows an accurate description of the behaviour of all the elements that contribute to resisting CFRP bar pullout. A series of 25 specimens were subjected to pullout tests. The impact of these parameters on the pullout performance of CFRP rods is summarized in terms of failure mode, ultimate tensile strength, and load-slip relationship. The results of these investigations show that with the ETS method, anchor failure can be avoided by providing adequate anchorage length and concrete strength. The method provides greater confinement and thus leads to a substantial improvement in anchor performance. As a result, designers will be able to avoid failures due to anchor debonding, thereby exploiting the full capacity of beams strengthened in shear with EB FRP. Keywords: ETS method, shear, strengthening, anchor, slip, FRP, NSM.
NASA Astrophysics Data System (ADS)
Le Du, Mathieu
The use of phase change materials (PCMs) makes it possible to store and release large amounts of energy in reduced volumes by using latent-heat storage through melting and solidifying at specific temperatures. PCMs have received great interest for reducing energy consumption by easing the implementation of passive solar heating and cooling, and they can be integrated into buildings as wallboards to improve heat-storage capacity. In this study, an original experimental device was used to characterize the thermophysical properties of a composite wallboard containing PCMs. Generally, PCMs are characterized by calorimetric methods that use very small quantities of material; the device used here can characterize samples of large dimensions, as they would be used in real conditions. Apparent thermal conductivity and specific heat were measured at various temperatures. During the phase-change process, total and latent heat-storage capacities were evaluated along with the peak melting and freezing temperatures. The results are compared with the manufacturer's data and with data from the literature; inconsistencies were found between sources. Despite several differences from published data, the overall results are similar to the most recent information, which validates the original experimental device. Thermal disturbances due to hysteresis were noticed and are discussed. The results support recommendations on the thermal procedure and experimental device needed to characterize this kind of material efficiently: temperature ranges and heating and freezing rates affect the results and must be considered in the characterization, and experimental devices must be designed to allow similar heating and freezing rates so that results during melting and freezing can be compared.
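The apparent specific heat mentioned above follows from a simple energy balance on the sample, c(T) = q / (m · dT/dt); near the phase change, the latent heat inflates c(T) into the characteristic melting peak. A minimal sketch of that calculation (function and variable names are mine, not the authors'):

```python
def apparent_specific_heat(heat_flux_w, mass_kg, temps_c, times_s):
    """Apparent specific heat c(T) = q / (m * dT/dt), evaluated between
    successive samples of a slow temperature ramp.
    Returns a list of (mid-interval temperature, c in J/(kg*K))."""
    out = []
    for k in range(1, len(temps_c)):
        rate = (temps_c[k] - temps_c[k - 1]) / (times_s[k] - times_s[k - 1])
        t_mid = 0.5 * (temps_c[k] + temps_c[k - 1])
        out.append((t_mid, heat_flux_w[k] / (mass_kg * rate)))
    return out
```

Integrating the resulting c(T) curve over the melting range, minus the sensible baseline, gives an estimate of the latent heat-storage capacity.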
Finite element modelling of striated muscle
NASA Astrophysics Data System (ADS)
Leonard, Mathieu
This research project created a finite element model of human striated muscle in order to study the mechanisms that produce traumatic muscle injuries. The model constitutes a numerical platform capable of discerning the influence of the mechanical properties of the fasciae and the muscle cell on the dynamic behaviour of the muscle during an eccentric contraction, notably the Young's modulus and shear modulus of the connective-tissue layer, the orientation of that membrane's collagen fibres, and the Poisson's ratio of the muscle. In vitro experimental characterization of these parameters at high strain rates, from active human striated muscle, is essential for the study of traumatic muscle injuries. The numerical model developed is capable of representing muscle contraction as a phase transition of the muscle cell, through a change in stiffness and volume, using the material constitutive laws predefined in the LS-DYNA software (v971, Livermore Software Technology Corporation, Livermore, CA, USA). The project thus introduces a physiological phenomenon that could explain common muscle injuries (cramps, soreness, strains, etc.), but also diseases or disorders affecting connective tissue such as collagenoses and muscular dystrophy. The predominance of muscle injuries during eccentric contractions is also addressed. The model developed in this project thus brings the concept of phase transition to the forefront, opening the door to the development of new technologies for muscle activation in people with paraplegia, or of compact artificial muscles for prostheses and exoskeletons. Keywords: striated muscle, muscle injury, fascia, eccentric contraction, finite element model, phase transition
NASA Astrophysics Data System (ADS)
El Mansouri, Souleimane
In the linear viscoelastic (LVE, small-strain) domain, the thermomechanical behaviour of bitumen and bituminous mastic (a uniform mixture of bitumen and fillers) was characterized at the Laboratoire des Chaussees et Materiaux Bitumineux (LCMB) of the Ecole de technologie superieure (ETS), with the support of our external partners, the Societe des Alcools du Quebec (SAQ) and Eco Entreprises Quebec (EEQ). The rheological properties of the bitumens and mastics were measured with a new investigative tool, the Annular Shear Rheometer (Rheometre a Cisaillement Annulaire, RCA), under different loading conditions. This apparatus not only allows testing of specimens of large size compared with those used in conventional tests, but also allows tests under quasi-homogeneous conditions, which gives access to the constitutive law of the materials. The tests were carried out over a wide range of temperatures and frequencies (from -15°C to 45°C and from 0.03 Hz to 10 Hz). This study was conducted mainly to compare the behaviour of a bitumen with that of a bituminous mastic in the small-strain domain. In a second step, we examined the influence of post-consumer glass fillers on the behaviour of a mastic at low strain levels by comparing the evolution of the complex shear moduli (G*) of a mastic with glass fillers and a mastic with conventional (limestone) fillers. Finally, the 2S2P1D analogical model is used to simulate the linear viscoelastic behaviour of the bitumens and bituminous mastics tested during the experimental campaign.
NASA Astrophysics Data System (ADS)
Carrier, Jean-Francois
Single-walled carbon nanotubes (C-SWNTs) are a recent class of nanomaterials that first appeared in 1991. The interest they attract stems from their many outstanding properties: their mechanical strength is among the highest known, and they conduct electricity and heat in an unequalled manner. Moreover, C-SWNTs promise to become a new class of molecular platform, serving as attachment sites for reactive groups. The promises of this particular type of nanomaterial are numerous; the question today is how to realize them. Induction thermal plasma synthesis is advantageously positioned for the quality of its products, its productivity and its low operating costs. However, recent research has brought to light exposure risks related to the use of cobalt as a synthesis catalyst; its elimination or replacement has become an important concern. Four alternative recipes were tested to find a safer alternative to the baseline recipe, a ternary catalytic mixture composed of nickel, cobalt and yttrium oxide. The first essentially replaces the mass fraction of cobalt with nickel, which was already present in the baseline recipe. The other three contain new catalysts, replacing Co, that have appeared in several scientific studies in recent years: zirconium dioxide (ZrO2), manganese dioxide (MnO2) and molybdenum (Mo). The method consists of vaporizing the solid feedstock in a water-cooled high-frequency (3 MHz) plasma reactor.
After passing through the plasma, the flow traverses a "growth" section, thermally insulated with graphite, to maintain a temperature range favourable to C-SWNT synthesis. The final product is then collected on porous metallic filters once the system is shut down. First, a thermodynamic analysis computed with the FactSage software shed light on the state of the various products and reactants throughout their passage through the system; it revealed the similarity in composition between the liquid phase of the baseline ternary catalytic mixture and that of the binary mixture of nickel and yttrium oxide. Next, an energy-balance analysis using a data-acquisition system established that the operating conditions of the five samples tested were similar. In total, the final product was characterized using six different methods: thermogravimetric analysis, X-ray diffraction, high-resolution scanning electron microscopy (HRSEM), transmission electron microscopy (TEM), Raman spectroscopy, and specific-surface-area measurement (BET). These analyses consistently showed that the molybdenum-based mixture produced the lowest product quality, followed, in increasing order, by the MnO2- and ZrO2-based mixtures. The cobalt-based reference mixture ranked second in quality; the best was the binary mixture with a doubled proportion of nickel. The results of this research show that there is a high-performing alternative to cobalt for synthesizing single-walled carbon nanotubes by induction thermal plasma:
a binary catalytic mixture of nickel and yttrium oxide. It is suggested that the weaker performance of the less effective alternative recipes could be explained by the fixed thermal profile of the reactor, which could favour certain mixtures to the detriment of others with different thermodynamic properties. The setup, equipment and operating parameters could be adapted to these catalysts in order to optimize the synthesis. Keywords: single-walled carbon nanotubes, induction thermal plasma, cobalt, nickel, zirconium dioxide, manganese dioxide, molybdenum, yttrium oxide, carbon black
Detecting the end of anode compaction by sound
NASA Astrophysics Data System (ADS)
Sanogo, Bazoumana
The objective of this project was to develop a real-time control tool for compaction time, using the sound generated by the vibrocompactor during the forming of green anodes. An application was therefore developed to analyze the recorded sounds. Trials were carried out with different microphones to obtain better measurement quality, and one was selected for the rest of the project. Likewise, various tests were performed on laboratory-scale and industrial-scale anodes in order to establish a method for detecting the optimal time needed to form the anodes. The work at the carbon laboratory of the Universite du Quebec a Chicoutimi (UQAC) consisted of recording the sound of anodes fabricated on site with different configurations, and of characterizing certain plant anodes. The anodes fabricated in the laboratory fall into two groups. The first comprises the anodes used to validate our method; these were produced with different compaction times. The carbon laboratory at UQAC is unique in being able to produce anodes with the same properties as industrial anodes; consequently, the validation initially planned at the plant was carried out with laboratory anodes. The second group served to study the effects of the raw materials on compaction time, the variations in this group being the type of coke and the type of pitch. The tests and measurements at the plant were carried out in three measurement campaigns. The first, in June 2014, served to standardize and find the best positioning of the instruments, to configure the software, and to take the first measurements. A second campaign, in May 2015, recorded sound while classifying the anodes according to different compaction times.
The third and final campaign, in December 2015, involved final tests at the plant, fabricating anodes under different criteria (variation of compaction time, pitch content, manual stopping of the compactor, variation of the pressure of the compactor's top air bags). These anodes were then analyzed at the UQAC laboratory. In parallel with this work, the sound-analysis application was improved through the choice and standardization of the analysis parameters. The results of the first laboratory tests and of the June 2014 campaign showed that anode forming proceeds in three stages: rearrangement of the particles and pitch, compaction and consolidation, and finally finishing. This work also showed that compaction time plays a very important role in defining the final properties of the anodes; thus, in addition to pitch type, pitch content and coke type, over-compaction and under-compaction times must be taken into account, as demonstrated by the two validations that were carried out. The characterization results for the samples (from the December 2015 campaign anodes) showed that an anode compacted for an optimal time acquires good compressive strength and lower electrical resistivity. We also note that in our case the compaction time decreased slightly as the pressure of the vibrocompactor's top air bags increased, which had the effect of increasing the green density of the anode. However, this observation should not be generalized, because the number of anodes tested was small. Furthermore, this study shows that the time needed to form an anode increases with pitch content and decreases slightly as air-bag pressure increases. (Abstract shortened by ProQuest.)
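The stopping criterion itself is not specified in the abstract. One simple proxy, sketched here purely for illustration (the frame-energy plateau test, tolerance and hold length are my choices, not the project's method), is to declare the end of compaction once the short-frame sound energy stops changing:

```python
import math

def compaction_end_frame(frames, tol=0.02, hold=5):
    """Return the index of the first audio frame after which the frame RMS
    energy stays within a relative tolerance `tol` for `hold` consecutive
    frames, taken as a proxy for the end of densification.
    Returns None if no such plateau is found."""
    rms = [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]
    run = 0
    for k in range(1, len(rms)):
        if abs(rms[k] - rms[k - 1]) <= tol * max(rms[k - 1], 1e-12):
            run += 1
            if run >= hold:
                return k - hold + 1
        else:
            run = 0
    return None
```

A production system would more likely track band-limited spectral features rather than broadband RMS, but the plateau-detection logic would be similar.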
NASA Astrophysics Data System (ADS)
Ecoffet, Robert; Maget, Vincent; Rolland, Guy; Lorfevre, Eric; Bourdarie, Sébastien; Boscher, Daniel
2016-07-01
We have developed a series of instruments for energetic particle measurements, associated with component test beds "MEX". The aim of this program is to check and improve space radiation engineering models and techniques. The first series of instruments, "ICARE", has flown on the MIR space station (SPICA mission), the ISS (SPICA-S mission) and the SAC-C low Earth polar orbiting satellite (ICARE mission 2001-2011) in cooperation with the Argentinian space agency CONAE. A second series of instruments, "ICARE-NG", was and is flown as: - CARMEN-1 mission on CONAE's SAC-D, 650 km, 98°, 2011-2015, along with three "SODAD" space micro-debris detectors - CARMEN-2 mission on the JASON-2 satellite (CNES, JPL, EUMETSAT, NOAA), 1336 km, 66°, 2008-now, along with JAXA's LPT energetic particle detector - CARMEN-3 mission on the JASON-3 satellite in the same orbit as JASON-2, launched 17 January 2016, along with a plasma detector "AMBRE", and JAXA's LPT again. The ICARE-NG instrument is a spectrometer composed of a set of three fully depleted silicon solid-state detectors used in single and coincidence mode. The on-board measurements consist of accumulating energy-loss spectra in the detectors over a programmable accumulation period. The spectra are generated through signal amplitude classification using 8-bit ADCs, resulting in 128/256-channel histograms. The discriminator reference levels, amplifier gain and accumulation time for the spectra are programmable to allow possible on-board tuning optimization. Ground-level calibrations have been made at ONERA-DESP using a radioactive source emitting alpha particles in order to determine the exact correspondence between channel number and particle energy. To obtain the response functions to particles, a detailed sectoring analysis of the satellite associated with GEANT-4/MCNP-X calculations has been performed to characterize the geometrical factors of each detector for p+ as well as for e- at different energies.
The component test bed "MEX" is equipped with two different types of active dosimeters, P-MOS silicon dosimeters and OSL (optically stimulated luminescence). Those dosimeters provide independent measurements of ionizing and displacement damage doses and consolidate spectrometers' observations. The data sets obtained cover more than one solar cycle. Dynamics of the radiation belts, effects of solar particle events, coronal mass ejections and coronal holes were observed. Spectrometer measurements and dosimeter readings were used to evaluate current engineering models, and helped in developing improved ones, along with "space weather" radiation belt indices. The presentation will provide a comprehensive review of detector features and mission results.
NASA Astrophysics Data System (ADS)
Barrette, Jeremie
This project falls within Dr. Sophie Lerouge's research axis on bioactive coatings.
Characterization of urban land use using radar imagery
Codjia, Claude
This study tests the relevance of medium- and high-resolution SAR images for characterizing land-use types in urban areas. To this end, we rely on textural approaches based on second-order statistics, looking for the texture parameters most relevant for discriminating urban objects. We used Radarsat-1 fine mode in HH polarization, Radarsat-2 fine mode in dual and quad polarization, and Radarsat-2 ultrafine mode in HH polarization. The land-use classes sought were dense building, medium-density building, low-density building, industrial and institutional buildings, low-density vegetation, dense vegetation, and water. We first selected nine texture parameters for analysis, grouped into families according to their mathematical definitions. The similarity/dissimilarity parameters include Homogeneity, Contrast, the Inverse Difference Moment and Dissimilarity. The disorder parameters are Entropy and the Angular Second Moment. Standard Deviation and Correlation are the dispersion parameters, and the Mean forms a family of its own. The experiments show that certain combinations of texture parameters from different families yield good classification results, while others produce kappa values of very little interest. Furthermore, while using several texture parameters improves the classifications, performance plateaus beyond three parameters; the correlations computed between the textures and their principal axes confirm this result. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model was developed based on the radar cross-section (RCS). A radar simulation based on the digital surface model of the environment allowed us to extract the building backscatter zones and to analyze the related backscatter.
We were thus able to devise a strategy for compensating cardinal effects based solely on the responses of the objects according to their orientation with respect to the radar illumination plane. A compensation algorithm based on the radar cross-section proved appropriate. Examples of the application of this algorithm to HH-polarized RADARSAT-2 images are presented. Applying the algorithm allows considerable gains in certain forms of automation (classification and segmentation) of radar imagery, and hence a higher quality of visual interpretation. Applied to RADARSAT-1 and RADARSAT-2 images with HH, HV, VH and VV polarizations, it eliminated most of the classification errors due to cardinal effects.
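The second-order texture parameters listed above are derived from a grey-level co-occurrence matrix (GLCM). A minimal pure-NumPy sketch for a single horizontal pixel offset; the image, grey-level count and offset are illustrative, not the study's settings:

```python
import numpy as np

# Sketch: GLCM texture parameters of the kind used to discriminate urban
# land-use classes in SAR imagery (single offset (0, 1); illustrative data).

def glcm(img, levels):
    """Normalized grey-level co-occurrence matrix for horizontal offset (0, 1)."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    return {
        "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
        "contrast": np.sum(P * (i - j) ** 2),
        "dissimilarity": np.sum(P * np.abs(i - j)),
        "asm": np.sum(P ** 2),  # angular second moment
        "entropy": -np.sum(P[P > 0] * np.log(P[P > 0])),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img, levels=4))
```

In practice these features are computed in a sliding window over the SAR scene and fed to the classifier; libraries such as scikit-image provide equivalent `graycomatrix`/`graycoprops` routines.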
Vincent, Jean-Baptiste
This Master's thesis is part of a multidisciplinary optimization project initiated by the Consortium for Research and Innovation in Aerospace in Quebec (CRIAQ) to design and manufacture a morphing wing demonstrator. The morphing approach adopted in this project is based on airfoil thickness variation applied to the upper skin. This morphing shifts the position of the laminar-to-turbulent boundary layer transition on top of the wing, which leads to significant changes in the aerodynamic performance of the wing. The study presented here focuses on the design of the conventional aileron actuation system and on the characterization of the high-sensitivity differential pressure sensors installed on the upper skin to determine the laminar-to-turbulent transition position. It also covers the data acquisition system for the structural test validation of the morphing wing. The aileron actuation system is based on a linear actuator driven by a brushless motor; the component choices and the command method are presented, followed by a static validation and a wind tunnel validation. The pressure sensor characterization is performed by installing three of these high-sensitivity differential pressure sensors in a two-dimensional airfoil of known geometry. The study goes through the process of determining the sensor positions needed to observe the transition area, using a statistical computational fluid dynamics (CFD) approach. The laminar-to-turbulent transition position is validated through a series of wind tunnel tests. A structural test was executed to validate the wing structure. This thesis also presents the data acquisition system for the microstrain measurements installed inside the morphing wing, together with a description of the hardware and software architecture and the practical results.
Clusters of galaxies: a review
Pierre, M.
After briefly describing the three main components of clusters of galaxies (dark matter, gas and galaxies), we present clusters from a theoretical viewpoint: they are the largest equilibrium entities known in the universe. Consequently, clusters of galaxies play a key role in any cosmological study and are essential for our global understanding of the universe. In the general introduction, we outline this fundamental aspect, showing how the study of clusters can help constrain the various cosmological scenarios. Once this cosmological framework is set, the following chapters present a detailed analysis of cluster properties and of their cosmic evolution as observed in different wavebands, mainly the optical (galaxies), X-ray (gas) and radio (gas and particles) ranges. We shall see that the detailed study of a cluster requires studying the interactions between its different components; this is the necessary step to ultimately derive the fundamental quantity which is the cluster mass. This will be the occasion for an excursion into extremely varied physical processes, such as the multi-phase nature of the intra-cluster medium, lensing phenomena, starbursts and morphological evolution in cluster galaxies, and the interaction between the intra-cluster plasma and the relativistic particles accelerated during cluster mergers. For each waveband, we briefly outline the dedicated observing and analysis techniques, which are of special interest in the case of space observations. Finally, we present several ambitious projects for the next generation of observatories, on the 2010 horizon, and their expected impact on the study of clusters of galaxies.
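The "fundamental quantity which is the cluster mass" is classically derived from X-ray observations by assuming hydrostatic equilibrium of the intra-cluster gas; the standard relation behind that step (not written out in the review itself) reads:

```latex
% Hydrostatic-equilibrium mass within radius r, from the X-ray gas
% density profile n(r) and temperature profile T(r).
% mu: mean molecular weight of the gas, m_p: proton mass.
M(<r) = -\,\frac{k_B\, T(r)\, r}{G\, \mu\, m_p}
        \left( \frac{\mathrm{d}\ln n}{\mathrm{d}\ln r}
             + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r} \right)
```

Lensing and galaxy dynamics provide independent mass estimates, which is why the interplay between the components discussed above matters.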
Favre, Audrey
Rubber composites are widely used in several engineering fields, such as automotive, and more recently for inflatable dams and other innovative underwater applications. These materials consist of an elastomeric matrix reinforced by a synthetic fabric. Since the components are expected to operate for several years in a water environment, their durability must be guaranteed. The use of rubber materials immersed in water is not new; such materials have been studied for almost a century. However, knowledge of reinforced rubber composites immersed in water for several years is still limited. In this work, reinforced rubbers were investigated in the framework of a research project in partnership with Alstom and Hydro-Quebec. The objective was to identify rubber composites that could be used under water for long periods. Various composites with ethylene-propylene-diene monomer (EPDM), silicone, EPDM/silicone and polychloroprene (Neoprene) matrices, reinforced with E-glass fabric, were studied. These materials were exposed to accelerated ageing at 85 °C under water for periods varying from 14 to 365 days. For comparison, they were also immersed and aged for one year at room temperature (21 °C). The impact of accelerated ageing was estimated through three characterization methods. Scanning electron microscopy (SEM) was first used to assess the quality of the fiber-matrix interface. Water absorption tests were then performed to quantify the rate of water absorption during immersion. Finally, the evolution of the mechanical properties was followed through the Young's modulus (E) and ultimate stress (sigma_u) determined by a dedicated tensile test. This analysis showed that the quality of the fiber-matrix interface was the main factor influencing the drop of the mechanical properties and their durability.
Moreover, this interface could be improved by using an appropriate coupling agent, as confirmed by the silicone composite with treated fabric. It was also observed that the fiber-matrix interface could be a place where high stresses localize because of differential swelling, leading to a significant loss of mechanical properties. The results revealed very different behaviors from one composite to another. The accelerated ageing of the EPDM/silicone and Neoprene composites led to a rapid drop of mechanical properties in only 14 days. Conversely, the silicone composites showed a 20% increase in mechanical properties after 75 days of immersion. The EPDM composites exhibited an important variability from one sample to another. It can be concluded from this study that composites made from a silicone matrix with treated E-glass offer the best durability under water. Keywords: elastomer composite, accelerated ageing, immersion in water
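The water absorption tests mentioned above boil down to a simple gravimetric ratio tracked over immersion time; a minimal sketch with illustrative sample masses (not data from the study):

```python
# Sketch: gravimetric water uptake used to follow absorption during
# immersion ageing. Masses are illustrative values in grams.

def water_uptake_percent(dry_mass, wet_mass):
    """Mass gain relative to the dry sample, in percent."""
    return 100.0 * (wet_mass - dry_mass) / dry_mass

print(water_uptake_percent(10.0, 10.5))  # 5.0 %
```

Plotting this uptake against the square root of immersion time is the usual way to check for Fickian diffusion behaviour.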
Zoghlami, Karima; Lopez-Arce, Paula; Navarro, Antonia; Zornoza-Indart, Ainara; Gómez, David
2017-04-01
Monuments and historical buildings of Bizerte show a disturbing state of degradation. In order to propose compatible materials for restoration works, such as substitution stone and restoration mortars, the geological context was analysed with the objective of locating historical quarries, together with a sedimentological study to identify the exploited geological formations. Petrophysical and chemical characterization of both stone and mortars was carried out. To determine the origin of the erosion and the degree of stone decay, a combination of micro-destructive and non-destructive techniques was used on site and in the laboratory. Moisture measurements, ultrasonic velocity propagation and water absorption by the Karsten pipe test, together with polarized-light and fluorescence optical microscopy, mercury intrusion porosimetry and ion chromatography, were used for the petrophysical characterization of stone samples and the determination of soluble salts. For the characterization of the mortars, a granulometric study was performed to determine the nature of the components and their grain-size distribution. Thin sections of mortar samples were examined for petrographical and mineralogical characterization. X-ray diffraction (XRD) analysis of finely pulverized samples was performed to identify the crystalline mineral phases of the mortars. Thermal analyses [thermogravimetry (TG)] were performed to determine the nature of the binder and its properties. Porosity was determined following the UNE-EN 1936 (2007) standard test. The geological and petrographical study showed that the historical buildings are essentially built with a highly porous bioclastic calcarenite, partially cemented by calcite, which is Würmian in age and outcrops all along the northern coast of Bizerte, where several historical quarries were identified.
Occasionally, two other lithologies were used as building stones: two varieties of Oligocene sandstone (a brown quartz-arenite cemented by iron oxide and an ochre-green sandstone cemented by calcite) and an Eocene white limestone corresponding to a fine-grained globigerina wackestone according to the Dunham classification. The petrophysical study shows that small variations in the petrographic characteristics of the building geomaterials, such as the type and degree of cementation, the configuration of the porous network and the presence or absence of soluble salts, lead to differential stone weathering. The mortar study shows that the original and restoration mortars have similar mineralogical compositions but different grain-size distributions and binder/aggregate proportions; they also differ in the nature of the raw materials, as demonstrated by the thermal analyses. The study shows that small variations in these parameters can affect the durability and performance of the mortars and can accelerate the degradation of the building stones, especially the Oligocene and Eocene lithotypes.
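The porosity determination cited above (UNE-EN 1936) rests on three weighings of a vacuum-saturated sample; a small sketch of the open-porosity computation as commonly stated for that standard, with illustrative masses (not measurements from this study):

```python
# Sketch: open porosity from a UNE-EN 1936-style saturation test.
# md = dry mass, ms = water-saturated mass in air,
# mh = saturated mass weighed under water (hydrostatic weighing).
# Masses below are illustrative values in grams.

def open_porosity(md, ms, mh):
    """Open porosity as a percentage of the bulk volume."""
    return 100.0 * (ms - md) / (ms - mh)

print(open_porosity(md=200.0, ms=230.0, mh=110.0))  # 25.0 %
```

The denominator (ms - mh) is the bulk volume by Archimedes' principle, which is why no direct volume measurement is needed.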
Mijiyawa, Faycal
This study adapts wood-fibre thermoplastic composites to gears, enables the manufacture of a new generation of gears, and predicts the thermal behaviour of these gears. An extensive literature review on thermoplastics (polyethylene and polypropylene) reinforced with wood fibres (birch and aspen), and on the formulation and thermomechanical behaviour of plastic-composite gears, established the link with the present doctoral thesis. Many studies on the formulation and characterization of wood-fibre composites have already been carried out, but none addressed gear manufacturing. The formulation techniques drawn from the literature made it possible to obtain a composite with nearly the same properties as the plastics (nylon, acetal, etc.) used in gear design. The formulation of the wood-fibre-reinforced thermoplastics was carried out at the Centre de recherche en matériaux lignocellulosiques (CRML) of the Université du Québec à Trois-Rivières (UQTR), in collaboration with the Department of Mechanical Engineering, by compounding the composites on a two-roll Thermotron-C.W. Brabender mill (model T-303, Germany); parts were then produced by thermocompression moulding. The thermoplastics used in this thesis are polypropylene (PP) and high-density polyethylene (HDPE), reinforced with birch and aspen fibres. Because wood fibre and thermoplastic are incompatible, a chemical treatment with a coupling agent was applied to increase the mechanical properties of the composites.
For the polypropylene/wood composites: (1) The tensile elastic moduli and ultimate strengths of the PP/birch and PP/aspen composites increase linearly with fibre content, with or without coupling agent (maleated polypropylene, MAPP). Moreover, fibre-plastic adhesion is improved using only 3% MAPP, which raises the ultimate strength, although no significant effect is observed on the elastic modulus. (2) Overall, the tensile properties of the PP/birch, PP/aspen and PP/birch/aspen composites are very similar. The wood-plastic composites (WPCs), particularly those containing 30% and 40% fibre, have higher elastic moduli than some plastics used in gear applications (e.g. nylon). For the polyethylene/wood composites with 3% maleated polyethylene (MAPE): (1) Tensile tests: the elastic modulus rises from 1.34 GPa to 4.19 GPa for HDPE/birch and to 3.86 GPa for HDPE/aspen; the ultimate strength rises from 22 MPa to 42.65 MPa for HDPE/birch and to 43.48 MPa for HDPE/aspen. (2) Flexural tests: the elastic modulus rises from 1.04 GPa to 3.47 GPa for HDPE/birch and to 3.64 GPa for HDPE/aspen; the ultimate strength rises from 23.90 MPa to 66.70 MPa for HDPE/birch and to 59.51 MPa for HDPE/aspen. (3) Poisson's ratio, determined by acoustic impulse, is about 0.35 for all HDPE/wood composites. (4) TGA thermal-degradation tests show that the composites have a thermal stability intermediate between that of the wood fibres and that of the HDPE matrix.
(5) Wettability (contact-angle) tests show that adding wood fibres does not significantly reduce the water contact angle, because the birch or aspen fibres appear to be enveloped by the matrix at the composite surface, as shown by scanning electron microscope (SEM) images. (6) The Lavengood-Goettler model best predicts the elastic modulus of the thermoplastic/wood composites. (7) HDPE reinforced with 40% birch is best suited for gear manufacturing, since shrinkage during mould cooling is smaller. The numerical simulation predicts the equilibrium temperature well at 500 rpm, whereas at 1000 rpm the model diverges. (Abstract shortened by ProQuest.)
Semiconductor devices for photonic biodetection and hyperspectral imaging
Lepage, Dominic
The creation of a biochemical analysis microsystem capable of delivering preliminary diagnostics on the quantification of pathogens is a multidisciplinary challenge with a potentially large impact on most human activities in health and safety. An integrated, inexpensive device delivering easily interpreted results would bring biodetection capabilities within reach of various societal and industrial fields of application. The present document focuses on the monolithic integration of a biocharacterization method in order to produce a miniaturized, efficient transducer, the core element of a detection microsystem. The research project presented here studies the applicability of a plasmonic sensor integrated through semiconductor nanostructures with quantum and luminescent properties. The approach is global: it addresses the fundamental questions involved in understanding the photonic phenomena, the development and fabrication of the devices, the possible characterization methods, and the application of an integrated SPR transducer to biodetection. In other words: under what circumstances, and how, should an integrated plasmonic transducer be built for application to the delocalized detection of pathogens? To produce an instrument that is simple at the user level, knowledge is integrated at the design level. Monolithic plasmonic sensors are thus designed using the theoretical models presented here. A conjugate hyperspectral measurement instrument, able to map directly the dispersion relation of diffracted plasmons, was built and tested, and was used to map scattering elements.
Finally, a demonstration of the device, applied to the biocharacterization of simple events such as bovine serum albumin and the detection of a specific strain of influenza A, is delivered. This answers the feasibility question for a plasmonic nanosystem applicable to pathogen detection. Keywords: biosensor; surface plasmons; light scattering; quantum semiconductor; conjugate microscopy; influenza A virus
Aubree, Nathan
Since 1990, the constitutive concrete model EPM3D (Multiaxial Progressive Damage in 3 Dimensions) has been developed at Polytechnique Montreal. Bouzaiene and Massicotte (1995) chose the hypoelastic approach, with the concept of equivalent deformation and the implementation of a scalar damage parameter to represent the microcracking of concrete in pre-peak compression. The post-peak softening behaviour, in tension and in compression, is based on the concept of conservation of fracture energy. In the finite element context, this requires defining a localization limiter acting on the softening modulus as a function of element size. The formulation of the EPM3D model for post-peak compression required revisions: mesh-dependence problems and the absence of confinement effects were the most important points to improve, the main goal being the modelling of the failure of reinforced concrete columns. Through a complete literature review, we establish an exhaustive list of the numerous parameters influencing the softening behavior under uniaxial and multiaxial loads. In the second part of this review, we highlight the difficulties of modelling a softening material with finite elements and the principle of the localization limiter that was set up. Inspired by models from the literature, modifications of the previously established relation are proposed, focusing on a more adequate representation of the behavior under confinement. We then validate the model by means of simple analyses with the software ABAQUS and its explicit dynamic resolution module, Explicit, and present its specificities compared with a classic implicit static resolution. We offer advice to the reader and to future students who may model real reinforced concrete columns with EPM3D.
Finally, we carried out an experimental program to characterize the post-peak behavior in uniaxial compression of a fiber-reinforced concrete (FRC) mixture, with the aim of assessing whether our model can be extrapolated to FRC.
Diakite, Ibrahim Soumaila
The annual production of grave-bitume (GB) in the province of Quebec, 682,757 tons, represents 17% of total production, just behind ESG-10 at 49.9%. Used as a base layer, grave-bitume easily withstands traffic stresses. The mix design of GB requires two different aggregate stockpiles, and the amount of binder needed is rather high, which makes these mixes expensive. To address this, different mixes with filler contents from 9.5% to 15% were designed with a PG 64-28 bitumen, following the aggregate packing optimization used for the SMA-Cpack developed at ETS. Two mixes were characterized, EBHP-20_11% with 11% filler and EBHP-20_15% with 15% filler, selected to study the influence of the amount of filler on the thermo-mechanical characteristics. The results show that the EBHP-20_11% can replace the GB-20 as a base-layer mix in a flexible pavement: the rutting resistance of both mixes is similar, as is the thermal cracking behaviour. For the complex modulus, the norm of the complex modulus at 10°C and 10 Hz is slightly higher for the EBHP than for the GB-20. However, the EBHP-20_11% showed a fatigue resistance lower than that of the GB-20, while the EBHP-20_15% showed a better fatigue resistance than the GB-20, probably due to its higher filler content.
Formability Extension of Aerospace Alloys for Tube Hydroforming Applications
Anderson, Melissa
Tube hydroforming is an innovative metal-forming process that uses fluid pressure, generally water, in a closed die to plastically deform thin-walled parts and thereby manufacture tubular components with complex geometries. The process has many advantages, such as part-weight reduction, lower tooling and assembly costs, fewer manufacturing steps, and the excellent surface finish of hydroformed parts. Despite these assets, hydroforming remains a marginal process in the aerospace field because of several factors, including the limited formability of aeronautical alloys. The main objective of the research conducted in this thesis is to study a method for increasing the formability of two selected aerospace alloys through a multi-step forming process that includes forming cycles followed by intermediate heat-treatment steps. An exhaustive literature review of the existing methods for improving the formability of the targeted materials, and of the available softening heat treatments, led to an appropriate experimental procedure. The process comprises several successive forming and heat-treatment sequences until the final part is obtained. The insertion of intermediate heat treatments, and the combination of deformation and heat treatment, affect the mechanical and metallurgical behaviour of the alloys. A complete material characterization was therefore conducted at each step of the process. From a mechanical standpoint, the effect of the heat treatments, and more generally of the multi-step process, on the mechanical properties and constitutive laws of the alloys was studied in detail.
At the metallurgical level, the influence of the process on microstructural characteristics of the alloys, such as grain size, phases present and texture, was analysed. These two parallel studies provided a complete, detailed understanding of the impact of the metallurgical changes induced by the multi-step process on the macroscopic mechanical behaviour and the final properties of the part.
Manufacturing of conical parts by flexible injection for aerospace applications
Shebib Loiselle, Vincent
Composite materials have been used in rocket-engine nozzles since the 1960s. Today, the advent of three-dimensional fabrics offers an innovative solution to the delamination problem that limited the mechanical properties of these composites. Using these fabrics, however, requires manufacturing processes that are better adapted to them. A new manufacturing method for composite parts for aerospace applications was studied throughout this work. It applies the principles of flexible injection (the Polyflex process) to the fabrication of thick conical parts. The validation part to be manufactured is a reduced-scale model of a rocket-engine nozzle part, composed of a three-dimensional carbon-fibre reinforcement and a phenolic resin. The success of the project is defined by several criteria on reinforcement compaction and wrinkle formation, and on porosity formation in the manufactured part. Many steps were required before two validation parts could be fabricated. First, to meet the compaction criterion, a characterization tool was designed; the compaction study provided the information needed to understand the deformation of an axisymmetric 3D reinforcement. Next, the injection principle for the part was defined for this new process. To validate the proposed concepts, the permeability of the fibrous reinforcement and the viscosity of the resin had to be characterized. With these data, a series of flow simulations of the part injection were performed and the filling time was estimated. The nozzle mould was then designed, supported by a mechanical simulation of its resistance to the manufacturing conditions.
In addition, several tools required for manufacturing were designed and installed in the new CGD (large-scale composites) laboratory. In parallel, several studies were conducted to understand the phenomena influencing the polymerization of the resin.
Saghir, Hassane
Aircraft systems are interconnected by cable bundles that can total a hundred kilometres. This wiring penalizes aircraft weight, cable bundles favour electromagnetic interference on board, and routing a new cable to integrate new equipment boxes into an in-service aircraft requires a lot of retrofit work. Consequently, the aviation industry and aerospace community are working, within different projects, on new alternatives that will better fit future generations of aircraft and help reduce interconnecting wires on board. Wireless technologies represent a coveted solution that could bring significant improvements and benefits to new generations of aircraft. This research work focuses on the study of wireless propagation over several frequency bands inside commercial aircraft. The main objective is to provide conclusions and recommendations on criteria that may help optimize wireless communication without impacting existing systems. Targeted applications are the in-flight entertainment (IFE) service and wireless sensing systems. This work was conducted in collaboration with Bombardier Aerospace, based in Montreal (QC), in the frame of the AVIO-402 project under a CRIAQ grant (http://www.criaq.aero/). In this study, an experimental characterization of the propagation channel in the ISM bands around 2.4 GHz and 5.8 GHz was performed in a CRJ700 aircraft from Bombardier Aerospace. This characterization allowed us to extract the parameters needed to analyze the channel behavior. The measurement results show that the propagation characteristics are close to those of a typical indoor medium in terms of delay spread, and to those of a tunnel in terms of path loss. Then, 3D channel modeling and simulation were carried out with an RF prediction software package (Wireless InSite, Remcom). The simulations also consider the millimeter band around 60 GHz.
The simulations yielded analytical radio-coverage models, which were subsequently used to evaluate wireless link interference scenarios and performance metrics. Finally, these models were used to design a tapped delay line (TDL) channel model, with the goal of implementing it in a wireless transmission chain under Matlab.
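As a rough illustration of the tapped-delay-line idea (a generic sketch, not the AVIO-402 model itself), a TDL channel applies a small set of delayed, weighted copies of the input signal; the tap delays and gains below are hypothetical values, and the RMS delay spread is the usual second moment of the power delay profile:

```python
import numpy as np

def tdl_channel(signal, delays_s, gains_db, fs):
    """Apply a tapped-delay-line channel: y[n] = sum_k g_k * x[n - d_k].

    delays_s: tap delays in seconds (rounded to the nearest sample here).
    gains_db: average tap power in dB, used as fixed real gains in this sketch.
    fs: sampling rate in Hz.
    """
    taps = np.zeros(int(round(max(delays_s) * fs)) + 1)
    for d, g_db in zip(delays_s, gains_db):
        taps[int(round(d * fs))] += 10 ** (g_db / 20.0)
    return np.convolve(signal, taps)[: len(signal)]

def rms_delay_spread(delays_s, gains_db):
    """RMS delay spread of the power delay profile."""
    p = 10 ** (np.asarray(gains_db) / 10.0)
    d = np.asarray(delays_s)
    mean_d = np.sum(p * d) / np.sum(p)
    return np.sqrt(np.sum(p * (d - mean_d) ** 2) / np.sum(p))
```

A real TDL model would use measured tap statistics (e.g. Rayleigh fading per tap); fixed real gains keep the sketch minimal.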
Mise en oeuvre et caracterisation d'une methode d'injection de pannes a haut niveau d'abstraction
NASA Astrophysics Data System (ADS)
Robache, Remi
Nowadays, the effects of cosmic rays on electronics are well known. Various studies have demonstrated that neutrons are the main cause of non-destructive errors in circuits embedded on airplanes. Moreover, the shrinking of transistor sizes is making all circuits more sensitive to these effects. Radiation-tolerant circuits are sometimes used to improve robustness. However, such circuits are expensive and their technologies often lag a few generations behind those of non-tolerant circuits. Designers prefer to use conventional circuits with mitigation techniques to improve tolerance to soft errors. It is necessary to analyse and verify the dependability of a circuit throughout its design process, and conventional design methodologies need to be adapted to evaluate the tolerance to non-destructive errors caused by radiation. Designers need new tools and new methodologies to validate their mitigation strategies if they are to meet system requirements. In this thesis, we propose a new methodology for capturing the faulty behavior of a circuit at a low level of abstraction and applying it at a higher level. To that end, we introduce the new concept of faulty-behavior signatures, which allows creating, at a high level of abstraction (system level), models that reflect with high fidelity the faulty behavior of a circuit learned at a low level of abstraction (gate level). We successfully replicated the faulty behavior of an 8-bit adder and an 8-bit multiplier in Simulink, with correlation coefficients of 98.53% and 99.86% respectively. We propose a methodology for generating a library of faulty components in Simulink, allowing designers to verify the dependability of their models early in the design flow. Our results for three different circuits are presented and analyzed throughout this thesis.
Within the framework of this project, a paper was published at the NEWCAS 2013 conference (Robache et al., 2013). This work presents the new concept of the faulty-behavior signature, the signature-generation methodology we developed, and our experiments with an 8-bit multiplier.
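A toy illustration of the signature concept, far simpler than the thesis's gate-level/Simulink flow: here a single-event upset in an 8-bit adder is emulated by flipping one output bit, and the "signature" is simply the histogram of golden-XOR-faulty outcomes over random fault-injection trials. The fault model and all names are illustrative assumptions.

```python
import random

def adder8(a, b):
    """Golden 8-bit adder (wrap-around)."""
    return (a + b) & 0xFF

def faulty_adder8(a, b, bit):
    """Stand-in faulty adder: flip one output bit to emulate an SEU."""
    return adder8(a, b) ^ (1 << bit)

def error_signature(n_trials, rng):
    """Histogram of golden XOR faulty outputs over random injection trials."""
    hist = [0] * 256
    for _ in range(n_trials):
        a, b = rng.randrange(256), rng.randrange(256)
        bit = rng.randrange(8)          # random fault site
        hist[adder8(a, b) ^ faulty_adder8(a, b, bit)] += 1
    return hist
```

With this crude fault model every error is a single-bit flip, so the histogram is non-zero only at powers of two; a gate-level fault model would produce a richer distribution, which is what makes signatures informative.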
NASA Astrophysics Data System (ADS)
Pilon, Guillaume
This research aimed at the production of biomass char under pyrolytic conditions, targeting biochar as a soil amendment while also considering its application as biocoal, either for bioenergy or subsequent upgrading. The biomass char was produced in two bench-scale, batch-type, fixed-bed reactors with operating capacities of 1 and 25 gw.b./batch, respectively. Switchgrass (Panicum virgatum) was used for the tests. The production conditions studied involved temperatures of 300, 400 and 500 °C with short residence times (2.5 and 5 min). The effect of using CO2 as carrier gas was also compared with a common inert N2 environment. The effects of these parameters were correlated with important physicochemical characteristics of the biomass char. Analyses were also performed on the complementary pyrolytic products (bio-oil and gas). The biomass char extraction was performed with a Soxhlet apparatus using dichloromethane as the extracting solvent. The extracts were then characterized by GC-MS, allowing the identification of several compounds. Specific pyrolysis conditions used at 300 °C under N2 in the 1 g/batch reactor, namely high heating rates and strong convection, yielded advantageous biomass char yields and properties, and a possible improvement in torrefaction process productivity (in comparison to the reported literature, such as Gilbert et al. [2009]). The char extracts as well as the bio-oil analyses (also performed by GC-MS), all from the 25 g/batch reactor, showed major differences between the compounds obtained in the CO2 and N2 environments. Several compounds observed in the char extracts appeared less concentrated in the CO2 environment than in N2 at the same reaction temperatures. For example, at 400 °C furfural was found only in char extracts from the N2 environment.
Among all conditions studied (for both reactors), naphthalene and naphthalene derivatives constituted the only PAH content, and it was detected only in chars produced at 500 °C. The use of CO2 as pyrolysis carrier gas led to a significant difference at every temperature condition studied, for the biomass char as well as for the liquid and gas products. At 300 °C in a CO2 environment, bio-oil production is significantly lower than in a N2 environment (18.0 vs 24.6%; CO2 vs N2, P<0.002), a result consistent with the biomass char volatile content, which was significantly higher under the same conditions (0.29 vs 0.35 g of char volatiles per g of raw biomass; P=0.1). In addition, at 500 °C the char ash content was significantly lower in CO2 than in N2 (P<0.06). Keywords: pyrolysis, torrefaction, char, biochar, biocoal, CO2, Soxhlet extractions, characterization.
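The yield comparisons above rest on two-sample significance tests; a minimal sketch of Welch's t statistic (which does not assume equal variances) is shown below. The replicate values in the usage example are made up for illustration, not the thesis data.

```python
import math

def welch_t(sample1, sample2):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # unbiased variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

The p-value then follows from the t distribution with df degrees of freedom (e.g. via scipy.stats in practice).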
NASA Astrophysics Data System (ADS)
Gosselin, Gabriel
Wetlands fill many important ecological functions and contribute to the biodiversity of fauna and flora. Although there is growing recognition of the importance of protecting these areas, their integrity is still threatened by the pressure of human activities. The inventory and systematic monitoring of wetlands are a necessity, and remote sensing is the only realistic way to achieve this goal. The primary objective of this thesis is to contribute to and improve wetland characterization using satellite polarimetric data acquired in L band (ALOS-PALSAR) and C band (RADARSAT-2). The thesis is based on two hypotheses (Ch. 1). The first stipulates that classes of plant physiognomies, based on plant structure, are more appropriate than classes of plant species because they are better adapted to the information content of polarimetric radar data. The second states that polarimetric decomposition algorithms allow an optimal extraction of polarimetric information compared with a multi-polarized approach based on the HH, HV and VV channels (Ch. 3). In particular, the contribution of the incoherent Touzi decomposition to the inventory and monitoring of wetlands is examined in detail. This decomposition characterizes the scattering type, its phase, orientation, symmetry, degree of polarization and the backscattered power of a target with a series of parameters extracted from an eigenvector and eigenvalue analysis of the coherency matrix. The Lake Saint-Pierre region was chosen as the study site because of the great diversity of its wetlands, which cover more than 20,000 ha. One of the challenges posed by this thesis is that there is neither a standard system enumerating all possible physiognomic classes nor an accurate description of their characteristics and dimensions.
Special attention was given to the creation of these classes by combining several data sources, and more than 50 plant species were grouped into nine physiognomic classes (Ch. 7, 8 and 9). Several analyses are proposed to validate the hypotheses of this thesis (Ch. 9). Sensitivity analyses using scatter plots are performed to study the characteristics and dispersion of the plant physiognomic classes in various feature spaces consisting of polarimetric parameters or polarization channels (Ch. 10 and 12). Time series of RADARSAT-2 images are used to deepen the understanding of the seasonal evolution of plant physiognomies (Ch. 12). The transformed divergence algorithm is used to quantify the separability between physiognomic classes and to identify the parameter(s) that contribute the most to their separability (Ch. 11 and 13). Classifications are also proposed and the results compared with an existing map of the Lake Saint-Pierre wetlands (Ch. 14). Finally, an analysis of the potential of C- and L-band polarimetric parameters for monitoring peatland hydrology is proposed (Ch. 15 and 16). The sensitivity analyses show that the parameters of the first component, relative to the dominant (polarized) part of the signal, are sufficient for a general characterization of plant physiognomies. The parameters of the second and third components are, however, needed for better class separability (Ch. 11 and 13) and better discrimination between wetlands and uplands (Ch. 14). This thesis shows that it is preferable to consider the parameters of the first, second and third components individually rather than their sum weighted by the respective eigenvalues (Ch. 10 and 12). It also examines the complementarity between the structural parameters and those related to the backscattered power, often ignored and normalized out by most polarimetric decompositions. (Abstract shortened by UMI.)
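The transformed divergence used above is conventionally computed from per-class means and covariance matrices under a Gaussian assumption; a sketch on the common [0, 2] scale (some references scale to 2000 instead; the two-class example values are illustrative):

```python
import numpy as np

def transformed_divergence(m1, C1, m2, C2):
    """Transformed divergence between two Gaussian classes, scaled to [0, 2].

    m1, m2: class mean vectors; C1, C2: class covariance matrices.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    C1, C2 = np.asarray(C1, float), np.asarray(C2, float)
    iC1, iC2 = np.linalg.inv(C1), np.linalg.inv(C2)
    dm = (m1 - m2).reshape(-1, 1)
    # Divergence D, then saturating transform 2 * (1 - exp(-D / 8))
    D = 0.5 * np.trace((C1 - C2) @ (iC2 - iC1)) \
        + 0.5 * np.trace((iC1 + iC2) @ dm @ dm.T)
    return 2.0 * (1.0 - np.exp(-D / 8.0))
```

Values near 2 indicate near-complete class separability; identical class statistics give 0.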
NASA Astrophysics Data System (ADS)
Mathevet, T.; Joel, G.; Gottardi, F.; Nemoz, B.
2017-12-01
The aim of this communication is to present analyses of climate variability and change based on snow water equivalent (SWE) observations, reconstructions (1900-2016) and scenarios (2020-2100) for about a hundred snow courses distributed across the French Alps. This issue has become particularly important over the past decade in regions where snow variability has a large impact on water resource availability, on poor snow conditions in ski resorts and on artificial snow production. As a water resources manager in French mountainous regions, EDF (the French hydropower company) has developed and operated a hydrometeorological network since 1950. A recent data-rescue effort made it possible to digitize the long-term manual SWE measurements of about a hundred snow courses in the French Alps. EDF has also been operating an automatic SWE sensor network complementary to the snow-course network. Based on the numerous SWE observation time series and a snow accumulation and melt model (Garavaglia et al., 2017), continuous daily historical SWE time series were reconstructed over the 1950-2016 period. These reconstructions were extended back to 1900 using 20CR reanalyses (ANATEM method; Kuentz et al., 2015) and forward to 2100 using IPCC climate change scenarios. Considering various mountainous areas of the French Alps, this communication focuses on: (1) long-term (1900-2016) analyses of the variability and trends of total precipitation, air temperature, snow water equivalent, snow-line altitude and snow-season length; (2) the long-term variability of the hydrological regime of snow-dominated watersheds; and (3) future trends (2020-2100) under IPCC climate change scenarios. Comparing the historical period (1950-1984) with the recent period (1984-2016), quantitative results for a region in the northern Alps (Maurienne) show an increase in air temperature of 1.2 °C, a rise in snow-line altitude of 200 m, a reduction in SWE of 200 mm/year and a shortening of the snow season by 15 days.
These analyses will be extended from the north to the south of the Alps, over a region spanning 200 km. Characterization of the rise in snow-line altitude and the reduction in SWE is particularly important at local and watershed scales. This long-term change in snow dynamics in mountainous regions impacts both ski-resort and artificial snow production developments and the management of multi-purpose dam reservoirs.
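Trend magnitudes such as the SWE reduction quoted above come down to fitting slopes to annual series; a minimal ordinary-least-squares slope estimator (the synthetic declining series in the usage example is illustrative, not EDF data):

```python
def linear_trend(years, values):
    """Ordinary least-squares slope, in units of `values` per year."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den
```

Comparing the slope (or the mean) over sub-periods such as 1950-1984 and 1984-2016 gives the kind of period-to-period change reported above.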
NASA Astrophysics Data System (ADS)
Fournier, Patrick
The Generalized Critical State Model (GCSM) is used to describe the magnetic and transport properties of polycrystalline YBa_2Cu_3O_7. This empirical model relates the critical current density to the density of flux lines penetrating the intergrain region. Two measurement techniques are used to characterize our materials. The first consists of measuring the field at the center of a hollow cylinder as a function of the applied magnetic field, for temperatures between 20 and 85 K. By varying the wall thickness of the hollow cylinder, it is possible to follow the evolution of the hysteresis loops and to determine characteristic fields that vary with this dimension. By fitting the experimental results, we determine J_{co}, H_{o} and n, the parameters of the GCSM. The shape of the cylinders, whose length is comparable to the outer diameter, gives rise to a demagnetizing field that can be included in the theoretical model. This allows us to evaluate the screened volume fraction f_{g} as well as the demagnetizing factor N. We find that J_{co}, H_{o} and f_{g} depend on temperature, whereas n and N (for a fixed wall thickness) do not. The second technique consists of measuring the critical current of thin strips as a function of the applied field at different temperatures. We use a setup of our own design that allows these measurements to be carried out in direct contact with the cooling liquid, i.e. in liquid nitrogen, and we vary the temperature of the liquid by varying the gas pressure above the nitrogen bath. This method allows us to scan temperatures between 65 K and the critical temperature of the material (~92 K). The critical current curves as a function of the applied field are again fitted with the GCSM to obtain its parameters.
For three samples with different heat treatments, the parameters differ, confirming that the variation of the macroscopic properties of these superconductors is intimately related to the nature of the junctions between grains and of the grain surfaces. Prolonged oxygenation restores the initial properties of samples that degraded during the annealing of the contacts.
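The fits described above adjust critical-state-model parameters to Jc(H) data; a sketch assuming the common Kim-style generalized form Jc(H) = Jco / (1 + H/Ho)^n (the exact parameterization used in the thesis is not specified here, and the parameter values and search grids below are hypothetical):

```python
import numpy as np

def jc_model(H, Jco, Ho, n):
    """Assumed generalized critical-state form: Jc(H) = Jco / (1 + H/Ho)**n."""
    return Jco / (1.0 + H / Ho) ** n

def fit_gcsm(H, Jc, Ho_grid, n_grid):
    """Coarse grid search over (Ho, n); Jco then follows by linear least squares."""
    best = None
    for Ho in Ho_grid:
        for n in n_grid:
            basis = 1.0 / (1.0 + H / Ho) ** n
            # Optimal Jco for this (Ho, n) minimizes sum((Jc - Jco*basis)**2)
            Jco = float(np.dot(basis, Jc) / np.dot(basis, basis))
            err = float(np.sum((Jc - Jco * basis) ** 2))
            if best is None or err < best[0]:
                best = (err, Jco, Ho, n)
    return best[1:]
```

In practice a continuous optimizer (e.g. Levenberg-Marquardt) would replace the grid search; the grid version keeps the sketch dependency-free and transparent.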
Préparation et caracterisation de composites « verre-supraconducteur »
NASA Astrophysics Data System (ADS)
Roblin-Semène, L.; Pradel, A.; Ribes, M.; Belouet, C.
1994-11-01
Several types of glass/superconductor composites were prepared by uniaxial hot pressing. The BiSrCaCuO oxides were the materials under investigation in this work. A preliminary study of glasses obtained in the BiSrCaCuO system indicated that phase separation, with nodules of 50 to 100 nm, generally occurs. The glasses used as parts of the composites were: 1) PbO-B2O3 glasses, whose low glass transition temperature and large thermal stability favour texturing of the superconducting grains; 2) BiSrCaCuO (2212, 2223, 4334) glasses, whose further recrystallisation could be carried out to improve grain connectivity; and 3) mixtures of BiSrCaCuO glass with V2O5 or PbO-B2O3, to ensure a compromise between texturing and connectivity. Resistivity and current density measurements indicated that these composites are potential candidates for use as current limiters.
NASA Astrophysics Data System (ADS)
Frikach, Kamal
2001-09-01
In this work I present a study of the surface impedance, as well as of the ultrasonic attenuation and velocity variation, in the normal and superconducting states of the organic compounds k-(ET)2X (X = Cu(SCN)2, Cu[N(CN)2]Br). From the surface impedance measurements, the two components sigma1 and sigma2 of the complex conductivity are extracted using the Drude model. These measurements show that the symmetry of the order parameter in these compounds differs from the BCS case. To understand the profile of sigma1(T), we studied the superconducting fluctuations through the paraconductivity sigma'(T). This study is made possible by the quasi-2D structure of the k-(ET)2X compounds, in which the superconducting fluctuations are strong; they are observed over two decades of temperature in Cu(SCN)2. Application of the 2D and 3D Aslamazov-Larkin models shows the possibility of a crossover from a 2D regime at high temperature to a 3D regime near Tc. Based on this result, we calculated the paraconductivity using a one-loop approach within the Lawrence-Doniach model. Taking into account the self-energy correction in the dynamic limit (17 GHz), the fit of the calculated paraconductivity is in good agreement with the experimental data. The interplane coupling obtained is compatible with the quasi-2D character of the organic compounds. The quasiparticle relaxation time in the superconducting state is then extracted for the first time in these compounds; its temperature dependence is compatible with the presence of nodes in the gap. In the normal state, the ultrasonic velocity variation exhibits anomalous behavior characterized by strong softening at T = 38 K and 50 K in k-(ET)2Cu(SCN)2 and k-(ET)2Cu[N(CN)2]Br respectively, whose amplitude is independent of the magnetic field up to H = Hc2.
This anomaly seems to exist only in the modes that probe the interplane coupling. The behavior is attributed to the coupling between antiferromagnetic fluctuations and acoustic phonons.
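The Aslamazov-Larkin expressions used in such fluctuation analyses have simple closed forms; a sketch evaluating the standard 2D and 3D results, sigma'_2D = e^2/(16*hbar*d) * eps^-1 and sigma'_3D = e^2/(32*hbar*xi0) * eps^(-1/2) with eps = ln(T/Tc). The Tc, interlayer spacing and coherence-length values in the usage example are illustrative, not the measured ones for k-(ET)2X.

```python
import math

E = 1.602176634e-19      # elementary charge (C)
HBAR = 1.054571817e-34   # reduced Planck constant (J s)

def al_paraconductivity_2d(T, Tc, d):
    """Aslamazov-Larkin 2D paraconductivity (S/m); d = interlayer spacing (m)."""
    eps = math.log(T / Tc)               # reduced temperature
    return E ** 2 / (16.0 * HBAR * d * eps)

def al_paraconductivity_3d(T, Tc, xi0):
    """Aslamazov-Larkin 3D paraconductivity (S/m); xi0 = coherence length (m)."""
    eps = math.log(T / Tc)
    return E ** 2 / (32.0 * HBAR * xi0 * math.sqrt(eps))
```

The different eps exponents (-1 vs -1/2) are what let a 2D-to-3D crossover show up as a change of slope in log(sigma') vs log(eps).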
Elaboration et caracterisation de nanocomposites polyethylene/montmorillonite
NASA Astrophysics Data System (ADS)
Stoeffler, Karen
This research project consists of preparing polyethylene/montmorillonite nanocomposites for film packaging applications. Montmorillonite is a natural clay with an exceptional aspect ratio, and in recent years its incorporation in polymer matrices has attracted great interest. The pioneering work by Toyota on polyamide-6/montmorillonite composites showed that it is possible to disperse the clay at the nanometric scale. Such a structure, called exfoliated, leads to a significant increase in mechanical, barrier and fire-retardant properties, even at low volumetric fractions of clay, which allows valorization of the polymeric material at moderate cost. Due to its high polarity, montmorillonite exfoliation in polymeric matrices is problematic. In the particular case of polyolefin matrices, platelet dispersion remains limited: most frequently, the composites obtained exhibit conventional structures (microcomposites) or intercalated structures. To solve this problem, two techniques are commonly employed: surface treatment of the clay, which expands the interfoliar gallery while increasing the affinity between the clay and the polymer, and use of a polar compatibilizing agent (grafted polyolefin). The first part of this thesis deals with the preparation and characterization of highly thermally stable organophilic montmorillonites. Commercial organophilic montmorillonites are treated with quaternary ammonium intercalating agents; however, these agents have poor thermal stability and are susceptible to decomposing during processing, thus affecting the clay dispersion and the final properties of the nanocomposites. In this work, it was proposed to modify the clay with alkyl pyridinium, alkyl imidazolium and alkyl phosphonium intercalating agents, which are more stable than ammonium-based cations. Organophilic montmorillonites with enhanced thermal stabilities compared with commercial organoclays (+20°C to +70°C) were prepared.
The effect of the chemical structure of the intercalating agent on the capacity of the organoclay to be dispersed in polyethylene matrices was analyzed, and the influence of dispersion on the thermal stability of the prepared nanocomposites is discussed. In the second part, the effect of the compatibilizing agent's characteristics on the quality of clay dispersion in polyethylene/montmorillonite nanocomposites was analyzed. The mechanical properties and oxygen permeability of the nanocomposites were evaluated and related to the level of clay delamination and to the strength of the polymer/clay interface, which was assessed through surface tension measurements.
Fabrication et caracterisation de cavites organiques a modes de galerie
NASA Astrophysics Data System (ADS)
Amrane, Tassadit
The aim of this master's project is to combine the high quality factor of whispering-gallery optical microcavities with the high photoluminescence efficiency of conjugated polymers. These polymer-cavity composite systems have great potential for studying the interaction of light and matter in the strong coupling regime. In particular, such a system would offer a great opportunity to create a Bose-Einstein condensate of polaritons, the quasi-particles arising from strong interaction between excitons and photons. Organic semiconductors, with their large delocalized excitons, coupled to good whispering-gallery cavities with high quality factors and small volumes, are an ideal system for this purpose. Two approaches toward this end were explored: in the first, a pre-existing dielectric whispering-gallery cavity was coated with a thin film of conjugated polymer, while in the second the whispering-gallery cavity was fabricated directly from the organic semiconductor. To test the first approach, a silica microsphere was dip-coated with a copolymer, and the interaction between the whispering-gallery modes of the microcavity and the copolymer was studied by photoluminescence spectroscopy. The well-defined resonances obtained at the emission wavelength of the organic material confirm effective coupling between the photoluminescence and the modes of the cavity. In the second approach, we developed a process to fabricate microdisk cavities from the copolymer. The difficulty here lies in the sensitivity of the organic semiconductor to the microfabrication process: it is critical to avoid dissolving or otherwise altering it during the photolithographic steps. For this purpose a protective polymer, parylene-C, is deposited on top of the copolymer; it was chosen to be transparent at the absorption and emission wavelengths of the copolymer and inert in the solvents used during the different microfabrication steps.
The development of this fabrication process allowed us to obtain a whispering-gallery cavity with a quality factor of 5x10^4. These promising results suggest future uses of this cavity to explore the interactions between the polymer and the cavity modes. The setup for detecting edge-emitted photoluminescence in copolymer microdisks is in progress and will be available for the future characterisation of organic whispering-gallery cavities. The development of these polymer-based whispering-gallery cavities is the first step toward demonstrating a polariton Bose-Einstein condensate.
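For a microdisk, the ideal whispering-gallery resonance condition and the measured quality factor reduce to two one-line relations, m*lambda = 2*pi*R*n_eff and Q = lambda/(linewidth); the radius and effective index below are hypothetical, and dispersion and the radial mode structure are ignored in this sketch:

```python
import math

def wgm_resonances(radius_um, n_eff, lam_min_um, lam_max_um):
    """Wavelengths in [lam_min, lam_max] satisfying m * lam = 2*pi*R*n_eff."""
    opl = 2.0 * math.pi * radius_um * n_eff  # round-trip optical path length
    m_max = int(opl / lam_min_um)            # largest azimuthal order in band
    m_min = int(opl / lam_max_um) + 1        # smallest azimuthal order in band
    return [opl / m for m in range(m_min, m_max + 1)]

def quality_factor(lam, fwhm):
    """Quality factor Q = lambda / (resonance linewidth), same units for both."""
    return lam / fwhm
```

The spacing between successive resonances (the free spectral range) shrinks as the disk radius grows, which is one reason small cavities give well-separated, easily identified modes.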
NASA Astrophysics Data System (ADS)
Quintero Malpica, Alfonso
HVOF (High Velocity Oxy-Fuel) thermal spray coatings are commonly used in the aerospace industry, notably at the project's industrial partner (Tecnickrome Aeronautique Inc), as replacements for hard chromium electroplated coatings, owing to the environmental problems of the latter. The goal of this project was to find an alternative to the powder currently used for producing WC-10Co-4Cr coatings with high-velocity HVOF thermal spraying and the HVOF-JET KOTER III spray system. First, five powders with different particle size distributions, including the reference powder, were sprayed in order to identify which ones could be used with the HVOF-JET KOTER III system while keeping parameters (hydrogen flow, oxygen flow, powder feed rate and spray distance) similar to those of the reference powder. The coatings obtained from the powders studied were evaluated against the coating acceptance criteria required by the main landing gear manufacturers. The tests covered thickness, adhesion, microstructure, microhardness, residual stresses and roughness. Based on the results obtained, only two powders met all the properties required by the aerospace specifications. The influence of varying the spray distance on coating quality was then studied: five distances (100, 125, 150, 175 and 200 mm) were chosen for spraying the two selected powders. The coatings obtained showed similar properties (thickness, adhesion, microstructure, microhardness, residual stresses and roughness).
It was found that the spray distance is an indirect parameter of the HVOF-JET KOTER III system and that particle velocity and temperature appear to better determine the coating properties, in particular the final residual stress levels in the coatings. When coatings are produced at short spray distances, the particles arrive with higher velocity and temperature, which results in higher substrate temperatures and higher impact energy, leading to larger compressive residual stresses in the coatings.
NASA Astrophysics Data System (ADS)
Ingraham, Patrick Jon
This thesis determines the capability of detecting faint companions in the presence of speckle noise when performing space-based high-contrast imaging through spectral differential imaging (SDI) using a low-order Fabry-Perot etalon as a tunable filter. The performance of such a tunable filter is illustrated through the Tunable Filter Imager (TFI), an instrument designed for the James Webb Space Telescope (JWST). Using a TFI prototype etalon and a custom-designed test bed, the etalon's ability to perform speckle suppression through SDI is demonstrated experimentally. Improvements in contrast vary with separation, ranging from a factor of ~10 at working angles greater than 11 lambda/D and increasing to a factor of ~60 at 5 lambda/D. These measurements are consistent with a Fresnel optical propagation model, which shows that the speckle suppression capability is limited by the test bed and not the etalon. This result demonstrates that a tunable filter is an attractive option for high-contrast imaging through SDI. To explore the capability of space-based SDI using an etalon, we perform an end-to-end Fresnel propagation of JWST and TFI. This simulation predicts a contrast improvement ranging from a factor of ~7 to ~100, depending on the instrument's configuration. The performance of roll subtraction is simulated and compared with that of SDI. The SDI capability of the Near-Infrared Imager and Slitless Spectrograph (NIRISS), the science instrument module that replaces TFI in the JWST Fine Guidance Sensor, is also determined. Using low-resolution, multi-band (0.85-2.4 μm) multi-object spectroscopy, 104 objects towards the central region of the Orion Nebula Cluster have been assigned spectral types, including 7 new brown dwarfs and 4 new planetary-mass candidates. These objects are useful for determining the substellar initial mass function and for testing evolutionary and atmospheric models of young stellar and substellar objects.
Using the measured H band magnitudes, combined with our determined extinction values, the classified objects are used to create a Hertzsprung-Russell diagram for the cluster. Our results indicate a single epoch of star formation beginning ~1 Myr ago. The initial mass function of the cluster is derived and found to be consistent with the values determined for other young clusters and the galactic disk.
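The principle behind SDI speckle suppression is that speckle positions scale radially with wavelength while a real companion stays put; a 1D sketch of the rescale-and-subtract step (radial profiles and wavelengths are hypothetical, and the full 2D registration and photometric scaling of a real pipeline are omitted):

```python
import numpy as np

def sdi_subtract(profile1, profile2, lam1, lam2):
    """1D spectral differential imaging: speckle radii scale as lambda, so the
    second radial profile is rescaled by lam2/lam1 before subtraction.
    A companion, fixed in position across wavelengths, survives the difference."""
    r = np.arange(len(profile2), dtype=float)
    # Evaluate profile2 at radii scaled back to profile1's speckle positions
    rescaled = np.interp(r * lam2 / lam1, r, profile2)
    return profile1 - rescaled
```

In the test below a "speckle" placed at radius 30 at lam1 reappears at radius 33 (= 30 * lam2/lam1) at lam2; the rescaled subtraction removes it almost entirely, while a naive subtraction leaves a large residual.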
Caracterisation electrique de materiaux en composite pour fuselages d'avions
NASA Astrophysics Data System (ADS)
Tse, William
2011-12-01
Over the last decade or so, the rise in oil prices has been felt all over the world. As one of the most heavily exploited primary sources of energy, oil plays a major role in today's world economy, especially in the transport sector. To remain competitive, companies in this sector therefore need to modify their approach in the design phase of new or improved products. In the aerospace industry, for example, weight reduction in aircraft structures has become a key aspect of the design of new models, making them lighter and more efficient. Within the framework of this project, the research relates to new weight-reducing structural materials for aircraft. To date, much research effort has been devoted to finding good substitutes for the materials presently used (aluminum). Materials such as aluminum-lithium and carbon fibre composites are of great interest as substitutes. The latter offers mechanical properties superior to aluminum, such as low weight and high rigidity, but its electrical properties remain ambiguous. The objective of this project, proposed by Bombardier Core EMC, is to find a way to characterize the composite conventionally so as to allow extraction of its electrical properties (permittivity epsilonr, conductivity sigma, etc.). In this Master's thesis, the existing studies and characterization approaches for the composite material are presented and discussed. These approaches help anticipate the electrical behaviour of the composite material under test. A comparison between known materials (e.g. aluminum) and the composite is also carried out to gauge its conductivity level, particularly at low frequencies (≈ MHz) and up to high frequencies (≈ 12 GHz). Finally, some tests were simulated with electromagnetic modelling software to reproduce and validate the experimental results.
At the end of the thesis, a discussion and conclusion presenting the results and validating their integrity is given. The results enable us to estimate the composite's conductivity and to observe its attenuation properties as a function of frequency. The tests were made with composite laminated panels without wire mesh; the wire mesh is a copper matrix integrated into the exterior surface of the composite for added electromagnetic protection.
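The conductivity levels discussed above map directly onto skin depth and shielding attenuation; a sketch using the standard good-conductor formulas (the aluminum and composite conductivity values in the usage example are round illustrative numbers, not measured results):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(f_hz, sigma, mu_r=1.0):
    """Skin depth delta = 1/sqrt(pi * f * mu * sigma), in metres,
    valid for a good conductor (sigma >> omega * epsilon)."""
    return 1.0 / math.sqrt(math.pi * f_hz * MU0 * mu_r * sigma)

def absorption_loss_db(thickness_m, f_hz, sigma, mu_r=1.0):
    """Absorption part of shielding effectiveness: about 8.686 dB per skin depth."""
    return 8.686 * thickness_m / skin_depth(f_hz, sigma, mu_r)
```

Because the skin depth shrinks as 1/sqrt(f * sigma), a carbon-fibre panel with conductivity several orders of magnitude below aluminum's needs far more thickness (or a wire mesh) to reach comparable attenuation at a given frequency.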
Prediction du profil de durete de l'acier AISI 4340 traite thermiquement au laser
NASA Astrophysics Data System (ADS)
Maamri, Ilyes
Les traitements thermiques de surfaces sont des procedes qui visent a conferer au coeur et a la surface des pieces mecaniques des proprietes differentes. Ils permettent d'ameliorer la resistance a l'usure et a la fatigue en durcissant les zones critiques superficielles par des apports thermiques courts et localises. Parmi les procedes qui se distinguent par leur capacite en terme de puissance surfacique, le traitement thermique de surface au laser offre des cycles thermiques rapides, localises et precis tout en limitant les risques de deformations indesirables. Les proprietes mecaniques de la zone durcie obtenue par ce procede dependent des proprietes physicochimiques du materiau a traiter et de plusieurs parametres du procede. Pour etre en mesure d'exploiter adequatement les ressources qu'offre ce procede, il est necessaire de developper des strategies permettant de controler et regler les parametres de maniere a produire avec precision les caracteristiques desirees pour la surface durcie sans recourir au classique long et couteux processus essai-erreur. L'objectif du projet consiste donc a developper des modeles pour predire le profil de durete dans le cas de traitement thermique de pieces en acier AISI 4340. Pour comprendre le comportement du procede et evaluer les effets des differents parametres sur la qualite du traitement, une etude de sensibilite a ete menee en se basant sur une planification experimentale structuree combinee a des techniques d'analyse statistiques eprouvees. Les resultats de cette etude ont permis l'identification des variables les plus pertinentes a exploiter pour la modelisation. Suite a cette analyse et dans le but d'elaborer un premier modele, deux techniques de modelisation ont ete considerees, soient la regression multiple et les reseaux de neurones. Les deux techniques ont conduit a des modeles de qualite acceptable avec une precision d'environ 90%. 
To improve the performance of the neural-network models, two new approaches based on the geometric characterization of the hardness profile were considered. Unlike the first models, which predict the hardness profile from the process parameters alone, the new models combine those parameters with geometric attributes of the hardness profile to reflect treatment quality. The resulting models show that this strategy yields very promising results.
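The first-stage model above regresses hardness-profile characteristics on process parameters. As a minimal illustration of that multiple-regression step, the sketch below fits a linear model of hardened depth against two hypothetical process parameters (laser power and scan speed) on synthetic data; all variable names and coefficients are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical process parameters: laser power (W) and scan speed (mm/s).
rng = np.random.default_rng(0)
power = rng.uniform(500.0, 2000.0, 40)
speed = rng.uniform(5.0, 50.0, 40)

# Synthetic "hardened depth" (mm) with a known linear trend plus noise;
# the coefficients are invented, not fitted to the thesis data.
depth = 0.4e-3 * power - 8e-3 * speed + 0.2 + rng.normal(0.0, 0.02, 40)

# Least-squares fit of depth ~ b0 + b1*power + b2*speed.
X = np.column_stack([np.ones_like(power), power, speed])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

pred = X @ coef
r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print(coef, r2)
```

A neural-network variant would replace the linear design matrix with a nonlinear regressor trained on the same inputs.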
Epidemiological profile of haemoglobinopathies: a descriptive cross-sectional study around the index case
Dahmani, Fatima; Benkirane, Souad; Kouzih, Jaafar; Woumki, Aziz; Mamad, Hassan; Masrar, Azlarab
2017-01-01
Haemoglobinopathies are constitutional disorders caused by abnormalities of the haemoglobins. They are often severe in their major forms; their management is demanding, with a heavy psychosocial impact on patients and their families. Classified among the rare diseases, they remain insufficiently known to health professionals. This lack of awareness leads to diagnostic wandering, delayed management and, consequently, high morbidity and mortality in these patients. In 2008 the World Health Organization (WHO) published data on the epidemiology of haemoglobinopathies: more than 330,000 children are born each year with a haemoglobinopathy (83% sickle-cell disease, 17% thalassaemia). Haemoglobin disorders account for about 3.4% of deaths in children under 5. Worldwide, about 7% of pregnant women carry some form of thalassaemia and 1% of couples are at risk. However, these conditions are relatively frequent in regions of the world where consanguineous marriage is common. To describe the epidemiological characteristics of families at risk of haemoglobinopathies (a study around the index case) whose index cases are followed in the paediatrics department of the El Idrisi Provincial Hospital in Kenitra, Morocco, a descriptive cross-sectional study was carried out during two surveys, the first in May 2015 and the second in June of the same year, during pneumococcal vaccination days for the index cases. After collecting the patients' epidemiological data, we performed a biological work-up comprising a complete blood count with red-cell morphology on MGG staining and automated reticulocyte counts, and haemoglobin electrophoresis at alkaline pH (8.8) and then at acid pH (5.4) on agarose gel with densitometric integration. 
275 patients had profiles compatible with a haemoglobinopathy. Most of these patients were born of consanguineous marriages (83.1%) and came from regions in northern Morocco. The family survey identified the at-risk families, among whom sickle-cell disease was frequent. Our results confirm the presence of different types of haemoglobinopathies in the Moroccan population. PMID:28904678
Fabrication and characterization of all-fibre optical hybrids
NASA Astrophysics Data System (ADS)
Madore, Wendy Julie
In this thesis, we present the fabrication and characterization of optical hybrids made of all-fibre 3 × 3 and 4 × 4 couplers. The three-fibre components are made with a triangular cross section, the four-fibre components with a square cross section. To qualify as optical hybrids, these couplers must exhibit equipartition of output amplitudes and specific relative phases between the output signals. The two coupler types are first modelled to determine the appropriate set of experimental parameters for turning them into hybrids. Prototypes are then made in standard telecommunication fibres and characterized to quantify their performance in transmission and in phase. The objective of this work is first to model the behaviour and physical properties of 3 × 3 and 4 × 4 couplers, to confirm that they can meet the requirements of optical hybrids with an appropriate set of fabrication parameters. The next step is to make prototypes of these couplers and test how well they fulfil those requirements. The experimental set-up selected is based on the fusion-tapering technique for making optical fibre components; the heat source is a micro-torch fuelled with a propane-oxygen gas mix. This type of set-up gives the freedom required to adjust experimental parameters to suit both 3 × 3 and 4 × 4 couplers, and its versatility favours a repeatable and stable process for fusing and tapering the different structures. The fabricated triangular-shape couplers have a total transmission of 85% (-0.7 dB); the crossing is typically located around 1550 nm, with a transmission of around 33% (-4 dB) per branch. In addition, the relative phases between the output signals are 120±9°. The fabricated square-shape couplers have a total transmission of 89% (-0.5 dB), with a crossing around 1550 nm and a transmission around 25% (-6 dB) per branch. 
The relative phases between the output signals are 90±3°. Since the couplers are made from standard telecommunication fibres, the prototypes are compatible with all standard fibre-based set-ups and benches. The properties of optical hybrids are very attractive for coherent detection, where an unambiguous phase measurement is desired. For instance, some standard telecommunication systems use phase-shift keying (PSK), in which information is encoded in the phase of the electromagnetic wave; all-optical decoding of such signals is possible using optical hybrids. Another application is in biomedical imaging, with techniques such as optical coherence tomography (OCT) and, more generally, profilometry systems. In state-of-the-art techniques, a conventional interferometer combined with Fourier analysis gives only the absolute value of the phase, so the achievable imaging depth in the sample is reduced by a factor of 2. Optical hybrids provide the unambiguous phase measurement, giving the sign and the value of the phase at the same time.
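The 120° relative output phases quoted above are exactly what the idealized model of a lossless, fully symmetric 3 × 3 coupler predicts: its transfer matrix is proportional to the 3-point DFT matrix. The sketch below checks the equal power split and the successive 120° phase steps on that idealization (not on the measured device).

```python
import numpy as np

# Idealized lossless, fully symmetric 3x3 coupler: transfer matrix
# proportional to the unitary 3-point DFT matrix (a textbook
# idealization, not the fabricated component).
k = np.arange(3)
M = np.exp(-2j * np.pi * np.outer(k, k) / 3) / np.sqrt(3)

inp = np.array([0.0, 1.0, 0.0])                 # light injected in one port
out = M @ inp

split = np.abs(out) ** 2                        # ideal equal 1/3 power split
dphi = np.angle(out[1:] / out[:-1], deg=True)   # successive relative phases
print(split, dphi)
```

The analogous 4 × 4 DFT matrix gives the 90° steps expected of the square-cross-section hybrids.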
Coupling algorithms for RANS and potential flow
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the selected design must satisfy many criteria in many fields, such as structures, aerodynamics, stability and control, performance and safety, while meeting tight schedules and minimizing cost. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimization environments are being developed across the aerospace industry. Different methods, involving different levels of modelling, are needed in the different phases of a project: in the definition and preliminary-design phases, fast methods are required to evaluate candidates efficiently. Developing methods that improve the accuracy of existing ones while keeping computational cost low yields higher fidelity in the early phases of a project and thus greatly reduces the associated risks. In aerodynamics, viscous/inviscid coupling algorithms upgrade linear inviscid methods into nonlinear methods that account for viscous effects. Such methods make it possible to characterize the viscous flow over a configuration and to predict, among other things, stall mechanisms and the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential-flow method and two-dimensional viscous section data. Existing methods are implemented and their limits identified; an original method is then developed and validated. 
Results for an elliptic wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data on configurations from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration was used to study the potential-flow method's modelling of high-lift surfaces, showing that camber can be accounted for solely in the viscous data.
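The viscous/inviscid coupling idea, a 3-D inviscid method corrected with 2-D viscous section data, can be sketched for the simplest case mentioned above, an elliptic wing, using classical lifting-line theory for the induced angle and a synthetic nonlinear section polar. The polar shape, relaxation factor and numbers below are illustrative assumptions, not the thesis' algorithm.

```python
import math

def cl_2d(alpha):
    """Synthetic 'viscous' section polar: linear slope 2*pi at small
    angles, saturating near a stall value of 1.4 (invented for the sketch)."""
    cl_max = 1.4
    return cl_max * math.tanh(2.0 * math.pi * alpha / cl_max)

def coupled_cl(alpha_geo, aspect_ratio, tol=1e-10):
    """Fixed-point alpha-coupling: the 3-D induced angle from lifting-line
    theory corrects the angle seen by the 2-D viscous polar."""
    cl = 0.0
    for _ in range(200):
        alpha_eff = alpha_geo - cl / (math.pi * aspect_ratio)  # induced angle
        cl_new = cl_2d(alpha_eff)
        if abs(cl_new - cl) < tol:
            break
        cl = 0.5 * (cl + cl_new)   # under-relaxation for robustness
    return cl

print(coupled_cl(math.radians(8.0), 8.0))
```

As expected, the coupled 3-D lift coefficient falls below the 2-D section value at the same geometric angle, reflecting the induced downwash.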
NASA Astrophysics Data System (ADS)
Kamli, Emna
High-frequency radars (HFRs) measure surface ocean currents with a range of up to 200 kilometres and a resolution on the order of one kilometre. This study aims to characterize HFR performance, in terms of spatial coverage, for measuring surface currents in the partial presence of sea ice. To this end, current measurements taken during the winter of 2013 by two CODAR-type radars on the south shore of the lower St. Lawrence Estuary and one WERA-type radar on the north shore were used. First, the mean daily area over which currents were measured by each radar was compared with the Bragg-wave energy computed from raw acceleration data from a buoy moored within the radar coverage zone. CODAR coverage depends on the Bragg energy density, whereas WERA coverage is practically insensitive to it. A fetch model called GENER was forced with wind speeds predicted by Environment Canada's GEM model to estimate the significant wave height and the modal wave period. From these parameters, the Bragg-wave energy density over the winter was evaluated using the theoretical Bretschneider spectrum. These results establish each radar's normal coverage in the absence of sea ice. The sea-ice concentration predicted by the Canadian operational ice-ocean forecast system was averaged over the wind fetches according to the mean daily direction of the waves predicted by GENER. Second, the relationship between the ratio of the daily coverages observed during winter 2013 to each radar's normal coverage, on the one hand, and the mean daily sea-ice concentration, on the other, was established. 
The coverage ratio decreases with increasing sea-ice concentration for both radar types, but at an ice concentration of 20% the WERA coverage is reduced by 34%, versus 67% for the CODARs. The empirical relationships established between HFR coverage and the environmental parameters (wind and sea ice) will make it possible to predict the coverage that HFRs installed in other regions with seasonal sea ice could provide.
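The Bragg-wave energy density used above comes from evaluating a theoretical wave spectrum at the Bragg frequency, i.e. at the ocean wavelength equal to half the radar wavelength, via the deep-water dispersion relation. The sketch below does this with the Bretschneider form; the 13.5 MHz operating frequency and the sea-state values are illustrative assumptions, not parameters from the study.

```python
import math

def bragg_frequency(radar_freq_hz, g=9.81):
    """Frequency of the ocean wave Bragg-resonant with the radar:
    wavelength = radar wavelength / 2, deep-water dispersion."""
    radar_wavelength = 3e8 / radar_freq_hz
    bragg_wavelength = radar_wavelength / 2.0
    return math.sqrt(g / (2.0 * math.pi * bragg_wavelength))

def bretschneider(f, hs, fm):
    """Bretschneider spectral density S(f) in m^2/Hz for significant
    wave height hs (m) and modal frequency fm (Hz)."""
    return (5.0 / 16.0) * hs**2 * fm**4 / f**5 * math.exp(-1.25 * (fm / f)**4)

fb = bragg_frequency(13.5e6)       # assumed radar frequency, not the study's
print(fb, bretschneider(fb, hs=1.5, fm=0.2))
```

The daily Bragg energy density follows by feeding GENER's predicted hs and fm into this evaluation.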
NASA Astrophysics Data System (ADS)
Péan, Stéphane; Drucker, Dorothée.; Bocherens, Hervé; Haesaerts, Paul; Valladas, Hélène; Stupak, Dmytro; Nuzhnyi, Dmytro
2010-05-01
In the Central Ukraine area of the Middle Dnipro Basin, including the Desna river valley, there are exceptional Upper Palaeolithic open-air sites with mammoth-bone dwelling structures. Mezhyrich is one of these settlements, which are attributed to the Epigravettian cultural facies and occurred in a periglacial environment during Oxygen Isotope Stage (OIS) 2. The mammoth-bone buildings are surrounded by pits filled with archaeological material (tools, hunting weapons, ivory and bone ornaments) and bones of mammoth and other mammals such as hare, fox, wolf and horse. A new site, Buzhanka 2, has yielded a pit which could be related to an expected dwelling structure. These Final Pleistocene sites are particularly appropriate for shedding new light on the relation between humans and environment at the time of the mammoth steppe's disappearance. Multidisciplinary studies have been carried out to cross-reference results from the zooarchaeology of the pit contents, carbon and nitrogen stable isotope (13C and 15N) analyses of bone collagen, direct 14C dates on mammal bones, and microstratigraphic analyses of the loessic sediment. With almost twenty 14C dates available, from mammoth and wolf bones and from charcoal, Mezhyrich is the best-dated Epigravettian mammoth-bone dwelling site: around 14,500 years BP. Mammoth processing is zooarchaeologically evidenced at Buzhanka 2, but the limited excavated area does not yet allow interpretation of procurement. At Mezhyrich, consumption of mammoth meat is evidenced from the pit contents, which come from a few juveniles and young adults, probably hunted. The bones used in dwelling structure no. 4, attributed to at least 37 individuals, have two different origins: mostly isolated elements gathered from other deposits, natural accumulations or previous kill sites, plus a few skeletal portions in anatomical position taken from at least one rather freshly dead mammoth body, for instance a hunted individual. 
The stable isotope analyses suggest that a modification of the regional plant and climatic context may have induced a change in food resources for mammoths, possibly bringing them into food competition with horses. Mammoths of Central Ukraine in late OIS 2 may have formed an isolated local population under the pressure of modified ecological conditions, compared with the period of maximal extension of the mammoth steppe. Thus, through a combined approach of zooarchaeology, stable isotopes and radiocarbon dating within the stratigraphic context, a better knowledge of the palaeoecological context of the last mammoths of the late Pleniglacial in Central Ukraine is expected.
Matubi, Emery Metelo; Bukaka, Eric; Luemba, Trésor Bakambana; Situakibanza, Hyppolite; Sangaré, Ibrahim; Mesia, Gauthier; Ngoyi, Dieudonné Mumba; Maniania, Nguya Kalemba; Akikwa, Charles Ngandote; Kanza, Jean Pierre Basilua; Tamfum, Jean-Jacques Muyembe; sudi, Jonas Nagahuedi Bongo
2015-01-01
Introduction: This study was carried out in Bandundu city (DRC) to identify the ecological and entomological parameters modulating malaria transmission, and their seasonal trends, in this urban area. Methods: The study ran from 1 June to 31 December 2011. Anopheles larval habitats were surveyed and sampled, and their physical, physico-chemical and environmental parameters were determined. Larval density was estimated on a density-class scale, adapted from Carron's method, for each habitat type. Forty-eight houses were selected and surveyed for mosquito collection by indoor spray catches. Mosquitoes were identified using the morphological criteria of Gillies and De Meillon. The sporozoite index (SI) was determined by Plasmodium falciparum CSP ELISA at the Institut National de Recherche Biomédicale, following Robert Wirtz's protocol. The other entomological parameters (density, human biting rate, entomological inoculation rate (EIR) and stability index) were determined following the WHO protocol. Linear regression was performed at a significance level of 0.05 to identify the determinants of larval density. Results: One hundred and seven larval habitats were identified and classified into 5 types (dikes and water wells; market-garden water collections and rubble-crusher pools; marshes near the water distribution authority; marshes along rivers and streams; and rain puddles). Mean larval density was 117.4 ± 64.1. 
A total of 4,588 mosquitoes were caught and identified, including 1,258 Anopheles gambiae s.l., with a density of 8.86, a human biting rate of 1.55 bites per person per night, an SI of 5.6%, an EIR of 0.085 infective bites per person per night, a mean Anopheles life expectancy of 16.4 days and a stability index of 6.512. Data analysis showed that larval-habitat area significantly increased larval density (p < 0.001), whereas habitat turbidity and conductivity influenced it negatively (p < 0.05, 95% CI). Conclusion: The varied biotopes, the high density of Anopheles gambiae s.l., the EIR and the stability index place Bandundu city in a zone of stable endemic transmission. PMID:26848355
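The entomological inoculation rate quoted above is, per the WHO convention, the product of the human biting rate and the sporozoite index. A two-line check with the reported figures (the small difference from the reported 0.085 presumably reflects rounding of the inputs):

```python
# Standard WHO-style entomological computation with the figures above.
human_biting_rate = 1.55    # bites per person per night
sporozoite_index = 0.056    # fraction of infective An. gambiae s.l.

eir_nightly = human_biting_rate * sporozoite_index  # infective bites/person/night
eir_yearly = eir_nightly * 365

print(round(eir_nightly, 3), round(eir_yearly, 1))
```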
Synthesis and characterization of (In)GaAsN heterostructures for optoelectronics
NASA Astrophysics Data System (ADS)
Beaudry, Jean-Nicolas
2007-12-01
This doctoral project studies the incorporation of nitrogen into GaAs epitaxial layers grown on GaAs(001) substrates, a system that allows the effect of nitrogen to be isolated systematically from that of indium. In this thesis we report the results of work focused on (i) the growth kinetics of GaAs1-xNx during metal-organic vapour phase epitaxy (OMVPE), (ii) the analysis of the physical and structural properties of GaAs1-xNx/GaAs heterostructures, and (iii) the characterization of the nitrogen incorporation sites in the GaAs crystal lattice. Moreover, we present the results of exploratory studies aimed at producing GaAs1-xNx/GaAs multilayers and at growing InyGa1-yAs1-xNx quaternary alloys. These latter studies address issues closer to technological applications, since they focus on process details pertaining to device fabrication. Trimethylindium (TMIn), trimethylgallium (TMGa), tertiarybutylarsine (TBAs) and dimethylhydrazine (DMHy) were used as organometallic sources, a rather original combination not widely encountered in the epitaxial-growth field. TBAs has the great advantage of being far less dangerous than arsine in OMVPE processes, the latter being highly toxic and more prone to large-scale leaks. Given the diversity of the growth parameters, the GaAs1-xNx/GaAs samples grown for this project constitute one of the largest sample banks of its kind, and the systematic monitoring of both growth rate and composition under varying growth conditions has generated an impressive quantity of experimental data. In addition to the DMHy flow rate, the investigated parameters include the reactor pressure, the TMGa flow rate, the substrate temperature (from 500 to 650°C), and the V/III ratio. 
Not only did these results highlight important behaviours of the chemical species involved in surface reactions, they also pointed out an important lack of knowledge about the decomposition pathways of the organometallic sources. Nitrogen incorporation in GaAs being very inefficient, exceptionally high DMHy flow rates are required, sometimes leading to V/III ratios greater than 500. Depending on the growth temperature, this excess of DMHy molecules on the growth surface affects the growth rate and the incorporation efficiency in a complex way. Moreover, the sensitivity of x to the gas-phase composition translates into laterally non-uniform N incorporation during the growth of epilayers with high nitrogen content. At low temperatures and extremely large DMHy flow rates, this precursor occupies most of the adsorption sites on the growth surface, leading to a drastic reduction of the growth rate accompanied by very large N incorporation (x > 0.1). High-resolution X-ray diffraction (HR-XRD) and heavy-ion Rutherford backscattering spectrometry (HIRBS) analyses suggest that epilayers deposited under such conditions undergo phase separation and exhibit an important non-stoichiometry, probably indicative of an amorphous matrix. Our results also allowed us to identify and explain a nonlinear variation of the GaAs1-xNx lattice parameter a as a function of its composition x. (Abstract shortened by UMI.)
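For context, the nonlinear a(x) identified above is a deviation from the linear Vegard interpolation between the endpoint lattice parameters. The sketch below computes that textbook baseline; the GaAs value and the commonly quoted zinc-blende GaN value are literature numbers, not results from this work.

```python
# Linear (Vegard) interpolation of the relaxed GaAs(1-x)N(x) lattice
# parameter: the baseline against which a nonlinear a(x) is judged.
# Endpoint values are literature numbers (zinc-blende GaN ~4.50 A),
# treated here as assumptions.
A_GAAS = 5.6533   # Angstrom, GaAs
A_GAN = 4.50      # Angstrom, cubic (zinc-blende) GaN

def vegard_a(x):
    return (1.0 - x) * A_GAAS + x * A_GAN

print(vegard_a(0.0), vegard_a(0.03))
```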
NASA Astrophysics Data System (ADS)
Girard-Lauriault, Pierre-Luc
Nitrogen (N)-containing polymer surfaces are attractive in numerous technological contexts, for example in biomedical applications. Here, we have used an atmospheric-pressure dielectric barrier discharge (DBD) apparatus to deposit novel families of N-rich plasma polymers, designated PP:N, using mixtures of three different hydrocarbon precursors (methane, ethylene, and acetylene) in nitrogen at varying respective gas flow ratios, typically parts per thousand. In preparation for subsequent cell-surface interaction studies, the first part of this research focuses on the chemical mapping of these materials, with specific attention to (semi-)quantitative analyses of functional groups. Well-established and some lesser-known analytical techniques have been combined to provide the best possible chemical and structural characterisation of these three families of PP:N thin films; namely, X-ray photoelectron spectroscopy (XPS), near-edge X-ray absorption fine structure (NEXAFS), Fourier transform infrared spectroscopy (FTIR), contact angle goniometry (CAG), and elemental analysis (EA). High, "tunable" total nitrogen content was measured by both XPS and EA (between 6% and 25% by EA, or between 10% and 40% by XPS, which cannot detect hydrogen). Chemical derivatisation with 4-trifluoromethylbenzaldehyde (TFBA) enabled measurements of primary amine concentrations, the functionality of greatest bio-technological interest, which were found to account for 5% to 20% of the total bound nitrogen. By combining the above-mentioned complementary methods, we were further able to determine the complete chemical formulae, the degrees of unsaturation, and other major chemical functionalities in PP:N film structures. Several of these features are believed to be without precedent in the literature on hydrocarbon plasma polymers, for example the measurements of absolute compositions (including hydrogen) and of unsaturation. 
It was shown that besides amines, nitriles, isonitriles and imines are the main nitrogenated functional groups in these materials. In the second part of this work, we have studied the interaction of these well-characterised surfaces with living cells. We first demonstrated the adhesion, on both uniformly coated and micro-patterned PP:N deposits on BOPP, of three different cell types, namely growth-plate and articular chondrocytes, as well as U937 monocytes, the latter of which do not adhere at all to the synthetic polymers used in tissue culture. To gain insight into cell-adhesion mechanisms, we conducted a series of experiments in which we cultured U937 monocytes on PP:N, as well as on two other families of chemically well-characterised N-rich thin films deposited by low-pressure RF plasma and by vacuum ultra-violet (VUV) photo-polymerisation ("PVP:N" films). It was first shown that there exist sharply defined ("critical") surface-chemical conditions necessary to induce cell adhesion. By comparing the extensively characterised film chemistries at the critical conditions, we clearly demonstrated the dominant role of primary amines in the cell-adhesion mechanism. In the final aspect of this work, quantitative real-time reverse transcription polymerase chain reaction (real-time RT-PCR) experiments were conducted using U937 cells that had been made to adhere on PP:N and PVP:N materials for up to 24 h. We have shown that the adhesion of U937 monocytes to PP:N and PVP:N surfaces induced a transient expression of cytokines, markers of macrophage activation, as well as a sustained expression of PPARgamma and ICAM-1, implicated in the adhesion and retention of monocytes. Keywords: biomaterials; dielectric barrier discharges (DBD); deposition; plasma polymerisation; ESCA/XPS; NEXAFS; FTIR; primary amines; cell adhesion; gene expression.
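The XPS compositions quoted above rely on the standard quantification of peak areas weighted by relative sensitivity factors. A generic sketch of that bookkeeping (the peak areas are invented, and the sensitivity factors are typical order-of-magnitude values, not the ones used in the thesis):

```python
# Standard XPS quantification: atomic fraction of element i is
# (I_i / S_i) / sum_j (I_j / S_j), with I the peak area and S the
# relative sensitivity factor (RSF).
def atomic_fractions(areas, rsf):
    norm = {k: areas[k] / rsf[k] for k in areas}
    total = sum(norm.values())
    return {k: v / total for k, v in norm.items()}

# Hypothetical C 1s / N 1s / O 1s peak areas and illustrative RSFs.
areas = {"C1s": 12000.0, "N1s": 7500.0, "O1s": 2100.0}
rsf = {"C1s": 1.00, "N1s": 1.80, "O1s": 2.93}

frac = atomic_fractions(areas, rsf)
print({k: round(v, 3) for k, v in frac.items()})
```

Note that, as stated above, hydrogen is invisible to XPS, which is why the EA and XPS nitrogen percentages differ.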
Analysis and characterization of unsteady fluid-structure interactions with large displacements
NASA Astrophysics Data System (ADS)
Cori, Jean-Francois
Flapping wings for flying and oscillating fins for swimming stand out as the most complex yet efficient propulsion methods found in nature. Understanding the phenomena involved is a great challenge generating significant interest, especially in the growing field of Micro Air Vehicles. Thrust and lift are induced by oscillating foils through a complex phenomenon of unsteady fluid-structure interaction (FSI). The aim of the dissertation is to develop an efficient CFD framework for simulating the FSI process involved in the propulsion or power extraction of an oscillating flexible airfoil in a viscous incompressible flow. The numerical method relies on a direct implicit monolithic formulation using high-order implicit time integrators. We use an Arbitrary Lagrangian Eulerian (ALE) formulation of the equations designed to satisfy the Geometric Conservation Law (GCL) and to guarantee that the high-order temporal accuracy of the time integrators observed on fixed meshes is preserved on ALE deforming meshes. A hyperelastic Saint-Venant Kirchhoff structural model, the viscous incompressible Navier-Stokes equations for the flow, Newton's law for the point mass and equilibrium equations at the interface form one large monolithic system. The fully implicit FSI approach uses coincident nodes on the fluid-structure interface, so that loads, velocities and displacements are evaluated at the same location and at the same time. The problem is solved implicitly using a Newton-Raphson pseudo-solid finite element approach. High-order implicit Runge-Kutta (IRK) time integrators are implemented (up to 5th order) to improve accuracy and reduce computational cost. In this context of stiff interaction problems, the highly stable, fully implicit one-step approach is an original alternative to traditional multistep or explicit one-step finite element approaches. The methodology has been verified with three different test cases. 
Thorough time-step refinement studies for a rigid oscillating airfoil on deforming meshes, for the flow-induced vibrations of a flexible strip, and for a self-propelled flapping airfoil indicate that the stability of the proposed approach is always observed, even with large time steps; spurious oscillations on the structure are avoided without any damping, and the high-order accuracy of the IRK schemes is maintained. We have applied our FSI framework to three interesting applications, with a detailed dimensional analysis to obtain their characteristic parameters. First, we studied the vibrational characteristics of a well-documented fluid-structure interaction case: a flexible strip fixed behind a rigid square cylinder. Our results compare favourably with previous work. The accuracy of the IRK time integrators (even for the pressure field of the incompressible flow), their unconditional stability and their non-dissipative nature produced results revealing new, never previously reported higher-frequency structural forces weakly coupled with the fluid. Second, we explored the propulsive and power-extraction characteristics of rigid and flexible flapping airfoils. For power extraction, we found excellent agreement with literature results. A parametric study indicates the optimal motion parameters for high propulsive efficiency, and an optimal flexibility seems to improve power-extraction efficiency. Finally, a survey of flapping propulsion has given initial results for a self-propelled airfoil and has opened a new way of studying propulsive efficiency. (Abstract shortened by UMI.)
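The stability claim for fully implicit one-step schemes can be seen on the classical stiff scalar test problem. The sketch below uses the implicit midpoint rule (the one-stage Gauss IRK: 2nd order, A-stable) rather than the 5th-order schemes of the thesis; for the linear test equation the implicit stage can be solved in closed form.

```python
# One-stage Gauss IRK (implicit midpoint rule) on the stiff linear test
# problem y' = lam*y. With h*lam = -5, explicit Euler has amplification
# factor 1 + h*lam = -4 and blows up; the implicit update stays bounded.
lam, h, y = -50.0, 0.1, 1.0
for _ in range(20):
    # For a linear problem the implicit midpoint stage solves in closed form:
    # y_{n+1} = y_n * (1 + h*lam/2) / (1 - h*lam/2), |factor| < 1 for Re(lam) < 0.
    y = y * (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)

print(abs(y))   # decays toward 0, like the exact solution exp(lam*t)
```

In a nonlinear monolithic FSI system the stage equations are instead solved by Newton-Raphson iteration, as described above.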
Fabrication and characterization of photonic crystals for fluorescence enhancement
NASA Astrophysics Data System (ADS)
Gascon, Annabelle
2011-12-01
In today's world, there is a pressing need for point-of-care molecular analysis that is fast, inexpensive and portable. Lab-on-a-chip devices are designed to fulfil that need. They are micro-electromechanical systems (MEMS), fabricated with microelectronic techniques, that use the analytes' physical properties to detect their presence in liquid samples. This detection can be performed by attaching the analyte to quantum dots, semiconducting nanoparticles with narrow fluorescence bands. In our project, we use a tuneable system with a two-slab photonic crystal that serves as a tuneable optical filter, detecting the presence and wavelength of these quantum dots. Photonic crystals are dielectrics with a spatially periodic refractive index, with a period near the wavelength of visible light. They are called photonic crystals because they have a photonic band gap, just as atomic crystals (periodic structures of atoms) have an electronic band gap; photons instead of electrons propagate through them. They can also enhance fluorescence from quantum dots at the wavelength of the photonic crystal's guided resonance. My project objectives are to: (1) fabricate a two-slab photonic crystal; (2) characterize the photonic crystals; (3) place quantum dots on the photonic crystals; (4) measure the fluorescence enhancement. The device made during this project consists of a silicon wafer on which were deposited a 200 nm silicon nitride layer, then a 200 nm silicon dioxide layer, and finally another 200 nm silicon nitride layer. Electron-beam lithography defines the photonic crystals and the MEMS. The photonic crystals are square lattices of holes 180 nm in diameter, at a period of 460 nm, etched through the two silicon nitride slabs. The two slabs are etched in a single Reactive Ion Etching (RIE) step. Then, the silicon under the photonic crystal is etched from the backside up to the nitride by deep RIE. 
Finally, the oxide layer is removed in order to completely suspend the two-slab photonic crystal. The MEMS can change the gap between the two slabs in order to tune the guided-resonance wavelength. An optical set-up is used to trace the photonic crystals' transmission and reflection spectra, in order to locate the guided resonance. A supercontinuum source illuminates the device at normal incidence for wavelengths between 400 nm and 800 nm, and high-resolution spectra are obtained with a CCD-camera spectrometer. Different types of one-slab photonic crystals were analyzed with this approach: we observe guided-resonance peaks near 550 nm, 615 nm and 700 nm. Finally, a microdrop of quantum dots is placed on the photonic crystal; the quantum dots' emission wavelength matches the photonic crystal's guided resonance. A hyperspectral fluorescence microscope excites the quantum dots between 436 nm and 483 nm, detects emission above 500 nm and plots a fluorescence wavelength spectrum. This set-up measures and compares the fluorescence of quantum dots placed on and next to the photonic crystals. Our results show that the fluorescence is 30 times higher on the photonic crystals, but the fluorescence wavelength corresponds neither to the quantum dots' emission nor to the photonic crystal's guided resonance. In conclusion, this master's thesis project demonstrates that it is possible to fabricate two-slab photonic crystals in silicon nitride and to plot their transmission and reflection spectra in order to find their guided-resonance position. A fluorescence enhancement is visible, but at a different wavelength than that of the quantum dots.
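Guided resonances of photonic-crystal slabs appear in transmission as asymmetric Fano lineshapes, which is how peaks such as those observed near 550, 615 and 700 nm are commonly fitted. The sketch below evaluates a normalized Fano profile; the centre, width and asymmetry parameter q are illustrative, not fitted values from this work.

```python
# Normalized Fano lineshape F = (q + eps)^2 / ((1 + q^2)(1 + eps^2)),
# with eps the detuning in half-widths; peak value 1 at eps = 1/q,
# background q^2/(1 + q^2) at zero detuning.
def fano(wavelength_nm, center_nm, width_nm, q):
    eps = 2.0 * (wavelength_nm - center_nm) / width_nm
    return (q + eps) ** 2 / ((1.0 + q**2) * (1.0 + eps**2))

# Illustrative parameters: 615 nm centre, 3 nm width, q = 2.
vals = [round(fano(w, 615.0, 3.0, 2.0), 3) for w in (610.0, 615.0, 616.5)]
print(vals)
```

Fitting measured spectra with such a profile yields the resonance position and linewidth that the MEMS gap tuning is meant to shift.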
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The goal of this study is to develop a tool for estimating the risk level of vigour loss in the forest stands of the Gounamitz region, in northwestern New Brunswick, from forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delimited. Within these, the vigour-loss risk level of trees with a DBH of 9 cm or more was determined. To characterize the vigour-loss risk of the trees, their spatial positions were recorded with a GPS, taking stem defects into account. The vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables; the vigour-loss risk level obtained for each tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands in the study region, a supervised classification of the images using the maximum-likelihood algorithm was performed. The vigour-loss risk level by tree type was then estimated with neural networks, using a multilayer perceptron: a network composed of 11 neurons in the input layer (corresponding to the independent variables), 35 neurons in the hidden layer and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix that yields quantitative estimation measures, notably an overall classification rate of 91.7% for predicting the vigour-loss risk of the softwood stand and 89.7% for the hardwood stand.
Evaluating the performance of the neural networks gives an overall MSE (Mean Square Error) of 0.04 and an overall RMSE (Root Mean Square Error) of 0.20 for the hardwood stand. For the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted vigour-loss risk level was compared with the reference vigour-loss risk. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
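The multilayer perceptron described above (11 inputs, 35 hidden neurons, 4 output classes) can be sketched as a simple forward pass; the weights below are random placeholders, not the trained network. Note also that the reported errors are mutually consistent, since RMSE = sqrt(MSE):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the abstract: 11 inputs (spectral bands plus vegetation
# and texture indices), 35 hidden neurons, 4 output risk classes.
W1 = rng.normal(0.0, 0.1, (11, 35)); b1 = np.zeros(35)
W2 = rng.normal(0.0, 0.1, (35, 4));  b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ W1 + b1)                  # hidden layer
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    return p / p.sum(axis=-1, keepdims=True)  # softmax class probabilities

x = rng.normal(size=(5, 11))                  # five sample plots
probs = forward(x)
print(probs.shape)                            # (5, 4)

# The reported global errors are consistent: RMSE = sqrt(MSE)
print(round(float(np.sqrt(0.04)), 2))         # 0.2
```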
Thermal-hydraulic study of moderator flow in the CANDU-6 reactor
NASA Astrophysics Data System (ADS)
Mehdi Zadeh, Foad
Given the size (6.0 m x 7.6 m) and the multiply connected domain that characterize the calandria vessel of CANDU-6 reactors (380 channels in the vessel), the physics governing the behaviour of the moderator fluid is still poorly understood today. Sampling data in an operating reactor would require modifying the configuration of the reactor vessel to insert probes, and the intense radiation zone prevents the use of ordinary sensors. Consequently, the moderator flow must be studied with either an experimental model or a numerical model. As for the experimental model, building and operating such facilities is very expensive, and the scaling parameters required to build a reduced-scale experimental model are contradictory. Numerical modelling therefore remains an important alternative. Currently, the nuclear industry uses a numerical approach, known as the porous-medium approach, which approximates the domain by a continuous medium in which the tube bank is replaced by distributed hydraulic resistances. This model can describe the macroscopic phenomena of the flow, but does not account for local effects that have an impact on the global flow, such as the temperature and velocity distributions near the tubes and hydrodynamic instabilities. In the context of nuclear safety, the local effects around the calandria tubes are of particular interest. Indeed, simulations performed with this approach predict that the flow can take several hydrodynamic configurations, some of which exhibit asymmetric behaviour within the vessel. This can cause boiling of the moderator on the channel walls.
Under such conditions, the reactivity coefficient can vary significantly, translating into an increase of the reactor power, which may have major consequences for nuclear safety. A detailed CFD (Computational Fluid Dynamics) model accounting for local effects is therefore necessary. The goal of this research is to model the complex behaviour of the moderator flow within the vessel of a CANDU-6 nuclear reactor, notably near the calandria tubes. These simulations serve to identify the possible flow configurations in the calandria. This study thus consists in formulating the theoretical bases underlying the macroscopic instabilities of the moderator, i.e. the asymmetric motions that can cause moderator boiling. The challenge of the project is to determine the impact of these flow configurations on the reactivity of the CANDU-6 reactor.
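The porous-medium approach mentioned above replaces the tube bank by a distributed hydraulic resistance, typically a momentum sink of Darcy-Forchheimer form added to the momentum equations. A minimal sketch; the permeability `K` and inertial coefficient `C2` are illustrative assumptions, not CANDU-qualified resistance values:

```python
import numpy as np

def porous_sink(u, rho=1085.0, mu=8.0e-4, K=1.0e-7, C2=50.0):
    """Momentum sink [N/m^3] of Darcy-Forchheimer form standing in for the
    380-tube bank: S = -(mu/K)*u - 0.5*rho*C2*|u|*u.
    rho and mu are roughly heavy water near operating temperature; K and C2
    are illustrative only, not qualified CANDU values."""
    u = np.asarray(u, dtype=float)
    return -(mu / K) * u - 0.5 * rho * C2 * np.abs(u) * u

print(float(porous_sink(0.1)))  # sink for a 0.1 m/s moderator velocity (≈ -1071.25)
```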
NASA Astrophysics Data System (ADS)
Freuchet, Florian
In the marine environment, recruitment abundance depends on processes affecting both the adults and the larval stock. Under the influence of reliable cues of habitat quality, the mother can increase (anticipatory maternal effects, AME) or reduce (selfish maternal effects, SME) the physiological condition of the offspring. In tropical zones, which are generally more oligotrophic, nutritive resources and temperature are two important components that can limit recruitment. This study tested the effects of nutritional supply and thermal stress on larval production and on the maternal strategy adopted. We targeted the barnacle Chthamalus bisinuatus (Pilsbry) as a biological model because it dominates the upper intertidal zones along the rocky shores of southeastern Brazil (a tropical region). The initial hypotheses were that nutritional supply allows adults to produce high-quality larvae, and that thermal stress triggers early spawning, producing low-quality larvae. To test these hypotheses, populations of C. bisinuatus were reared in four experimental groups, combining levels of nutritional supply (high and low) and thermal stress (stressed and unstressed). Measurements of survival and of the physiological condition of adults and larvae made it possible to identify parental responses that may be advantageous in a hostile tropical environment. Fatty-acid profile analysis was used to assess the physiological quality of adults and larvae. The feeding treatment (high or low nutritive supply) showed no difference in neutral-lipid accumulation, nauplius size, reproductive effort or nauplius survival time under starvation.
It appears that the low nutritive resource is compensated by mothers adopting an AME model, i.e. anticipating the environment in order to produce larvae with an appropriate phenotype. With the addition of a thermal stress, larval production decreased by 47% and the larvae were 18 µm smaller. The mothers then appear to follow an SME model, characterized by reduced larval performance. Following these results, we hypothesize that in subtropical zones, such as the coasts of the state of São Paulo, the temperature increase experienced by barnacles is, a priori, not harmful to their organism if it is combined with a sufficient nutritive supply.
NASA Astrophysics Data System (ADS)
Fantoni, Julie
2011-12-01
Several classes of integrated microelectronic circuits require highly precise and stable analog components that cannot be obtained directly through standard CMOS fabrication processes. These components must therefore be calibrated either by a modification of the fabrication process or by a post-fabrication tuning procedure. Many successful post-fabrication tuning processes have been introduced in the field of resistor calibration, including resistor laser trimming, the core subject of this thesis. In this work, the trimmed components are polysilicon resistors in a standard 180 nm CMOS technology, integrated in circuits specially designed to allow laser intervention on their surface. The laser is a nanosecond pulsed laser whose fluence is set below the melting threshold of polysilicon in order to prevent damage to the material structure. This novel low-power, highly localized procedure reduces the risk of damaging sensitive surrounding circuits and requires no additional fabrication step, allowing smaller die areas and reduced costs. Precise, reliable and reproducible devices have been tuned with this technique to a precision below 500 ppm. The main objective of this research is to study and analyze the effect of varying the laser parameters on the trimmed component's properties, and to optimize those parameters with regard to the desired precision and stability of the final product. Raman spectroscopy is used to observe and characterize structural modifications of the polysilicon following laser irradiation, while precise resistance measurements and standardized in-oven aging tests allow a complete characterization of the device in terms of precision and stability. It is shown that, for a given precision, this low-power trimming technique produces devices with a stability comparable to that obtained with another trimming technology, the pulsed-current method.
An electrical model is also developed to predict the resistance modification as a function of the laser fluence, the number of pulses and the pulse duration. The model is shown to be accurate to within 1500 ppm when the laser fluence is set relative to the melting threshold of polysilicon. Concerning stability, results show that, following a 300 h, 150 °C aging procedure, laser-trimmed components present a 1.2% drift from their initial resistance value, whereas a 0.7% drift is observed on untrimmed samples. These results are comparable to those obtained with the pulsed-current trimming technique, which produces trimmed components with a 1% resistance drift after a 200 h, 162 °C aging procedure. Recommendations are given in the conclusion as to which laser parameters to modify, and how, in order to produce the desired trimmed devices with the best possible performance.
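The electrical model itself is not reproduced in the abstract. As an illustration only, a saturating dependence of the trimmed resistance on fluence and pulse count can be sketched as follows; the functional form, the threshold `f_th` and all coefficients are hypothetical stand-ins, not the thesis' fitted model:

```python
import math

def trimmed_resistance(r0_ohm, fluence, n_pulses, f_th=1.0, dr_max=0.05, k=0.1):
    """Illustrative saturating model, NOT the fitted model of the thesis:
    each sub-melting-threshold pulse is assumed to anneal the polysilicon
    slightly, lowering the resistance, with the fractional change
    saturating as pulses accumulate. fluence and f_th share arbitrary
    units; dr_max is the asymptotic fractional change; k sets how fast
    saturation is reached."""
    if fluence >= f_th:
        raise ValueError("fluence must stay below the melting threshold")
    drive = fluence / f_th
    return r0_ohm * (1.0 - dr_max * drive * (1.0 - math.exp(-k * n_pulses)))

r = trimmed_resistance(10_000.0, 0.8, 50)   # 10 kOhm resistor, 50 pulses
print(round(r, 1))                          # 9602.7
```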
Ionochromic and photochromic properties of polythiophene derivatives
NASA Astrophysics Data System (ADS)
Levesque, Isabelle
Regioregular polythiophene derivatives were synthesized and characterized in solution and as thin films. UV-visible spectroscopy showed that these derivatives can exhibit particular chromic properties depending on the stimulus applied. For instance, an increase in temperature makes the polymers change colour from violet to yellow, both in the solid state and in solution. These chromic properties appear to be governed by a conformational transition (planar to nonplanar) of the main chain. The aim of this work was to better understand the influence of side-chain organization on the chromic transitions. Two of the synthesized derivatives, bearing side chains sensitive to alkali cations, proved to be ionochromic as well as thermochromic: one polymer bears oligo(oxyethylene) side chains, and the other a crown-ether group specific to lithium ions. The observed chromic effects are explained by non-covalent interactions of the cations with the oxygen atoms of the side chains in the first polymer, and by insertion of the Li+ ion into the crown-ether cavity in the second. These interactions appear to reduce the organization of the side chains, thereby inducing a twist of the main chain. Both polymers seem specific to certain cations and could therefore serve as optical sensors. The Li+ specificity of the second polymer could also enable ionic conduction, in addition to the electronic conductivity characteristic of polythiophenes, which could prove useful for lightweight batteries made entirely of polymers and lithium salts. Other derivatives, bearing azobenzene side chains, proved to be photochromic as well as thermochromic.
The side group can switch configuration from the trans to the cis form under ultraviolet irradiation, which by all evidence has a marked effect on the organization of the side chains. This induces a twist of the thiophene main chain, leading to a marked decrease in conjugation. These effects can be exploited, among other applications, in optical writing. It turned out that the weakly conjugated irradiated polymer can be forced back to its initial, highly conjugated state very rapidly by a simple electrochemical treatment. In conclusion, it was shown that modifying the side-chain organization through an external stimulus considerably affects the conformation of the main chain, suggesting that the side chains stabilize a particular conformation of the polythiophenes.
NASA Astrophysics Data System (ADS)
Trimmel, Heidelinde; Weihs, Philipp; Oswald, Sandro M.; Masson, Valéry; Schoetter, Robert
2017-04-01
Urban settlements are generally known for their high fractions of impermeable surfaces, large heat capacity and low humidity compared to rural areas, which results in the well-known phenomenon of urban heat islands. Urbanized areas are growing, which can amplify the intensity and frequency of situations with heat stress. The distribution of the urban heat island is not uniform, though, because the urban environment is highly diverse in its morphology: building heights, building contiguity and the configuration of open spaces and trees vary, causing changes in the aerodynamic resistance for heat transfer and in the drag coefficients for momentum. Furthermore, cities are characterized by highly variable physical surface properties such as albedo, emissivity, heat capacity and thermal conductivity. The distribution of the urban heat island is influenced by these morphological and physical parameters as well as by the distribution of unsealed soil and vegetation; these aspects influence the urban climate at the micro- and mesoscale. For the greater Vienna area, high-resolution vector and raster geodatasets were processed to derive land-use surface fractions and building morphology parameters at the block scale, following the methodology of Cordeau (2016). A dataset of building age and typology was cross-checked and extended using satellite visual and thermal bands, and linked to a database joining building age and typology with typical physical building parameters obtained from different studies (Berger et al. 2012; Amtmann and Altmann-Mavaddat 2014) and from the OIB (Österreichisches Institut für Bautechnik). Using the dominant parameters obtained from these high-resolution, mainly ground-based datasets (building height, built area fraction, unsealed fraction, sky view factor), a local climate zone classification was produced algorithmically, with threshold values chosen according to Stewart and Oke (2012).
This approach is compared to results obtained with the methodology of Bechtel et al. (2015), which is based on machine-learning algorithms relying on satellite imagery and expert knowledge. The data on urban land use and morphology are used to initialise the town energy balance scheme TEB, but are also useful for other urban canopy models and for studies related to urban planning or modelling of the urban system. The sensitivity of canyon air and surface temperatures, air specific humidity and horizontal wind simulated by the town energy balance scheme TEB (Masson, 2000) to the dominant parameters, within the range determined for the present urban structure of Vienna and the expected changes (MA 18 2011, 2014a, 2014b; PGO 2011; Amtmann and Altmann-Mavaddat 2014), was calculated for different land-cover zones. While the building heights have a standard deviation of 3.2 m, which is 15% of the maximum average building height of one block, the built and unsealed surface fractions vary more strongly, with around 30% standard deviation. The pre-1919 structure of Vienna is rather uniform and easier to describe; the later building structure is more diverse in both morphological and physical building parameters. The largest uncertainties are therefore possible at the urban rims, where the strongest development is also expected, and the analysis will focus on these areas. References: Amtmann M, Altmann-Mavaddat N (2014) Eine Typologie österreichischer Wohngebäude, Österreichische Energieagentur - Austrian Energy Agency, TABULA/EPISCOPE. Bechtel B, Alexander P, Böhner J, et al (2015) Mapping Local Climate Zones for a Worldwide Database of the Form and Function of Cities. ISPRS Int J Geo-Inf 4:199-219.
doi: 10.3390/ijgi4010199. Berger T, Formayer H, Smutny R, Neururer C, Passawa R (2012) Auswirkungen des Klimawandels auf den thermischen Komfort in Bürogebäuden, Berichte aus Energie- und Umweltforschung. Cordeau E (2016) Les îlots morphologiques urbains (IMU), IAU îdF. Magistratsabteilung 18 - Stadtentwicklung und Stadtplanung, Wien - MA 18 (2011) Siedlungsformen für die Stadterweiterung. MA 18 (2014a) Smart City Wien - Rahmenstrategie. MA 18 (2014b) Stadtentwicklungsplan STEP 2025, www.step.wien.at. Masson V (2000) A physically-based scheme for the urban energy budget in atmospheric models. Bound-Layer Meteorol 94:357-397. doi: 10.1023/A:1002463829265. PGO (Planungsgemeinschaft Ost) (2011) stadtregion+, Planungskooperation zur räumlichen Entwicklung der Stadtregion Wien Niederösterreich Burgenland. Stewart ID, Oke TR (2012) Local climate zones for urban temperature studies. Bull Am Meteorol Soc 93:1879-1900.
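The threshold-based classification step can be illustrated with a toy decision rule over the dominant parameters named in the abstract (building height, built fraction, unsealed fraction). The cut-off values below are simplified stand-ins, not the full Stewart and Oke (2012) property table:

```python
def classify_lcz(mean_height_m, built_fraction, pervious_fraction):
    """Toy decision rules in the spirit of Stewart and Oke (2012).
    The cut-off values are simplified, illustrative stand-ins for the
    full local climate zone (LCZ) property table."""
    if built_fraction < 0.10 and pervious_fraction > 0.60:
        return "natural land cover (LCZ A-G)"
    compact = built_fraction >= 0.40
    if mean_height_m > 25.0:
        return "LCZ 1 compact high-rise" if compact else "LCZ 4 open high-rise"
    if mean_height_m >= 10.0:
        return "LCZ 2 compact midrise" if compact else "LCZ 5 open midrise"
    return "LCZ 3 compact low-rise" if compact else "LCZ 6 open low-rise"

# A dense pre-1919 Viennese block: ~18 m buildings, 55% built, 10% unsealed
print(classify_lcz(18.0, 0.55, 0.10))  # LCZ 2 compact midrise
```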
Study of the complete blood count in homozygous sickle cell disease: a series of 87 patients
Dahmani, Fatima; Benkirane, Souad; Kouzih, Jaafar; Woumki, Aziz; Mamad, Hassan; Masrar, Azlarab
2016-01-01
Homozygous sickle cell disease is among the most frequent haemoglobinopathies in Morocco. Sickle cell disease is characterized by great variability of clinical and biological expression, which depends on modulating genetic factors and on environmental factors. It manifests as a regenerative anaemia of very variable severity between individuals; without treatment, its spontaneous course leads to early death. A severe clinical picture is marked by early and frequent transfusions, serious infectious complications and early mortality, and by a constant inflammatory state characterized by elevated inflammatory proteins and a compromised nutritional status. The objective was to determine the profile of the haematological parameters of Moroccan homozygous (SS) sickle cell patients in the steady state. We carried out a cross-sectional descriptive study of 87 sickle cell (SS) patients. The biological work-up comprised a complete blood count with a morphological study of red blood cells on MGG-stained smears and an automated reticulocyte count, together with haemoglobin electrophoresis at alkaline pH (8.8) on agarose gel with densitometric integration. The mean age was 13.22 ± 16.36 years, with a sex ratio (M/F) of 1.175 and extremes of 0.6 to 36 years. The biological repercussion of the anaemia was intense in 88.5% of patients; 67.8% had a normocytic anaemia, versus 29.9% with microcytosis and 2.3% with macrocytosis. The degree of anisocytosis was related to the degree of anaemia and was highly suggestive in homozygous S/S patients (95.4%). Reticulocytosis was observed in our patients (81.6%) and 52.9% presented thrombocytosis. Leukocytosis was observed in 64.4% of patients and 80.5% presented neutropenia.
The complete blood count parameters will serve as a baseline for comparison during crises and will allow the clinician to assess the effectiveness of management. The elevated values of white blood cells, platelets and MCHC appear to be determining factors in the severe expression of sickle cell disease in Morocco. The haematological profile of Moroccan sickle cell patients shows data similar to those reported in the literature for patients from Central Africa, including leukocytosis. The results of our study suggest that sickle cell disease is a very frequent health problem among Moroccans and that our results are comparable to those described in major sickle cell syndrome. PMID:28293356
NASA Astrophysics Data System (ADS)
Lapalme, Maxime
Cold expansion (CX) is a process that consists in plastically deforming assembly holes in metallic alloys by drawing an oversized mandrel through them. The large interference caused by the mandrel generates residual stresses around the hole. The tangential component of those stresses is beneficial to the hole's fatigue life, since a highly compressive zone is created that retards fatigue crack propagation. Farther from this compressive zone, however, balancing tensile stresses are generated. The net result of the CX process is a considerable increase in the fatigue life of the hole, as the industry has demonstrated over the last decades. The objectives of the present study were to characterize the residual stress field induced by CX and to develop a method for simulating it. The complexity of the generated stresses is compounded by two main factors. First, the progressive drawing of the mandrel through the hole causes an evolving interference, which produces variable stress states through the thickness of the perforated plate. Second, for easier application and better productivity, the interference between the hole and the mandrel is actually produced by an interference part, the sleeve, which is rolled into a cylindrical form from a thin steel sheet. At its critical interference position, a split opens in the sleeve, which causes a non-uniform mechanical loading on the walls of the hole. In order to build a physically realistic three-dimensional finite element model, laboratory measurements were first performed. The mandrel was digitized to introduce its exact shape into the model. Dimensional measurements also helped characterize the sleeve's mechanical behavior during CX and the effect of its split on the final hole state. These measurements and observations allowed the behavior of the various contact interfaces and geometries in the FE model to be defined.
Characterization of the residual stress field and validation of the CX simulation model were performed using a variety of experimental data generated as part of this study. First, X-ray diffraction yielded stress measurements on both sides of the sample. Then, full-field planar strains were measured by digital image correlation on both sides of the samples. Finally, optical measurements were carried out to determine the out-of-plane displacements in the vicinity of the hole, a movement caused by the passage of the mandrel and the flow of material as it moves. The experimental data showed that, through the thickness of a plate with a cold-expanded hole, the residual stresses and strains are quite different, and therefore that the CX process has important three-dimensional effects. Moreover, the opening in the sleeve causes a non-uniform state of deformation around the circumference of the hole. The results of the simulation using the developed FE model show a very good correlation with the experimental stress, strain and displacement data. This comparison shows that, to properly simulate the CX process, it is important to consider the exact geometry of the parts and tools as well as the contacts between all of these interfaces. Following CX, the hole is generally reamed to the dimensions required for the subsequent assembly with a fastener. This machining causes a redistribution of the stresses previously generated by CX. No experimental results were collected on the impact of reaming in this study; however, a simulation method was used in the FE model to represent this last operation. The analysis shows that reaming makes the stress state uniform across the thickness of the cold-expanded sample. A validation of this observation would be necessary, since the effect on the final residual stress state generated by CX is significant.
NASA Astrophysics Data System (ADS)
La Jeunesse, Isabelle; Cirelli, Claudia; Larrue, Corinne; Aubin, David
2013-04-01
The Mediterranean and neighboring countries are already experiencing a broad range of natural and man-made threats to water security. According to the latest reports of the Intergovernmental Panel on Climate Change, the region is at risk due to its pronounced susceptibility to changes in the hydrological budget and extremes. Such changes are expected to have strong impacts on the management of water resources and on water security from an ecological, economic and social angle. This communication asks whether it is relevant to compare the solutions implemented to face water scarcity in two cases that are a priori not comparable: (i) the Thau coastal lagoon and its catchment in the south of France, and (ii) the Rio Mannu catchment in Sardinia, the second-largest island of Italy. The Thau coastal lagoon on the French coast is characterized by intensive shellfish farming in the lagoon waters and by summer tourism typical of the Mediterranean coast. Its territory also supports industrial and commercial activities concentrated around the ports of Frontignan and Sète, and the expansion of the small villages of the catchment as a consequence of their connection with the city of Montpellier. The Rio Mannu catchment in southern Sardinia is part of the Campidano plain, located about 30 km from Cagliari, the island's capital. The basin is mainly covered by agricultural fields and grassland, while only a small percentage of its area is occupied by forests in the south-east. By presenting results of the FP7 EU CLIMB project, the communication aims to reflect on the degree of complexity of the dynamics of the stakeholder system for water allocation in the Mediterranean region in the context of climate change. After presenting the case studies and the stakeholders' perceptions of water uses, a reflection on the capacity of stakeholders to represent the new hydrosystem limits is carried out.
For the authors, in these two particular case studies, the water scarcity problems are similar even though the water uses differ. The answers to water scarcity, which depend mainly on the capacity to import water, are generating new limits for the hydrosystems and increase the complexity of the stakeholder systems. This creates a risk that stakeholders will be unable to represent the uses within the hydrosystems, which could make it difficult to establish a dialogue for integrated solutions in a context of crisis. Acknowledgements: The authors would like to thank the EC for funding the project, and the CLIMB partners, in particular the case study leaders, for their efficient support during the field investigations. Sincere thanks are due to the stakeholders for the time they kindly devoted to the interviews and questionnaires.
NASA Astrophysics Data System (ADS)
Chabot, Vincent
The development of new drugs relies on pharmacological studies, whose role is to identify new active compounds or new pharmacological targets acting, among others, at the cellular level. Recently, detection based on surface plasmon resonance (SPR) has been applied to the study of cellular responses. This detection method, which observes refractive index variations associated with small mass changes at a metal surface, has the advantage of allowing the study of a population of living cells in real time, without requiring labeling agents. To perform detection at the level of individual cells, SPR microscopy can be used, which spatially localizes the detection through an imaging system. However, SPR-based detection is a label-free measurement, and the measured signals are attributed to an averaged response of the various cellular sources. In order to better understand and identify the cellular components generating the signal measured by SPR, it is relevant to combine SPR microscopy with a complementary modality, namely fluorescence imaging. This is the problem addressed by this thesis project, which consists in designing two separate platforms, for SPR microscopy and for fluorescence imaging, optimized for cellular studies, so as to evaluate the possibilities of integrating the two modalities into a single system. Substrates adapted to each platform were designed and fabricated. These substrates used a silver layer passivated by a thin gold layer; their stability and biocompatibility for cellular studies were validated. Two configurations improving the sensitivity by probing the cells more deeply were evaluated: long-range surface plasmons and metal-clad waveguides.
The increased sensitivity of these configurations was also demonstrated for cellular biosensing. A platform measuring SPR spectroscopy simultaneously with the acquisition of fluorescence images was built, and then validated by studying cellular responses to a pharmacological stimulation. Next, a system based on SPR microscopy was designed and characterized, and its use for studying responses at the level of individual cells was demonstrated. Finally, the strengths and weaknesses of the substrates and platforms built during the thesis were evaluated; possible improvements are put forward, and the integration of the SPR microscopy and fluorescence modalities following this work is discussed. The achievements of this study thus made it possible to identify the cellular components involved in generating the signal measured in SPR biosensing. Keywords: surface plasmon resonance, SPR microscopy, long-range surface plasmons (LRSPR), metal-clad waveguide (MCWG), surface plasmon-enhanced fluorescence (SPEF), cellular biosensing, SPR imaging.
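The SPR measurement underlying this work can be modelled with the standard three-layer Fresnel reflectivity in the Kretschmann configuration. The sketch below uses textbook-like values for a gold film at 633 nm with an aqueous sample; these are assumptions for illustration, not fitted to the silver/gold bilayer substrates of the thesis:

```python
import numpy as np

def spr_reflectivity(theta_deg, wl=633e-9, n_prism=1.515,
                     eps_metal=-18.0 + 0.5j, n_sample=1.33, d=50e-9):
    """p-polarized reflectivity of a prism/metal/sample stack (Kretschmann
    configuration), standard three-layer Fresnel formula. eps_metal and d
    are typical values for a ~50 nm gold film at 633 nm."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    k0 = 2.0 * np.pi / wl
    kx = k0 * n_prism * np.sin(theta)
    eps = [n_prism**2, eps_metal, n_sample**2]
    kz = [np.sqrt(e * k0**2 - kx**2 + 0j) for e in eps]
    r01 = (kz[0] / eps[0] - kz[1] / eps[1]) / (kz[0] / eps[0] + kz[1] / eps[1])
    r12 = (kz[1] / eps[1] - kz[2] / eps[2]) / (kz[1] / eps[1] + kz[2] / eps[2])
    phase = np.exp(2j * kz[1] * d)            # round trip through the metal film
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return np.abs(r) ** 2

angles = np.linspace(60.0, 80.0, 2001)
R = spr_reflectivity(angles)
print(round(float(angles[np.argmin(R)]), 2))  # resonance dip angle (deg)
```

A small refractive-index increase of the sample (e.g. mass binding at the surface) shifts this dip to larger angles, which is the quantity tracked in SPR biosensing.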
Processing and characterization of self-hardening parts produced with new master alloys
NASA Astrophysics Data System (ADS)
Bouchemit, Arslane Abdelkader
Self-hardenability of powder-metallurgy (PM) steels makes it possible to obtain parts with an as-quenched microstructure (martensite and/or bainite) directly during cooling at the exit of the sintering furnace (industrial sintering: 10-45 °C/min [550 to 350 °C]). Among other benefits, this eliminates the austenitizing and quenching heat treatments (in water: ≈ 2700 °C/min, or in oil: ≈ 1100 °C/min [550 to 350 °C] [17]) generally required after sintering to obtain a martensitic microstructure. The manufacturing process is thus simplified and less costly, and the part distortion caused by rapid cooling during quenching is avoided. In addition, oil baths are eliminated, making the process safer and more environmentally friendly. The main parameters governing self-hardenability are the cooling rate and the chemical composition of the steel. Nowadays, forced-convection cooling systems combined with industrial furnaces provide high cooling rates at the furnace exit (60-300 °C/min [550 to 350 °C]) [18, 19]. Moreover, the critical cooling rate inducing the formation of the as-quenched structure is strongly influenced by the chemical composition of the steel: the more alloyed the steel (up to a certain limit), the lower this critical cooling rate. Molybdenum, nickel and copper are the alloying elements usually used in PM. Manganese and chromium are less expensive and have a stronger effect on self-hardenability; they are nevertheless rarely used because of their susceptibility to oxidation and the loss of compressibility caused by manganese.
The main objective of this project is to develop self-hardening mixes by adding master alloys (MA: MA1, MA2 and MA4) highly alloyed with manganese (5-15 wt%) and chromium (5-15 wt%) and containing a large amount of carbon (≈ 4 wt%), developed by Ian Bailon-Poujol during his master's work [20]. The high carbon content of these master alloys protects the oxidation-prone alloying elements throughout the process: in the liquid bath during melting and water atomization, during milling, and during sintering of the parts containing these master alloys. Previously, Ian Bailon-Poujol had studied the milling of some water-atomized master alloys and had begun the development of self-hardening mixes as well as studies on the diffusion of the alloying elements. For this project, the development of self-hardening mixes involved optimizing every step of the processing route to obtain the best possible properties of the mixes before sintering (flow, green strength, ...) and after sintering (hardness, microstructure, ...), both for the master alloys water-atomized by Ian Bailon-Poujol and for a master alloy of similar chemistry that was gas-atomized. (Abstract shortened by ProQuest.).
Caractérisation expérimentale et numérique de la flamme de carburants synthétiques gazeux
NASA Astrophysics Data System (ADS)
Ouimette, Pascale
The goal of this research is to characterize, experimentally and numerically, laminar flames of syngas fuels made of hydrogen (H2), carbon monoxide (CO), and carbon dioxide (CO2). More specifically, the secondary objectives are: 1) to understand the effects of CO2 concentration and H2/CO ratio on NOx emissions, flame temperature, visible flame height, and flame appearance; 2) to analyze the influence of the H2/CO ratio on the flame structure; and 3) to compare and validate different H2/CO kinetic mechanisms used in a CFD (computational fluid dynamics) model over different H2/CO ratios. The present thesis is thus divided into three chapters, each corresponding to a secondary objective. In the first part, the experiments led to the conclusion that adding CO2 lowers the flame temperature and EINOx at all equivalence ratios, while increasing the H2/CO ratio has no influence on flame temperature but increases EINOx at equivalence ratios lower than 2. Concerning flame appearance, a low CO2 concentration in the fuel or a high H2/CO ratio gives the flame an orange color, which is explained by a high level of CO in the combustion by-products. The constant flame temperature observed when adding CO, which has a higher adiabatic flame temperature, is mainly due to the increased radiative heat loss from CO2. Because NOx emissions of H2/CO/CO2 flames are mainly a function of flame temperature, which in turn depends on the H2/CO ratio, the rest of the thesis concentrates on measuring and predicting species in the flame, since a good prediction of species and heat release will enable NOx emissions to be predicted. In the second part, different H2/CO fuels are therefore tested and the major species are measured by Raman spectroscopy. Among the major species, the maximum measured H2O concentration decreases as CO is added to the fuel, while the central CO2 concentration increases, as expected.
However, at 20% of the visible flame height and for all fuels tested here, the measured CO2 concentration is lower than its stoichiometric value, while the measured H2O has already reached its stoichiometric concentration. The slow chemical reactions producing CO2, compared with those forming H2O, could explain this difference. In the third part, a numerical model is created for a partially premixed flame of 50% H2 / 50% CO. This model compares different combustion mechanisms and shows that a reduced kinetic mechanism shortens simulation times while preserving the quality of the results obtained with more complex kinetic schemes. The numerical model, which includes radiative heat losses, is also validated over a wide range of fuels going from 100% H2 to 5% H2 / 95% CO. The most important recommendation of this work is to add a NOx mechanism to the numerical model in order to eventually determine an optimal fuel. It would also be necessary to validate the model over a wide range of other parameters such as equivalence ratio, initial temperature and initial pressure.
NASA Astrophysics Data System (ADS)
Taing, Eric
The environmental fate of dioxins and furans, or polychlorodibenzo-p-dioxins and -furans (PCDD/Fs), leaching from wood poles treated with pentachlorophenol (PCP) oil is modified by the presence of the oil. Such interactions between co-contaminants, which also exist for other pollutants within mixtures, have been shown in the specific context of risk analysis, but have never been taken into account in the generic context of life cycle assessment (LCA). This decision-making tool relies on characterization factors (CFs) to estimate the potential impacts of an emitted amount of a pollutant in different impact categories such as aquatic ecotoxicity and human toxicity. For these two impact categories, CFs are calculated from a cause-effect chain that models the environmental fate, exposure and effects of the pollutant (represented by fate, exposure and effect matrices FF, XF and EF, respectively), meaning that a modification of PCDD/F fate induces a change in PCDD/F CFs. The research question is therefore: in life cycle impact assessment (LCIA), to what extent would the potential impacts of PCDD/Fs on aquatic ecotoxicity and human toxicity change when the influence of a complex organic mixture on PCDD/F fate is taken into account? The main objective is thus to develop CFs for PCDD/Fs whose fate is influenced by PCP oil, and to compare them with the CFs of PCDD/Fs without oil for the aquatic ecotoxicity and human toxicity impact categories. A mathematical approach is established to determine the new environmental distribution of PCDD/Fs in the presence of oil; a new fate matrix FF' is calculated from this distribution to obtain new CFs' integrating the influence of the oil. FF' and CF' are then compared with the FF and CF of PCDD/Fs without oil. Finally, the potential (eco)toxic impacts of Canadian PCDD/F emissions are calculated with the new CFs' of PCDD/Fs in the presence of oil.
Focusing only on the results for an emission into air, freshwater and natural soil on a continental scale, the overall elimination fractions of 2,3,7,8-TCDD changed significantly. For all three emissions, the organic fractions, being more volatile than 2,3,7,8-TCDD, increased its overall elimination fraction in the continental air compartment: for an emission into continental air, they increased this fraction from 29% to at most 49%. For an emission into continental freshwater, 2,3,7,8-TCDD fate was mainly influenced by two groups of organic fractions: the lightest ones, which volatilize into continental air (overall elimination fraction of 2,3,7,8-TCDD increasing from 2% to 35%), and the heaviest ones, which are removed by sedimentation (from 87% up to 96%). An approach has therefore been proposed to represent the carrier behaviour of the oil for PCDD/Fs. PCDD/F potential impacts on aquatic ecotoxicity and human toxicity change by up to two orders of magnitude depending on the emitting compartment (except for the seawater and ocean compartments). As 2,3,7,8-TCDD is one of the most toxic pollutants, this change is significant in LCA. To assess the validity of the model's results, it is recommended that laboratory experiments be carried out on PCDD/F volatilization with oil. It could also be interesting to integrate the influence of PCP on PCDD/F fate and, more broadly, the influence of all co-contaminants on PCDD/F exposure and effects. Moreover, obtaining a single CFeco and CFtox via a weighting of the 17 CF'eco and the 17 CF'tox, respectively, is necessary for use in LCA. Unfortunately, the variability of the oil composition makes the weighting difficult, so it is suggested to calculate the mean CF'eco and CF'tox.
Finally, this research could be extended to other pollutants whose fate is known to be modified by a complex organic mixture, in an effort to ensure that impact characterization better reflects reality. (Abstract shortened by UMI.).
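The cause-effect chain described above, in which characterization factors combine effect, exposure and fate matrices, can be illustrated with a toy two-compartment calculation (all numbers below are hypothetical; real LCIA models use many compartments and one CF per congener):

```python
import numpy as np

# Toy 2-compartment example (air, freshwater); all values are hypothetical.
FF = np.array([[2.0, 0.1],    # fate matrix: persistence/transfer between compartments
               [0.3, 15.0]])
XF = np.diag([1e-4, 5e-3])    # exposure: fraction reaching the target per compartment
EF = np.array([[0.0, 0.8]])   # effect: potency of exposure via freshwater only

CF = EF @ XF @ FF             # characterization factors per emission compartment
# A modified fate matrix FF' (e.g. the oil acting as a carrier toward freshwater)
# propagates directly into modified factors CF':
FF_oil = FF * np.array([[1.0, 1.0], [4.0, 1.1]])
CF_oil = EF @ XF @ FF_oil
```

The point of the sketch is structural: since CF is a product ending in FF, replacing FF by FF' is sufficient to obtain the oil-influenced CF', exactly as described in the abstract.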
NASA Astrophysics Data System (ADS)
Bedwani, Stephane
To assess the importance of charge transfer on interface properties, we studied the interaction of the tetracyanoethylene (TCNE) molecule with various copper surfaces. TCNE, a highly electrophilic molecule, is an ideal candidate for studying the influence of strong charge transfer on the electronic and structural properties of molecule-surface interfaces. Indeed, various TCNE-transition metal complexes exhibit magnetism at room temperature, consistent with a very significant change in the residual charge on the TCNE molecule. The adsorption of TCNE molecules on Cu(100) and Cu(111) surfaces was studied by scanning tunneling microscopy (STM) and by density functional theory (DFT) calculations within the local density approximation (LDA). DFT-LDA calculations were performed to determine the geometric and electronic structure of the studied interfaces. Mulliken analysis was used to evaluate the partial net charge on the adsorbed species, and density of states (DOS) diagrams provided information on the nature of the frontier orbitals involved in charge transfer at the molecule-metal interfaces. To validate the theoretical observations, our simulated STM images were compared with experimental STM images provided by our collaborators. The theoretical STM images were obtained with the SPAGS-STM software using the Landauer-Buttiker formalism with a semi-empirical Hamiltonian based on extended Huckel theory (EHT) and parameterized using DFT calculations. During the development of the SPAGS-STM software, we created a discretization module allowing rapid generation of STM images. This module is based on an adaptive Delaunay meshing scheme that minimizes the number of tunneling-current calculations. The general idea consists in refining the mesh, and therefore the calculations, near large-contrast zones rather than over the entire image.
The adapted mesh provides an STM image resolution equivalent to that of a conventional Cartesian grid, but with a significantly smaller number of calculated pixels. The module is independent of the solver used to compute the tunneling current and can be transposed to different imaging techniques. Our work on the adsorption of TCNE on Cu(100) revealed that the molecules assemble into a 1D chain, causing a pronounced buckling of a few Cu atoms out of the surface. The large deformations observed at the molecule-metal interface show that the Cu atoms close to the TCNE nitrile groups assist the molecular assembly and behave distinctly from the other Cu atoms. A strong charge transfer is observed at the interface, leading to an almost complete occupation of the state ascribed to the lowest unoccupied molecular orbital (LUMO) of gas-phase TCNE. In addition, a back-donation of charge from the molecule to the metal via the states associated with the highest occupied molecular orbitals (HOMO) of gas-phase TCNE is observed. The magnitude of the charge transfer between a TCNE molecule and Cu atoms is of the same order on the Cu(111) surface, but it causes much less buckling than on Cu(100). However, experimental STM images of single TCNE molecules adsorbed on Cu(111) reveal a surprising electronic multistability. In addition, scanning tunneling spectroscopy (STS) reveals that one of these states has a magnetic nature and shows a Kondo resonance. STM simulations identified the origin of two non-magnetic states, and DFT-LDA calculations ascribed the magnetic state to the partial occupation of a state corresponding to the LUMO+2 of TCNE. Moreover, the calculations showed that deformations beyond those of TCNE in the adsorbed phase, such as the elongation of the central C=C bond and the bending of the nitrile groups toward the surface, favor this charge transfer to the LUMO+2.
This suggested the presence of a Kondo state enabled by the vibrational excitation of the stretching mode of the central C=C bond. The main results of this thesis lead to the conclusion that strong charge transfer between molecules adsorbed on a metallic surface may induce significant buckling of the surface. This surface reconstruction mechanism, which involves a bidirectional charge transfer between the species, results in a partial net charge on the molecule and is involved in a supramolecular self-assembly process resembling a coordination network. Moreover, the adsorbed molecule presents important geometric distortions that alter its electronic structure. Additional distortions induced by certain molecular vibration modes appear to explain a stable magnetic state that can be switched on or off by an electrical impulse. (Abstract shortened by UMI.)
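The adaptive meshing idea described above, concentrating expensive tunneling-current evaluations near high-contrast zones, can be sketched with a simple quadtree-style refinement (the thesis uses Delaunay meshing; this sketch, with a hypothetical test image, only illustrates the refine-where-the-contrast-is principle):

```python
import numpy as np

def adaptive_sample(f, x0, y0, size, tol, depth, pts):
    """Subdivide a square cell only where the image varies strongly, so that
    expensive evaluations (the tunneling current, in the thesis) concentrate
    near high-contrast zones instead of covering a uniform grid."""
    h = size / 2.0
    samples = [f(x0, y0), f(x0 + size, y0), f(x0, y0 + size),
               f(x0 + size, y0 + size), f(x0 + h, y0 + h)]
    pts.update({(x0, y0), (x0 + size, y0), (x0, y0 + size),
                (x0 + size, y0 + size), (x0 + h, y0 + h)})
    if depth == 0 or max(samples) - min(samples) < tol:
        return  # flat cell: no further refinement needed here
    for dx in (0.0, h):
        for dy in (0.0, h):
            adaptive_sample(f, x0 + dx, y0 + dy, h, tol, depth - 1, pts)

# Hypothetical "image": a sharp bump at the center of the unit square.
bump = lambda x, y: np.exp(-80.0 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))
pts = set()
adaptive_sample(bump, 0.0, 0.0, 1.0, 0.05, 6, pts)
uniform_count = (2 ** 6 + 1) ** 2  # pixels a uniform grid of equal depth needs
```

As in the thesis, the resolution near the feature matches that of the uniform grid while the total number of evaluated points is far smaller, and the scheme is independent of how each sample is actually computed.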
NASA Astrophysics Data System (ADS)
Zebdi, Oussama
High performance composites reinforced by woven or braided fabrics have many applications in fields such as the aerospace, automotive and marine industries. This research project was carried out at the École Polytechnique de Montréal in collaboration with an industrial sponsor, the company Composites Atlantic Ltd. Composite springs often represent an interesting alternative, given the weight reduction they allow for equal mechanical performance compared with metallic springs. Their good resistance to fatigue and corrosion brings additional benefits in several industrial applications. Moreover, the use of composites increases safety by avoiding the risk of sudden rupture, because of the low crack propagation velocity in this type of material. Lastly, in electrotechnics, another significant advantage comes into play: the electrical insulation capability of composite springs. Few research results on composite springs can be found in the scientific literature. The first part of this thesis studies the problems connected with the design of composite springs. The results are promising, as it was confirmed that composite springs can be designed with the same mechanical performance, in terms of stiffness, as metallic ones. Two solutions were found to replace the metallic springs of the suspension of a four-wheel-drive vehicle: the first spring was made of carbon-epoxy, the second of glass-epoxy. In the second part, software was developed to provide a new approach for predicting the mechanical properties of woven or braided composites. This work shows how an inverse method based on laminated plate theory allows one to create, from experimental results on braided composites, a virtual basic ply that includes the effect of the fiber architecture (undulation and braiding angle). Using this model, the properties of the composite can be predicted for any braid angle.
The comparison with experimental results shows good correlation with the numerical predictions. In the third part, an experimental study on creep was conducted on composite plates manufactured with the same constitutive materials as the composite springs. Three-point bending creep tests were carried out with a Q800 DMA machine. The results showed that creep behavior depends primarily on the polymer matrix, whereas rigidity is a function of the fiber-matrix mixture. A braiding angle of 35° corresponds to a characteristic threshold for braided composites: beyond this value, rigidity drops in a creep test at a temperature above Tg. It also represents a critical angle in bending or tensile tests. Above 35°, the failure mode of the composite changes from brittle (fiber rupture) to a mixed mode in which the polymer matrix comes into play along with the fibers. Good stability was observed for composites with a braiding angle lower than ±35° or higher than ±60°. Long-term tests were also carried out on two braided composites at ±45° and ±55° in order to check the predictive model of the DMA; the shift factors obtained from the short- and long-term tests are roughly equal. This thesis has laid the groundwork for future development of industrial applications of composite springs. The design software predicts the mechanical effectiveness of helical composite springs, and the software developed to predict the elastic properties of braided composites accelerates the preparation of characterization results for the design stage. This numerical tool could be generalized to other fiber architectures and represents a practical tool for further investigations. Finally, the study on creep, although preliminary, provides a first evaluation of the life cycle of composite springs. It would now be interesting to proceed to the design of a first industrial application.
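The virtual-ply idea, predicting stiffness at any braid angle from a single set of ply constants, can be sketched with the classical off-axis modulus transformation (the ply constants below are hypothetical, and a real ±θ braid calculation would use full laminated plate theory rather than a single off-axis ply):

```python
import numpy as np

def ex_off_axis(E1, E2, G12, nu12, theta_deg):
    """Off-axis Young's modulus of a unidirectional 'virtual ply' rotated by
    theta: classical plane-stress compliance transformation."""
    c = np.cos(np.radians(theta_deg))
    s = np.sin(np.radians(theta_deg))
    inv_Ex = c**4 / E1 + s**4 / E2 + (1.0 / G12 - 2.0 * nu12 / E1) * c**2 * s**2
    return 1.0 / inv_Ex

# Hypothetical carbon/epoxy virtual-ply constants (GPa); theta is the braid angle.
E1, E2, G12, nu12 = 130.0, 9.0, 5.0, 0.3
moduli = {a: ex_off_axis(E1, E2, G12, nu12, a) for a in (0, 35, 45, 60, 90)}
```

The steep drop in modulus between 0° and the 35°-60° range is consistent with the abstract's observation that ±35° marks a characteristic threshold for braided composites.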
NASA Astrophysics Data System (ADS)
Lacasse, Simon
This research project developed a tool to predict the geometry of an adaptive panel that can change its shape according to the conditions to which it is subjected. The panel, as designed for this project, consists of two main components: a host structure that ensures the structural integrity of the panel, and an actuation system embedded in the host structure. The host structure is a fiber-reinforced polymer (Toray T300 unidirectional carbon fibers in Huntsman Araldite 8605 epoxy). The actuation system consists of shape memory alloy (SMA) wires (SAES Getters Ti-50.26at%Ni) of 1 mm diameter. To generate the movement, the actuators are positioned to create an offset, through the thickness, between the neutral plane of the laminate and the axis of the actuators. Shape memory alloys are special materials that contract when heated. When heated by the Joule effect, the actuators contract and generate forces that are transmitted to the adaptive panel through a fixation device. The offset between the actuator and the neutral plane of the panel thus generates a bending moment, deforming the adaptive panel. The design tool is based on combining the rigidity of the host structure with the operating capacity of the SMA. A finite element model was developed in the commercial software ANSYS 13. This model provides the stiffness of the host structure as a function of various parameters of the laminate (orientation and number of plies) and of the actuators (position through the thickness, distance between two actuators). According to this model, the radius of curvature of such a panel is constant along its length, and the panel's length does not influence the results. In addition, the results show that the stiffness is constant regardless of the axial deformation of the actuator.
Interestingly, the greater the distance between the actuators, the greater the stiffness felt by each actuator. The operating capacity of the SMA was evaluated experimentally. It was shown that a heat treatment at 550°C for one hour significantly increases the energy produced by the actuators while changing their transformation temperatures. Thereafter, stabilizing the actuators over 100 cycles at 150 MPa creates the two-way shape memory effect while still producing a sufficiently high generated stress. Finally, the operating envelope of the actuator was established for activation temperatures ranging from 50°C to 150°C. The respective SMA and host structure properties were then used to create the adaptive panel's design diagram, making it possible to express the target radius of curvature as a function of the actuation temperature and the laminate configuration. This relationship was finally verified experimentally. To do so, a 4-layer adaptive panel [903/WIRE/90] was produced by vacuum-assisted resin transfer molding and installed on a test bench designed for this purpose. Various parameters were investigated during manufacture to find the ideal manufacturing conditions: an infusion flow direction perpendicular to the orientation of the actuators offers better results, and the use of a sheath eliminates the jigs otherwise needed to keep the actuator in place during forming and post-polymerization treatment. The results show that when the actuators are heated by the Joule effect, the measured radius of curvature is comparable to the one established with the design tool. However, the measured temperatures are not consistent with the theoretical values, and it is therefore necessary to apply a correction factor to the measured temperature based on the SMA properties.
Such a factor is used to establish a correspondence between the measured radius of curvature and the radius of curvature obtained from the design tool. Thus, a more efficient method of temperature measurement is required.
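The bending principle described above, a moment created by the offset between the actuator axis and the laminate's neutral plane, can be sketched with elementary beam theory. The wire diameter (1 mm) and stabilization stress (150 MPa) come from the abstract; the offset and panel bending stiffness EI are assumed, illustrative values:

```python
import math

def curvature_radius(force_N, offset_m, EI_Nm2):
    """Beam-theory estimate: the wire tension F acting at offset e from the
    laminate's neutral plane applies a moment M = F*e; with uniform EI the
    panel bends to a constant radius R = EI / M, as reported in the abstract."""
    return EI_Nm2 / (force_N * offset_m)

area = math.pi * (0.5e-3) ** 2        # 1 mm diameter wire cross-section, m^2
F = 150e6 * area                      # recovery force at 150 MPa, N (~118 N)
R = curvature_radius(F, 0.6e-3, 0.4)  # assumed offset 0.6 mm, EI = 0.4 N*m^2
```

The linearity of R in EI and in 1/(F·e) is what makes a design diagram (radius versus actuation temperature and laminate configuration) possible.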
Fabrication et caractérisation d'hybrides optiques tout-fibre 120° et 90° achromatiques
NASA Astrophysics Data System (ADS)
Khettal, Elyes
This thesis presents the fabrication and characterization of optical hybrids in the form of all-fiber 3 x 3 and 4 x 4 couplers. A hybrid does two things: it splits power equally and acts as an interferometer. As an interferometer, it allows accurate measurement of the amplitude and phase of an optical signal with respect to a reference signal. As in a radio receiver, a local oscillator is made to interfere with the incoming signal to produce a beat signal; the complex amplitude is then reconstructed from the output signals of the hybrid. This is known as coherent detection. Since this thesis is a follow-up to a previous project, the main goal is to improve the fabrication process of the couplers in order to give it a certain level of repeatability and reproducibility. The 3 x 3 coupler is used as a development platform, since the fabrication process is essentially the same for both couplers. The secondary objective is to validate the theoretical concepts of a broadband hybrid in the form of an asymmetric 4 x 4 coupler. The theory explaining the functioning of these couplers is presented, and the experimental parameters necessary for their fabrication are derived. The fabrication method is fusion-tapering, which has been used for many years to produce 2 x 2 couplers and fiber tapers. The procedure consists of holding fibers together tangentially and fusing them into a monolithic structure with the help of a propane flame. The structure is then tapered by linear motorized stages, and the procedure is stopped when the desired optical response is achieved. The component is then securely packaged in a hollow metal tube. The critical step of the procedure is holding the fibers together in the desired pattern: a triangle for 3 x 3 couplers and a square or a diamond for 4 x 4 couplers. New methods to make this step more repeatable are highlighted. Several cross-sections of fused couplers are shown and the degree of success of the new methods is discussed.
The characterization methods in transmission and phase are described and the experimental results are presented, including the transmission spectra of the 3 x 3 coupler that was built. Its phase performance at several wavelengths of the C band (1530-1565 nm) is measured and analyzed. The hybrid has low loss (<0.8 dB) and shows a phase drift lower than 5° over about 40 nm. Its ability to measure phase accurately is demonstrated by demodulating a digital QPSK signal. In order to validate the theory of the broadband 4 x 4 hybrid, a new fusion-tapering approach is developed and tested. It is used to make biconical 2 x 2 couplers that allow testing of the adiabatic transfer of supermodes, a core concept of broadband hybrids. This, however, does not yield the expected result, and an alternative approach is proposed and tested. The new approach gives more encouraging results, confirming the hypothesis and pointing to a viable way to build broadband hybrids. The main goal of the project cannot be considered achieved, since the procedure for holding the fibers together does not guarantee that they stay in the desired pattern. Since this step is crucial for the hybrids to work correctly, it casts doubt on whether it is possible to build a broadband hybrid, which requires a very precise structure made of four fibers. Despite this, the results show that such a component is possible; the only remaining question is how to build it.
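The phase-measuring role of the 3 x 3 hybrid can be sketched for an ideal, lossless 120° coupler (an idealized textbook model, not the measured device): the three outputs carry the signal-LO interference at relative phases of 0°, 120° and 240°, and projecting the photocurrents onto the cube roots of unity recovers the phase:

```python
import numpy as np

def hybrid_outputs(phi, a=1.0, b=1.0):
    """Ideal 120° hybrid: the three output powers of a symmetric 3x3 coupler
    mixing a signal (amplitude a, phase phi) with a local oscillator (amplitude b)."""
    k = np.arange(3)
    return (a**2 + b**2) / 3.0 + (2.0 * a * b / 3.0) * np.cos(phi + 2.0 * np.pi * k / 3.0)

def recover_phase(I):
    """Project the three photocurrents onto the cube roots of unity; the DC terms
    cancel and the angle of the resulting phasor is the signal-LO phase."""
    k = np.arange(3)
    return np.angle(np.sum(I * np.exp(-1j * 2.0 * np.pi * k / 3.0)))

phi_hat = recover_phase(hybrid_outputs(0.7))
```

In the real device, deviations from equal splitting and from exact 120° phase offsets (the <5° drift quoted above) translate directly into errors in the recovered constellation, which is why the QPSK demodulation test is a meaningful validation.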
NASA Astrophysics Data System (ADS)
St-Georges-Robillard, Amelie
Biomaterials have evolved significantly over the past decades, and there are now several types of polymeric biomaterials with physical characteristics suited to different applications. This project focuses on improving the physico-chemical surface properties of these materials by incorporating primary amines (R-NH2), a functional group known to promote cell adhesion and growth, in the context of two biomedical applications. First, a cell culture surface enabling the adhesion of U937 monocytes is needed. These cells are used to evaluate the effect of wear particles produced by the prosthesis in periprosthetic osteolysis, a major cause of hip replacement failure. Second, one strategy used to improve the success rate of polymeric vascular grafts is to create a layer of endothelial cells on the lumen of the prosthesis; a coating that promotes the adhesion and growth of human umbilical vein endothelial cells (HUVEC) is required to achieve this layer. Previous studies demonstrated that the addition of R-NH2 groups to the coating allows the adhesion of U937 monocytes, provided their concentration [NH2] is higher than a certain critical value, [NH2]crit; R-NH2 groups were also found to enhance the adhesion and proliferation of HUVEC. Two different primary amine-rich coatings are investigated in this work: organic thin films deposited by vacuum ultraviolet (VUV) photo-polymerization, UV-PE:N, and parylene diX AM, deposited by chemical vapor deposition (CVD). The physico-chemical stability of these coatings in air and in water, essential for biomedical applications, was studied first. "Aging" of parylene diX AM in contact with ambient air caused a decrease in [NH2]/[C] of around 6% over 22 days, caused by the oxidation of R-NH2 by atmospheric oxygen, while in the case of UV-PE:N the decrease was only 2.5% over 26 days.
A second aging mechanism is also present: the reaction of free radicals trapped in the coating with oxygen in air or dissolved in water. The UV-PE:N coating proved virtually insoluble despite a high nitrogen concentration, and showed excellent retention of the R-NH2 groups when immersed in water, two properties essential for cell culture applications. These studies also showed that UV-PE:N coatings (deposited at two gas ratios, R = 0.75 and 1) permit adhesion and survival of U937 monocytes without causing any significant inflammatory response, which enables the study of wear particle effects. However, the adhesion of U937 monocytes on parylene diX AM behaves rather differently: adhesion is proportional to [NH2] and is not controlled by the critical threshold [NH2]crit observed for different types of plasma-polymer coatings. Moreover, monocytes do not survive for 24 hours on parylene diX AM; the cause of these differences remains to be elucidated. Finally, the adhesion and growth of HUVEC on both types of UV-PE:N (R = 0.75 and 1), as well as on L-PPE:N and on gelatinized polystyrene, were statistically higher than on untreated PET. UV-PE:N has thus proven to be a cell culture surface well suited to HUVEC, of similar efficiency to gelatinized polystyrene, a surface known to promote HUVEC adhesion and growth. UV-PE:N is therefore a promising coating, stable in air and in water for cell culture use, whose performance has been demonstrated for two biomedical applications. Keywords: biomaterials, primary amines, thin film deposition, photo-polymerization, plasma polymerization, XPS, chemical derivatization, ellipsometry, cellular adhesion, arthroplasty, vascular graft.
Caractérisation expérimentale de la transmission acoustique de structures aéronautiques
NASA Astrophysics Data System (ADS)
Pointel, Vincent
Le confort des passagers à l'intérieur des avions pendant le vol est un axe en voie d'amélioration constante. L'augmentation de la proportion des matériaux composites dans la fabrication des structures aéronautiques amène de nouvelles problématiques à résoudre. Le faible amortissement de ces structures, en contre partie de leur poids/raideur faible, est non favorable sur le plan acoustique, ce qui oblige les concepteurs à devoir trouver des moyens d'amélioration. De plus, les mécanismes de transmission du son au travers d'un système double paroi de type aéronautique ne sont pas complètement compris, c'est la raison qui motive cette étude. L'objectif principal de ce projet est de constituer une base de données pour le partenaire industriel de ce projet : Bombardier Aéronautique. En effet, les données expérimentales de performance d'isolation acoustique, de systèmes complets représentatifs d'un fuselage d'avion sont très rares dans la littérature scientifique. C'est pourquoi une méthodologie expérimentale est utilisée dans ce projet. Deux conceptions différentes de fuselage sont comparées. La première possède une peau (partie extérieure du fuselage) métallique raidie, alors que la deuxième est constituée d'un panneau sandwich composite. Dans les deux cas, un panneau de finition de fabrication sandwich est utilisé. Un traitement acoustique en laine de verre est placé à l'intérieur de chacun des fuselages. Des isolateurs vibratoires sont utilisés pour connecter les deux panneaux du fuselage. La simulation en laboratoire de la couche limite turbulente, qui est la source d'excitation prépondérante pendant la phase de vol, n'est pas encore possible hormis en soufflerie. C'est pourquoi deux cas d'excitation sont considérés pour essayer d'approcher cette sollicitation : une excitation mécanique (pot vibrant) et une acoustique (champ diffus). 
Validation and analysis of the results are carried out with the NOVA and VAONE software packages used by the industrial partner of this project. A secondary objective is to validate the double-wall model implemented in NOVA. Investigation of the effect of local compression of the acoustic treatment on the transmission loss of a single wall shows that this action has no notable benefit. On the other hand, the stiffness of the vibration isolators is directly linked to the insulation performance of the double-wall system; the double-wall system with a composite skin appears less sensitive to this parameter. NOVA's double-wall model gives good results for the double-wall system with a metallic skin. Larger discrepancies are observed at mid and high frequencies for the system with a composite skin. Nevertheless, the correct trend of the prediction, given the complexity of the structure, is promising.
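As a rough reference point for transmission-loss measurements of the kind described above, the field-incidence mass law gives a first-order estimate for a single panel well below its critical frequency. The sketch below is purely illustrative; the surface densities are hypothetical, not data from the thesis or from Bombardier panels.

```python
import math

def mass_law_tl(surface_density_kg_m2: float, freq_hz: float) -> float:
    """Field-incidence mass law estimate of transmission loss (dB).

    TL ~ 20*log10(m*f) - 47, with m in kg/m^2 and f in Hz.
    Valid only well below the panel's critical frequency.
    """
    return 20.0 * math.log10(surface_density_kg_m2 * freq_hz) - 47.0

# Hypothetical panels: a light composite skin vs a heavier metallic one.
for m in (5.0, 10.0):                       # kg/m^2
    for f in (500.0, 1000.0, 2000.0):       # Hz
        print(f"m={m} kg/m^2, f={f} Hz -> TL ~ {mass_law_tl(m, f):.1f} dB")
```

The formula makes the well-known trade-off explicit: doubling either the surface mass or the frequency buys about 6 dB, which is why low-mass composite fuselages need treatments and double-wall effects to recover insulation.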
L'etude de l'InP et du GaP suite a l'implantation ionique de Mn et a un recuit thermique
NASA Astrophysics Data System (ADS)
Bucsa, Ioan Gigel
This thesis is devoted to the study of InMnP and GaMnP materials fabricated by ion implantation and thermal annealing. More precisely, we investigated the possibility of forming, by ion implantation, homogeneous materials (alloys) of InMnP and GaMnP containing 1 to 5 atomic % Mn in a ferromagnetic state, for possible spintronics applications. A first, introductory chapter gives the motivation for this research and reviews the literature on the subject. The second chapter describes the principles of ion implantation, the technique used to fabricate the samples; the effects of the energy, fluence and direction of the ion beam on the implantation profile and on damage formation are highlighted, and information on the substrates used for implantation is also given. The experimental techniques used for the structural, chemical and magnetic characterization of the samples, together with their limitations, are presented in the third chapter. Some theoretical principles of magnetism needed to understand the magnetic measurements are given in chapter 4. The fifth chapter is devoted to the morphology and magnetic properties of the substrates used for implantation, and the sixth chapter to the Mn-implanted samples before any thermal annealing. In particular, we show in this chapter that Mn implantation above 10^16 ions/cm^2 amorphizes the implanted part of the material and that the implanted Mn is distributed in depth along a Gaussian profile. Magnetically, the implanted atoms are in a paramagnetic state between 5 and 300 K with spin 5/2. In chapter 7 we present the properties of samples annealed at low temperatures.
We show that in these samples the implanted layer is polycrystalline and the Mn atoms remain paramagnetic. Chapters 8 and 9, the most extensive, present the measurement results for samples annealed at high temperatures: Mn-implanted InP and GaP in chapter 8, and InP co-implanted with Mn and P in chapter 9. In chapter 8 we show that high-temperature annealing leads to epitaxial recrystallization of InMnP and GaMnP; most of the Mn atoms also migrate towards the surface because of a segregation effect. In the Mn-rich surface regions, XRD and TEM measurements identify the formation of MnP and of crystalline In. Magnetic measurements also identify the presence of ferromagnetic MnP. Moreover, these measurements show that about 60% of the implanted Mn is paramagnetic, with a spin value reduced compared to that found in the unannealed samples. In the InP samples co-implanted with Mn and P, the recrystallization is only partial, but Mn segregation to the surface is much reduced; in this case more than 50% of the Mn forms MnP particles and the remainder is paramagnetic with spin 5/2, diluted in the InP matrix. Finally, the last chapter, 10, presents the main conclusions we reached and discusses the results and their implications. Keywords: ion implantation, InP, GaP, amorphization, MnP, segregation, co-implantation, polycrystalline layer, paramagnetism, ferromagnetism.
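The Gaussian depth profile mentioned for the implanted Mn is the standard projected-range/straggle approximation. A minimal sketch follows; the range parameters Rp and dRp are hypothetical placeholders, not values from the thesis, and only the 10^16 ions/cm^2 fluence is taken from the abstract.

```python
import math

def implant_profile(fluence_cm2: float, rp_nm: float, drp_nm: float,
                    depth_nm: float) -> float:
    """Gaussian approximation of an as-implanted concentration profile.

    n(x) = Phi / (sqrt(2*pi) * dRp) * exp(-(x - Rp)^2 / (2 * dRp^2)),
    returned in atoms/cm^3 (depths in nm, fluence in ions/cm^2).
    """
    drp_cm = drp_nm * 1e-7                 # nm -> cm
    peak = fluence_cm2 / (math.sqrt(2.0 * math.pi) * drp_cm)
    return peak * math.exp(-((depth_nm - rp_nm) ** 2) / (2.0 * drp_nm ** 2))

phi = 1e16            # ions/cm^2, the amorphization threshold quoted above
rp, drp = 50.0, 20.0  # nm -- assumed, not measured values
for x in (0, 25, 50, 75, 100):
    print(f"x={x:3d} nm -> n ~ {implant_profile(phi, rp, drp, x):.2e} cm^-3")
```

The peak concentration sits at x = Rp and scales linearly with fluence, which is why fluences of this order can push the local Mn content into the percent range.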
NASA Astrophysics Data System (ADS)
Zidane, Shems
This study is based on data acquired with an airborne multi-altitude sensor in July 2004, during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a wide variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at three different altitudes into apparent reflectance, and the insertion of the plume optics into an atmospheric correction model, permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results were consistent with the validation reflectances derived from the lowest-altitude radiances, effectively confirming the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were acquired above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are of course taken into account in most atmospheric correction models, but these assume aerosol loadings that decrease monotonically with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model.
The main inputs of this study were those normally used in an atmospheric correction: apparent at-sensor radiance and the aerosol optical depth (AOD) acquired by ground-based sun photometry. The procedure employed a standard atmospheric correction code (CAM5S, for Canadian Modified 5S, derived from the 5S radiative transfer model in the visible and near infrared); however, we also used other parameters and data to adapt and correctly model the special atmospheric situation that affected the multi-altitude images acquired during the St. Jean field campaign. We then developed a modeling protocol for these atmospheric perturbations in which auxiliary data complemented our main dataset. This allowed the development of a robust, simple methodology adapted to this atmospheric situation. The auxiliary data, i.e. meteorological data, LIDAR profiles, various satellite images and sun photometer retrievals of the scattering phase function, were sufficient to accurately model the observed plume in terms of an unusual vertical distribution. This distribution was transformed into an aerosol optical depth profile that replaced the standard profile employed in the CAM5S atmospheric correction model. Based on this model, a comparison between the apparent ground reflectances obtained after atmospheric correction and validation values of R*(0) obtained from the lowest-altitude data showed an error of less than 0.01 rms between the two. This correction was shown to be a significantly better estimate of the surface reflectance than that obtained using the standard atmospheric correction model.
Significant differences were nevertheless observed in the non-standard solution. These were mainly caused by the difficulties introduced by the acquisition conditions, by disparities attributable to inconsistencies in the co-sampling / co-registration of different targets from three different altitudes, and possibly by modeling and/or calibration errors. There is accordingly room for improvement in our approach to dealing with such conditions. The modeling and forecasting of such a disturbance is explicitly described in this document, our goal being to permit the establishment of a better protocol for the acquisition of more suitable supporting data. The originality of this study stems from a new approach for incorporating a plume structure into an operational atmospheric correction model, and from the demonstration that this approach is a significant improvement over one that ignores the perturbations in the vertical profile while employing the correct overall AOD. The profile model we employed was simple and robust but captured sufficient plume detail to achieve significant improvements in atmospheric correction accuracy. The overall process of addressing all the problems encountered in the analysis of our aerosol perturbation helped us build an appropriate methodology for characterizing such events from freely distributed data accessible to the scientific community, which makes our study adaptable and exportable to other types of non-standard atmospheric events. Keywords: non-standard atmospheric perturbation, multi-altitude apparent radiances, smoke plume, Gaussian plume modelling, radiance fit, AOD, CASI
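The first step described above, converting apparent at-sensor radiance into apparent reflectance, is conventionally done with the top-of-sensor reflectance formula. A minimal sketch, with illustrative numbers rather than campaign measurements:

```python
import math

def apparent_reflectance(radiance: float, esun: float,
                         sun_zenith_deg: float, d_au: float = 1.0) -> float:
    """Apparent (at-sensor) reflectance from at-sensor radiance.

    rho* = pi * L * d^2 / (ESUN * cos(theta_s)), the usual first step
    before atmospheric correction.  L and ESUN must share spectral units
    (e.g. W m^-2 sr^-1 um^-1 and W m^-2 um^-1); d is the Earth-Sun
    distance in astronomical units.
    """
    return (math.pi * radiance * d_au ** 2
            / (esun * math.cos(math.radians(sun_zenith_deg))))

# Illustrative values only (not from the St. Jean campaign):
rho = apparent_reflectance(radiance=80.0, esun=1850.0, sun_zenith_deg=30.0)
print(f"apparent reflectance ~ {rho:.3f}")
```

Dividing out the solar geometry this way is what makes reflectances from the three flight altitudes directly comparable before the plume correction is applied.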
Caracterisation des mecanismes d'usure en cavitation de revetements HVOF a base de CaviTec
NASA Astrophysics Data System (ADS)
Lavigne, Sebastien
The increasing demand for high performance power conversion systems continuously pushes for improvement in efficiency and power density. This dissertation focuses on a topological effort to efficiently utilize the active and passive devices. In particular, a hybrid approach is adopted, where both capacitors and inductors are used in the voltage conversion and power transfer process. Conventional capacitor-based converters, called switched-capacitor (SC) converters, suffer from poor efficiency due to the inevitable charge redistribution process. With a strategic placement of one or more inductors, the charge redistribution loss can be eliminated by inductively charging/discharging the capacitors, a process called soft-charging operation. As a result, the capacitor size can be greatly reduced without reducing the efficiency. A general analytical framework is presented, which determines whether an arbitrary SC topology is able to achieve full soft-charging operation with a single inductor. For topologies that cannot, a split-phase control technique is introduced, which amends existing two-phase controls to completely eliminate the charge redistribution loss. In addition, alternative placements of inductors are explored to extend the family of hybrid converters. The hybrid converters can have two modes of operation, the fixed-ratio mode and pulse width modulated (PWM) mode. The fixed-conversion-ratio hybrid converters operate in a similar manner to that of a conventional SC converter, with the addition of a soft-charging inductor. The switching frequency of such converters can be adjusted to operate in either zero current switching (ZCS) mode or continuous conduction mode (CCM), which allows for the trade-off of switching loss and conduction loss. It is shown that the capacitor and inductor values can be selected to achieve a minimal passive component volume, which can be significantly smaller than that of a conventional SC converter or a magnetic-based converter. 
On the other hand, PWM-based hybrid converters generate a PWM rectangular wave as the terminal voltage to the inductor, similar to the operation of a buck converter. In contrast to conventional SC converters, such hybrid converters can achieve lossless and continuous regulation of the output voltage. Compared to buck converters, the required inductor is greatly reduced, as is the switch stress. An 80-170 V input, 12-24 V output prototype PWM Dickson converter is implemented using GaN switches. The measured peak efficiency is 97%, and high efficiency is maintained over the entire input and output operating range. In addition, the similarity between multilevel converters (for example, flying capacitor multilevel (FCML) converters) and the PWM-based hybrid SC converters is discussed. Both types of converters can be seen as hybrid converters that use both capacitors and inductors for energy transfer. A general framework to compare these converters, along with conventional buck converters, is proposed. In this framework, the power losses (including conduction loss and switching loss) are kept constant, while the total passive component volume is used as the figure of merit. Based on the principle of maximizing the energy utilization of passive components, a 7-level FCML converter and an active energy buffer are designed and implemented for single-phase dc-ac applications. In addition, the stand-alone system includes start-up circuitry, an EMC filter and an auxiliary power supply. The enclosed box achieves a combined power density of 216 W/in³ and an efficiency of 97.4%, and compares favorably against state-of-the-art designs under the same specification. To further improve the efficiency and power density, soft-switching techniques are investigated and applied to the hybrid converters. A zero voltage switching (ZVS) technique is introduced for hybrid converters operated in both the fixed-ratio mode and the PWM mode.
The previous hardware prototypes are modified for ZVS operation and prove the feasibility of simultaneous soft-charging and soft-switching operation. Last but not least, some of the practical issues associated with the hybrid converter are discussed, such as practical capacitor selection, capacitor voltage balancing and other circuit implementation challenges. Future work based on these topics is outlined. In summary, these hybrid converters are suited for applications where extreme efficiency and power density are critical. Through efficient utilization of active and passive devices, the hybrid topologies offer greater optimization opportunities and a greater ability to take advantage of technology improvements than is possible with conventional designs.
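The charge-redistribution loss that soft-charging eliminates can be made concrete with the textbook two-capacitor example: when a charged capacitor is switched directly onto another, the energy E = ½·Ceq·ΔV² is lost regardless of the interconnect resistance. This is a generic illustration, not the dissertation's own analysis.

```python
def charge_share_loss(c1: float, v1: float, c2: float, v2: float) -> float:
    """Energy lost when two capacitors are connected directly (joules).

    The final voltage follows from charge conservation; the dissipated
    energy equals 0.5 * Ceq * dV^2 with Ceq the series combination of
    c1 and c2, independent of the resistance in the path.  Soft-charging
    (inserting an inductor) avoids this loss entirely.
    """
    vf = (c1 * v1 + c2 * v2) / (c1 + c2)          # charge conservation
    e_before = 0.5 * c1 * v1 ** 2 + 0.5 * c2 * v2 ** 2
    e_after = 0.5 * (c1 + c2) * vf ** 2
    return e_before - e_after

# 10 uF at 12 V hard-switched onto 10 uF at 10 V (illustrative values):
loss = charge_share_loss(10e-6, 12.0, 10e-6, 10.0)
print(f"loss = {loss * 1e6:.1f} uJ")   # equals 0.5 * 5 uF * (2 V)^2 = 10 uJ
```

Repeating this event at every switching cycle is exactly the loss mechanism that caps the efficiency of conventional SC converters; the split-phase and inductive techniques in the dissertation are aimed at driving ΔV to zero at the instant of connection.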
Contribution a l'etude et au developpement de nouvelles poudres de fonte
NASA Astrophysics Data System (ADS)
Boisvert, Mathieu
Obtaining free graphite in parts produced by powder metallurgy (P/M) is a challenge that several researchers have addressed. The presence of graphite after sintering improves the machinability of parts, thereby reducing production costs, and can also improve tribological properties. The approach used in this thesis to obtain free graphite after sintering relies on new water-atomized cast iron powders. Since water atomization is a relatively inexpensive powder-production process with large production capacity, transferring the findings of this doctoral work to industrial applications will be economically favourable. Besides the objective of obtaining free graphite after sintering, another important aspect of the work is controlling the morphology of the free graphite after sintering. It is known from the literature on cast irons that the graphite morphology influences the properties of the iron, which is also true for P/M parts. Ductile irons, in which the graphite takes the form of isolated spheroidal nodules, have mechanical properties superior to grey irons, in which the graphite is lamellar and continuous through the matrix. The results presented in this thesis show that, in mixes containing cast iron powders, the graphite morphology, and thus the properties of the parts, can be controlled. Control of the graphite morphology was achieved mainly through the type of sintering and through "uphill" diffusion of carbon driven by silicon gradients. For solid-state sintering, all graphite nodules are located inside the powder grains after sintering.
For liquid-phase sintering, the intensity of the "uphill" diffusion determines how much nodular graphite is retained inside the silicon-rich regions, while the rest of the graphite precipitates in lamellar/vermicular form in the interparticle regions. The study of cast iron powders, and the search for the mechanisms governing graphite morphology in cast irons, led us to produce iron powders treated with magnesium before atomization. Several fundamental results were obtained from characterizing the magnesium-treated powders and comparing them with powders of similar chemistry not treated with magnesium. First, silicon oxide bifilms were observed in the structure of the primary graphite of a hypereutectic grey iron powder. These are the first images showing the double structure of these defects, supporting the theory developed by Professor John Campbell. Next, it was shown that the magnesium treatment forms a self-generated protective gaseous atmosphere that prevents oxidation of the melt surface and hence the formation and entrainment of bifilms. The role of magnesium in graphite morphology is to reduce dissolved sulfur by forming magnesium sulfide precipitates, thereby increasing the graphite-liquid interface energy. In response to this high interface energy, the graphite minimizes its surface-to-volume ratio, which favours the formation of spheroidal graphite. In addition, two types of nucleation were observed in the magnesium-treated hypereutectic iron powder. The first is heterogeneous nucleation on a limited number of particles made of magnesium, aluminum, silicon and oxygen. The second is homogeneous nucleation of nodules in certain silicon-rich regions of the liquid.
Observation, by high-resolution transmission electron microscopy, of the true three-dimensional centre of one of the homogeneously nucleated nodules confirmed that spheroidal graphite grows according to the cabbage-leaf growth model. (Abstract shortened by ProQuest.)
NASA Astrophysics Data System (ADS)
Boilard, Patrick
Even though powder metallurgy (P/M) is a near net shape process, a large number of parts still require one or more machining operations during the course of their elaboration and/or their finishing. The main objectives of the work presented in this thesis are centered on the elaboration of blends with enhanced machinability, as well as helping with the definition and in the characterization of the machinability of P/M parts. Enhancing machinability can be done in various ways, through the use of machinability additives and by decreasing the amount of porosity of the parts. These different ways of enhancing machinability have been investigated thoroughly, by systematically planning and preparing series of samples in order to obtain valid and repeatable results leading to meaningful conclusions relevant to the P/M domain. Results obtained during the course of the work are divided into three main chapters: (1) the effect of machining parameters on machinability, (2) the effect of additives on machinability, and (3) the development and the characterization of high density parts obtained by liquid phase sintering. Regarding the effect of machining parameters on machinability, studies were performed on parameters such as rotating speed, feed, tool position and diameter of the tool. Optimal cutting parameters are found for drilling operations performed on a standard FC-0208 blend, for different machinability criteria. Moreover, study of material removal rates shows the sensitivity of the machinability criteria for different machining parameters and indicates that thrust force is more regular than tool wear and slope of the drillability curve in the characterization of machinability. The chapter discussing the effect of various additives on machinability reveals many interesting results. First, work carried out on MoS2 additions reveals the dissociation of this additive and the creation of metallic sulphides (namely CuxS sulphides) when copper is present. 
Results also show that it is possible to reduce the amount of MoS2 in the blend so as to lower the dimensional change and the cost (blend Mo8A), while enhancing machinability and keeping hardness values within the same range (70 HRB). Second, adding enstatite (MgO·SiO2) permits the observation of the mechanisms at work with this additive. It is found that the stability of enstatite limits the diffusion of graphite during sintering, leading to the presence of free graphite in the pores, thus enhancing machinability. Furthermore, a lower amount of graphite in the matrix leads to a lower hardness, which is also beneficial to machinability. It is also found that the presence of copper enhances the diffusion of graphite, through the formation of a liquid phase during sintering. With the objective of improving machinability by reaching higher densities, blends were developed for densification through liquid phase sintering. High-density samples (>7.5 g/cm³) are obtained for blends prepared with Fe-C-P constituents, namely with 0.5%P and 2.4%C. By systematically studying the effect of different parameters, the importance of the chemical composition (mainly the carbon content) and of the sintering cycle (particularly the cooling rate) is demonstrated. Moreover, the various heat treatments studied illustrate the different microstructures achievable in this system, showing various amounts of cementite, pearlite and free graphite. Although machinability is limited for samples containing large amounts of cementite, it can be greatly improved with very slow cooling, which leads to graphitization of the carbon in the presence of phosphorus. Adequate control of the sintering cycle for samples made from FGS1625 powder yields high-density (≥7.0 g/cm³) microstructures containing various amounts of pearlite, ferrite and free graphite.
Obtaining ferritic microstructures with free graphite designed for very high machinability (tool wear <1.0%) or fine pearlitic microstructures with excellent mechanical properties (transverse rupture strength >1600 MPa) is therefore possible. These results show that improvement of machinability through higher densities is limited by microstructure. Indeed, for the studied samples, microstructure is dominant in the determination of machinability, far more important than density, judging by the influence of cementite or of the volume fraction of free graphite on machinability for example. (Abstract shortened by UMI.)
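One of the machinability measures discussed above, the material removal rate in drilling, follows directly from the hole cross-section and the feed rate. A minimal sketch with illustrative cutting conditions (the drill diameter, feed and speed below are assumptions, not the thesis's actual parameters):

```python
import math

def drilling_mrr(d_mm: float, feed_mm_rev: float, rpm: float) -> float:
    """Material removal rate for drilling, in mm^3/min.

    MRR = (pi/4) * D^2 * f * N: the hole cross-sectional area times
    the axial feed rate (f in mm/rev, N in rev/min).
    """
    return math.pi / 4.0 * d_mm ** 2 * feed_mm_rev * rpm

# Illustrative conditions for a 6.35 mm (1/4 in) drill:
print(f"MRR ~ {drilling_mrr(6.35, 0.1, 1200.0):.0f} mm^3/min")
```

Because MRR is linear in both feed and speed, comparing criteria such as thrust force or tool wear at constant MRR is what lets the sensitivity of each machinability criterion to the individual cutting parameters be isolated.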
NASA Astrophysics Data System (ADS)
Denoyer, Aurelie
The discovery and development of new solid-state laser materials attract much interest in the scientific community. Lasers in the micron wavelength range in particular lead to many applications: telecommunications, medicine, the military domain, metal cutting (high-power lasers), and nonlinear optics (frequency doubling, optical bistability). The most commonly used laser in this family is currently the Nd:YAG, but higher-performing replacements are always being sought. Yb3+-based lasers have many advantages over Nd3+ lasers because of their simple electronic structure and their slower degradation. Among the crystalline matrices that can host ytterbium, the orthosilicates Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5 are very well placed, owing to their good thermal conductivity and to the strong crystal-field splitting needed for quasi-three-level lasers. Moreover, a fine, systematic study of the microscopic properties of new materials is always of great fundamental interest: it is in this way that new models are conceived (for the crystal field, for example) or unusual new properties are discovered, leading to new applications. Other ytterbium-doped materials are known for electron-phonon coupling, magnetic coupling, cooperative emission or optical bistability, but these properties had never been demonstrated in Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5. This thesis therefore aims to study the optical properties and microscopic interactions in Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5. We mainly use IR absorption and Raman spectroscopy to determine the crystal-field excitations and the vibration modes of the material.
Optical measurements under magnetic field were also carried out to characterize the behaviour of these excitations under the Zeeman effect. Electron paramagnetic resonance completed this study of the Zeeman splitting for all orientations of the crystal. Finally, fluorescence under selective excitation and FT-Raman-induced fluorescence complete the description of the energy levels and reveal cooperative emission from pairs of Yb3+ ions as well as energy transfers. The results of this thesis make an original contribution to the field of new laser materials through the study and understanding of the fine interactions and microscopic properties of a particular material. They lead both to possible applications in optics and lasers and to the understanding of fundamental aspects. This thesis demonstrated the interest of these matrices for use as solid-state lasers: a strong crystal-field splitting favourable to quasi-three-level laser operation, and broad absorption bands (due to strong electron-phonon coupling and to satellite lines caused by an exchange interaction between two Yb3+ ions) that enable the generation of ultrashort laser pulses, laser tunability, and so on. Laser miniaturization for integrated optics is also possible using thin films grown by liquid-phase epitaxy, for which we demonstrated very good structural quality and the possibility of tuning certain parameters. We reconstructed the g tensor of the ground state (which gives precious information on the wavefunctions) in order to help theoreticians devise a valid crystal-field model.
Several energy-transfer mechanisms were demonstrated: relaxation from one site to the other, cooperative emission, and excitation of Yb3+ by Tm3+ (an impurity present in the material). These transfers are rather detrimental to laser fabrication but are interesting for nonlinear optics (frequency doubling, optical memories). Finally, several elements (pair magnetic coupling, electron-phonon coupling and cooperative emission) allowed us to conclude on the covalent character of the matrix. We also demonstrated here the role of covalence in cooperative emission, a transition usually attributed to electric multipole interactions.
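The Zeeman measurements described above rest on the linear splitting of Yb3+ Kramers doublets, dE = g·μB·B, whose field slope yields the g factor along each crystal orientation. A minimal numeric sketch; the g values below are hypothetical, not the tensor reconstructed in the thesis.

```python
# Bohr magneton expressed in spectroscopic units: mu_B / (h*c) in cm^-1 per tesla.
MU_B_CM = 0.46686

def zeeman_splitting_cm1(g: float, b_tesla: float) -> float:
    """Zeeman splitting dE = g * mu_B * B of a Kramers doublet, in cm^-1.

    For a Kramers ion like Yb3+, each crystal-field level is a doublet
    that splits linearly with the applied field.
    """
    return g * MU_B_CM * b_tesla

# Hypothetical g factors for different field orientations:
for g in (2.0, 4.0, 6.0):
    print(f"g = {g}: dE ~ {zeeman_splitting_cm1(g, 5.0):.2f} cm^-1 at 5 T")
```

Splittings of a few cm^-1 at laboratory fields explain why both high-resolution IR absorption and EPR are needed to map the g tensor over all orientations.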
NASA Astrophysics Data System (ADS)
Lambert-Milot, Samuel
The general objective of this work is to bring a better understanding of the growth mechanism and of the influence of the growth parameters on the microstructure of heterogeneous magnetic semiconductor layers. Toward this end, we have undertaken a detailed study of the structural characteristics of GaP:MnP ferromagnetic semiconductor thin films grown by metal organic vapour phase epitaxy (MOVPE). We have focused our effort on three specific objectives: (1) to demonstrate the growth of epitaxial heterogeneous GaP:MnP layers; (2) to establish the influence of the growth parameters on the microstructure of the matrix and nanoclusters; (3) to obtain a detailed structural characterisation of the texture of the clusters as a function of the growth parameters. We have successfully grown epitaxial heterogeneous GaP:MnP layers without structural defects on GaP substrates at 650°C. The layers contain a uniform ensemble of 15-50 nm quasi-spherical MnP nanoclusters within a dislocation-free GaP epilayer matrix that is fully coherent with the substrate. The clusters occupy 3 to 8% of the total volume of the layer, controlled by the flow of the Mn precursor in the vapor phase. We showed that the growth temperature strongly affects the microstructure of the GaP matrix. At 700°C the surface roughness increases and we have observed 100 nm wide cavities in the GaP matrix. The layers grown at 600°C contain a large density of pile-up defects along GaP{111} facets. To explain these defects we propose the following mechanism: (1) the nucleation of clusters on the GaP growth surface changes the morphology of the surrounding matrix; (2) these morphological changes increase the surface roughness and lead to the formation of GaP{111} facets; (3) at 600°C, the probability that Ga and P atoms find an epitaxial site on the GaP{111} facets is reduced, leading to the formation of pile-up defects.
The detailed microstructural characterization of the GaP:MnP layers has shown that the volume fraction and the dimensions of the MnP clusters can be controlled by adjusting the Mn precursor flow rate and the growth temperature, respectively: (1) the volume fraction of the clusters increases with the Mn precursor flow rate; (2) their average dimension increases with the growth temperature. Our work reveals that 80-90% of the clusters were orthorhombic MnP and 10-20% were hexagonal Mn2P in layers grown at 650°C on GaP(001) substrates. The formation of Mn2P clusters can be reduced by decreasing the growth temperature and avoided by growing on GaP(011) substrates. Our 3D reciprocal space map measurements have enabled, for the first time, a precise description of the texture of the clusters as a function of the growth temperature, the layer thickness and the substrate orientation. Our results reveal that the orthorhombic MnP nanoclusters are highly textured and distributed among six crystallographic orientation families. They grow principally on GaP(001) and GaP{111} facets, with a small fraction of clusters nucleating on higher-index GaP{hhl} facets. Most epitaxial alignments share a common component: the MnP(001) plane (c-axis plane) is parallel to the GaP{110} plane family. Along with the diffraction signals indicating specific epitaxial relationships with the substrate, we report the presence of axiotaxial ordering between a fraction of the MnP clusters and the GaP matrix. The texture characterization as a function of the growth parameters revealed that the MnP texture results from a complex growth process combining the effects of the GaP matrix morphology, the lattice mismatch at the cluster/matrix interface, and the bonding configuration of the GaP seed planes.
We propose a qualitative growth model that explains the order of appearance of the various cluster families and the evolution, with increasing film thickness, of the proportion of clusters in the different orientations. Finally, we have compared the crystallographic orientations of the MnP clusters determined from 3D reciprocal space mapping with those obtained from magnetic measurements. The agreement between the two sets of results confirms that the effective magnetic properties of the heterogeneous layer can be tuned by controlling the texture of the ferromagnetic nanoclusters. (Abstract shortened by UMI.)
Characterization of the Cohesion of the SMA/Polymer Interface in an Adaptive Deformable Structure (Caracterisation de la cohesion de l'interface AMF/polymere dans une structure deformable adaptative)
NASA Astrophysics Data System (ADS)
Fischer-Rousseau, Charles
Adaptive deformable structures are expected to play an important role in aeronautics, among other fields, and shape memory alloys (SMA) are one of the most promising candidate materials for them. Much work remains, however, before such structures meet the demanding requirements of integration in an aeronautical context. Previous research has shown that the debonding resistance of the SMA/polymer interface can be a limiting factor in the performance of adaptive deformable structures. In this work, the effect of various surface treatments, wire geometries, and polymer types on the debonding resistance of the SMA/polymer interface is evaluated. The wire geometry is modified by a specific combination of cold rolling and post-deformation annealing that preserves the shape memory properties while reducing the wire's cross-sectional area. The most promising thermomechanical treatment is proposed. A new method for evaluating the debonding resistance is developed: rather than pulling the wires out and measuring the maximum force, the contraction tests are based on the ability of SMA wires to contract when they have been embedded in a stretched state and are heated by Joule effect. Our hypothesis is that these tests better approximate the conditions encountered in an adaptive deformable structure, where the wires contract rather than being pulled out by a force external to the structure. Although partial debonding was observed for all specimens, the debonded surface area was larger for specimens with a larger pre-strain. The debonding front appeared to stop progressing after the initial heating cycles when the heating rate was low.
A numerical model simulating the transient thermal response of the polymer and the SMA wire during Joule heating is implemented with the ANSYS software. The model's behaviour is validated against experimental results in which thermocouples embedded in the specimen provide local temperature measurements. The computed results agree qualitatively with the experimental ones but show significant quantitative discrepancies. Measuring the strain field at the wire/polymer interface in an adaptive deformable structure would make it possible to develop and validate a numerical model accounting for the shape memory effect of a wire embedded in a polymer matrix. To this end, a miniature tensile machine allowing in situ Raman microspectrometry analysis is presented. It has a 1 kN capacity and a maximum displacement of 20 mm within an overall design envelope 160 mm in diameter. The machine is designed so that the middle of the specimen remains stationary, thanks to the symmetric motion of its two ends. The results show that further work is needed before SMA wires can be embedded in an adaptive deformable structure. Keywords: debonding, active structure, shape memory alloy, SMA, pull-out tests.
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude
Composite materials are increasingly used in aeronautics. Their excellent mechanical properties and low weight give them a clear advantage over metallic materials. Being subjected to various loading and environmental conditions, they are susceptible to several types of damage that compromise their integrity. Reliable inspection methods are therefore needed to assess their integrity, yet few effective, embedded, non-destructive approaches are currently in use. This research examines the effect of composite material composition on detection and characterization by guided waves. The objective of the project is to develop an embedded mechanical characterization approach that improves the performance of a piezoelectric-array imaging approach on composite and metallic structures. The contribution of this project is an embedded, non-destructive ultrasonic approach to mechanical characterization that does not require measurements on a multitude of specimens. This thesis by articles is divided into four parts, parts two to four presenting the published and submitted articles. The first part reviews the state of knowledge required for this master's project; the main topics are composite materials, wave propagation, guided wave modelling, characterization by guided waves, and embedded structural health monitoring. The second part presents a study of the effect of the mechanical properties on the performance of the Excitelet imaging algorithm. The study is conducted on an isotropic structure. The results showed that the algorithm is sensitive to the accuracy of the mechanical properties used in the model.
This sensitivity was then exploited to develop an embedded method for estimating the mechanical properties of a structure. The third part presents a more rigorous study of the performance of the embedded mechanical characterization method. The accuracy, repeatability, and robustness of the method are validated using an FEM simulator. The properties estimated with the characterization approach are within 1% of the properties used in the model, which rivals the uncertainty of the ASTM methods. The experimental analysis proved accurate and repeatable for frequencies below 200 kHz, allowing the mechanical properties to be estimated to within 1% of the supplier's values. The fourth part demonstrated the ability of the characterization approach to identify the mechanical properties of an orthotropic composite plate. The experimentally estimated results fall within the uncertainty bars of the properties estimated by the ASTM tests. Finally, an FEM simulation demonstrated the accuracy of the approach, with mechanical properties within 4% of those of the simulated model.
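The model-based estimation idea behind this kind of embedded characterization can be sketched simply: candidate material parameters are ranked by how well the model-predicted waveform correlates with the measured one, and the best-correlating candidate is retained. The sketch below uses a toy non-dispersive delay model in place of a real guided-wave dispersion model, and all numerical values (wave speed, tone-burst frequency, distance, noise level) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hedged sketch of correlation-based, model-driven property estimation:
# pick the parameter whose predicted waveform best matches the measurement.
fs = 1e6                          # sampling rate (Hz), assumed
t = np.arange(0, 2e-4, 1 / fs)    # 200 samples
dist = 0.3                        # propagation distance (m), assumed

def predicted(speed):
    """Toy model: a 100 kHz Gaussian tone burst delayed by dist/speed."""
    tau = dist / speed
    env = np.exp(-((t - tau - 2e-5) ** 2) / (2 * (1e-5) ** 2))
    return env * np.sin(2 * np.pi * 1e5 * (t - tau))

true_speed = 5200.0               # m/s, the "unknown" to recover
rng = np.random.default_rng(1)
measured = predicted(true_speed) + 0.02 * rng.standard_normal(t.size)

# Normalized correlation between the measurement and each candidate model.
candidates = np.arange(4000.0, 6500.0, 50.0)
corr = [np.dot(measured, predicted(c))
        / (np.linalg.norm(measured) * np.linalg.norm(predicted(c)))
        for c in candidates]
estimate = candidates[int(np.argmax(corr))]
```

With a realistic dispersion model in `predicted`, the same loop becomes a one-measurement, in-place estimator of elastic properties, which is the spirit of the approach described above.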
NASA Astrophysics Data System (ADS)
Valentin, Olivier
According to the World Health Organization, the number of workers exposed daily to noise levels harmful to their hearing rose from 120 million in 1995 to 250 million in 2004. Even though noise reduction at the source should always be preferred, the solution widely used against occupational noise remains individual hearing protection. Unfortunately, workers do not always wear their hearing protectors, because it is difficult to provide a protector whose effective attenuation level is appropriate to an individual's work environment. Moreover, occlusion of the ear canal alters speech perception, creating a discomfort that prompts workers to remove their protectors. Both problems exist because current methods for measuring the occlusion effect and the attenuation are limited. Objective measurements based on intra-aural microphones do not account for the direct transmission of sound to the cochlea by bone conduction, while subjective measurements at the hearing threshold are biased by the low-frequency masking effect induced by physiological noise. The main objective of this doctoral work is to improve the measurement of the attenuation and of the occlusion effect of intra-aural hearing protectors. The general approach consists in: (i) verifying whether the attenuation of hearing protectors can be measured by recording multiple auditory steady-state evoked potentials (PEASM) with and without the protector (protocol 1); (ii) adapting this methodology to measure the occlusion effect induced by wearing intra-aural hearing protectors (protocol 2); and (iii) validating each protocol through measurements on human subjects.
The results of protocol 1 show that PEASM can be used to measure hearing protector attenuation objectively: the results obtained at 500 Hz and 1 kHz show that the attenuation measured from PEASM is essentially equivalent to the attenuation computed by the REAT method. This agrees with expectations, since the masking effect induced by physiological noise is relatively negligible at these frequencies. The results of protocol 2 show that PEASM can also be used to measure the occlusion effect induced by wearing hearing protectors objectively: the occlusion effect measured from PEASM at 500 Hz is higher than that computed by the subjective hearing-threshold method, again as expected, since below 1 kHz the low-frequency masking effect of physiological noise biases the subjective method by causing the hearing thresholds to be overestimated when protectors are worn. However, the results obtained at 250 Hz contradict expectations. From a scientific standpoint, this thesis work produced two innovative methods for objectively measuring the attenuation and the occlusion effect of intra-aural hearing protectors by electroencephalography. From an occupational health and safety standpoint, the advances presented in this thesis could help design better-performing hearing protectors.
Indeed, if these two new objective methods were standardized for characterizing intra-aural hearing protectors, they could make it possible: (i) to better assess the real effectiveness of hearing protection and (ii) to provide a measure of the discomfort induced by occluding the ear canal while wearing protectors. Providing a hearing protector whose real effectiveness is suited to the work environment and whose comfort is optimized would, in the long run, improve workers' conditions by minimizing the risk of damage to their hearing. The work perspectives proposed at the end of this thesis consist mainly in: (i) exploiting both methods over a wider frequency range; (ii) exploring the intra-individual variability of each method; (iii) comparing the results of the two methods with those obtained by the "Microphone in Real Ear" (MIRE) method; and (iv) verifying the compatibility of each method with all types of hearing protectors. In addition, for the occlusion-effect measurement method using PEASM, a complementary study is needed to resolve the contradiction observed at 250 Hz.
Development of Non-Intrusive Diagnostic Techniques Based on Optical Tomography (Developpement de techniques de diagnostic non intrusif par tomographie optique)
NASA Astrophysics Data System (ADS)
Dubot, Fabien
Whether in industrial processes or medical imaging, the last two decades have seen a growing development of optical diagnostic techniques. The enthusiasm for these methods rests mainly on the fact that they are completely non-invasive, that they use radiation sources harmless to humans and the environment, and that they are relatively inexpensive and easy to implement compared with other imaging techniques. One of these techniques is Diffuse Optical Tomography (DOT). This three-dimensional imaging method consists in characterizing the radiative properties of a semi-transparent medium (STM) from near-infrared optical measurements obtained with a set of sources and detectors located on the boundary of the probed domain. It relies on a direct model of light propagation in the STM, which provides the predictions, and on an algorithm minimizing a cost function combining predictions and measurements, which enables reconstruction of the parameters of interest. In this work, the direct model is the diffuse approximation of the radiative transfer equation in the frequency domain, while the parameters of interest are the spatial distributions of the absorption and reduced scattering coefficients. This thesis is devoted to the development of a robust inverse method for solving the DOT problem in the frequency domain. To this end, the work is structured in three parts, which constitute the main axes of the thesis. First, a comparison of the damped Gauss-Newton and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms is proposed in the two-dimensional case.
Two regularization methods are combined for each of the two algorithms: mesh-based reduction of the dimension of the control space together with Tikhonov penalization for the damped Gauss-Newton algorithm, and mesh-based regularization together with Sobolev gradients, uniform or spatially dependent, applied during extraction of the cost-function gradient for the BFGS method. The numerical results indicate that the BFGS algorithm outperforms the damped Gauss-Newton algorithm in terms of reconstruction quality, computation time, and ease of selecting the regularization parameter. Second, a study is conducted on the quasi-independence of the optimal Tikhonov penalization parameter with respect to the dimension of the control space in inverse problems estimating spatially dependent functions. This study follows an observation made in the first part of the work, where the Tikhonov parameter determined by the L-curve method turned out to be independent of the dimension of the control space in the under-determined case. This hypothesis is demonstrated theoretically and then verified numerically, first on a linear inverse heat conduction problem and then on the non-linear inverse DOT problem. The numerical verification relies on determining an optimal Tikhonov parameter, defined as the one minimizing the discrepancies between the targets and the reconstructions. The theoretical demonstration relies on Morozov's discrepancy principle in the linear case, while in the non-linear case it essentially rests on the assumption that the radiative functions to be reconstructed are normally distributed random variables.
In conclusion, the thesis shows that the Tikhonov parameter can be determined using a parametrization of the control variables associated with a coarse mesh, so as to reduce computation times. Third, a wavelet-based multiscale inverse method combined with the BFGS algorithm is developed. This method, which reformulates the original inverse problem as a sequence of inverse subproblems from the largest scale to the smallest using the wavelet transform, copes with the local convergence of the optimizer and with the presence of numerous local minima in the cost function. The numerical results show that the proposed method is more stable with respect to the initial estimate of the radiative properties and yields more accurate final reconstructions than the ordinary BFGS algorithm, while requiring comparable computation times. The results of this work are presented in the thesis as four articles. The first was accepted in the International Journal of Thermal Sciences, the second in Inverse Problems in Science and Engineering, the third in the Journal of Computational and Applied Mathematics, and the fourth was submitted to the Journal of Quantitative Spectroscopy & Radiative Transfer. Ten other articles were published in peer-reviewed conference proceedings. These articles are available in PDF on the t3e research chair's website (www.t3e.info).
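The Tikhonov regularization and L-curve trade-off discussed in this abstract can be illustrated on a generic ill-posed linear problem. The forward operator (a Gaussian smoothing kernel), the noise level, and the parameter grid below are illustrative assumptions standing in for the DOT forward model, not the thesis's actual setup.

```python
import numpy as np

# Sketch: Tikhonov-regularized solution of an ill-posed linear system,
#   x_lam = argmin ||A x - b||^2 + lam ||x||^2
#         = (A^T A + lam I)^{-1} A^T b,
# swept over lam to trace the L-curve (data misfit vs. solution norm).
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 1.0, n)
# Ill-conditioned forward operator: discrete Gaussian smoothing kernel.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy data

lams = np.logspace(-8, 0, 9)
res_norms, sol_norms = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    res_norms.append(float(np.linalg.norm(A @ x - b)))  # data misfit
    sol_norms.append(float(np.linalg.norm(x)))          # solution size
# Plotting log(res_norms) vs log(sol_norms) gives the L-curve; its corner
# balances fidelity against stability and selects the Tikhonov parameter.
```

Sweeping `lam` makes the trade-off concrete: small values fit the noise and inflate the solution norm, large values over-smooth and inflate the misfit; the thesis's observation is that the corner value barely moves when the control-space dimension (here `n`) changes in the under-determined regime.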
NASA Astrophysics Data System (ADS)
Houehanou, Ernesto C.
The incorporation of supplementary cementitious materials in concrete is known for its technological and environmental advantages. To foster wider use of these materials, knowledge must be expanded, especially regarding the factors governing the durability of structures built with concretes containing mineral admixtures. To date, most studies on concretes containing fly ash and slag seem to agree on their poorer scaling durability, especially in comparison with ordinary concrete. The reasons for this poorer performance are not all known, and this often limits the incorporation of fly ash and slag in concrete for structures heavily exposed to freeze-thaw cycles in the presence of de-icing salts. This thesis aims at understanding the issues surrounding the scaling durability of concretes containing supplementary cementitious materials such as fly ash and slag. The objectives are to better understand the representativeness and relative severity of the standardized ASTM C672 and NQ 2621-900 tests for evaluating the scaling durability of these concretes, to study the influence of the curing method on scaling durability, and to study the relationship between scaling durability and the sorptivity of concrete surfaces, as well as the particular microstructure of concretes containing fly ash. Five types of air-entrained concrete containing 25% and 35% fly ash and slag as well as 1% and 2% silica fume were produced, cured by different methods, and subjected to accelerated tests following the two standardized procedures, as well as to a sorptivity test. The curing methods were chosen to highlight the influence of the test parameters as well as that of the curing method itself.
The laboratory durability of the tested concretes was compared with that of similar concretes after 4 and 6 years of service. The microstructure of the in-service concretes was analysed by scanning electron microscopy (SEM). The results show that curing quality greatly influences the scaling durability of concretes containing fly ash and slag, especially under accelerated laboratory tests. The duration of the wet pre-treatment, which corresponds to the total duration of wet curing (100% RH) plus the pre-saturation period, is a key parameter of the scaling durability of concretes tested in the laboratory. For both test methods, extending the wet pre-treatment to 28 days improves the scaling resistance of all concrete types, and in particular that of the fly-ash concretes. The 7-day pre-saturation period of the NQ 2621-900 procedure has an effect similar to that of wet curing of the same length. A 28-day wet curing appears optimal and leads to a more realistic estimate of the actual scaling resistance of the concretes. For the same wet pre-treatment duration, the NQ 2621-900 and ASTM C672 procedures give equivalent results. Using a mould with a draining bottom had no effect on the scaling resistance of the concretes in this study. Although curing in lime-saturated water supplies all the water required to promote the development of the concrete's properties and the improvement of its scaling durability, it leaches the alkali ions, which unfavourably lowers the alkalinity and the pH of the pore solution of the cement paste near the exposed surface.
Using a curing agent better protects concretes containing fly ash and significantly improves their scaling resistance, but it tends to reduce the scaling durability of concretes containing slag. To develop good scaling resistance, it is essential to produce an impermeable concrete surface that resists the penetration of external water. The permeability and porosity of the concrete skin are closely related to sorptivity. Extending the wet curing of concretes with supplementary cementitious materials systematically lowers the sorptivity and improves their scaling durability, particularly in the case of fly-ash concretes. The results show a good correlation between the scaling test results and the sorptivity measurements. The correlation established between sorptivity and scaling durability, the decisive effect of carbonation on the scaling durability of fly-ash concretes, and the explanation of the origin of the difference in severity between the ASTM C672 and NQ 2621-900 tests are the scientific contributions of this thesis. On the technical and industrial side, it highlights the curing method that promotes better scaling durability and suggests a laboratory characterization method that would better predict in-service behaviour. Keywords: concrete, fly ash, slag, durability, scaling, curing, sorptivity
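The sorptivity measurement underlying the correlation reported above is usually reduced to a single slope: cumulative water absorption grows linearly with the square root of elapsed time, and that slope is the sorptivity coefficient. A minimal sketch of the fit, on synthetic data invented for illustration (the numbers are not from the thesis):

```python
import numpy as np

# Sketch: extract a sorptivity coefficient S from absorption-test data
# by fitting i = S * sqrt(t) + c, where i is cumulative uptake.
t_min = np.array([1, 5, 10, 20, 30, 60, 120, 240, 360], dtype=float)  # minutes
i_mm = np.array([0.05, 0.11, 0.16, 0.22, 0.27,
                 0.39, 0.55, 0.78, 0.95])        # uptake per unit area, mm

sqrt_t = np.sqrt(t_min)
S, c = np.polyfit(sqrt_t, i_mm, 1)  # S in mm/min^0.5, c the intercept
```

A lower fitted `S` indicates a less absorptive concrete skin, which is the quantity the thesis correlates with scaling durability.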
Product Support Manager Guidebook
2011-04-01
package is being developed using supportability analysis concepts such as Failure Mode, Effects and Criticality Analysis (FMECA), Fault Tree Analysis (FTA)...Analysis (LORA), Condition Based Maintenance + (CBM+), Fault Tree Analysis (FTA), Failure Mode, Effects, and Criticality Analysis (FMECA), Maintenance Task...Reporting and Corrective Action System (FRACAS), Fault Tree Analysis (FTA), Level of Repair Analysis (LORA), Maintenance Task Analysis (MTA
Primary, Secondary, and Meta-Analysis of Research
ERIC Educational Resources Information Center
Glass, Gene V.
1976-01-01
Examines data analysis at three levels: primary analysis, the original analysis of data in a research study; secondary analysis, the re-analysis of data for the purpose of answering the original research question with better statistical techniques, or answering new questions with old data; and meta-analysis, the statistical analysis of results from many individual studies for…
Software technology testbed softpanel prototype
NASA Technical Reports Server (NTRS)
1991-01-01
The following subject areas are covered: analysis of using Ada for the development of real-time control systems for the Space Station; analysis of the functionality of the Application Generator; analysis of the User Support Environment criteria; analysis of the SSE tools and procedures which are to be used for the development of ground/flight software for the Space Station; analysis of the CBATS tutorial (an Ada tutorial package); analysis of Interleaf; analysis of the Integration, Test and Verification process of the Space Station; analysis of the DMS on-orbit flight architecture; and analysis of the simulation architecture.
[Preliminary application of content analysis to qualitative nursing data].
Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang
2012-10-01
Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.
Qualitative data analysis: conceptual and practical considerations.
Liamputtong, Pranee
2009-08-01
Qualitative inquiry requires that collected data is organised in a meaningful way, and this is referred to as data analysis. Through analytic processes, researchers turn what can be voluminous data into understandable and insightful analysis. This paper sets out the different approaches that qualitative researchers can use to make sense of their data including thematic analysis, narrative analysis, discourse analysis and semiotic analysis and discusses the ways that qualitative researchers can analyse their data. I first discuss salient issues in performing qualitative data analysis, and then proceed to provide some suggestions on different methods of data analysis in qualitative research. Finally, I provide some discussion on the use of computer-assisted data analysis.
2014-05-22
Commander and Staff; 2: Mission Analysis; 3: Mission Analysis / Course of Action (COA) Development; 4: Staff Estimates / COA Analysis; 5: Commander's Estimate / COA Comparison; 6: Preparation
Distributed and Collaborative Software Analysis
NASA Astrophysics Data System (ADS)
Ghezzi, Giacomo; Gall, Harald C.
Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of…
Full Life Cycle of Data Analysis with Climate Model Diagnostic Analyzer (CMDA)
NASA Astrophysics Data System (ADS)
Lee, S.; Zhai, C.; Pan, L.; Tang, B.; Zhang, J.; Bao, Q.; Malarout, N.
2017-12-01
We have developed a system that supports the full life cycle of a data analysis process, from data discovery, to data customization, to analysis, to reanalysis, to publication, and to reproduction. The system, called Climate Model Diagnostic Analyzer (CMDA), is designed to demonstrate that the full life cycle of data analysis can be supported within one integrated system for climate model diagnostic evaluation with global observational and reanalysis datasets. CMDA has four subsystems that are highly integrated to support the analysis life cycle: the Data System manages datasets used by CMDA analysis tools; the Analysis System manages the CMDA analysis tools, which are all web services; the Provenance System manages the metadata of CMDA datasets and the provenance of CMDA analysis history; and the Recommendation System extracts knowledge from CMDA usage history and recommends datasets and analysis tools to users. These four subsystems are not only highly integrated but also easily expandable: new datasets can be added to the Data System and scanned to be visible to the other subsystems, and new analysis tools can be registered to become available in the Analysis System and Provenance System. With CMDA, a user can start a data analysis process by discovering datasets relevant to their research topic using the Recommendation System. Next, the user can customize the discovered datasets for their scientific use (e.g., anomaly calculation, regridding) with tools in the Analysis System, and then perform their analysis with tools such as conditional sampling, time averaging, and spatial averaging. The user can also reanalyze the datasets based on previously stored analysis provenance in the Provenance System, and can publish their analysis process and results to the Provenance System to share with other users. Finally, any user can reproduce a published analysis process and its results.
By supporting the full life cycle of climate data analysis, CMDA improves the research productivity and collaboration level of its users.
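Two of the customization and analysis steps the abstract names, anomaly calculation and spatial averaging, can be sketched on a gridded dataset. The synthetic array, grid resolution, and variable names below are assumptions for illustration, not CMDA's actual data or API.

```python
import numpy as np

# Sketch: monthly anomaly calculation followed by area-weighted spatial
# averaging on a synthetic (time, lat, lon) gridded dataset.
rng = np.random.default_rng(42)
n_years, n_lat, n_lon = 10, 18, 36
lat = np.linspace(-85, 85, n_lat)
data = rng.standard_normal((n_years * 12, n_lat, n_lon)) + 15.0

# Anomaly: subtract the mean seasonal cycle (per-calendar-month climatology).
clim = data.reshape(n_years, 12, n_lat, n_lon).mean(axis=0)   # (12, lat, lon)
anom = data - np.tile(clim, (n_years, 1, 1))                  # same shape as data

# Spatial average weighted by cos(latitude) to account for grid-cell area,
# yielding one global-mean anomaly value per month.
w = np.cos(np.radians(lat))
series = (anom.mean(axis=2) * w).sum(axis=1) / w.sum()
```

Chaining such steps, with each intermediate product and its parameters recorded, is exactly the kind of workflow CMDA's Provenance System is described as capturing for later reanalysis and reproduction.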
Analysis of the Effects of the Commander’s Battle Positioning on Unit Combat Performance
1991-03-01
Analysis ... 58; Logistic Regression Analysis ... 61; Canonical Correlation Analysis ... 62; Discriminant Analysis...entails classifying objects into two or more distinct groups, or responses. Dillon defines discriminant analysis as "deriving linear combinations of the...object given its predictor variables. The second objective is, through analysis of the parameters of the discriminant functions, determine those
Wightman, Jade; Julio, Flávia; Virués-Ortega, Javier
2014-05-01
Experimental functional analysis is an assessment methodology to identify the environmental factors that maintain problem behavior in individuals with developmental disabilities and in other populations. Functional analysis provides the basis for the development of reinforcement-based approaches to treatment. This article reviews the procedures, validity, and clinical implementation of the methodological variations of functional analysis and function-based interventions. We present several variations of functional analysis methodology in addition to the typical functional analysis: brief functional analysis, single-function tests, latency-based functional analysis, functional analysis of precursors, and trial-based functional analysis. We also present the three general categories of function-based interventions: extinction, antecedent manipulation, and differential reinforcement. Functional analysis methodology is a valid and efficient approach to the assessment of problem behavior and the selection of treatment strategies.
Factor Analysis and Counseling Research
ERIC Educational Resources Information Center
Weiss, David J.
1970-01-01
Topics discussed include factor analysis versus cluster analysis, analysis of Q correlation matrices, ipsativity and factor analysis, and tests for the significance of a correlation matrix prior to application of factor analytic techniques. Techniques for factor extraction discussed include principal components, canonical factor analysis, alpha…
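The principal-components route to factor extraction mentioned in this record can be sketched in a few lines: eigendecompose the correlation matrix, retain components by the Kaiser eigenvalue-greater-than-one rule, and scale eigenvectors into loadings. The two-factor synthetic dataset and the retention rule are illustrative choices, not the article's own data.

```python
import numpy as np

# Sketch: principal-components factor extraction from a correlation matrix.
rng = np.random.default_rng(7)
n = 500
f1, f2 = rng.standard_normal((2, n))       # two latent factors
X = np.column_stack([
    f1 + 0.3 * rng.standard_normal(n),     # variables 1-2 load on factor 1
    f1 + 0.3 * rng.standard_normal(n),
    f2 + 0.3 * rng.standard_normal(n),     # variables 3-4 load on factor 2
    f2 + 0.3 * rng.standard_normal(n),
])
R = np.corrcoef(X, rowvar=False)           # 4x4 correlation matrix
eigval, eigvec = np.linalg.eigh(R)         # eigh returns ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
n_factors = int(np.sum(eigval > 1.0))      # Kaiser retention criterion
loadings = eigvec[:, :n_factors] * np.sqrt(eigval[:n_factors])
```

On this construction the two planted factors are recovered; the alternatives the record lists (canonical and alpha factor analysis) replace the plain eigendecomposition of `R` with differently weighted decompositions.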
Data and Tools | Energy Analysis | NREL
NREL develops energy analysis data and tools: Data Products; Technology and Performance Analysis Tools; Energy Systems Analysis Tools; Economic and Financial Analysis Tools.
14 CFR 417.231 - Collision avoidance analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space, DEPARTMENT OF TRANSPORTATION, LICENSING, LAUNCH SAFETY, Flight Safety Analysis, § 417.231 Collision avoidance analysis. (a) General. A flight safety analysis must include a collision avoidance analysis that...
Orbit Transfer Vehicle (OTV) engine, phase A study. Volume 2: Study
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1979-01-01
The hydrogen oxygen engine used in the orbit transfer vehicle is described. The engine design is analyzed and minimum engine performance and man rating requirements are discussed. Reliability and safety analysis test results are presented and payload, risk and cost, and engine installation parameters are defined. Engine tests were performed including performance analysis, structural analysis, thermal analysis, turbomachinery analysis, controls analysis, and cycle analysis.
MAGMA: Generalized Gene-Set Analysis of GWAS Data
de Leeuw, Christiaan A.; Mooij, Joris M.; Heskes, Tom; Posthuma, Danielle
2015-01-01
By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, to provide better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn’s Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn’s Disease while maintaining a correct type 1 error rate. Moreover, the MAGMA analysis of the Crohn’s Disease data was found to be considerably faster as well. PMID:25885710
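A heavily simplified sketch of the gene-analysis idea described above: regress the phenotype jointly on all SNPs in a gene and summarize the gene with an F-test. This is not MAGMA's actual implementation (which handles linkage disequilibrium and summary statistics); all data below are synthetic:

```python
# Hedged sketch of a multiple-regression gene test: regress phenotype on
# all SNPs in a gene jointly, then compute a gene-level F statistic.
# A simplification of the MAGMA model, not its implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m = 500, 4                                   # individuals, SNPs in gene
snps = rng.binomial(2, 0.3, size=(n, m)).astype(float)
pheno = snps[:, 0] * 0.4 + rng.normal(size=n)   # SNP 0 carries an effect

X = np.column_stack([np.ones(n), snps])         # intercept + SNPs
beta, *_ = np.linalg.lstsq(X, pheno, rcond=None)
resid = pheno - X @ beta
rss = float(resid @ resid)
tss = float(((pheno - pheno.mean()) ** 2).sum())
f_stat = ((tss - rss) / m) / (rss / (n - m - 1))
p_value = stats.f.sf(f_stat, m, n - m - 1)
print(f"gene F = {f_stat:.2f}, p = {p_value:.2e}")
```

The joint test picks up the gene even though only one of its SNPs is causal, illustrating the multi-marker advantage the abstract describes.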
NASA Technical Reports Server (NTRS)
Cirillo, William M.; Earle, Kevin D.; Goodliff, Kandyce E.; Reeves, J. D.; Stromgren, Chel; Andraschko, Mark R.; Merrill, R. Gabe
2008-01-01
NASA's Constellation Program employs a strategic analysis methodology to provide an integrated analysis capability for Lunar exploration scenarios and to support strategic decision-making regarding those scenarios. The strategic analysis methodology integrates the assessment of the major contributors to strategic objective satisfaction (performance, affordability, and risk) and captures the linkages and feedbacks between all three components. Strategic analysis supports strategic decision making by senior management through comparable analysis of alternative strategies, provision of a consistent set of high level value metrics, and the enabling of cost-benefit analysis. The tools developed to implement the strategic analysis methodology are not element design and sizing tools. Rather, these models evaluate strategic performance using predefined elements, imported into a library from expert-driven design/sizing tools or expert analysis. Specific components of the strategic analysis tool set include scenario definition, requirements generation, mission manifesting, scenario lifecycle costing, crew time analysis, objective satisfaction benefit, risk analysis, and probabilistic evaluation. Results from all components of strategic analysis are evaluated against a set of pre-defined figures of merit (FOMs). These FOMs capture the high-level strategic characteristics of all scenarios and facilitate direct comparison of options. The strategic analysis methodology that is described in this paper has previously been applied to the Space Shuttle and International Space Station Programs and is now being used to support the development of the baseline Constellation Program lunar architecture. This paper will present an overview of the strategic analysis methodology and will present sample results from the application of the strategic analysis methodology to the Constellation Program lunar architecture.
Electronic Circuit Analysis Language (ECAL)
NASA Astrophysics Data System (ADS)
Chenghang, C.
1983-03-01
The computer aided design technique is an important development in computer applications and it is an important component of computer science. The special language for electronic circuit analysis is the foundation of computer aided design or computer aided circuit analysis (abbreviated as CACD and CACA) of simulated circuits. Electronic circuit analysis language (ECAL) is a comparatively simple and easy to use circuit analysis special language which uses the FORTRAN language to carry out the explanatory executions. It is capable of conducting dc analysis, ac analysis, and transient analysis of a circuit. Furthermore, the results of the dc analysis can be used directly as the initial conditions for the ac and transient analyses.
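The dc analysis such a language performs internally reduces to nodal analysis: build a conductance matrix and solve G v = i for the node voltages. A minimal sketch for a single-node voltage divider (component values are illustrative):

```python
# Sketch of the dc analysis step a circuit-analysis language performs
# internally: nodal analysis, solving G v = i for node voltages.
import numpy as np

# Voltage divider: 10 V source through R1 = 1k into node 1,
# R2 = 1k from node 1 to ground. Source modeled as a Norton equivalent.
R1, R2, Vs = 1000.0, 1000.0, 10.0
G = np.array([[1.0 / R1 + 1.0 / R2]])   # conductance matrix (one node)
I = np.array([Vs / R1])                 # Norton current injection
v = np.linalg.solve(G, I)
print("node voltage:", v[0])            # expect 5.0 V
```

The same G v = i structure scales to many nodes, with one row per node and off-diagonal entries for shared branches.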
Solar Photovoltaic Manufacturing Cost Analysis | Energy Analysis | NREL
NREL's photovoltaic (PV) manufacturing cost analysis, part of our broader effort supporting the manufacturing sector, asks how the U.S. manufacturing sector is growing and whether that growth is sustainable. NREL's manufacturing cost analysis studies show that: U.S...
Data, Analysis, and Visualization | Computational Science | NREL
At NREL, our data management, data analysis, and scientific visualization capabilities help move research forward, including approaches to image analysis and computer vision, and data management and big data systems, software, and tools.
A Multidimensional Analysis Tool for Visualizing Online Interactions
ERIC Educational Resources Information Center
Kim, Minjeong; Lee, Eunchul
2012-01-01
This study proposes and verifies the performance of an analysis tool for visualizing online interactions. A review of the most widely used methods for analyzing online interactions, including quantitative analysis, content analysis, and social network analysis methods, indicates these analysis methods have some limitations resulting from their…
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, glaucoma/guided progression analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. Kappa agreement coefficient between methods and sensitivity and specificity for each method using expert opinion as gold standard were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. Kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
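The statistics reported above (kappa agreement between methods, and sensitivity/specificity against the expert gold standard) can be computed as follows; the counts here are hypothetical, not the study's data:

```python
# Illustrative computation of agreement and accuracy statistics:
# Cohen's kappa between two raters, plus sensitivity and specificity
# of a method against a gold standard. Counts are made up.
import numpy as np

def cohen_kappa(table):
    """Kappa from a 2x2 agreement table (rows: rater A, cols: rater B)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical counts: method classification vs. expert gold standard
tp, fn, fp, tn = 5, 2, 1, 29
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
kappa = cohen_kappa([[tp, fn], [fp, tn]])
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
```

With these counts kappa lands around 0.72, which would conventionally be read as substantial agreement.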
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
Using Cognitive Task Analysis and Eye Tracking to Understand Imagery Analysis
2006-01-01
Using Cognitive Task Analysis and Eye Tracking to Understand Imagery Analysis. Laura Kurland, Abigail Gertner, Tom Bartee, Michael Chisholm and...have used these to study the analysts' search behavior in detail. 2 EXPERIMENT Using a Cognitive Task Analysis (CTA) framework for knowledge...
An Analysis of Effects of Variable Factors on Weapon Performance
1993-03-01
ALTERNATIVE ANALYSIS A. CATEGORICAL DATA ANALYSIS Statistical methodology for categorical data analysis traces its roots to the work of Francis Galton in the...choice of statistical tests. This thesis examines an analysis performed by Surface Warfare Development Group (SWDG). The SWDG analysis is shown to be...incorrect due to the misapplication of testing methods. A corrected analysis is presented and recommendations suggested for changes to the testing
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
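Of the algorithms compared above, centrosymmetry analysis is the simplest to sketch: sum |r_i + r_j|² over pairs of most nearly opposite neighbor vectors, which vanishes for a perfectly centrosymmetric (e.g., fcc) site. A small illustrative implementation with greedy pairing (not the paper's code):

```python
# Sketch of the centrosymmetry parameter (CSP): greedily pair the most
# nearly opposite neighbor vectors and accumulate |r_i + r_j|^2.
# Zero for a perfect centrosymmetric site; positive near defects.
import itertools
import numpy as np

def centrosymmetry(neighbors):
    """CSP from a list of 3-vectors (even count)."""
    vecs = [np.asarray(v, dtype=float) for v in neighbors]
    csp = 0.0
    while vecs:
        i, j = min(itertools.combinations(range(len(vecs)), 2),
                   key=lambda ij: float(np.sum((vecs[ij[0]] + vecs[ij[1]]) ** 2)))
        csp += float(np.sum((vecs[i] + vecs[j]) ** 2))
        for k in sorted((i, j), reverse=True):
            vecs.pop(k)
    return csp

# Perfect fcc first-neighbor shell: the 12 vectors of type (+-1, +-1, 0)
fcc = []
for base in [(1.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]:
    for s1 in (1.0, -1.0):
        for s2 in (1.0, -1.0):
            v = np.array(base)
            nz = np.nonzero(v)[0]
            v[nz[0]] *= s1
            v[nz[1]] *= s2
            fcc.append(v)

print("CSP of a perfect fcc site:", centrosymmetry(fcc))  # 0.0
```

Perturbing any one neighbor vector makes the CSP strictly positive, which is what makes the measure useful for flagging defective sites.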
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Starnes, James H., Jr.
1996-01-01
An efficient, approximate analysis for calculating complete three-dimensional stress fields near regions of geometric discontinuities in laminated composite structures is presented. An approximate three-dimensional local analysis is used to determine the detailed local response due to far-field stresses obtained from a global two-dimensional analysis. The stress results from the global analysis are used as traction boundary conditions for the local analysis. A generalized plane deformation assumption is made in the local analysis to reduce the solution domain to two dimensions. This assumption allows out-of-plane deformation to occur. The local analysis is based on the principle of minimum complementary energy and uses statically admissible stress functions that have an assumed through-the-thickness distribution. Examples are presented to illustrate the accuracy and computational efficiency of the local analysis. Comparisons of the results of the present local analysis with the corresponding results obtained from a finite element analysis and from an elasticity solution are presented. These results indicate that the present local analysis predicts the stress field accurately. Computer execution-times are also presented. The demonstrated accuracy and computational efficiency of the analysis make it well suited for parametric and design studies.
A guide to understanding meta-analysis.
Israel, Heidi; Richter, Randy R
2011-07-01
With the focus on evidence-based practice in healthcare, a well-conducted systematic review that includes a meta-analysis where indicated represents a high level of evidence for treatment effectiveness. The purpose of this commentary is to assist clinicians in understanding meta-analysis as a statistical tool using both published articles and explanations of components of the technique. We describe what meta-analysis is, what heterogeneity is, and how it affects meta-analysis, effect size, the modeling techniques of meta-analysis, and strengths and weaknesses of meta-analysis. Common components like forest plot interpretation, software that may be used, special cases for meta-analysis, such as subgroup analysis, individual patient data, and meta-regression, and a discussion of criticisms, are included.
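The fixed- and random-effects modeling techniques mentioned above reduce to inverse-variance weighting. A worked sketch with hypothetical per-study effect sizes, using the DerSimonian-Laird estimator for the between-study variance tau²:

```python
# Worked sketch of inverse-variance meta-analysis: a fixed-effect pooled
# estimate, Cochran's Q for heterogeneity, DerSimonian-Laird tau^2, and
# a random-effects pooled estimate. Effect sizes are hypothetical.
import numpy as np

effects = np.array([0.10, 0.50, 0.20, 0.90, 0.40])    # per-study effects
variances = np.array([0.02, 0.03, 0.025, 0.04, 0.015])

w = 1.0 / variances                        # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)

# Heterogeneity: Cochran's Q, then DerSimonian-Laird tau^2
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

w_re = 1.0 / (variances + tau2)            # random-effects weights
random_effect = np.sum(w_re * effects) / np.sum(w_re)
print(f"fixed = {fixed:.3f}, tau^2 = {tau2:.4f}, random = {random_effect:.3f}")
```

Because tau² inflates every study's variance equally, the random-effects weights are more uniform, which is why the random-effects estimate sits closer to the unweighted mean of heterogeneous studies.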
Applying Cognitive Work Analysis to Time Critical Targeting Functionality
2004-10-01
Cognitive Task Analysis (CTA), Human Factors, GUI, Graphical User Interface, Heuristic Evaluation...Cognitive Task Analysis MITRE Briefing January 2000 Dynamic Battle Management Functional Architecture...Section 3 Human Factors...clear distinction between Cognitive Work Analysis (CWA) and Cognitive Task Analysis (CTA); therefore this document will refer to these
78 FR 59732 - Revisions to Design of Structures, Components, Equipment, and Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
...,'' Section 3.7.2, ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1... Analysis,'' (Accession No. ML13198A223); Section 3.7.3, ``Seismic Subsystem Analysis,'' (Accession No..., ``Seismic System Analysis,'' Section 3.7.3, ``Seismic Subsystem Analysis,'' Section 3.8.1, ``Concrete...
A Role for Language Analysis in Mathematics Textbook Analysis
ERIC Educational Resources Information Center
O'Keeffe, Lisa; O'Donoghue, John
2015-01-01
In current textbook analysis research, there is a strong focus on the content, structure and expectation presented by the textbook as elements for analysis. This research moves beyond such foci and proposes a framework for textbook language analysis which is intended to be integrated into an overall framework for mathematics textbook analysis. The…
1988-01-19
approach for the analysis of aerial images. In this approach image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain dependent knowledge about prototypical urban
ERIC Educational Resources Information Center
Gilmore, Alex
2015-01-01
Discourse studies is a vast, multidisciplinary, and rapidly expanding area of research, embracing a range of approaches including discourse analysis, corpus analysis, conversation analysis, interactional sociolinguistics, critical discourse analysis, genre analysis and multimodal discourse analysis. Each approach offers its own unique perspective…
Research Questions: Women and Mass Media.
ERIC Educational Resources Information Center
Busby, Linda J.
Typically, research concerning media presentations of women has involved six types of analysis: (1) content analysis (what is said), (2) cultural and social analysis (why it is said), (3) control or gatekeeper analysis (by whom it is said), (4) audience analysis (to whom it is said), (5) media analysis (in which channel), and (6) effects analysis…
Code of Federal Regulations, 2014 CFR
2014-07-01
... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection of... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... Analysis, FY 2012 Service Contract Inventory, and FY 2012 Service Contract Inventory Planned Analysis for... of the availability of the FY 2011 Service Contract Inventory Analysis, the FY 2012 Service Contract Inventory, and the FY 2012 Service Contract Inventory Planned Analysis. The FY 2011 inventory analysis...
Impact Analysis | Energy Analysis | NREL
Our impact analysis work addresses the impacts of markets and Portfolio Standards. We are engaged in a multi-year project to examine the costs, benefits, and other impacts
Using Framework Analysis in nursing research: a worked example.
Ward, Deborah J; Furber, Christine; Tierney, Stephanie; Swallow, Veronica
2013-11-01
To demonstrate Framework Analysis using a worked example and to illustrate how criticisms of qualitative data analysis including issues of clarity and transparency can be addressed. Critics of the analysis of qualitative data sometimes cite lack of clarity and transparency about analytical procedures; this can deter nurse researchers from undertaking qualitative studies. Framework Analysis is flexible, systematic, and rigorous, offering clarity, transparency, an audit trail, an option for theme-based and case-based analysis and for readily retrievable data. This paper offers further explanation of the process undertaken which is illustrated with a worked example. Data were collected from 31 nursing students in 2009 using semi-structured interviews. The data collected are not reported directly here but used as a worked example for the five steps of Framework Analysis. Suggestions are provided to guide researchers through essential steps in undertaking Framework Analysis. The benefits and limitations of Framework Analysis are discussed. Nurses increasingly use qualitative research methods and need to use an analysis approach that offers transparency and rigour which Framework Analysis can provide. Nurse researchers may find the detailed critique of Framework Analysis presented in this paper a useful resource when designing and conducting qualitative studies. Qualitative data analysis presents challenges in relation to the volume and complexity of data obtained and the need to present an 'audit trail' for those using the research findings. Framework Analysis is an appropriate, rigorous and systematic method for undertaking qualitative analysis. © 2013 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.
2011-01-01
Comet Enflow is a commercially available, high frequency vibroacoustic analysis software founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). Energy Finite Element Analysis (EFEA) was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) and experimental results. Statistical Energy Analysis (SEA) predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.
Yokoyama, Eiji; Uchimura, Masako
2007-11-01
Ninety-five enterohemorrhagic Escherichia coli serovar O157 strains, including 30 strains isolated from 13 intrafamily outbreaks and 14 strains isolated from 3 mass outbreaks, were studied by pulsed-field gel electrophoresis (PFGE) and variable number of tandem repeats (VNTR) typing, and the resulting data were subjected to cluster analysis. Cluster analysis of the VNTR typing data revealed that 57 (60.0%) of 95 strains, including all epidemiologically linked strains, formed clusters with at least 95% similarity. Cluster analysis of the PFGE patterns revealed that 67 (70.5%) of 95 strains, including all but 1 of the epidemiologically linked strains, formed clusters with 90% similarity. The number of epidemiologically unlinked strains forming clusters was significantly less by VNTR cluster analysis than by PFGE cluster analysis. The congruence value between PFGE and VNTR cluster analysis was low and did not show an obvious correlation. With two-step cluster analysis, the number of clustered epidemiologically unlinked strains by PFGE cluster analysis that were divided by subsequent VNTR cluster analysis was significantly higher than the number by VNTR cluster analysis that were divided by subsequent PFGE cluster analysis. These results indicate that VNTR cluster analysis is more efficient than PFGE cluster analysis as an epidemiological tool to trace the transmission of enterohemorrhagic E. coli O157.
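The cluster analysis step described above (dendrograms cut at a similarity threshold such as 95%) can be sketched with SciPy's hierarchical clustering. The similarity matrix below is made up, standing in for pairwise similarities of PFGE or VNTR profiles:

```python
# Sketch of hierarchical (UPGMA) cluster analysis of a pairwise
# similarity matrix, cutting the dendrogram at 95% similarity
# (i.e., distance 0.05). The similarity matrix is hypothetical.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

similarity = np.array([
    [1.00, 0.97, 0.96, 0.60],
    [0.97, 1.00, 0.98, 0.62],
    [0.96, 0.98, 1.00, 0.58],
    [0.60, 0.62, 0.58, 1.00],
])
distance = 1.0 - similarity
condensed = squareform(distance, checks=False)   # condensed distance vector
tree = linkage(condensed, method="average")      # UPGMA
labels = fcluster(tree, t=0.05, criterion="distance")
print("cluster labels:", labels)                 # first three strains cluster
```

Raising or lowering the cutoff `t` directly trades off how many epidemiologically unlinked strains fall into the same cluster, the quantity the study compares between PFGE and VNTR.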
How Many Studies Do You Need? A Primer on Statistical Power for Meta-Analysis
ERIC Educational Resources Information Center
Valentine, Jeffrey C.; Pigott, Therese D.; Rothstein, Hannah R.
2010-01-01
In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide…
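Prospective fixed-effect power analysis as outlined above follows from the normal approximation: with k studies of common effect-size variance v, the pooled estimate has standard error sqrt(v/k). A hedged sketch (parameter values are illustrative assumptions, and real meta-analyses rarely have equal study variances):

```python
# Hedged sketch of prospective fixed-effect power for a meta-analysis:
# pooled SE shrinks as sqrt(v/k), and two-sided power follows from the
# normal approximation.
from scipy.stats import norm

def meta_power(delta, v, k, alpha=0.05):
    """Two-sided power to detect true effect delta with k equal-variance studies."""
    se_pooled = (v / k) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    z = delta / se_pooled
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

# e.g. effect d = 0.2, per-study variance 0.04 (roughly 100 per arm)
for k in (5, 10, 20):
    print(f"k={k:2d}: power={meta_power(0.2, 0.04, k):.2f}")
```

Power climbs with the number of studies, which is the "how many studies do you need" question the article poses; a retrospective analysis would instead plug in the observed effect and variances.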
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
CROWE, R.D.; PIEPHO, M.G.
2000-03-23
This document provided the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
An Array of Qualitative Data Analysis Tools: A Call for Data Analysis Triangulation
ERIC Educational Resources Information Center
Leech, Nancy L.; Onwuegbuzie, Anthony J.
2007-01-01
One of the most important steps in the qualitative research process is analysis of data. The purpose of this article is to provide elements for understanding multiple types of qualitative data analysis techniques available and the importance of utilizing more than one type of analysis, thus utilizing data analysis triangulation, in order to…
Correlating Detergent Fiber Analysis and Dietary Fiber Analysis Data for Corn Stover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfrum, E. J.; Lorenz, A. J.; deLeon, N.
There exist large amounts of detergent fiber analysis data [neutral detergent fiber (NDF), acid detergent fiber (ADF), acid detergent lignin (ADL)] for many different potential cellulosic ethanol feedstocks, since these techniques are widely used for the analysis of forages. Researchers working in the area of cellulosic ethanol are interested in the structural carbohydrates in a feedstock (principally glucan and xylan), which are typically determined by acid hydrolysis of the structural fraction after multiple extractions of the biomass. These so-called dietary fiber analysis methods are significantly more involved than detergent fiber analysis methods. The purpose of this study was to determine whether it is feasible to correlate detergent fiber analysis values to glucan and xylan content determined by dietary fiber analysis methods for corn stover. In the detergent fiber analysis literature cellulose is often estimated as the difference between ADF and ADL, while hemicellulose is often estimated as the difference between NDF and ADF. Examination of a corn stover dataset containing both detergent fiber analysis data and dietary fiber analysis data predicted using near infrared spectroscopy shows that correlations between structural glucan measured using dietary fiber techniques and cellulose estimated using detergent techniques, and between structural xylan measured using dietary fiber techniques and hemicellulose estimated using detergent techniques are high, but are driven largely by the underlying correlation between total extractives measured by fiber analysis and NDF/ADF. That is, detergent analysis data is correlated to dietary fiber analysis data for structural carbohydrates, but only indirectly; the main correlation is between detergent analysis data and solvent extraction data produced during the dietary fiber analysis procedure.
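The detergent-based estimates described above (cellulose ≈ ADF − ADL, hemicellulose ≈ NDF − ADF) and their correlation against dietary-fiber values can be expressed directly. All numbers below are hypothetical stand-ins, not the study's dataset:

```python
# The detergent-fiber estimates in code form: cellulose ~ ADF - ADL and
# hemicellulose ~ NDF - ADF, correlated against (hypothetical)
# dietary-fiber glucan/xylan values.
import numpy as np

# Hypothetical % dry matter values for a few corn stover samples
ndf = np.array([72.0, 68.5, 75.1, 70.2])
adf = np.array([42.0, 39.8, 44.6, 41.0])
adl = np.array([5.1, 4.6, 5.9, 4.9])
glucan = np.array([36.0, 34.2, 38.1, 35.0])   # dietary-fiber analysis
xylan = np.array([21.5, 20.4, 22.8, 20.9])

cellulose_est = adf - adl
hemicellulose_est = ndf - adf
r_glucan = np.corrcoef(cellulose_est, glucan)[0, 1]
r_xylan = np.corrcoef(hemicellulose_est, xylan)[0, 1]
print(f"r(cellulose est, glucan) = {r_glucan:.2f}")
print(f"r(hemicellulose est, xylan) = {r_xylan:.2f}")
```

As the abstract cautions, a high r here need not mean the detergent estimates track structural carbohydrates directly; shared dependence on total extractives can drive the correlation.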
A Quantitative Approach to Scar Analysis
Khorasani, Hooman; Zheng, Zhong; Nguyen, Calvin; Zara, Janette; Zhang, Xinli; Wang, Joyce; Ting, Kang; Soo, Chia
2011-01-01
Analysis of collagen architecture is essential to wound healing research. However, to date no consistent methodologies exist for quantitatively assessing dermal collagen architecture in scars. In this study, we developed a standardized approach for quantitative analysis of scar collagen morphology by confocal microscopy using fractal dimension and lacunarity analysis. Full-thickness wounds were created on adult mice, closed by primary intention, and harvested at 14 days after wounding for morphometrics and standard Fourier transform-based scar analysis as well as fractal dimension and lacunarity analysis. In addition, transmission electron microscopy was used to evaluate collagen ultrastructure. We demonstrated that fractal dimension and lacunarity analysis were superior to Fourier transform analysis in discriminating scar versus unwounded tissue in a wild-type mouse model. To fully test the robustness of this scar analysis approach, a fibromodulin-null mouse model that heals with increased scar was also used. Fractal dimension and lacunarity analysis effectively discriminated unwounded fibromodulin-null versus wild-type skin as well as healing fibromodulin-null versus wild-type wounds, whereas Fourier transform analysis failed to do so. Furthermore, fractal dimension and lacunarity data also correlated well with transmission electron microscopy collagen ultrastructure analysis, adding to their validity. These results demonstrate that fractal dimension and lacunarity are more sensitive than Fourier transform analysis for quantification of scar morphology. PMID:21281794
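A minimal box-counting sketch of the fractal dimension analysis used above, applied to a filled square (expected dimension 2) rather than real confocal images of collagen:

```python
# Minimal box-counting fractal dimension estimate for a binary 2-D
# image: count occupied boxes at several box sizes, then take the slope
# of log(count) vs log(1/size).
import numpy as np

def box_count_dimension(image, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension of a binary 2-D array by box counting."""
    counts = []
    for s in sizes:
        h, w = image.shape
        # number of s x s boxes containing at least one foreground pixel
        boxes = image[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((64, 64), dtype=bool)
print("dimension of filled square:", box_count_dimension(filled))  # ~2.0
```

On a real scar image one would first binarize the collagen signal; lacunarity would then be computed from the variance of box occupancy counts at each scale.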
Demonstration Advanced Avionics System (DAAS), Phase 1
NASA Technical Reports Server (NTRS)
Bailey, A. J.; Bailey, D. G.; Gaabo, R. J.; Lahn, T. G.; Larson, J. C.; Peterson, E. M.; Schuck, J. W.; Rodgers, D. L.; Wroblewski, K. A.
1981-01-01
Demonstration advanced avionics system (DAAS) function description, hardware description, operational evaluation, and failure mode and effects analysis (FMEA) are provided. Projected advanced avionics system (PAAS) description, reliability analysis, cost analysis, maintainability analysis, and modularity analysis are discussed.
Box truss analysis and technology development. Task 1: Mesh analysis and control
NASA Technical Reports Server (NTRS)
Bachtell, E. E.; Bettadapur, S. S.; Coyner, J. V.
1985-01-01
An analytical tool was developed to model, analyze and predict RF performance of box truss antennas with reflective mesh surfaces. The analysis system is unique in that it integrates custom written programs for cord tied mesh surfaces, thereby drastically reducing the cost of analysis. The analysis system is capable of determining the RF performance of antennas under any type of manufacturing or operating environment by integrating together the various disciplines of design, finite element analysis, surface best fit analysis and RF analysis. The Integrated Mesh Analysis System consists of six separate programs: The Mesh Tie System Model Generator, The Loadcase Generator, The Model Optimizer, The Model Solver, The Surface Topography Solver and The RF Performance Solver. Additionally, a study using the mesh analysis system was performed to determine the effect of on orbit calibration, i.e., surface adjustment, on a typical box truss antenna.
NASA Technical Reports Server (NTRS)
Hodges, Robert V.; Nixon, Mark W.; Rehfield, Lawrence W.
1987-01-01
A methodology was developed for the structural analysis of composite rotor blades. This coupled-beam analysis is relatively simple to use compared with alternative analysis techniques. The beam analysis was developed for thin-wall single-cell rotor structures and includes the effects of elastic coupling. This paper demonstrates the effectiveness of the new composite-beam analysis method through comparison of its results with those of an established baseline analysis technique. The baseline analysis is an MSC/NASTRAN finite-element model built up from anisotropic shell elements. Deformations are compared for three linear static load cases of centrifugal force at design rotor speed, applied torque, and lift for an ideal rotor in hover. A D-spar designed to twist under axial loading is the subject of the analysis. Results indicate the coupled-beam analysis is well within engineering accuracy.
Symmetry analysis for hyperbolic equilibria using a TB/dengue fever model
NASA Astrophysics Data System (ADS)
Massoukou, R. Y. M'Pika; Govinder, K. S.
2016-08-01
We investigate the interplay between Lie symmetry analysis and dynamical systems analysis. As an example, we take a toy model describing the spread of TB and dengue fever. We first undertake a comprehensive dynamical systems analysis including a discussion about local stability. For those regions in which such analyses cannot be translated to global behavior, we undertake a Lie symmetry analysis. It is shown that the Lie analysis can be useful in providing information for systems where the (local) dynamical systems analysis breaks down.
Re-Analysis of Data on the Space Radiation Environment above South-East Asia
1989-11-01
The report describes the variation of the cosmic ray flux with geomagnetic latitude, and also the expected increases due to the South Atlantic Anomaly (SAA) and outer belt electrons. Techniques used include orbital analysis, analysis of cosmic ray effects, analysis of trapped particle effects, and analysis of geomagnetic and magnetospheric activity; uncertainties and sources of error are discussed for the orbital analysis and for the analyses of cosmic ray and trapped particle effects.
Combustion: Structural interaction in a viscoelastic material
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Chang, J. P.; Kumar, M.; Kuo, K. K.
1980-01-01
The effect of interaction between combustion processes and structural deformation of solid propellant was considered. The combustion analysis was performed on the basis of deformed crack geometry, which was determined from the structural analysis. On the other hand, input data for the structural analysis, such as pressure distribution along the crack boundary and ablation velocity of the crack, were determined from the combustion analysis. The interaction analysis was conducted by combining two computer codes, a combustion analysis code and a general purpose finite element structural analysis code.
Namkoong, Sun; Hong, Seung Phil; Kim, Myung Hwa; Park, Byung Cheol
2013-02-01
Nowadays, institutions utilize hair mineral analysis although its clinical value remains controversial. Arguments about the reliability of hair mineral analysis persist, and there have been evaluations of commercial laboratories performing it. The objective of this study was to assess the reliability of intra-laboratory and inter-laboratory data at three commercial laboratories conducting hair mineral analysis, compared to serum mineral analysis. Two divided hair samples taken from near the scalp of one healthy volunteer were submitted to all laboratories for analysis at the same time. Each laboratory sent a report consisting of quantitative results and their interpretation of health implications. Differences among intra-laboratory and inter-laboratory data were analyzed using SPSS version 12.0 (SPSS Inc., USA). All the laboratories used identical methods for quantitative analysis, and they generated consistent numerical results according to Friedman analysis of variance. However, the normal reference ranges of each laboratory varied; as such, each laboratory interpreted the patient's health differently. For the intra-laboratory data, Wilcoxon analysis suggested the laboratories generated relatively coherent data, but laboratory B did not for one element, so its reliability was doubtful. In comparison with the blood test, laboratory C generated identical results, but laboratories A and B did not. Hair mineral analysis has its limitations, considering the reliability of inter- and intra-laboratory analysis compared with blood analysis. As such, clinicians should be cautious when applying hair mineral analysis as an ancillary tool, and each laboratory included in this study requires ongoing refinement to establish standardized normal reference ranges.
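The inter-laboratory consistency reported above rests on a Friedman analysis of variance across the three labs' paired measurements. A minimal sketch of that test using SciPy, with hypothetical mineral concentrations rather than the study's data:

```python
from scipy.stats import friedmanchisquare

# Hypothetical concentrations (ppm) of five minerals reported by three labs
# for the same split hair sample -- illustrative values, not the study's data.
lab_a = [12.1, 0.84, 3.2, 150.0, 1.9]
lab_b = [11.8, 0.90, 3.1, 148.5, 2.0]
lab_c = [12.4, 0.81, 3.3, 151.2, 1.8]

# Friedman's test ranks the labs within each mineral (block) and asks
# whether any lab is systematically high or low across blocks.
stat, p = friedmanchisquare(lab_a, lab_b, lab_c)
print(f"chi-square = {stat:.3f}, p = {p:.3f}")
```

A large p-value, as produced here, indicates no detectable systematic difference among the laboratories, which is the sense in which the numerical results are "consistent."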
Postflight Analysis of the Apollo 14 Cryogenic Oxygen System
NASA Technical Reports Server (NTRS)
Rule, D. D.
1972-01-01
A postflight analysis of the Apollo 14 cryogenic oxygen system is presented. The subjects discussed are: (1) methods of analysis, (2) stratification and heat transfer, (3) flight analysis, (4) postflight analysis, and (5) determination of model parameters.
2008-06-01
PESTEL Analysis examines the general environment surrounding the defense industry from a macro perspective. Its focus is on six main...legislation (p. 575). Strategic analysis of the macro environment: Porter's Five-Forces Model is used to analyze...
Energy Analysis Publications | Energy Analysis | NREL
Systems Impact Analysis: We perform impact analysis to evaluate and understand the impact of policies and financing on technology uptake, and the impact of new technologies on markets and policy. Complex Systems Analysis: Complex systems analysis integrates all aspects of markets, policies, and financing.
40 CFR 60.1120 - What steps must I complete for my siting analysis?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What steps must I complete for my... Requirements: Siting Analysis § 60.1120 What steps must I complete for my siting analysis? (a) For your siting analysis, you must complete five steps: (1) Prepare an analysis. (2) Make your analysis available to the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
PECH, S.H.
This report describes the methodology used in conducting the K Basins Hazard Analysis, which provides the foundation for the K Basins Final Safety Analysis Report. This hazard analysis was performed in accordance with guidance provided by DOE-STD-3009-94, Preparation Guide for U. S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports and implements the requirements of DOE Order 5480.23, Nuclear Safety Analysis Report.
ERIC Educational Resources Information Center
Glass, Gene V.; And Others
Integrative analysis, or what is coming to be known as meta-analysis, is the integration of the findings of many empirical research studies of a topic. Meta-analysis differs from traditional narrative forms of research reviewing in that it is more quantitative and statistical. Thus, the methods of meta-analysis are merely statistical methods,…
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
NASA Astrophysics Data System (ADS)
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.
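Two building blocks of the improved algorithm named above, the weighted average cost of capital (WACC) and dynamic (discounted) cost-benefit analysis, can be sketched as follows. All project figures are hypothetical, not drawn from the article:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital: equity and debt costs weighted
    by their share of total financing, with the debt cost tax-shielded."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    """Net present value: cash_flows[0] is the time-0 outlay; cash_flows[t]
    is received at the end of year t and discounted accordingly."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical high-rise project: 60% equity at 18%, 40% debt at 10%, 20% tax.
r = wacc(equity=60, debt=40, cost_equity=0.18, cost_debt=0.10, tax_rate=0.20)
flows = [-100.0, 30.0, 35.0, 40.0, 45.0]   # million: year-0 outlay, then inflows
print(f"WACC = {r:.3f}, NPV = {npv(r, flows):.2f}")
```

Scenario and sensitivity analysis, as described in the abstract, would then re-evaluate `npv` while varying the cash flows or the discount rate around these baseline values.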
Rizzardi, Anthony E; Zhang, Xiaotun; Vogel, Rachel Isaksson; Kolb, Suzanne; Geybels, Milan S; Leung, Yuet-Kin; Henriksen, Jonathan C; Ho, Shuk-Mei; Kwak, Julianna; Stanford, Janet L; Schmechel, Stephen C
2016-07-11
Digital image analysis offers advantages over traditional pathologist visual scoring of immunohistochemistry, although few studies examining the correlation and reproducibility of these methods have been performed in prostate cancer. We evaluated the correlation between digital image analysis (continuous variable data) and pathologist visual scoring (quasi-continuous variable data), reproducibility of each method, and association of digital image analysis methods with outcomes using prostate cancer tissue microarrays (TMAs) stained for estrogen receptor-β2 (ERβ2). Prostate cancer TMAs were digitized and evaluated by pathologist visual scoring versus digital image analysis for ERβ2 staining within tumor epithelium. Two independent analysis runs were performed to evaluate reproducibility. Image analysis data were evaluated for associations with recurrence-free survival and disease specific survival following radical prostatectomy. We observed weak/moderate Spearman correlation between digital image analysis and pathologist visual scores of tumor nuclei (Analysis Run A: 0.42, Analysis Run B: 0.41), and moderate/strong correlation between digital image analysis and pathologist visual scores of tumor cytoplasm (Analysis Run A: 0.70, Analysis Run B: 0.69). For the reproducibility analysis, there was high Spearman correlation between pathologist visual scores generated for individual TMA spots across Analysis Runs A and B (Nuclei: 0.84, Cytoplasm: 0.83), and very high correlation between digital image analysis for individual TMA spots across Analysis Runs A and B (Nuclei: 0.99, Cytoplasm: 0.99). Further, ERβ2 staining was significantly associated with increased risk of prostate cancer-specific mortality (PCSM) when quantified by cytoplasmic digital image analysis (HR 2.16, 95 % CI 1.02-4.57, p = 0.045), nuclear image analysis (HR 2.67, 95 % CI 1.20-5.96, p = 0.016), and total malignant epithelial area analysis (HR 5.10, 95 % CI 1.70-15.34, p = 0.004). 
After adjusting for clinicopathologic factors, only total malignant epithelial area ERβ2 staining was significantly associated with PCSM (HR 4.08, 95 % CI 1.37-12.15, p = 0.012). Digital methods of immunohistochemical quantification are more reproducible than pathologist visual scoring in prostate cancer, suggesting that digital methods are preferable and especially warranted for studies involving large sample sizes.
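The correlation step described above, pairing continuous digital-analysis intensities with quasi-continuous pathologist visual scores, can be sketched with SciPy's Spearman rank correlation. The paired scores below are simulated, not the study's TMA data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical paired scores for 50 TMA spots -- illustrative only.
# Digital image analysis yields continuous staining intensities; the
# pathologist's visual score is a noisy, discretized rescaling of them.
digital = rng.uniform(0.0, 1.0, 50)
visual = np.round(300 * digital + rng.normal(0, 30, 50))

# Spearman correlation compares the rank orderings of the two methods,
# so it tolerates the different scales of the two scoring systems.
rho, p = spearmanr(digital, visual)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```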
Spartan Release Engagement Mechanism (REM) stress and fracture analysis
NASA Technical Reports Server (NTRS)
Marlowe, D. S.; West, E. J.
1984-01-01
The revised stress and fracture analysis of the Spartan REM hardware for current load conditions and mass properties is presented. The stress analysis was performed using a NASTRAN math model of the Spartan REM adapter, base, and payload. Appendix A contains the material properties, loads, and stress analysis of the hardware. The computer output and model description are in Appendix B. Factors of safety used in the stress analysis were 1.4 on tested items and 2.0 on all other items. Fracture analysis of the items considered fracture critical was accomplished using the MSFC Crack Growth Analysis code. Loads and stresses were obtained from the stress analysis. The fracture analysis notes are located in Appendix A and the computer output in Appendix B. All items analyzed met design and fracture criteria.
Secondary Students' Perceptions about Learning Qualitative Analysis in Inorganic Chemistry
NASA Astrophysics Data System (ADS)
Tan, Kim-Chwee Daniel; Goh, Ngoh-Khang; Chia, Lian-Sai; Treagust, David F.
2001-02-01
Grade 10 students in Singapore find qualitative analysis one of the more difficult topics in their external examinations. Fifty-one grade 10 students (15-17 years old) from three schools were interviewed to investigate their perceptions about learning qualitative analysis and the aspects of qualitative analysis they found difficult. The results showed that students found qualitative analysis tedious, difficult to understand and found the practical sessions unrelated to what they learned in class. They also believed that learning qualitative analysis required a great amount of memory work. It is proposed that their difficulties may arise from not knowing explicitly what is required in qualitative analysis, the content of qualitative analysis, the lack of motivation to understand qualitative analysis, cognitive overloading, and the lack of mastery of the required process skills.
A TIERED APPROACH TO PERFORMING UNCERTAINTY ANALYSIS IN CONDUCTING EXPOSURE ANALYSIS FOR CHEMICALS
The WHO/IPCS draft Guidance Document on Characterizing and Communicating Uncertainty in Exposure Assessment provides guidance on recommended strategies for conducting uncertainty analysis as part of human exposure analysis. Specifically, a tiered approach to uncertainty analysis ...
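A tiered approach of the kind recommended above can be illustrated with a toy exposure equation: a deterministic point estimate at the lower tier, then Monte Carlo propagation of input distributions at the higher tier. The equation, distributions, and values below are all hypothetical:

```python
import random

random.seed(42)

def dose(conc, intake_rate, body_weight):
    """Toy average daily dose: concentration x intake rate / body weight."""
    return conc * intake_rate / body_weight

# Lower tier: deterministic point estimate with central input values.
point = dose(conc=2.0, intake_rate=20.0, body_weight=70.0)

# Higher tier: Monte Carlo propagation with hypothetical input distributions.
samples = [
    dose(random.lognormvariate(0.7, 0.3),   # concentration, mg/m3
         random.gauss(20.0, 3.0),           # intake rate, m3/day
         random.gauss(70.0, 10.0))          # body weight, kg
    for _ in range(10_000)
]
samples.sort()
p50, p95 = samples[len(samples) // 2], samples[int(0.95 * len(samples))]
print(f"point = {point:.3f}, median = {p50:.3f}, 95th pct = {p95:.3f}")
```

The spread between the median and the upper percentile is the uncertainty information the lower-tier point estimate cannot provide.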
Organizational Analysis With Results Using Transactional Analysis
ERIC Educational Resources Information Center
Clary, Thomas C.; Clary, Erica W.
1976-01-01
OARTA (Organization Analysis with Results Using Transactional Analysis) is a way of thinking designed to resolve problems and reach goals through action-oriented research and analysis--a learning experience in which members of an organization can develop themselves and their organization. (ABM)
Research interests: rapid, web-based renewable resource analysis; geospatial data analysis using parallel processing; high-performance computing; renewable resource technical potential and supply curve analysis; spatial database utilization; rapid analysis of large geospatial datasets; energy and geospatial analysis products.
Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study.
Vaismoradi, Mojtaba; Turunen, Hannele; Bondas, Terese
2013-09-01
Qualitative content analysis and thematic analysis are two commonly used approaches in data analysis of nursing research, but the boundaries between the two have not been clearly specified. In other words, they are used interchangeably, and it seems difficult for the researcher to choose between them. In this respect, this paper describes and discusses the boundaries between qualitative content analysis and thematic analysis and presents implications to improve the consistency between the purpose of related studies and the method of data analysis. This is a discussion paper, comprising an analytical overview and discussion of the definitions, aims, philosophical background, data gathering, and analysis of content analysis and thematic analysis, and addressing their methodological subtleties. It is concluded that in spite of many similarities between the approaches, including cutting across data and searching for patterns and themes, their main difference lies in the opportunity for quantification of data: content analysis permits measuring the frequency of different categories and themes, which, with caution, may serve as a proxy for significance. © 2013 Wiley Publishing Asia Pty Ltd.
NASA trend analysis procedures
NASA Technical Reports Server (NTRS)
1993-01-01
This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis was divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair-turn-around time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.
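In its simplest form, the problem and reliability trend analyses described above reduce to fitting a trend line to historical occurrence counts and extrapolating. A minimal sketch with invented monthly problem counts, not NASA data:

```python
import numpy as np

# Hypothetical monthly counts of a recurring hardware problem.
months = np.arange(12)
counts = np.array([9, 8, 9, 7, 7, 6, 6, 5, 5, 4, 4, 3])

# Least-squares linear trend: a negative slope suggests reliability
# growth, i.e. a decreasing rate of problem occurrence over time.
slope, intercept = np.polyfit(months, counts, 1)
forecast = slope * 12 + intercept  # extrapolate one month ahead
print(f"slope = {slope:.2f}/month, next-month forecast = {forecast:.1f}")
```

Real trend programs would add control limits and compare against specification or "safe" limits, as the performance trend category above describes.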
The Utility of Template Analysis in Qualitative Psychology Research.
Brooks, Joanna; McCluskey, Serena; Turley, Emma; King, Nigel
2015-04-03
Thematic analysis is widely used in qualitative psychology research, and in this article, we present a particular style of thematic analysis known as Template Analysis. We outline the technique and consider its epistemological position, then describe three case studies of research projects which employed Template Analysis to illustrate the diverse ways it can be used. Our first case study illustrates how the technique was employed in data analysis undertaken by a team of researchers in a large-scale qualitative research project. Our second example demonstrates how a qualitative study that set out to build on mainstream theory made use of the a priori themes (themes determined in advance of coding) permitted in Template Analysis. Our final case study shows how Template Analysis can be used from an interpretative phenomenological stance. We highlight the distinctive features of this style of thematic analysis, discuss the kind of research where it may be particularly appropriate, and consider possible limitations of the technique. We conclude that Template Analysis is a flexible form of thematic analysis with real utility in qualitative psychology research.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2012-05-29
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2011-09-27
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
Inventory of File nam.t00z.awiphi00.tm00.grib2
...Factor [non-dim]; 041, 50 mb HGT analysis, Geopotential Height [gpm]; 042, 50 mb TMP analysis, Temperature [K]; 052, 50 mb RIME analysis, Rime Factor [non-dim]; 053, 75 mb HGT analysis, Geopotential Height [gpm]; ...SNMR analysis, Snow Mixing Ratio [kg/kg]; 064, 75 mb RIME analysis, Rime Factor [non-dim]; 065, 100 mb HGT...
2007-10-01
Regional Morphology Empirical Analysis Package (RMAP): Orthogonal Function Analysis, Background and Examples. ERDC TN-SWWRP-07-9, October 2007.
Integrated Sensitivity Analysis Workflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
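A common entry point for sensitivity analysis is one-at-a-time perturbation of model inputs. The sketch below uses a toy cantilever-deflection formula as a stand-in for a complex engineering model; it is illustrative only and does not depict the DART Workbench tools themselves:

```python
def response(params):
    """Toy engineering model (cantilever tip deflection, P*L^3 / (3*EI)) --
    a stand-in for a complex simulation."""
    return params["load"] * params["length"] ** 3 / (3.0 * params["stiffness"])

baseline = {"load": 1000.0, "length": 2.0, "stiffness": 5.0e4}

# One-at-a-time sensitivity: perturb each input by +1% and report the
# relative change in the output (a local, normalized sensitivity).
base_out = response(baseline)
sensitivities = {}
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * 1.01})
    sensitivities[name] = (response(perturbed) - base_out) / base_out

for name, s in sorted(sensitivities.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:10s} {s:+.4f}")
```

Ranking inputs this way tells the engineer which parameters dominate the response and therefore deserve refined characterization, which is the purpose of the integrated workflow described above.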
Global/local methods research using a common structural analysis framework
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.
1991-01-01
Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.
Code of Federal Regulations, 2012 CFR
2012-07-01
... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...
Code of Federal Regulations, 2011 CFR
2011-07-01
... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...
Code of Federal Regulations, 2014 CFR
2014-07-01
... OFF-SITE CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION... analysis (OCA) information means sections 2 through 5 of a risk management plan (consisting of an... consequence analysis (OCA) data elements means the results of the off-site consequence analysis conducted by a...
MDAS: an integrated system for metabonomic data analysis.
Liu, Juan; Li, Bo; Xiong, Jiang-Hui
2009-03-01
Metabonomics, the latest 'omics' research field, shows great promise as a tool in biomarker discovery, drug efficacy and toxicity analysis, disease diagnosis and prognosis. One of the major challenges now facing researchers is how to process this data to yield useful information about a biological system, e.g., the mechanism of diseases. Traditional methods employed in metabonomic data analysis use multivariate analysis methods developed independently in chemometrics research. Additionally, with the development of machine learning approaches, some methods such as SVMs also show promise for use in metabonomic data analysis. Aside from the application of general multivariate analysis and machine learning methods to this problem, there is also a need for an integrated tool customized for metabonomic data analysis which can be easily used by biologists to reveal interesting patterns in metabonomic data. In this paper, we present a novel software tool MDAS (Metabonomic Data Analysis System) for metabonomic data analysis which integrates traditional chemometrics methods and newly introduced machine learning approaches. MDAS contains a suite of functional models for metabonomic data analysis and optimizes the flow of data analysis. Several file formats can be accepted as input. The input data can be optionally preprocessed and can then be processed with operations such as feature analysis and dimensionality reduction. The data with reduced dimensionalities can be used for training or testing through machine learning models. The system supplies proper visualization for data preprocessing, feature analysis, and classification which can be a powerful function for users to extract knowledge from the data. MDAS is an integrated platform for metabonomic data analysis, which transforms a complex analysis procedure into a more formalized and simplified one. The software package can be obtained from the authors.
Aoki, Shuichiro; Murata, Hiroshi; Fujino, Yuri; Matsuura, Masato; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Hirasawa, Kazunori; Shoji, Nobuyuki; Asaoka, Ryo
2017-12-01
To investigate the usefulness of the Octopus (Haag-Streit) EyeSuite's cluster trend analysis in glaucoma. Ten visual fields (VFs) with the Humphrey Field Analyzer (Carl Zeiss Meditec), spanning 7.7 years on average, were obtained from 728 eyes of 475 primary open angle glaucoma patients. Mean total deviation (mTD) trend analysis and EyeSuite's cluster trend analysis were performed on various series of VFs (from 1st to 10th: VF1-10, to 6th to 10th: VF6-10). The results of the cluster-based trend analysis, based on different lengths of VF series, were compared against mTD trend analysis. Cluster-based trend analysis and mTD trend analysis results were significantly associated in all clusters and with all lengths of VF series. Between 21.2% and 45.9% (depending on VF series length and location) of clusters were deemed to progress when the mTD trend analysis suggested no progression. On the other hand, 4.8% of eyes were observed to progress using the mTD trend analysis when cluster trend analysis suggested no progression in any two (or more) clusters. Whole field trend analysis can miss local VF progression. Cluster trend analysis appears as robust as mTD trend analysis and useful to assess both sectorial and whole field progression. Cluster-based trend analyses, in particular the definition of two or more progressing clusters, may help clinicians to detect glaucomatous progression in a timelier manner than using a whole field trend analysis, without significantly compromising specificity. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Development of a realistic stress analysis for fatigue analysis of notched composite laminates
NASA Technical Reports Server (NTRS)
Humphreys, E. A.; Rosen, B. W.
1979-01-01
A finite element stress analysis which consists of a membrane and interlaminar shear spring analysis was developed. This approach was utilized in order to model physically realistic failure mechanisms while maintaining a high degree of computational economy. The accuracy of the stress analysis predictions is verified through comparisons with other solutions to the composite laminate edge effect problem. The stress analysis model was incorporated into an existing fatigue analysis methodology and the entire procedure computerized. A fatigue analysis is performed upon a square laminated composite plate with a circular central hole. A complete description and users guide for the computer code FLAC (Fatigue of Laminated Composites) is included as an appendix.
NASA Astrophysics Data System (ADS)
Ogura, Yuki; Tanaka, Yuji; Hase, Eiji; Yamashita, Toyonobu; Yasui, Takeshi
2018-02-01
We compare two-dimensional auto-correlation (2D-AC) analysis and two-dimensional Fourier transform (2D-FT) analysis for evaluation of the age-dependent structural change of facial dermal collagen fibers caused by intrinsic aging and extrinsic photo-aging. The age-dependent structural change of collagen fibers in the cheek skin of female subjects in their 20s, 40s, and 60s was more noticeably reflected in 2D-AC analysis than in 2D-FT analysis. Furthermore, 2D-AC analysis indicated significantly higher correlation with the skin elasticity measured by Cutometer® than 2D-FT analysis. 2D-AC analysis of SHG images has a high potential for quantitative evaluation not only of the age-dependent structural change of collagen fibers but also of skin elasticity.
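The 2D auto-correlation of an image can be computed efficiently via the Wiener-Khinchin theorem, as the inverse FFT of the image's power spectrum. The sketch below applies this to a synthetic oriented texture standing in for an SHG collagen image; it is an illustration of the general technique, not the authors' pipeline:

```python
import numpy as np

def autocorr2d(img):
    """2D auto-correlation via the Wiener-Khinchin theorem: inverse FFT
    of the power spectrum of the mean-subtracted image (circular lags)."""
    img = img - img.mean()
    spectrum = np.fft.fft2(img)
    ac = np.fft.ifft2(spectrum * np.conj(spectrum)).real
    ac = np.fft.fftshift(ac)        # put the zero-lag peak at the centre
    return ac / ac.max()            # normalize so the zero-lag peak is 1

# Synthetic "fiber" image: a vertically oriented sinusoidal texture.
y, x = np.mgrid[0:64, 0:64]
fibers = np.sin(2 * np.pi * x / 8.0)

ac = autocorr2d(fibers)
print(f"zero-lag peak = {ac[32, 32]:.2f}")
```

An oriented fiber texture produces an anisotropic auto-correlation pattern, and quantifying that anisotropy is one way such an analysis can track structural change in collagen.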
Xu, Ning; Zhou, Guofu; Li, Xiaojuan; Lu, Heng; Meng, Fanyun; Zhai, Huaqiang
2017-05-01
A reliable and comprehensive method for identifying the origin and assessing the quality of Epimedium has been developed. The method is based on analysis of HPLC fingerprints, combined with similarity analysis, hierarchical cluster analysis (HCA), principal component analysis (PCA) and multi-ingredient quantitative analysis. Nineteen batches of Epimedium, collected from different areas in the western regions of China, were used to establish the fingerprints and 18 peaks were selected for the analysis. Similarity analysis, HCA and PCA all classified the 19 areas into three groups. Simultaneous quantification of the five major bioactive ingredients in the Epimedium samples was also carried out to confirm the consistency of the quality tests. These methods were successfully used to identify the geographical origin of the Epimedium samples and to evaluate their quality. Copyright © 2016 John Wiley & Sons, Ltd.
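Two of the chemometric steps named above, similarity analysis and PCA, can be sketched on a synthetic fingerprint matrix. The peak areas below are hypothetical, not the study's HPLC data, and PCA is done here via SVD of the centered matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fingerprint matrix: 6 batches x 18 peak areas, drawn from
# two "origin" templates with small batch-to-batch noise.
origin_a = rng.uniform(0.5, 2.0, 18)
origin_b = rng.uniform(0.5, 2.0, 18)
batches = np.vstack([origin_a + rng.normal(0, 0.05, 18) for _ in range(3)] +
                    [origin_b + rng.normal(0, 0.05, 18) for _ in range(3)])

# Similarity analysis: cosine similarity of each batch to the mean fingerprint.
mean_fp = batches.mean(axis=0)
sim = batches @ mean_fp / (np.linalg.norm(batches, axis=1) * np.linalg.norm(mean_fp))

# PCA via SVD of the centred data; scores on the leading components
# should separate batches from the two origins.
centred = batches - mean_fp
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T
print("similarities:", np.round(sim, 3))
print("PC1 scores:", np.round(scores[:, 0], 3))
```

Hierarchical cluster analysis, the third method named in the abstract, would group the same score (or fingerprint) vectors by distance and is omitted here for brevity.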
Digital microarray analysis for digital artifact genomics
NASA Astrophysics Data System (ADS)
Jaenisch, Holger; Handley, James; Williams, Deborah
2013-06-01
We implement a Spatial Voting (SV) based analogy of microarray analysis for digital gene marker identification in malware code sections. We examine a famous set of malware formally analyzed by Mandiant and code named Advanced Persistent Threat (APT1). APT1 is a Chinese organization formed with the specific intent to infiltrate and exploit US resources. Mandiant provided a detailed behavior and string analysis report for the 288 malware samples available. We performed an independent analysis using a new alternative to traditional dynamic analysis and static analysis that we call Spatial Analysis (SA). We perform unsupervised SA on the APT1-originating malware code sections and report our findings. We also show the results of SA performed on some members of the families associated by Mandiant. We conclude that SV-based SA is a practical, fast alternative to dynamic analysis and static analysis.
Try Fault Tree Analysis, a Step-by-Step Way to Improve Organization Development.
ERIC Educational Resources Information Center
Spitzer, Dean
1980-01-01
Fault Tree Analysis, a systems safety engineering technology used to analyze organizational systems, is described. Explains the use of logic gates to represent the relationship between failure events, qualitative analysis, quantitative analysis, and effective use of Fault Tree Analysis. (CT)
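The logic gates mentioned above reduce, for independent basic events, to simple probability algebra in the quantitative stage of Fault Tree Analysis. A minimal sketch with a hypothetical tree and invented failure probabilities:

```python
def and_gate(*probs):
    """AND gate: the output event occurs only if all independent
    input events occur, so probabilities multiply."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """OR gate: the output event occurs if any input event occurs,
    computed as the complement of 'none occur'."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: top failure event = (A OR B) AND C, with yearly
# failure probabilities for the basic events A, B, C.
p_top = and_gate(or_gate(0.01, 0.02), 0.05)
print(f"P(top event) = {p_top:.6f}")
```

Qualitative analysis of the same tree would instead enumerate minimal cut sets (here {A, C} and {B, C}) without assigning probabilities.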
A Common Foundation of Information and Analytical Capability for AFSPC Decision Making
2005-06-23
System Strategic Master Plan MAPs/MSP CRRAAF TASK FORCE CONOPS MUA Task Weights Engagement Analysis ASIIS Optimization ACEIT COST Analysis...Engagement Architecture Analysis Architecture MUA AFSPC POM S&T Planning Military Utility Analysis ACEIT COST Analysis Joint Capab Integ Develop System
Design/Analysis of the JWST ISIM Bonded Joints for Survivability at Cryogenic Temperatures
NASA Technical Reports Server (NTRS)
Bartoszyk, Andrew; Johnston, John; Kaprielian, Charles; Kuhn, Jonathan; Kunt, Cengiz; Rodini, Benjamin; Young, Daniel
2005-01-01
Contents include the following: JWST/ISIM introduction. Design and analysis challenges for ISIM bonded joints. JWST/ISIM joint designs. Bonded joint analysis. Finite element modeling. Failure criteria and margin calculation. Analysis/test correlation procedure. Example of test data and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.F.; Mok, G.C.; Carlson, R.W.
1996-12-01
CASKS is a microcomputer based computer system developed by LLNL to assist the Nuclear Regulatory Commission in performing confirmatory analyses for licensing review of radioactive-material storage cask designs. The analysis programs of the CASKS computer system consist of four modules--the impact analysis module, the thermal analysis module, the thermally-induced stress analysis module, and the pressure-induced stress analysis module. CASKS uses a series of menus to coordinate input programs, cask analysis programs, output programs, data archive programs and databases, so the user is able to run the system in an interactive environment. This paper outlines the theoretical background on the impact analysis module and the yielding surface formulation. The close agreement between the CASKS analytical predictions and the results obtained from the two storage cask drop tests performed by SNL and by BNFL at Winfrith serves as the validation of the CASKS impact analysis module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.F.; Mok, G.C.; Carlson, R.W.
1995-08-01
CASKS is a microcomputer-based computer system developed by LLNL to assist the Nuclear Regulatory Commission in performing confirmatory analyses for licensing review of radioactive-material storage cask designs. The analysis programs of the CASKS computer system consist of four modules: the impact analysis module, the thermal analysis module, the thermally-induced stress analysis module, and the pressure-induced stress analysis module. CASKS uses a series of menus to coordinate input programs, cask analysis programs, output programs, data archive programs and databases, so the user is able to run the system in an interactive environment. This paper outlines the theoretical background on the impact analysis module and the yielding surface formulation. The close agreement between the CASKS analytical predictions and the results obtained from the two storage cask drop tests performed by SNL and by BNFL at Winfrith serves as the validation of the CASKS impact analysis module.
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Nemeth, Michael P.; Hilburger, Mark W.
2004-01-01
A technology review and assessment of modeling and analysis efforts underway in support of a safe return to flight of the thermal protection system (TPS) for the Space Shuttle external tank (ET) are summarized. This review and assessment effort focuses on the structural modeling and analysis practices employed for ET TPS foam design and analysis and on identifying analysis capabilities needed in the short-term and long-term. The current understanding of the relationship between complex flight environments and ET TPS foam failure modes is reviewed as it relates to modeling and analysis. A literature review on modeling and analysis of TPS foam material systems is also presented. Finally, a review of modeling and analysis tools employed in the Space Shuttle Program is presented for the ET TPS acreage and close-out foam regions. This review includes existing simplified engineering analysis tools as well as finite element analysis procedures.
Barteneva, Natasha S; Vorobjev, Ivan A
2018-01-01
In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research of cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and, lately, integrated analysis of a cellular population at the single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, and often provide only a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiating more uniform groups in the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
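The reliability estimation described above measures the probability of failure-free operation. As a minimal sketch of the idea (the function names and the constant-failure-rate exponential model are assumptions for illustration, not the handbook's actual tools):

```python
import math

def failure_rate(interfailure_times):
    """Maximum-likelihood estimate of a constant failure rate from
    observed times between failures (exponential model)."""
    return len(interfailure_times) / sum(interfailure_times)

def reliability(rate, t):
    """Probability of failure-free operation for duration t under a
    constant failure rate: R(t) = exp(-rate * t)."""
    return math.exp(-rate * t)

# Example: three failures observed over 6 units of execution time
rate = failure_rate([2.0, 2.0, 2.0])   # 0.5 failures per unit time
```

Real software-reliability tools fit richer growth models (failure intensity decreasing as faults are removed), but the same probability-of-failure-free-operation quantity is the output.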
Orbit Maneuver for Responsive Coverage Using Electric Propulsion
2010-03-01
Contents excerpt: 4. Results and Analysis ... Orbit Analysis ... Figure 3.6 Circular Orbit Analysis ... Figure 3.7 Elliptical Orbit Analysis
2006-01-01
ENVIRONMENTAL ANALYSIS Analysis of Explosives in Soil Using Solid Phase Microextraction and Gas Chromatography Howard T. Mayfield Air Force Research...Abstract: Current methods for the analysis of explosives in soils utilize time-consuming sample preparation workups and extractions. The method detection...chromatography/mass spectrometry to provide a convenient and sensitive analysis method for explosives in soil. Keywords: Explosives, TNT, solid phase
Inventory of File gfs.t06z.pgrb2.1p00.f000
analysis U-Component of Wind [m/s] 002 planetary boundary layer VGRD analysis V-Component of Wind [m/s] 003 planetary boundary layer VRATE analysis Ventilation Rate [m^2/s] 004 surface GUST analysis Wind Speed (Gust mb RH analysis Relative Humidity [%] 008 10 mb UGRD analysis U-Component of Wind [m/s] 009 10 mb VGRD
Inventory of File gfs.t06z.pgrb2.0p50.f000
analysis U-Component of Wind [m/s] 002 planetary boundary layer VGRD analysis V-Component of Wind [m/s] 003 planetary boundary layer VRATE analysis Ventilation Rate [m^2/s] 004 surface GUST analysis Wind Speed (Gust mb RH analysis Relative Humidity [%] 008 10 mb UGRD analysis U-Component of Wind [m/s] 009 10 mb VGRD
Inventory of File gfs.t06z.pgrb2.0p25.f000
analysis U-Component of Wind [m/s] 002 planetary boundary layer VGRD analysis V-Component of Wind [m/s] 003 planetary boundary layer VRATE analysis Ventilation Rate [m^2/s] 004 surface GUST analysis Wind Speed (Gust mb RH analysis Relative Humidity [%] 008 10 mb UGRD analysis U-Component of Wind [m/s] 009 10 mb VGRD
Inventory of File gfs.t06z.pgrb2.2p50.f000
analysis U-Component of Wind [m/s] 002 planetary boundary layer VGRD analysis V-Component of Wind [m/s] 003 planetary boundary layer VRATE analysis Ventilation Rate [m^2/s] 004 surface GUST analysis Wind Speed (Gust mb RH analysis Relative Humidity [%] 008 10 mb UGRD analysis U-Component of Wind [m/s] 009 10 mb VGRD
Evidential Reasoning in Expert Systems for Image Analysis.
1985-02-01
techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the...2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify...approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths
2006-09-30
unlimited. Prepared for: Naval Postgraduate School, Monterey, California 93943 Integrated Portfolio Analysis: Return on Investment and Real Options... Analysis of Intelligence Information Systems (Cryptologic Carry On Program) 30 September 2006 by LCDR Cesar G. Rios, Jr., Naval Postgraduate...October 2005 – 30 September 2006 4. TITLE AND SUBTITLE Integrated Portfolio Analysis: Return on Investment and Real Options Analysis of Intelligence
Stanton, Neville A; Bessell, Kevin
2014-01-01
This paper presents the application of Cognitive Work Analysis to the description of the functions, situations, activities, decisions, strategies, and competencies of a Trafalgar class submarine when performing the function of returning to periscope depth. All five phases of Cognitive Work Analysis are presented, namely: Work Domain Analysis, Control Task Analysis, Strategies Analysis, Social Organisation and Cooperation Analysis, and Worker Competencies Analysis. Complex socio-technical systems are difficult to analyse but Cognitive Work Analysis offers an integrated way of analysing complex systems with the core of functional means-ends analysis underlying all of the other representations. The joined-up analysis offers a coherent framework for understanding how socio-technical systems work. Data were collected through observation and interviews at different sites across the UK. The resultant representations present a statement of how the work domain and current activities are configured in this complex socio-technical system. This is intended to provide a baseline, from which all future conceptions of the domain may be compared. The strength of the analysis is in the multiple representations from which the constraints acting on the work may be analysed. Future research needs to challenge the assumptions behind these constraints in order to develop new ways of working. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating the sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations as compared to its hybrid counterpart should serve as enough evidence of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate the fact that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. Simpler than that, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective.
More generally, single-analysis systems avoid contradictions such as having to choose one sub-component, rather than another possibly inconsistent one, to generate performance diagnostics.
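The recentering step mentioned in this abstract shifts each ensemble member so that the ensemble mean coincides with the hybrid analysis while preserving the member-to-member perturbations. A toy sketch in plain Python (real systems operate on full model state vectors; the names here are illustrative):

```python
def recenter(ensemble, hybrid_analysis):
    """Shift every member by (hybrid - ensemble mean): the recentered
    ensemble mean equals the hybrid analysis, and the perturbations
    about the mean are unchanged."""
    n = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(member[i] for member in ensemble) / n for i in range(dim)]
    return [
        [member[i] - mean[i] + hybrid_analysis[i] for i in range(dim)]
        for member in ensemble
    ]

# Example: a 3-member ensemble of 2-variable states; mean = [2.0, 1.0]
members = [[1.0, 0.0], [2.0, 1.0], [3.0, 2.0]]
shifted = recenter(members, [5.0, 5.0])   # new mean is [5.0, 5.0]
```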
Canister Storage Building (CSB) Hazard Analysis Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
POWERS, T.B.
2000-03-16
This report describes the methodology used in conducting the Canister Storage Building (CSB) hazard analysis to support the CSB final safety analysis report (FSAR) and documents the results. The hazard analysis process identified hazardous conditions and material-at-risk, determined causes for potential accidents, identified preventive and mitigative features, and qualitatively estimated the frequencies and consequences of specific occurrences. The hazard analysis was performed by a team of cognizant CSB operations and design personnel, safety analysts familiar with the CSB, and technical experts in specialty areas. Attachment A provides two lists of hazard analysis team members and describes the background and experience of each. The first list is a complete list of the hazard analysis team members that have been involved over the two-year long process. The second list is a subset of the first and consists of those team members that reviewed and agreed to the final hazard analysis documentation. The material included in this report documents the final state of a nearly two-year long process involving formal facilitated group sessions and independent hazard and accident analysis work. The hazard analysis process led to the selection of candidate accidents for further quantitative analysis. New information relative to the hazards, discovered during the accident analysis, was incorporated into the hazard analysis data in order to compile a complete profile of facility hazards.
Through this process, the results of the hazard and accident analyses led directly to the identification of safety structures, systems, and components, technical safety requirements, and other controls required to protect the public, workers, and environment.
Regularized Generalized Canonical Correlation Analysis
ERIC Educational Resources Information Center
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
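The "maximization of well identified criteria" this abstract refers to can be made concrete with a deliberately tiny stand-in (not the RGCCA algorithm itself, and all names here are illustrative): in the two-block, one-component case, choose one unit weight vector per block so that the covariance between the two block components is maximal. For 2-D blocks this can even be solved by brute-force grid search over angles:

```python
import math

def block_component(X, w):
    """Project each row of block X onto weight vector w."""
    return [sum(x * wi for x, wi in zip(row, w)) for row in X]

def covariance(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def best_weights(X1, X2, steps=360):
    """Grid search over unit weight vectors (2-D blocks only) that
    maximizes the covariance between the two block components."""
    best = (-math.inf, None, None)
    for i in range(steps):
        for j in range(steps):
            w1 = (math.cos(2 * math.pi * i / steps), math.sin(2 * math.pi * i / steps))
            w2 = (math.cos(2 * math.pi * j / steps), math.sin(2 * math.pi * j / steps))
            c = covariance(block_component(X1, w1), block_component(X2, w2))
            if c > best[0]:
                best = (c, w1, w2)
    return best

# Two blocks sharing a strong first coordinate; the search should
# pick weights close to the first axis in each block.
X1 = [[-2.0, 0.1], [-1.0, -0.1], [0.0, 0.1], [1.0, -0.1], [2.0, 0.1]]
X2 = [[-2.0, -0.1], [-1.0, 0.1], [0.0, -0.1], [1.0, 0.1], [2.0, -0.1]]
c, w1, w2 = best_weights(X1, X2, steps=72)
```

RGCCA replaces this enumeration with an efficient iterative scheme, generalizes to three or more blocks, and adds regularization so the criterion stays well-posed when blocks have more variables than observations.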
41 CFR 105-53.141 - Office of Policy Analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Analysis. 105-53.141 Section 105-53.141 Public Contracts and Property Management Federal Property... FUNCTIONS Central Offices § 105-53.141 Office of Policy Analysis. The Office of Policy Analysis, headed by the Associate Administrator for Policy Analysis, is responsible for providing analytical support...
16 CFR 1000.28 - Directorate for Economic Analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Directorate for Economic Analysis. 1000.28... AND FUNCTIONS § 1000.28 Directorate for Economic Analysis. The Directorate for Economic Analysis, which is managed by the Associate Executive Director for Economic Analysis, is responsible for providing...
Thermochemical Conversion Techno-Economic Analysis | Bioenergy | NREL
NREL's Thermochemical Conversion Analysis team focuses on conceptual process design and techno-economic analysis (TEA); the detailed process models and TEA developed under this project provide insights into the potential economic
40 CFR 35.927-1 - Infiltration/inflow analysis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Infiltration/inflow analysis. 35.927-1... Infiltration/inflow analysis. (a) The infiltration/inflow analysis shall demonstrate the nonexistence or possible existence of excessive infiltration/inflow in the sewer system. The analysis should identify the...
40 CFR 35.927-1 - Infiltration/inflow analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Infiltration/inflow analysis. 35.927-1... Infiltration/inflow analysis. (a) The infiltration/inflow analysis shall demonstrate the nonexistence or possible existence of excessive infiltration/inflow in the sewer system. The analysis should identify the...
40 CFR 35.927-1 - Infiltration/inflow analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 1 2013-07-01 2013-07-01 false Infiltration/inflow analysis. 35.927-1... Infiltration/inflow analysis. (a) The infiltration/inflow analysis shall demonstrate the nonexistence or possible existence of excessive infiltration/inflow in the sewer system. The analysis should identify the...
40 CFR 35.927-1 - Infiltration/inflow analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 1 2012-07-01 2012-07-01 false Infiltration/inflow analysis. 35.927-1... Infiltration/inflow analysis. (a) The infiltration/inflow analysis shall demonstrate the nonexistence or possible existence of excessive infiltration/inflow in the sewer system. The analysis should identify the...
40 CFR 35.927-1 - Infiltration/inflow analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Infiltration/inflow analysis. 35.927-1... Infiltration/inflow analysis. (a) The infiltration/inflow analysis shall demonstrate the nonexistence or possible existence of excessive infiltration/inflow in the sewer system. The analysis should identify the...
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
16 CFR 1000.28 - Directorate for Economic Analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Directorate for Economic Analysis. 1000.28... AND FUNCTIONS § 1000.28 Directorate for Economic Analysis. The Directorate for Economic Analysis, which is managed by the Associate Executive Director for Economic Analysis, is responsible for providing...
16 CFR 1000.28 - Directorate for Economic Analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Directorate for Economic Analysis. 1000.28... AND FUNCTIONS § 1000.28 Directorate for Economic Analysis. The Directorate for Economic Analysis, which is managed by the Associate Executive Director for Economic Analysis, is responsible for providing...
16 CFR 1000.28 - Directorate for Economic Analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Directorate for Economic Analysis. 1000.28... AND FUNCTIONS § 1000.28 Directorate for Economic Analysis. The Directorate for Economic Analysis, which is managed by the Associate Executive Director for Economic Analysis, is responsible for providing...
Analysis of space shuttle main engine data using Beacon-based exception analysis for multi-missions
NASA Technical Reports Server (NTRS)
Park, H.; Mackey, R.; James, M.; Zak, M.; Kynard, M.; Sebghati, J.; Greene, W.
2002-01-01
This paper describes analysis of the Space Shuttle Main Engine (SSME) sensor data using Beacon-based exception analysis for multimissions (BEAM), a new technology developed for sensor analysis and diagnostics in autonomous space systems by the Jet Propulsion Laboratory (JPL).
Rhetorical Analysis in Critical Policy Research
ERIC Educational Resources Information Center
Winton, Sue
2013-01-01
Rhetorical analysis, an approach to critical discourse analysis, is presented as a useful method for critical policy analysis and its effort to understand the role policies play in perpetuating inequality. A rhetorical analysis of Character "Matters!", the character education policy of a school board in Ontario, Canada, provides an…
DOT National Transportation Integrated Search
1995-05-01
In October of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
41 CFR 105-53.141 - Office of Policy Analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Analysis. 105-53.141 Section 105-53.141 Public Contracts and Property Management Federal Property... FUNCTIONS Central Offices § 105-53.141 Office of Policy Analysis. The Office of Policy Analysis, headed by the Associate Administrator for Policy Analysis, is responsible for providing analytical support...
41 CFR 105-53.141 - Office of Policy Analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Analysis. 105-53.141 Section 105-53.141 Public Contracts and Property Management Federal Property... FUNCTIONS Central Offices § 105-53.141 Office of Policy Analysis. The Office of Policy Analysis, headed by the Associate Administrator for Policy Analysis, is responsible for providing analytical support...
41 CFR 105-53.141 - Office of Policy Analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Analysis. 105-53.141 Section 105-53.141 Public Contracts and Property Management Federal Property... FUNCTIONS Central Offices § 105-53.141 Office of Policy Analysis. The Office of Policy Analysis, headed by the Associate Administrator for Policy Analysis, is responsible for providing analytical support...
41 CFR 105-53.141 - Office of Policy Analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Analysis. 105-53.141 Section 105-53.141 Public Contracts and Property Management Federal Property... FUNCTIONS Central Offices § 105-53.141 Office of Policy Analysis. The Office of Policy Analysis, headed by the Associate Administrator for Policy Analysis, is responsible for providing analytical support...
Structured Analysis and the Data Flow Diagram: Tools for Library Analysis.
ERIC Educational Resources Information Center
Carlson, David H.
1986-01-01
This article discusses tools developed to aid the systems analysis process (program evaluation and review technique, Gantt charts, organizational charts, decision tables, flowcharts, hierarchy plus input-process-output). Similarities and differences among techniques, library applications of analysis, structured systems analysis, and the data flow…
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2012 CFR
2012-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2014 CFR
2014-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
Numerical Analysis of Stochastic Dynamical Systems in the Medium-Frequency Range
2003-02-01
frequency vibration analysis such as the statistical energy analysis (SEA), the traditional modal analysis (well-suited for high- and low-frequency...that the first few structural normal modes primarily constitute the total response. In the higher frequency range, the statistical energy analysis (SEA
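The SEA named in this abstract rests on a steady-state power balance per subsystem. In a common textbook form (notation assumed here, not taken from the report), the power input to subsystem $i$ satisfies

```latex
P_{i,\mathrm{in}} \;=\; \omega\,\eta_i E_i
  \;+\; \omega \sum_{j \neq i} \bigl(\eta_{ij} E_i - \eta_{ji} E_j\bigr),
```

where $\omega$ is the band-center frequency, $E_i$ the time-averaged energy of subsystem $i$, $\eta_i$ its damping loss factor, and $\eta_{ij}$ the coupling loss factor from subsystem $i$ to $j$. Solving this linear system for the $E_i$ gives band-averaged response levels without resolving individual modes, which is why SEA suits the high-frequency range where modal methods become impractical.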
14 CFR 417.309 - Flight safety system analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... system anomaly occurring and all of its effects as determined by the single failure point analysis and... termination system. (c) Single failure point. A command control system must undergo an analysis that... fault tree analysis or a failure modes effects and criticality analysis; (2) Identify all possible...
DOT National Transportation Integrated Search
1994-07-01
In October of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
DOT National Transportation Integrated Search
1995-02-01
In October of 1992, the Housatonic Area Regional Transit (HART) District published a planning study providing an in-depth analysis of its fixed route bus transit service. This comprehensive operational analysis (COA) was the first detailed analysis ...
Evaluating a Computerized Aid for Conducting a Cognitive Task Analysis
2000-01-01
in conducting a cognitive task analysis. The conduct of a cognitive task analysis is costly and labor intensive. As a result, a few computerized aids...evaluation of a computerized aid, specifically CAT-HCI (Cognitive Analysis Tool - Human Computer Interface), for the conduct of a detailed cognitive task analysis. A
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-03
... Response to Comments on Previous Analysis C. Summary of the Comparative Analysis 1. Quantitative Analysis 2... preliminary quantitative analysis are specific building designs, in most cases with specific spaces defined... preliminary determination. C. Summary of the Comparative Analysis DOE carried out both a broad quantitative...
48 CFR 1552.211-76 - Legal analysis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Legal analysis. 1552.211... Legal analysis. As prescribed in 1511.011-76, insert this contract clause when it is determined that the contract involves legal analysis. Legal Analysis (APR 1984) The Contractor shall furnish to the Contracting...
48 CFR 1552.211-76 - Legal analysis.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 6 2013-10-01 2013-10-01 false Legal analysis. 1552.211... Legal analysis. As prescribed in 1511.011-76, insert this contract clause when it is determined that the contract involves legal analysis. Legal Analysis (APR 1984) The Contractor shall furnish to the Contracting...
48 CFR 1552.211-76 - Legal analysis.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 6 2012-10-01 2012-10-01 false Legal analysis. 1552.211... Legal analysis. As prescribed in 1511.011-76, insert this contract clause when it is determined that the contract involves legal analysis. Legal Analysis (APR 1984) The Contractor shall furnish to the Project...
48 CFR 1552.211-76 - Legal analysis.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Legal analysis. 1552.211... Legal analysis. As prescribed in 1511.011-76, insert this contract clause when it is determined that the contract involves legal analysis. Legal Analysis (APR 1984) The Contractor shall furnish to the Project...
ERIC Educational Resources Information Center
Grochowalski, Joseph H.
2015-01-01
Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…
Algal Biofuels Techno-Economic Analysis | Bioenergy | NREL
To promote an understanding of the challenges and opportunities unique to microalgae, NREL's Algae Techno-Economic Analysis group focuses on techno-economic analysis (TEA) for the production and conversion of algal biomass into
14 CFR 1260.145 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Cost and price analysis. 1260.145 Section... price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
38 CFR 49.45 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Cost and price analysis... price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
40 CFR 30.45 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Cost and price analysis. 30.45 Section... price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis
NASA Technical Reports Server (NTRS)
1972-01-01
An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.
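The "mixed integer programming formulation" listed above can be illustrated with a deliberately tiny stand-in (the data and names below are invented for illustration; the study's actual formulation is not reproduced): choose which candidate missions a tug flies so as to maximize total value under a cost ceiling. For a handful of 0/1 decisions, plain enumeration suffices in place of a MIP solver:

```python
from itertools import product

def best_mission_set(values, costs, budget):
    """Enumerate all 0/1 mission selections and return (best value,
    selection) subject to a total-cost constraint -- a brute-force
    stand-in for a mixed integer program."""
    best_value, best_pick = 0.0, ()
    for pick in product((0, 1), repeat=len(values)):
        cost = sum(c for c, p in zip(costs, pick) if p)
        value = sum(v for v, p in zip(values, pick) if p)
        if cost <= budget and value > best_value:
            best_value, best_pick = value, pick
    return best_value, best_pick

# Three candidate missions under a budget of 10 cost units:
# the best feasible set is missions 1 and 2 (value 11, cost 10).
value, pick = best_mission_set([6.0, 5.0, 4.0], [6.0, 4.0, 3.0], 10.0)
```

A real formulation would add continuous variables (propellant loads, schedules) alongside the binary ones, which is what makes the program "mixed" integer and requires a branch-and-bound solver rather than enumeration.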
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
14 CFR 417.227 - Toxic release hazard analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Toxic release hazard analysis. 417.227..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.227 Toxic release hazard analysis. A flight safety analysis must establish flight commit criteria that protect the public from any...
7 CFR 160.17 - Laboratory analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Laboratory analysis. 160.17 Section 160.17 Agriculture... STANDARDS FOR NAVAL STORES Methods of Analysis, Inspection, Sampling and Grading § 160.17 Laboratory analysis. The analysis and laboratory testing of naval stores shall be conducted, so far as is practicable...
48 CFR 215.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Proposal analysis... Contract Pricing 215.404-1 Proposal analysis techniques. (1) Follow the procedures at PGI 215.404-1 for proposal analysis. (2) For spare parts or support equipment, perform an analysis of— (i) Those line items...
32 CFR 989.8 - Analysis of alternatives.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Analysis of alternatives. 989.8 Section 989.8... ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.8 Analysis of alternatives. (a) The Air Force must analyze... of reasonable alternatives, it may limit alternatives selected for detailed environmental analysis to...
32 CFR 989.37 - Procedures for analysis abroad.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Procedures for analysis abroad. 989.37 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.37 Procedures for analysis abroad. Procedures for analysis of environmental actions abroad are contained in 32 CFR part 187. That directive provides...
7 CFR 160.17 - Laboratory analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Laboratory analysis. 160.17 Section 160.17 Agriculture... STANDARDS FOR NAVAL STORES Methods of Analysis, Inspection, Sampling and Grading § 160.17 Laboratory analysis. The analysis and laboratory testing of naval stores shall be conducted, so far as is practicable...
16 CFR § 1000.28 - Directorate for Economic Analysis.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Directorate for Economic Analysis. § 1000... ORGANIZATION AND FUNCTIONS § 1000.28 Directorate for Economic Analysis. The Directorate for Economic Analysis, which is managed by the Associate Executive Director for Economic Analysis, is responsible for providing...
Cabinetmaker. Occupational Analysis Series.
ERIC Educational Resources Information Center
Chinien, Chris; Boutin, France
This document contains the analysis of the occupation of cabinetmaker, or joiner, that is accepted by the Canadian Council of Directors as the national standard for the occupation. The front matter preceding the analysis includes exploration of the development of the analysis, structure of the analysis, validation method, scope of the cabinetmaker…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-06
... Rulemaking: State-78, Risk Analysis and Management Records SUMMARY: Notice is hereby given that the... portions of the Risk Analysis and Management (RAM) Records, State-78, system of records contain criminal...) * * * (2) * * * Risk Analysis and Management Records, STATE-78. * * * * * (b) * * * (1) * * * Risk Analysis...
40 CFR 92.131 - Smoke, data analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Smoke, data analysis. 92.131 Section... analysis. The following procedure shall be used to analyze the smoke test data: (a) Locate each throttle... performed by direct analysis of the recorder traces, or by computer analysis of data collected by automatic...
1982-08-23
LUBRICATION, FAILURE PROGRESSION MONITORING, OIL ANALYSIS, FAILURE ANALYSIS, TRIBOLOGY, WEAR DEBRIS ANALYSIS, WEAR REGIMES, DIAGNOSTICS, BENCH TESTING, FERROGRAPHY ...Spectrometric Oil Analysis ... 400 G. Analytical Ferrography ... 411 NAEC-92-153 TABLE OF CONTENTS (Continued)...of ferrography entry deposit micrographs of these sequences, which can be directly related to sample debris concentration levels. These micrographs
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system is...
14 CFR 417.221 - Time delay analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.221 Time delay analysis. (a) General. A flight safety analysis must include a time delay analysis that establishes the mean elapsed time between the violation of a flight termination rule and the time when the flight safety system is...
Systems Analysis and Integration Publications | Transportation Research |
NREL publishes technical reports, fact sheets, and other documents about its systems analysis and integration work, covering topics such as vehicle data, vehicle analysis, vehicle energy, vehicle modeling, vehicle simulation, and wireless power transfer.
39 CFR 3002.12 - Office of Rates, Analysis, and Planning.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 39 Postal Service 1 2011-07-01 2011-07-01 false Office of Rates, Analysis, and Planning. 3002.12... Rates, Analysis, and Planning. (a) The Office of Rates, Analysis, and Planning is responsible for technical (as opposed to legal) analysis and the formulation of policy recommendations for the Commission...
High-Level Overview of Data Needs for RE Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Anthony
2016-12-22
This presentation provides a high-level overview of analysis topics and associated data needs. Types of renewable energy analysis are grouped into two buckets: first, analysis of renewable energy potential, and second, analysis for other goals. Data requirements are similar and build upon one another.
Lipid Analysis: Isolation, separation, identification and lipidomic analysis - Fourth Edition
USDA-ARS?s Scientific Manuscript database
Review of the book Lipid Analysis: Isolation, Separation, Identification and Lipidomic Analysis, Fourth Edition, by W.W. Christie and X. Han, 2010. William W. Christie is considered by many to be the most prominent international authority on lipid analysis. The co-author, Dr. Xianlin Han, is a pion...
40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.
Code of Federal Regulations, 2011 CFR
2011-07-01
... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...
43 CFR 46.135 - Incorporation of referenced documents into NEPA analysis.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the analysis at hand. (b) Citations of specific information or analysis from other source documents... NEPA analysis. 46.135 Section 46.135 Public Lands: Interior Office of the Secretary of the Interior... Quality § 46.135 Incorporation of referenced documents into NEPA analysis. (a) The Responsible Official...
40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.
Code of Federal Regulations, 2013 CFR
2013-07-01
... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...
40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.
Code of Federal Regulations, 2012 CFR
2012-07-01
... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...
39 CFR 3002.12 - Office of Rates, Analysis, and Planning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Office of Rates, Analysis, and Planning. 3002.12... Rates, Analysis, and Planning. (a) The Office of Rates, Analysis, and Planning is responsible for technical (as opposed to legal) analysis and the formulation of policy recommendations for the Commission...
40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.
Code of Federal Regulations, 2014 CFR
2014-07-01
... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of Environment... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal...
43 CFR 46.135 - Incorporation of referenced documents into NEPA analysis.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the analysis at hand. (b) Citations of specific information or analysis from other source documents... NEPA analysis. 46.135 Section 46.135 Public Lands: Interior Office of the Secretary of the Interior... Quality § 46.135 Incorporation of referenced documents into NEPA analysis. (a) The Responsible Official...
43 CFR 46.135 - Incorporation of referenced documents into NEPA analysis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the analysis at hand. (b) Citations of specific information or analysis from other source documents... NEPA analysis. 46.135 Section 46.135 Public Lands: Interior Office of the Secretary of the Interior... Quality § 46.135 Incorporation of referenced documents into NEPA analysis. (a) The Responsible Official...
43 CFR 46.135 - Incorporation of referenced documents into NEPA analysis.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the analysis at hand. (b) Citations of specific information or analysis from other source documents... NEPA analysis. 46.135 Section 46.135 Public Lands: Interior Office of the Secretary of the Interior... Quality § 46.135 Incorporation of referenced documents into NEPA analysis. (a) The Responsible Official...
43 CFR 46.135 - Incorporation of referenced documents into NEPA analysis.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the analysis at hand. (b) Citations of specific information or analysis from other source documents... NEPA analysis. 46.135 Section 46.135 Public Lands: Interior Office of the Secretary of the Interior... Quality § 46.135 Incorporation of referenced documents into NEPA analysis. (a) The Responsible Official...
32 CFR 989.38 - Requirements for analysis abroad.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Requirements for analysis abroad. 989.38 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.38 Requirements for analysis abroad. (a) The EPF will generally perform the same functions for analysis of actions abroad that it performs in the...
32 CFR 989.38 - Requirements for analysis abroad.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 6 2012-07-01 2012-07-01 false Requirements for analysis abroad. 989.38 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.38 Requirements for analysis abroad. (a) The EPF will generally perform the same functions for analysis of actions abroad that it performs in the...
32 CFR 989.38 - Requirements for analysis abroad.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 6 2013-07-01 2013-07-01 false Requirements for analysis abroad. 989.38 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.38 Requirements for analysis abroad. (a) The EPF will generally perform the same functions for analysis of actions abroad that it performs in the...
32 CFR 989.38 - Requirements for analysis abroad.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 6 2014-07-01 2014-07-01 false Requirements for analysis abroad. 989.38 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.38 Requirements for analysis abroad. (a) The EPF will generally perform the same functions for analysis of actions abroad that it performs in the...
32 CFR 989.38 - Requirements for analysis abroad.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 6 2011-07-01 2011-07-01 false Requirements for analysis abroad. 989.38 Section... PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.38 Requirements for analysis abroad. (a) The EPF will generally perform the same functions for analysis of actions abroad that it performs in the...
Text analysis methods, text analysis apparatuses, and articles of manufacture
Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M
2014-10-28
Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
2004-01-01
Cognitive Task Analysis. Abstract: As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base...capabilities and intent. Cognitive Task Analysis (CTA) is an extensive, detailed look at tasks and subtasks performed by a...Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis, edited by J. M. Schraagen, S.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The NASA Structural Analysis System (NASTRAN) Model 1 finite element idealization, input data, and detailed analytical results are presented. The data presented include: substructuring analysis for normal modes, plots of member data, plots of symmetric free-free modes, plots of antisymmetric free-free modes, analysis of the wing, analysis of the cargo doors, analysis of the payload, and analysis of the orbiter.
Assessing the validity of discourse analysis: transdisciplinary convergence
NASA Astrophysics Data System (ADS)
Jaipal-Jamani, Kamini
2014-12-01
Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.
Experiment Design and Analysis Guide - Neutronics & Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Misti A Lillo
2014-06-01
The purpose of this guide is to provide a consistent, standardized approach to performing neutronics/physics analysis for experiments inserted into the Advanced Test Reactor (ATR). This document provides neutronics/physics analysis guidance to support experiment design and analysis needs for experiments irradiated in the ATR. This guide addresses neutronics/physics analysis in support of experiment design, experiment safety, and experiment program objectives and goals. The intent of this guide is to provide a standardized approach for performing typical neutronics/physics analyses. Deviation from this guide is allowed provided that neutronics/physics analysis details are properly documented in an analysis report.
Failure Analysis of a Service Tube
NASA Astrophysics Data System (ADS)
Xie, Zhongdong; Cai, Weiguo; Li, Zhenxing; Guan, YiMing; Zhang, Baocheng; Yang, XiaoTong
2017-12-01
One tube used in the primary reformer furnace of a fertilizer plant cracked after two and a half years of service. To find the causes of cracking, the following methods were adopted: chemical composition analysis, macro- and microstructure analysis, penetrant testing, weld analysis, crack and surface damage analysis, mechanical property analysis, high-temperature endurance performance analysis, and stress and wall thickness calculation. The integrated assessment showed that the carbon content of the tube was at the lower limit of the standard range, the effective wall thickness of the tube was too small, and local overheating led to tube cracking in service.
NASA Technical Reports Server (NTRS)
Zoladz, T.; Earhart, E.; Fiorucci, T.
1995-01-01
Utilizing high-frequency data from a highly instrumented rotor assembly, seeded bearing defect signatures are characterized using both conventional linear approaches, such as power spectral density analysis, and recently developed nonlinear techniques such as bicoherence analysis. Traditional low-frequency (less than 20 kHz) analysis and high-frequency envelope analysis of both accelerometer and acoustic emission data are used to recover characteristic bearing distress information buried deeply in acquired data. The successful coupling of newly developed nonlinear signal analysis with recovered wideband envelope data from accelerometers and acoustic emission sensors is the innovative focus of this research.
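The high-frequency envelope analysis described above demodulates bearing-defect impacts from a resonance carrier. A minimal numpy sketch (the Hilbert transform is implemented directly via the FFT; the 2 kHz carrier and 120 Hz modulation are invented test values, not from the cited work):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def envelope_spectrum(x, fs):
    """Envelope spectrum: FFT of the demodulated (envelope) signal.

    Bearing defect impacts amplitude-modulate structural resonances;
    the spectrum of the envelope exposes the defect repetition rate.
    """
    env = np.abs(analytic_signal(x))
    env = env - env.mean()                   # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Synthetic signal: a 2 kHz "resonance" amplitude-modulated at 120 Hz,
# standing in for a periodic bearing-defect excitation.
fs = 20000
t = np.arange(0, 1.0, 1.0 / fs)
x = (1.0 + 0.8 * np.cos(2 * np.pi * 120 * t)) * np.cos(2 * np.pi * 2000 * t)
freqs, spec = envelope_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec)]             # near the 120 Hz modulation
```

The envelope spectrum peaks at the modulation frequency even though the raw spectrum is dominated by the carrier, which is why envelope analysis recovers distress information "buried deeply" in wideband data.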
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of structural fluctuations obtained by a simulation and for classifying the structures into groups. Relaxation mode analysis, on the other hand, has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of the relaxation mode analysis.
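The principal component analysis step described in the abstract above diagonalizes the covariance of coordinate fluctuations over simulation frames. A minimal numpy sketch, assuming snapshots are arranged as a frames-by-coordinates matrix (all names illustrative):

```python
import numpy as np

def principal_components(X, n_modes=2):
    """PCA of simulation snapshots X (frames x coordinates).

    Diagonalizes the coordinate covariance matrix; eigenvectors are
    the principal modes and eigenvalues their mean-square fluctuations.
    Projections give each frame's amplitude along the top modes,
    which is how structures are classified into groups.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]           # largest fluctuation first
    vals, vecs = vals[order], vecs[:, order]
    return vals[:n_modes], vecs[:, :n_modes], Xc @ vecs[:, :n_modes]

# Toy "trajectory": one dominant collective motion plus small noise.
rng = np.random.default_rng(7)
amplitude = rng.standard_normal(500)
X = np.outer(amplitude, [3.0, 0.0, 0.0, 0.0])
X = X + 0.1 * rng.standard_normal(X.shape)
vals, vecs, proj = principal_components(X)
```

PCA captures static fluctuation amplitudes only; relaxation mode analysis additionally resolves the time scales on which those modes decay, which is the dynamic information the paper compares.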
A Review on the Nonlinear Dynamical System Analysis of Electrocardiogram Signal
Nayak, Suraj K; Bit, Arindam; Dey, Anilesh; Mohapatra, Biswajit; Pal, Kunal
2018-01-01
Electrocardiogram (ECG) signal analysis has received special attention of the researchers in the recent past because of its ability to divulge crucial information about the electrophysiology of the heart and the autonomic nervous system activity in a noninvasive manner. Analysis of the ECG signals has been explored using both linear and nonlinear methods. However, the nonlinear methods of ECG signal analysis are gaining popularity because of their robustness in feature extraction and classification. The current study presents a review of the nonlinear signal analysis methods, namely, reconstructed phase space analysis, Lyapunov exponents, correlation dimension, detrended fluctuation analysis (DFA), recurrence plot, Poincaré plot, approximate entropy, and sample entropy along with their recent applications in the ECG signal analysis. PMID:29854361
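Among the nonlinear methods the review above lists is detrended fluctuation analysis (DFA). A minimal first-order DFA sketch in numpy (illustrative only, not the reviewed implementations):

```python
import numpy as np

def dfa_alpha(x, scales):
    """First-order detrended fluctuation analysis scaling exponent.

    Integrate the mean-removed series, cut the profile into windows
    of each scale, remove a linear trend per window, and regress
    log F(n) on log n; the slope is the DFA exponent alpha
    (~0.5 for white noise, ~1.0 for 1/f noise, ~1.5 for a random walk).
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    fluctuations = []
    for n in scales:
        m = len(y) // n
        windows = y[:m * n].reshape(m, n)
        t = np.arange(n)
        resid = [w - np.polyval(np.polyfit(t, w, 1), t) for w in windows]
        fluctuations.append(np.sqrt(np.mean(np.square(resid))))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(42)
noise = rng.standard_normal(10000)
scales = [16, 32, 64, 128, 256]
alpha_white = dfa_alpha(noise, scales)            # near 0.5
alpha_walk = dfa_alpha(np.cumsum(noise), scales)  # near 1.5
```

In ECG work the same computation is applied to inter-beat (RR) interval series, where departures of alpha from healthy ranges serve as a classification feature.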
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) a membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) a general three-dimensional analysis, and (5) analysis of laminated composites. Applications of the methods are made to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.
A catalog of automated analysis methods for enterprise models.
Florez, Hector; Sánchez, Mario; Villalobos, Jorge
2016-01-01
Enterprise models are created for documenting and communicating the structure and state of the Business and Information Technology elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills, and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered research on automated analysis methods for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; thus, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.
Analysis of flexible aircraft longitudinal dynamics and handling qualities. Volume 2: Data
NASA Technical Reports Server (NTRS)
Waszak, M. R.; Schmidt, D. K.
1985-01-01
Two analysis methods are applied to a family of flexible aircraft in order to investigate how and when structural (especially dynamic aeroelastic) effects affect the dynamic characteristics of aircraft. The first type of analysis is an open-loop modal analysis technique. This method considers the effect of modal residue magnitudes on determining vehicle handling qualities. The second method is a pilot-in-the-loop analysis procedure that considers several closed-loop system characteristics. Both analyses indicated that dynamic aeroelastic effects caused a degradation in vehicle tracking performance, based on the evaluation of some simulation results. Volume 2 consists of the presentation of the state variable models of the flexible aircraft configurations used in the analysis applications, mode shape plots for the structural modes, numerical results from the modal analysis, frequency response plots from the pilot-in-the-loop analysis, and a listing of the modal analysis computer program.
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis, in this case static aero-structural analysis, and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single-discipline analysis), the method, as implemented here, may not show significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Model-Based Linkage Analysis of a Quantitative Trait.
Song, Yeunjoo E; Song, Sunah; Schnell, Audrey H
2017-01-01
Linkage Analysis is a family-based method of analysis to examine whether any typed genetic markers cosegregate with a given trait, in this case a quantitative trait. If linkage exists, this is taken as evidence in support of a genetic basis for the trait. Historically, linkage analysis was performed using a binary disease trait, but has been extended to include quantitative disease measures. Quantitative traits are desirable as they provide more information than binary traits. Linkage analysis can be performed using single-marker methods (one marker at a time) or multipoint (using multiple markers simultaneously). In model-based linkage analysis the genetic model for the trait of interest is specified. There are many software options for performing linkage analysis. Here, we use the program package Statistical Analysis for Genetic Epidemiology (S.A.G.E.). S.A.G.E. was chosen because it also includes programs to perform data cleaning procedures and to generate and test genetic models for a quantitative trait, in addition to performing linkage analysis. We demonstrate in detail the process of running the program LODLINK to perform single-marker analysis, and MLOD to perform multipoint analysis using output from SEGREG, where SEGREG was used to determine the best fitting statistical model for the trait.
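The single-marker (two-point) linkage analysis described above is usually summarized by a LOD score. The sketch below is a textbook phase-known LOD computation for a binary setting, not the S.A.G.E. LODLINK or MLOD algorithms (which handle quantitative traits and pedigree likelihoods); all names are illustrative:

```python
import math

def lod_score(n_meioses, n_recombinants, theta):
    """Two-point LOD score for phase-known meioses.

    Likelihood L(theta) = theta^r * (1 - theta)^(n - r);
    LOD = log10 L(theta) - log10 L(0.5), i.e. the evidence for linkage
    at recombination fraction theta versus free recombination.
    """
    r, n = n_recombinants, n_meioses
    def log10_lik(th):
        return r * math.log10(th) + (n - r) * math.log10(1.0 - th)
    return log10_lik(theta) - log10_lik(0.5)

def max_lod(n_meioses, n_recombinants):
    """Maximize the LOD; for this likelihood the MLE is r/n (capped at 0.5)."""
    theta_hat = min(max(n_recombinants / n_meioses, 1e-9), 0.5)
    return theta_hat, lod_score(n_meioses, n_recombinants, theta_hat)
```

For example, 10 informative meioses with zero recombinants give a maximum LOD of about 3.0, the classical threshold for declaring linkage.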
Rethinking vulnerability analysis and governance with emphasis on a participatory approach.
Rossignol, Nicolas; Delvenne, Pierre; Turcanu, Catrinel
2015-01-01
This article draws on vulnerability analysis as it emerged as a complement to classical risk analysis, and it aims at exploring its ability for nurturing risk and vulnerability governance actions. An analysis of the literature on vulnerability analysis allows us to formulate a three-fold critique: first, vulnerability analysis has been treated separately in the natural and the technological hazards fields. This separation prevents vulnerability from unleashing the full range of its potential, as it constrains appraisals into artificial categories and thus already closes down the outcomes of the analysis. Second, vulnerability analysis focused on assessment tools that are mainly quantitative, whereas qualitative appraisal is a key to assessing vulnerability in a comprehensive way and to informing policy making. Third, a systematic literature review of case studies reporting on participatory approaches to vulnerability analysis allows us to argue that participation has been important to address the above, but it remains too closed down in its approach and would benefit from embracing a more open, encompassing perspective. Therefore, we suggest rethinking vulnerability analysis as one part of a dynamic process between opening-up and closing-down strategies, in order to support a vulnerability governance framework. © 2014 Society for Risk Analysis.
Velo and REXAN - Integrated Data Management and High Speed Analysis for Experimental Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleese van Dam, Kerstin; Carson, James P.; Corrigan, Abigail L.
2013-01-10
The Chemical Imaging Initiative at the Pacific Northwest National Laboratory (PNNL) is creating a 'Rapid Experimental Analysis' (REXAN) Framework, based on the concept of reusable component libraries. REXAN allows developers to quickly compose and customize high-throughput analysis pipelines for a range of experiments, and supports the creation of multi-modal analysis pipelines. In addition, PNNL has coupled REXAN with its collaborative data management and analysis environment Velo to create an easy-to-use data management and analysis environment for experimental facilities. This paper discusses the benefits of Velo and REXAN in the context of three examples: (1) PNNL high-resolution mass spectrometry: reducing analysis times from hours to seconds, and enabling the analysis of much larger data samples (100 KB to 40 GB) at the same time; (2) ALS X-ray tomography: reducing analysis times of combined STXM and EM data collected at the ALS from weeks to minutes, decreasing manual work, and increasing the data volumes that can be analysed in a single step; (3) multi-modal nano-scale analysis of STXM and TEM data: providing a semi-automated process for particle detection. The creation of REXAN has significantly shortened the development time for these analysis pipelines. The integration of Velo and REXAN has significantly increased the scientific productivity of the instruments and their users by creating easy-to-use data management and analysis environments with greatly reduced analysis times and improved analysis capabilities.
Bismuth-based electrochemical stripping analysis
Wang, Joseph
2004-01-27
Method and apparatus for trace metal detection and analysis using bismuth-coated electrodes and electrochemical stripping analysis. Both anodic stripping voltammetry and adsorptive stripping analysis may be employed.
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D 2 ) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. 
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
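The diversity-adjusted required information size described in this abstract can be illustrated with a minimal calculation for a continuous outcome. This is a sketch under textbook two-arm sample-size assumptions, not the TSA software's implementation; the function name and default error rates are ours.

```python
from statistics import NormalDist

def required_information_size(delta, sd, alpha=0.05, beta=0.10, diversity=0.0):
    """Diversity-adjusted required information size (total randomised
    participants) for a two-arm meta-analysis of a continuous outcome.

    delta     : minimally relevant difference in means (an assumption)
    sd        : common standard deviation of the outcome
    diversity : the D^2 heterogeneity measure, in [0, 1)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(1 - beta)         # power = 1 - beta
    # classic fixed-effect total sample size for two equal groups
    fixed_effect_n = 4 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    # inflate for between-trial heterogeneity, as in TSA's DARIS
    return fixed_effect_n / (1 - diversity)
```

With delta = 5, sd = 10 this gives roughly 168 participants at 5% alpha and 90% power, and a D² of 0.25 inflates that to roughly 224, showing why a meta-analysis far short of this size is only an interim look at the evidence.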
Exergy: its potential and limitations in environmental science and technology.
Dewulf, Jo; Van Langenhove, Herman; Muys, Bart; Bruers, Stijn; Bakshi, Bhavik R; Grubb, Geoffrey F; Paulus, D M; Sciubba, Enrico
2008-04-01
New technologies, whether renewables-based or not, are confronted with both economic and technical constraints. Their development benefits from considering the basic laws of economics and thermodynamics. With respect to the latter, the exergy concept comes into play. Although its fundamentals, that is, the Second Law of Thermodynamics, were already established in the 1800s, it is only in recent years that the exergy concept has gained more widespread interest in process analysis, typically employed to identify inefficiencies. However, exergy analysis today is implemented far beyond technical analysis; it is also employed in environmental, (thermo)economic, and even sustainability analysis of industrial systems. Because natural ecosystems are also subject to the basic laws of thermodynamics, they too are a subject of exergy analysis. After an introduction to the concept itself, this review focuses on the potential and limitations of the exergy concept in (1) ecosystem analysis, used to describe maximum storage and maximum dissipation of energy flows; (2) industrial system analysis, from single-process analysis to complete process-chain analysis; (3) (thermo)economic analysis, with extended exergy accounting; and (4) environmental impact assessment throughout the whole life cycle, with quantification of the resource intake and emission effects. Apart from technical system analysis, exergy as a tool in environmental impact analysis proves to be perhaps the most mature field of application, particularly with respect to resource and efficiency accounting, one of the major challenges in the development of sustainable technology. Far less mature is the exergy analysis of natural ecosystems and the coupling with economic analysis, where a lively debate is presently going on about the actual merits of an exergy-based approach.
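The process-analysis use of exergy that this review surveys rests on the standard definition of the specific physical (flow) exergy of a stream relative to a dead state. A minimal sketch, with the property values in the example chosen for illustration rather than taken from steam tables:

```python
def flow_exergy(h, s, h0, s0, T0=298.15):
    """Specific physical (flow) exergy of a stream, in kJ/kg.

    h, s   : specific enthalpy [kJ/kg] and entropy [kJ/(kg*K)] of the stream
    h0, s0 : the same properties evaluated at the dead state (T0, p0)
    T0     : dead-state (ambient) temperature [K]
    """
    # ex = (h - h0) - T0 * (s - s0): the maximum work obtainable as the
    # stream is brought reversibly to equilibrium with the environment
    return (h - h0) - T0 * (s - s0)

# Illustrative steam-like numbers: a hot stream (h = 3000, s = 6.5)
# against a 25 C liquid-water dead state (h0 = 104.9, s0 = 0.367)
ex = flow_exergy(3000.0, 6.5, 104.9, 0.367)
```

Comparing exergy destroyed across units of a process chain, rather than energy alone, is what lets the analyses in the review pinpoint where useful work potential is actually lost.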
Inventory of File sref_em.t03z.pgrb212.ctl.grib2
UGRD analysis U-Component of Wind [m/s]; 006 10 m above ground VGRD analysis V-Component of Wind [m/s]; ... 018 250 mb VGRD analysis V-Component of Wind [m/s]; 019 500 mb HGT analysis Geopotential Height [gpm]; 020 500 mb UGRD analysis U-Component of Wind [m/s]; 021 500 mb VGRD analysis V-Component of Wind [m/s]
Inventory of File sref_nmm.t03z.pgrb132.ctl.grib2
Inventory of File sref_nmm.t03z.pgrb221.ctl.grib2
Inventory of File sref_em.t03z.pgrb132.ctl.grib2
Inventory of File sref_nmm.t03z.pgrb243.ctl.grib2
Inventory of File sref_em.t03z.pgrb243.ctl.grib2
Inventory of File sref_em.t03z.pgrb221.ctl.grib2
Inventory of File sref_nmm.t03z.pgrb212.ctl.grib2
Inventory of File sref_nmm.t03z.pgrb216.ctl.grib2
Inventory of File sref_em.t03z.pgrb216.ctl.grib2
Inventory of File nam.t00z.awp21100.tm00.grib2
analysis Pressure Reduced to MSL [Pa]; 002 surface GUST analysis Wind Speed (Gust) [m/s]; 003 100 mb HGT analysis Geopotential Height [gpm]; ... 007.2 100 mb VGRD analysis V-Component of Wind [m/s]; 008 150 mb HGT analysis Geopotential Height [gpm]; ... 012.2 150 mb VGRD analysis V-Component of Wind [m/s]; 013 200 mb HGT analysis Geopotential Height [gpm]
Market Analysis for Nondevelopmental Items
1992-02-01
A252 287. Defense Standardization Program, Market Analysis for Nondevelopmental Items, February 1992. ...market analysis, that task would be much more difficult. This brochure proposes a generic approach to market analysis that can be tailored to a wide... Market Analysis for NDI: WHY DO MARKET ANALYSIS? The
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ...multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.
2009-02-01
range of modal analysis and the high-frequency region of statistical energy analysis is referred to as the mid-frequency range. The corresponding... predictions. The averaging process is consistent with the averaging done in statistical energy analysis for stochastic systems. The FEM will always
Self-analysis and the development of an interpretation.
Campbell, Donald
2017-10-01
In spite of the fact that Freud's self-analysis was at the centre of so many of his discoveries, self-analysis remains a complex, controversial and elusive exercise. While self-analysis is often seen as emerging at the end of an analysis and then used as a criterion in assessing suitability for termination, I try to attend to the patient's resistance to self-analysis throughout an analysis. I take the view that the development of the patient's capacity for self-analysis within the analytic session contributes to the patient's growth and their creative and independent thinking during the analysis, which prepares him or her for a fuller life after the formal analysis ends. The model I present is based on an overlapping of the patient's and the analyst's self-analysis, with recognition and use of the analyst's counter-transference. My focus is on the analyst's self-analysis in response to a particular crisis of not knowing, which results in feeling intellectually and emotionally stuck. This paper is not a case study, but a brief look at the process I went through to arrive at a particular interpretation with a particular patient during a particular session. I concentrate on resistances in which both patient and analyst initially rely upon what is consciously known. Copyright © 2017 Institute of Psychoanalysis.
Analysis Center. Areas of Expertise: Techno-Economic Analysis; Mechanical design; 3D modeling/CAD; Finite element analysis (FEA); Wave energy conversion; Thermal power cycle analysis. Research Interests: Cost
Automating a Detailed Cognitive Task Analysis for Structuring Curriculum
1991-06-01
Cognitive Task Analysis For... cognitive task analysis techniques. A rather substantial literature has been amassed relative to knowledge acquisition, but only seven... references have been found in a database search of literature specifically addressing cognitive task analysis. A variety of forms of cognitive task analysis
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
Meta-analysis genomewide association of pork quality traits: ultimate pH and shear force
USDA-ARS?s Scientific Manuscript database
It is common practice to perform genome-wide association analysis (GWA) using a genomic evaluation model of a single population. Joint analysis of several populations is more difficult. An alternative to joint analysis could be the meta-analysis (MA) of several GWA from independent genomic evaluatio...
24 CFR 84.45 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Cost and price analysis. 84.45....45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various...
41 CFR 105-72.505 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Cost and price analysis... § 105-72.505 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
32 CFR 32.45 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 1 2010-07-01 2010-07-01 false Cost and price analysis. 32.45 Section 32.45... price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
43 CFR 12.945 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Cost and price analysis. 12.945 Section 12... Requirements § 12.945 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
12 CFR 703.6 - Credit analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Credit analysis. 703.6 Section 703.6 Banks and... ACTIVITIES § 703.6 Credit analysis. A Federal credit union must conduct and document a credit analysis on an... Federal Deposit Insurance Corporation. A Federal credit union must update this analysis at least annually...
12 CFR 703.6 - Credit analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Credit analysis. 703.6 Section 703.6 Banks and... ACTIVITIES § 703.6 Credit analysis. A Federal credit union must conduct and document a credit analysis on an... Federal Deposit Insurance Corporation. A Federal credit union must update this analysis at least annually...
Task Analysis - Its Relation to Content Analysis.
ERIC Educational Resources Information Center
Gagne, Robert M.
Task analysis is a procedure having the purpose of identifying different kinds of performances which are outcomes of learning, in order to make possible the specification of optimal instructional conditions for each kind of outcome. Task analysis may be related to content analysis in two different ways: (1) it may be used to identify the probably…
The Empirical Review of Meta-Analysis Published in Korea
ERIC Educational Resources Information Center
Park, Sunyoung; Hong, Sehee
2016-01-01
Meta-analysis is a statistical method that is increasingly utilized to combine and compare the results of previous primary studies. However, because of the lack of comprehensive guidelines for how to use meta-analysis, many meta-analysis studies have failed to consider important aspects, such as statistical programs, power analysis, publication…
40 CFR 1502.23 - Cost-benefit analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Cost-benefit analysis. 1502.23 Section... § 1502.23 Cost-benefit analysis. If a cost-benefit analysis relevant to the choice among environmentally... compliance with section 102(2)(B) of the Act the statement shall, when a cost-benefit analysis is prepared...
40 CFR 1502.23 - Cost-benefit analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Cost-benefit analysis. 1502.23 Section... § 1502.23 Cost-benefit analysis. If a cost-benefit analysis relevant to the choice among environmentally... compliance with section 102(2)(B) of the Act the statement shall, when a cost-benefit analysis is prepared...
Advantages of Social Network Analysis in Educational Research
ERIC Educational Resources Information Center
Ushakov, K. M.; Kukso, K. N.
2015-01-01
Currently one of the main tools for the large scale studies of schools is statistical analysis. Although it is the most common method and it offers greatest opportunities for analysis, there are other quantitative methods for studying schools, such as network analysis. We discuss the potential advantages that network analysis has for educational…
21 CFR 2.19 - Methods of analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Methods of analysis. 2.19 Section 2.19 Food and... ADMINISTRATIVE RULINGS AND DECISIONS General Provisions § 2.19 Methods of analysis. Where the method of analysis... enforcement programs to utilize the methods of analysis of the AOAC INTERNATIONAL (AOAC) as published in the...
40 CFR Table 5 to Subpart Jjjjjj... - Fuel Analysis Requirements
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Fuel Analysis Requirements 5 Table 5... Part 63—Fuel Analysis Requirements As stated in § 63.11213, you must comply with the following requirements for fuel analysis testing for affected sources: To conduct a fuel analysis for the following...
Position Analysis Questionnaire (PAQ). This job analysis instrument consists of 187 job elements organized into six divisions. In the analysis of a job... with the PAQ, the relevance of the individual elements to the job is rated using any of several rating scales, such as importance or time.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Confirmatory Thermal-Hydraulic Analysis To Support Specific Success Criteria in the Standardized Plant Analysis Risk Models--Surry and Peach Bottom, Draft...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-20
... determined that the quantitative analysis of the energy consumption of buildings built to Standard 90.1-2007... Determination 3. Public Comments Regarding the Preliminary Determination II. Summary of the Comparative Analysis... Analysis B. Quantitative Analysis 1. Discussion of Whole Building Energy Analysis 2. Results of Whole...
41 CFR 60-2.12 - Job group analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Job group analysis. 60-2... group analysis. (a) Purpose: A job group analysis is a method of combining job titles within the... employed. (b) In the job group analysis, jobs at the establishment with similar content, wage rates, and...
34 CFR 477.1 - What is the State Program Analysis Assistance and Policy Studies Program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ANALYSIS ASSISTANCE AND POLICY STUDIES PROGRAM General § 477.1 What is the State Program Analysis Assistance and Policy Studies Program? The State Program Analysis Assistance and Policy Studies Program... 34 Education 3 2011-07-01 2011-07-01 false What is the State Program Analysis Assistance and...
34 CFR 477.1 - What is the State Program Analysis Assistance and Policy Studies Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ANALYSIS ASSISTANCE AND POLICY STUDIES PROGRAM General § 477.1 What is the State Program Analysis Assistance and Policy Studies Program? The State Program Analysis Assistance and Policy Studies Program... 34 Education 3 2010-07-01 2010-07-01 false What is the State Program Analysis Assistance and...
Common pitfalls in statistical analysis: Linear regression analysis
Aggarwal, Rakesh; Ranganathan, Priya
2017-01-01
In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis. PMID:28447022
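The distinction this abstract draws, that regression predicts one continuous variable from another rather than merely measuring association, can be shown with a minimal least-squares fit. A self-contained sketch (the function name is ours):

```python
def simple_ols(x, y):
    """Ordinary least squares fit of y = a + b*x for paired samples.
    Returns (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)                      # spread of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # co-variation
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 compares residual scatter around the line to total scatter
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, 1 - ss_res / ss_tot
```

The pitfalls the article discusses (non-linearity, influential outliers, extrapolation) are exactly the ways data can violate what this formula silently assumes, which is why residuals should always be inspected after such a fit.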
Meta-Analysis for Primary and Secondary Data Analysis: The Super-Experiment Metaphor.
ERIC Educational Resources Information Center
Jackson, Sally
1991-01-01
Considers the relation between meta-analysis statistics and analysis of variance statistics. Discusses advantages and disadvantages as a primary data analysis tool. Argues that the two approaches are partial paraphrases of one another. Advocates an integrative approach that introduces the best of meta-analytic thinking into primary analysis…
An Overview of Discourse Analysis and Its Usefulness in TESOL.
ERIC Educational Resources Information Center
Milne, Geraldine Veronica
This paper provides an overview of discourse analysis from a linguistic point of view, discussing why it is relevant to Teaching English to Speakers of Other Languages (TESOL). It focuses on the following: discourse and discourse analysis; discourse analysis and TESOL; approaches to discourse analysis; systemic functional linguistics; theme and…
NASA Astrophysics Data System (ADS)
De, Anupam; Bandyopadhyay, Gautam; Chakraborty, B. N.
2010-10-01
Financial ratio analysis is an important and commonly used tool for analyzing the financial health of a firm. Quite a large number of financial ratios, which can be categorized in different groups, are used for this analysis. However, to reduce the number of ratios used for financial analysis and to regroup them on the basis of empirical evidence, the Factor Analysis technique has been used successfully by different researchers during the last three decades. In this study, Factor Analysis has been applied to audited financial data of Indian cement companies over a period of 10 years. The sample companies are listed on the stock exchanges of India (BSE and NSE). Factor Analysis, conducted over 44 variables (financial ratios) grouped in 7 categories, resulted in 11 underlying categories (factors). Each factor is named in an appropriate manner considering the factor loadings and constituent variables (ratios). Representative ratios are identified for each such factor. To validate the results of the Factor Analysis and to reach a final conclusion regarding the representative ratios, Cluster Analysis was performed.
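The core of the regrouping step above is extracting common factors from the correlation matrix of the ratios. As a rough sketch only (power iteration for the dominant eigenvector, standing in for the first factor's loading pattern; no rotation, communalities, or the authors' actual 44-ratio data; the toy series are invented):

```python
def correlation_matrix(series):
    """Pearson correlation matrix of a list of equal-length series."""
    n = len(series[0])
    means = [sum(s) / n for s in series]
    sds = [sum((v - m) ** 2 for v in s) ** 0.5 for s, m in zip(series, means)]
    k = len(series)

    def corr(i, j):
        num = sum((a - means[i]) * (b - means[j])
                  for a, b in zip(series[i], series[j]))
        return num / (sds[i] * sds[j])

    return [[corr(i, j) for j in range(k)] for i in range(k)]

def first_factor_loadings(R, iters=200):
    """Dominant eigenvector of R by power iteration -- a crude stand-in
    for the loading pattern of the first extracted factor."""
    v = [1.0] * len(R)
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(len(R))) for i in range(len(R))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Ratios that load together on the same factor (same sign, large magnitude) are candidates to be represented by a single ratio, which is the reduction the study performs on its 44 variables.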
NASA Technical Reports Server (NTRS)
1972-01-01
The technical and cost analysis that was performed for the payload system operations analysis is presented. The technical analysis consists of the operations for the payload/shuttle and payload/tug, and the spacecraft analysis which includes sortie, automated, and large observatory type payloads. The cost analysis includes the costing tradeoffs of the various payload design concepts and traffic models. The overall objectives of this effort were to identify payload design and operational concepts for the shuttle which will result in low cost design, and to examine the low cost design concepts to identify applicable design guidelines. The operations analysis examined several past and current NASA and DoD satellite programs to establish a shuttle operations model. From this model the analysis examined the payload/shuttle flow and determined facility concepts necessary for effective payload/shuttle ground operations. The study of the payload/tug operations was an examination of the various flight timelines for missions requiring the tug.
Statistical analysis of life history calendar data.
Eerola, Mervi; Helske, Satu
2016-04-01
The life history calendar is a data-collection tool for obtaining reliable retrospective data about life events. To illustrate the analysis of such data, we compare model-based probabilistic event history analysis with the model-free data mining method, sequence analysis. In event history analysis, we estimate, instead of transition hazards, the cumulative prediction probabilities of life events over the entire trajectory. In sequence analysis, we compare several dissimilarity metrics and contrast data-driven and user-defined substitution costs. As an example, we study young adults' transition to adulthood as a sequence of events in three life domains. The events define the multistate event history model and the parallel life domains in multidimensional sequence analysis. The relationship between life trajectories and excess depressive symptoms in middle age is further studied by their joint prediction in the multistate model and by regressing the symptom scores on individual-specific cluster indices. The two approaches complement each other in life course analysis; sequence analysis can effectively find typical and atypical life patterns, while event history analysis is needed for causal inquiries. © The Author(s) 2012.
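The user-defined substitution costs contrasted in this abstract enter through a dissimilarity between state sequences. A minimal sketch of classic optimal matching (the standard edit-distance dissimilarity in sequence analysis; the state names, costs, and the dict-based cost table are illustrative, not the authors' metric):

```python
def optimal_matching(seq_a, seq_b, sub_cost, indel=1.0):
    """Edit distance between two life-state sequences with user-defined
    substitution costs, computed by dynamic programming.

    sub_cost : dict mapping frozenset({state1, state2}) -> cost;
               unlisted pairs default to 2 * indel.
    """
    la, lb = len(seq_a), len(seq_b)
    d = [[0.0] * (lb + 1) for _ in range(la + 1)]
    for i in range(1, la + 1):          # cost of deleting a prefix
        d[i][0] = i * indel
    for j in range(1, lb + 1):          # cost of inserting a prefix
        d[0][j] = j * indel
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            a, b = seq_a[i - 1], seq_b[j - 1]
            s = 0.0 if a == b else sub_cost.get(frozenset((a, b)), 2 * indel)
            d[i][j] = min(d[i - 1][j] + indel,      # delete from seq_a
                          d[i][j - 1] + indel,      # insert into seq_a
                          d[i - 1][j - 1] + s)      # substitute states
    return d[la][lb]
```

A pairwise matrix of such distances is what the clustering step operates on, so the choice between data-driven and user-defined costs directly shapes which life patterns count as "typical".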
FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.
Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei
2016-10-10
Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.
Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun
2018-01-01
To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis results in real time, descriptive statistics, time-series analysis and multivariate regression analysis were implemented online with SQL and visual tools on top of the database software. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each part of the data, with interactive connections to the database; and generates export sheets that can be loaded directly into R, SAS and SPSS. The information system for air pollution and health impact monitoring thus implements the statistical analysis function online and can provide real-time analysis results to its users.
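Generating summary tables directly with SQL, as described above, can be sketched with Python's built-in sqlite3 module. The schema and readings are hypothetical stand-ins for the monitoring system's exposure data:

```python
import sqlite3

# Hypothetical schema: daily PM2.5 exposure readings per city.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exposure (city TEXT, day TEXT, pm25 REAL)")
conn.executemany("INSERT INTO exposure VALUES (?, ?, ?)", [
    ("A", "2018-01-01", 35.0), ("A", "2018-01-02", 55.0),
    ("B", "2018-01-01", 20.0), ("B", "2018-01-02", 30.0),
])

# A single GROUP BY query produces the per-city summary table online;
# the same result set could feed a trend chart or an export sheet.
rows = conn.execute(
    "SELECT city, COUNT(*), AVG(pm25), MIN(pm25), MAX(pm25) "
    "FROM exposure GROUP BY city ORDER BY city"
).fetchall()
for city, n, avg, lo, hi in rows:
    print(city, n, avg, lo, hi)
```

Pushing the aggregation into the database like this is what makes real-time summaries feasible: only the small result table, not the raw readings, crosses into the reporting layer.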
NASA Astrophysics Data System (ADS)
Belleri, Basayya K.; Kerur, Shravankumar B.
2018-04-01
A computer-oriented procedure for solving the dynamic force analysis problem for general planar mechanisms is presented. This paper provides position, velocity, acceleration and force analysis of a six-bar mechanism using a variable-topology approach. The six-bar mechanism is constructed by joining two simple four-bar mechanisms. Initially, the position, velocity and acceleration of the first four-bar mechanism are determined from the input parameters. The outputs (angular displacement, velocity and acceleration of the rocker) of the first four-bar mechanism are used as input parameters for the second four-bar mechanism, whose position, velocity, acceleration and forces are then analyzed. With the output parameters of the second four-bar mechanism, the force analysis of the first four-bar mechanism is carried out.
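The position-analysis step for each constituent four-bar can be sketched in closed form via vector-loop closure. This is a generic textbook formulation, not the paper's own procedure; link lengths, the coordinate frame, and the branch flag are our assumptions:

```python
from math import atan2, acos, cos, sin, hypot

def fourbar_position(r1, r2, r3, r4, theta2, open_branch=True):
    """Position analysis of a planar four-bar linkage by vector-loop closure.

    Ground pivots at (0, 0) and (r1, 0); r2 = crank, r3 = coupler,
    r4 = rocker. Returns (theta3, theta4) in radians for the chosen circuit.
    """
    ax, ay = r2 * cos(theta2), r2 * sin(theta2)       # crank pin A
    fx, fy = ax - r1, ay                              # vector from O4 to A
    f = hypot(fx, fy)                                 # diagonal length
    # interior angle at O4 between diagonal and rocker (law of cosines)
    psi = acos((f * f + r4 * r4 - r3 * r3) / (2 * f * r4))
    base = atan2(fy, fx)                              # direction O4 -> A
    theta4 = base + psi if open_branch else base - psi
    bx, by = r1 + r4 * cos(theta4), r4 * sin(theta4)  # rocker pin B
    theta3 = atan2(by - ay, bx - ax)                  # coupler direction A -> B
    return theta3, theta4
```

In the variable-topology scheme the abstract describes, theta4 of the first loop (and its derivatives) would then be fed in as the input crank angle of the second loop.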
NASA Technical Reports Server (NTRS)
Ruf, Joseph; Holt, James B.; Canabal, Francisco
1999-01-01
This paper presents the status of analyses on three Rocket Based Combined Cycle configurations underway in the Applied Fluid Dynamics Analysis Group (TD64). TD64 is performing computational fluid dynamics analysis on a Penn State RBCC test rig, the proposed Draco axisymmetric RBCC engine and the Trailblazer engine. The intent of the analysis on the Penn State test rig is to benchmark the Finite Difference Navier Stokes code for ejector mode fluid dynamics. The Draco engine analysis is a trade study to determine the ejector mode performance as a function of three engine design variables. The Trailblazer analysis is to evaluate the nozzle performance in scramjet mode. Results to date of each analysis are presented.
Analysis of Phenix end-of-life natural convection test with the MARS-LMR code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, H. Y.; Ha, K. S.; Lee, K. L.
The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to obtain reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, from 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test analysis and the blind post-test analysis, the modeling of heat structures in the hot pool and cold pool, steel structures in the core, heat loss from roof and vessel, and the flow path at the core outlet were reinforced in the final analysis. The results of the final post-test analysis could be characterized into three different phases. In the early phase, MARS-LMR simulated the heat-up process correctly due to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code reproduced the decrease of core outlet temperature successfully. Finally, in the later phase, the increase of heat removal by the SG opening was well predicted with the MARS-LMR code. (authors)
Tools for T-RFLP data analysis using Excel.
Fredriksson, Nils Johan; Hermansson, Malte; Wilén, Britt-Marie
2014-11-08
Terminal restriction fragment length polymorphism (T-RFLP) analysis is a DNA-fingerprinting method that can be used for comparisons of the microbial community composition in a large number of samples. There is no consensus on how T-RFLP data should be treated and analyzed before comparisons between samples are made, and several different approaches have been proposed in the literature. The analysis of T-RFLP data can be cumbersome and time-consuming, and for large datasets manual data analysis is not feasible. The currently available tools for automated T-RFLP analysis, although valuable, offer little flexibility, and few, if any, options regarding what methods to use. To enable comparisons and combinations of different data treatment methods an analysis template and an extensive collection of macros for T-RFLP data analysis using Microsoft Excel were developed. The Tools for T-RFLP data analysis template provides procedures for the analysis of large T-RFLP datasets including application of a noise baseline threshold and setting of the analysis range, normalization and alignment of replicate profiles, generation of consensus profiles, normalization and alignment of consensus profiles and final analysis of the samples including calculation of association coefficients and diversity index. The procedures are designed so that in all analysis steps, from the initial preparation of the data to the final comparison of the samples, there are various different options available. The parameters regarding analysis range, noise baseline, T-RF alignment and generation of consensus profiles are all given by the user and several different methods are available for normalization of the T-RF profiles. In each step, the user can also choose to base the calculations on either peak height data or peak area data. The Tools for T-RFLP data analysis template enables an objective and flexible analysis of large T-RFLP datasets in a widely used spreadsheet application.
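The core per-profile steps described above (noise baseline, analysis range, normalization to relative abundances) can be sketched outside the Excel template. The dict-based profile representation and the default thresholds are our assumptions; in the actual template all of these parameters are user-chosen:

```python
def normalize_profile(profile, noise_threshold=50.0, range_min=50, range_max=900):
    """Clean and normalise one T-RFLP profile.

    profile : dict mapping terminal-fragment length (bp) -> peak height
              (peak areas could be used instead, as in the template).
    Peaks outside the analysis range or below the noise baseline are
    dropped; the remainder are rescaled to relative abundances summing to 1.
    """
    kept = {bp: h for bp, h in profile.items()
            if range_min <= bp <= range_max and h >= noise_threshold}
    total = sum(kept.values())
    if total == 0:
        return {}
    return {bp: h / total for bp, h in kept.items()}
```

After this step, aligned relative-abundance profiles from different samples can be compared with association coefficients or diversity indices, as the template's final analysis stage does.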
LOFT L2-3 blowdown experiment safety analyses D, E, and G; LOCA analyses H, K, K1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perryman, J.L.; Keeler, C.D.; Saukkoriipi, L.O.
1978-12-01
Three calculations using conservative off-nominal conditions and evaluation model options were made using RELAP4/MOD5 for blowdown-refill and RELAP4/MOD6 for reflood for Loss-of-Fluid Test Experiment L2-3 to support the experiment safety analysis effort. The three analyses are as follows: Analysis D: Loss of commercial power during Experiment L2-3; Analysis E: Hot leg quick-opening blowdown valve (QOBV) does not open during Experiment L2-3; and Analysis G: Cold leg QOBV does not open during Experiment L2-3. In addition, the results of three LOFT loss-of-coolant accident (LOCA) analyses using a power of 56.1 MW and a primary coolant system flow rate of 3.6 million lbm/hr are presented: Analysis H: Intact loop 200% hot leg break, emergency core cooling (ECC) system B unavailable; Analysis K: Pressurizer relief valve stuck in the open position, ECC system B unavailable; and Analysis K1: Same as Analysis K, but using a primary coolant system flow rate of 1.92 million lbm/hr (L2-4 pre-LOCE flow rate). For Analysis D, the maximum cladding temperature reached was 1762°F, 22 sec into reflood. In Analyses E and G, the blowdowns were slower because one of the QOBVs did not function. The maximum cladding temperature reached in Analysis E was 1700°F, 64.7 sec into reflood; for Analysis G, it was 1300°F at the start of reflood. For Analysis H, the maximum cladding temperature reached was 1825°F, 0.01 sec into reflood. Analysis K was a very slow blowdown, and the cladding temperatures followed the saturation temperature of the system. The results of Analysis K1 were nearly identical to those of Analysis K; system depressurization was not affected by the primary coolant system flow rate.
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluated the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement and the two-polarity sentiment analysis the smallest. We conclude that PSOGO-Senti achieves greater improvement on more complicated sentiment analysis tasks. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method; from this comparison, we found that PSOGO-Senti is more suitable for improving a difficult multi-polarity sentiment analysis problem.
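The core of the approach is a standard particle swarm optimizer searching jointly over feature dimensions and SVM parameters. Below is a generic, standard-library-only sketch; the paper's actual objective (cross-validated SVM accuracy over an IG-ranked feature subset) is replaced by a toy quadratic function, and all parameter values are illustrative assumptions, not the authors' settings:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimization over a box-bounded search space.
    Each particle tracks its personal best; the swarm tracks a global best.
    In a PSOGO-Senti-like setup, a particle position would encode the
    feature-subset size and the SVM parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia plus cognitive (personal best) and social (global best) pulls
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move and clamp to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective with its minimum at (3, -2)
best, best_val = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                              bounds=[(-10, 10), (-10, 10)])
```

In the sentiment-analysis setting, `objective` would train and evaluate the classifier for each candidate position rather than evaluate a formula.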
Environmental analysis of higher brominated diphenyl ethers and decabromodiphenyl ethane.
Kierkegaard, Amelie; Sellström, Ulla; McLachlan, Michael S
2009-01-16
Methods for environmental analysis of higher brominated diphenyl ethers (PBDEs), in particular decabromodiphenyl ether (BDE209), and the recently discovered environmental contaminant decabromodiphenyl ethane (deBDethane) are reviewed. The extensive literature on analysis of BDE209 has identified several critical issues, including contamination of the sample, degradation of the analyte during sample preparation and GC analysis, and the selection of appropriate detection methods and surrogate standards. The limited experience with the analysis of deBDethane suggests that there are many commonalities with BDE209. The experience garnered from the analysis of BDE209 over the last 15 years will greatly facilitate progress in the analysis of deBDethane.
NASA Technical Reports Server (NTRS)
2001-01-01
This document presents the full-scale analyses of the CFD RSRM. The RSRM model was developed with a 20 second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) Completion of 3-D Analysis of the hot air nozzle manifold; (7) Bates Motor distributed combustion test case; and (8) Three Dimensional Polysulfide Bump Analysis.
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is an independent and separate branch of enzymology and analytical chemistry. It has become one of the most important methodologies used in food analysis. Enzymatic analysis allows the quick, reliable determination of many food ingredients. Often these contents cannot be determined by conventional methods, or if methods are available, they are determined only with limited accuracy. Today, methods of enzymatic analysis are being increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industry, scientific and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, safety of reagents.
The Analysis of a Diet for the Human Being and the Companion Animal using Big Data in 2016
Jung, Eun-Jin; Kim, Young-Suk; Choi, Jung-Wa; Kang, Hye Won; Chang, Un-Jae
2017-10-01
The purpose of this study was to investigate the diet tendencies of humans and companion animals using big data analysis. The keyword data on human and companion animal diets were collected from the portal site Naver from January 1, 2016 until December 31, 2016, and the collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis and seasonality analysis. For humans, the word exercise had the highest frequency in the simple frequency analysis, whereas diet menu appeared most frequently in the N-gram analysis. For companion animals, the term dog had the highest frequency in the simple frequency analysis, whereas diet method was most frequent in the N-gram analysis. Keyword network analysis for humans indicated 4 groups: a diet group, an exercise group, a commercial diet food group, and a commercial diet program group. The keyword network analysis for companion animals indicated 3 groups: a diet group, an exercise group, and a professional medical help group. The analysis of seasonality showed that interest in diet for both humans and companion animals increased steadily from February 2016 and reached its peak in July. In conclusion, the diets of humans and companion animals showed similar tendencies, particularly a higher preference for dietary control over other methods. The diets of companion animals are determined by the choices of their owners, as diet methods the owners find effective are usually applied to their companion animals as well. Therefore, an empirical demonstration of whether a correlation in obesity between humans and their companion animals exists is needed. PMID:29124046
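The first two analysis steps above, simple frequency analysis and N-gram analysis (here bigrams), can be sketched with the Python standard library. The corpus below is a made-up stand-in, since the Naver keyword data are not reproduced here:

```python
from collections import Counter

def frequency_and_bigrams(documents):
    """Simple frequency analysis plus bigram (2-gram) analysis over a list
    of keyword strings, mirroring the study's first two analysis steps."""
    unigrams, bigrams = Counter(), Counter()
    for doc in documents:
        tokens = doc.lower().split()
        unigrams.update(tokens)                 # word frequencies
        bigrams.update(zip(tokens, tokens[1:])) # adjacent word pairs
    return unigrams, bigrams

# Toy stand-in corpus of search keywords
docs = ["diet exercise plan", "diet menu ideas", "exercise diet menu"]
uni, bi = frequency_and_bigrams(docs)
```

Keyword network analysis would then treat the frequent co-occurring pairs as edges of a graph and cluster them into groups.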
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of quantitative AES analysis, we combined XPS with AES and studied how to reduce the error of AES quantification. We selected Pt-Co, Cu-Au and Cu-Ag binary alloy thin films as the samples and used XPS to correct the AES quantitative results by adjusting the Auger sensitivity factors until the two sets of quantitative results agreed. We then verified the accuracy of AES quantification with the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of quantitative AES analysis to less than 10%. Peak definition is difficult in the integral-spectrum form of AES analysis, since choosing the starting and ending points when determining the characteristic Auger peak intensity area carries great uncertainty. To make the analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy of quantification on other samples with different composition ratios. The result showed that the analytical error of quantitative AES analysis was reduced to less than 9%. The accuracy of quantitative AES analysis can thus be greatly improved by using XPS to correct the Auger sensitivity factors, since matrix effects are taken into account. Good consistency was obtained, proving the feasibility of this method.
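The quantification step behind both the integral- and differential-spectrum analyses is the standard relative-sensitivity-factor formula, C_i = (I_i/S_i) / sum_j(I_j/S_j). A sketch with made-up intensities and sensitivity factors (the paper's corrected factor values are not reproduced here):

```python
def composition_from_peaks(intensities, sensitivity_factors):
    """Relative-sensitivity-factor quantification: the atomic fraction of
    element i is (I_i / S_i) normalized over all elements. Correcting the
    factors S_i (here, against XPS results) is what reduces the AES error.
    Works with either peak areas or peak-to-peak heights as intensities."""
    corrected = {el: intensities[el] / sensitivity_factors[el]
                 for el in intensities}
    total = sum(corrected.values())
    return {el: val / total for el, val in corrected.items()}

# Illustrative Pt-Co film: intensity and factor values are invented
comp = composition_from_peaks({"Pt": 120.0, "Co": 80.0},
                              {"Pt": 1.2, "Co": 0.8})
```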
Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana
2015-06-30
Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org . A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.
ECONOMIC ANALYSIS FOR THE GROUND WATER RULE ...
The Ground Water Rule Economic Analysis provides a description of the need for the rule, consideration of regulatory alternatives, a baseline analysis including a national ground water system profile and an estimate of pathogen and indicator occurrence (Chapter 4), a risk assessment and benefits analysis (Chapter 5), and a cost analysis (Chapter 6). Chapters 4, 5 and 6, selected appendices and sections of other chapters will be peer reviewed. The objective of the Economic Analysis document is to support the final Ground Water Rule.
Inventory of File gfs.t06z.pgrb2b.0p25.f000
UGRD analysis U-Component of Wind [m/s]  005  1 mb
VGRD analysis V-Component of Wind [m/s]  006  1 mb
ABSV
Temperature [K]  011  2 mb
RH analysis Relative Humidity [%]  012  2 mb
UGRD analysis U-Component of Wind [m/s]  013  2 mb
VGRD analysis V-Component of Wind [m/s]  014  2 mb
ABSV analysis Absolute Vorticity [1/s]  015  2
Inventory of File gfs.t06z.pgrb2.0p25.anl
UGRD analysis U-Component of Wind [m/s]  005  10 mb
VGRD analysis V-Component of Wind [m/s]  006  10 mb
-Component of Wind [m/s]  011  20 mb
VGRD analysis V-Component of Wind [m/s]  012  20 mb
ABSV analysis Absolute
UGRD analysis U-Component of Wind [m/s]  018  30 mb
VGRD analysis V-Component of Wind [m/s]  019  30 mb
Inventory of File gfs.t06z.pgrb2b.1p00.f000
UGRD analysis U-Component of Wind [m/s]  005  1 mb
VGRD analysis V-Component of Wind [m/s]  006  1 mb
ABSV
Temperature [K]  011  2 mb
RH analysis Relative Humidity [%]  012  2 mb
UGRD analysis U-Component of Wind [m/s]  013  2 mb
VGRD analysis V-Component of Wind [m/s]  014  2 mb
ABSV analysis Absolute Vorticity [1/s]  015  2
Inventory of File gfs.t06z.pgrb2b.0p50.f000
UGRD analysis U-Component of Wind [m/s]  005  1 mb
VGRD analysis V-Component of Wind [m/s]  006  1 mb
ABSV
Temperature [K]  011  2 mb
RH analysis Relative Humidity [%]  012  2 mb
UGRD analysis U-Component of Wind [m/s]  013  2 mb
VGRD analysis V-Component of Wind [m/s]  014  2 mb
ABSV analysis Absolute Vorticity [1/s]  015  2
Inventory of File gfs.t06z.pgrb2.0p50.anl
UGRD analysis U-Component of Wind [m/s]  005  10 mb
VGRD analysis V-Component of Wind [m/s]  006  10 mb
-Component of Wind [m/s]  011  20 mb
VGRD analysis V-Component of Wind [m/s]  012  20 mb
ABSV analysis Absolute
UGRD analysis U-Component of Wind [m/s]  018  30 mb
VGRD analysis V-Component of Wind [m/s]  019  30 mb
AADL Fault Modeling and Analysis Within an ARP4761 Safety Assessment
2014-10-01
Analysis Generator 27 3.2.3 Mapping to OpenFTA Format File 27 3.2.4 Mapping to Generic XML Format 28 3.2.5 AADL and FTA Mapping Rules 28 3.2.6 Issues...PSSA), System Safety Assessment (SSA), Common Cause Analysis (CCA), Fault Tree Analysis (FTA), Failure Modes and Effects Analysis (FMEA), Failure...Modes and Effects Summary, Markov Analysis (MA), and Dependence Diagrams (DDs), also referred to as Reliability Block Diagrams (RBDs). The
Structural-Thermal-Optical-Performance (STOP) Analysis
NASA Technical Reports Server (NTRS)
Bolognese, Jeffrey; Irish, Sandra
2015-01-01
The presentation will be given at the 26th Annual Thermal Fluids Analysis Workshop (TFAWS 2015) hosted by the Goddard Space Flight Center (GSFC) Thermal Engineering Branch (Code 545). A STOP analysis is a multidiscipline analysis, consisting of Structural, Thermal and Optical Performance Analyses, that is performed for all space flight instruments and satellites. This course will explain the different parts of performing this analysis. The student will learn how to effectively interact with each discipline in order to accurately obtain the system analysis results.
1987-08-01
HVAC duct hanger system over an extensive frequency range. The finite element, component mode synthesis, and statistical energy analysis methods are...800-5,000 Hz) analysis was conducted with Statistical Energy Analysis (SEA) coupled with a closed-form harmonic beam analysis program. These...resonances may be obtained by using a finer frequency increment. Statistical Energy Analysis The basic assumption used in SEA analysis is that within each band
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
CADDIS Volume 4. Data Analysis: Exploratory Data Analysis
Intro to exploratory data analysis. Overview of variable distributions, scatter plots, correlation analysis, GIS datasets. Use of conditional probability to examine stressor levels and impairment. Exploring correlations among multiple stressors.
Chernyakhovskiy is a member of the Markets & Policy Analysis Group in the Strategic Energy Analysis Center. Areas of expertise: energy policy and market analysis; data analysis and statistical modeling; research
A methodological comparison of customer service analysis techniques
James Absher; Alan Graefe; Robert Burns
2003-01-01
Techniques used to analyze customer service data need to be studied. Two primary analysis protocols, importance-performance analysis (IP) and gap score analysis (GA), are compared in a side-by-side comparison using data from two major customer service research projects. A central concern is what, if any, conclusion might be different due solely to the analysis...
Image analysis library software development
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Bryant, J.
1977-01-01
The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.
ERIC Educational Resources Information Center
Ho, Hsuan-Fu; Hung, Chia-Chi
2008-01-01
Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…
29 CFR 95.45 - Cost and price analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Cost and price analysis. 95.45 Section 95.45 Labor Office of... Procurement Standards § 95.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
14 CFR 437.29 - Hazard analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Hazard analysis. 437.29 Section 437.29... Documentation § 437.29 Hazard analysis. (a) An applicant must perform a hazard analysis that complies with § 437.55(a). (b) An applicant must provide to the FAA all the results of each step of the hazard analysis...
14 CFR 437.29 - Hazard analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Hazard analysis. 437.29 Section 437.29... Documentation § 437.29 Hazard analysis. (a) An applicant must perform a hazard analysis that complies with § 437.55(a). (b) An applicant must provide to the FAA all the results of each step of the hazard analysis...
Evaluation of the utility of a discrete-trial functional analysis in early intervention classrooms.
Kodak, Tiffany; Fisher, Wayne W; Paden, Amber; Dickes, Nitasha
2013-01-01
We evaluated a discrete-trial functional analysis implemented by regular classroom staff in a classroom setting. The results suggest that the discrete-trial functional analysis identified a social function for each participant and may require fewer staff than standard functional analysis procedures. © Society for the Experimental Analysis of Behavior.
ERIC Educational Resources Information Center
Cooper, Barry; Glaesser, Judith
2016-01-01
We discuss a recent development in the set theoretic analysis of data sets characterized by limited diversity. Ragin, in developing his Qualitative Comparative Analysis (QCA), developed a standard analysis that produces parsimonious, intermediate, and complex Boolean solutions of truth tables. Schneider and Wagemann argue this standard analysis…
Finite element analysis of helicopter structures
NASA Technical Reports Server (NTRS)
Rich, M. J.
1978-01-01
Application of the finite element analysis is now being expanded to three dimensional analysis of mechanical components. Examples are presented for airframe, mechanical components, and composite structure calculations. Data are detailed on the increase of model size, computer usage, and the effect on reducing stress analysis costs. Future applications for use of finite element analysis for helicopter structures are projected.
Using Cluster Analysis to Examine Husband-Wife Decision Making
ERIC Educational Resources Information Center
Bonds-Raacke, Jennifer M.
2006-01-01
Cluster analysis has a rich history in many disciplines and although cluster analysis has been used in clinical psychology to identify types of disorders, its use in other areas of psychology has been less popular. The purpose of the current experiments was to use cluster analysis to investigate husband-wife decision making. Cluster analysis was…
ERIC Educational Resources Information Center
Hsu, Chien-Ju; Thompson, Cynthia K.
2018-01-01
Purpose: The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals…
Fuel Cell Technology Status Analysis | Hydrogen and Fuel Cells | NREL
Fuel cell developers interested in collaborating with NREL on fuel cell technology status analysis should send an email to NREL's Technology Validation Team at techval@nrel.gov. NREL's analysis of fuel cell technology provides objective
27 CFR 25.195 - Removals for analysis.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., DEPARTMENT OF THE TREASURY LIQUORS BEER Removals Without Payment of Tax Removals for Analysis, Research... analysis in packages or in bulk containers. The brewer shall record beer removed for analysis in daily...
27 CFR 25.195 - Removals for analysis.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., DEPARTMENT OF THE TREASURY LIQUORS BEER Removals Without Payment of Tax Removals for Analysis, Research... analysis in packages or in bulk containers. The brewer shall record beer removed for analysis in daily...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Kaushik; Clarity, Justin B; Cumberland, Riley M
A new, integrated data and analysis system has been designed to simplify and automate accurate and efficient evaluations characterizing the input to the overall nuclear waste management system: the UNF-Storage, Transportation & Disposal Analysis Resource and Data System (UNF-ST&DARDS), which will be licensed via RSICC. A relational database within UNF-ST&DARDS provides a standard means by which the system can succinctly store and retrieve modeling and simulation (M&S) parameters for specific spent nuclear fuel analyses. A library of analysis model templates provides the ability to communicate each set of M&S parameters to the most appropriate M&S application. Interactive visualization capabilities facilitate data analysis and results interpretation. Current analysis capabilities of UNF-ST&DARDS include (1) assembly-specific depletion and decay and (2) cask-specific criticality and shielding for spent nuclear fuel. Currently, UNF-ST&DARDS uses the SCALE nuclear analysis code system for performing nuclear analysis.
Use of direct gradient analysis to uncover biological hypotheses in 16s survey data and beyond.
Erb-Downward, John R; Sadighi Akha, Amir A; Wang, Juan; Shen, Ning; He, Bei; Martinez, Fernando J; Gyetko, Margaret R; Curtis, Jeffrey L; Huffnagle, Gary B
2012-01-01
This study investigated the use of direct gradient analysis of bacterial 16S pyrosequencing surveys to identify relevant bacterial community signals in the midst of a "noisy" background, and to facilitate hypothesis-testing both within and beyond the realm of ecological surveys. The results, utilizing 3 different real world data sets, demonstrate the utility of adding direct gradient analysis to any analysis that draws conclusions from indirect methods such as Principal Component Analysis (PCA) and Principal Coordinates Analysis (PCoA). Direct gradient analysis produces testable models, and can identify significant patterns in the midst of noisy data. Additionally, we demonstrate that direct gradient analysis can be used with other kinds of multivariate data sets, such as flow cytometric data, to identify differentially expressed populations. The results of this study demonstrate the utility of direct gradient analysis in microbial ecology and in other areas of research where large multivariate data sets are involved.
Anima: Modular Workflow System for Comprehensive Image Data Analysis
Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa
2014-01-01
Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programing language independence, batch processing, easily customized data processing, interoperability with other software via application programing interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is a fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
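The "assembling a series of image processing operations" that the paper describes amounts to function composition. A minimal sketch with hypothetical step names; a real pipeline would use filters, segmentation and measurement routines from an imaging library:

```python
def make_pipeline(*steps):
    """Compose image-processing steps into a single callable that applies
    them in order, i.e. a linear image analysis pipeline."""
    def pipeline(image):
        for step in steps:
            image = step(image)
        return image
    return pipeline

# Toy stand-ins: an 'image' is just a flat list of pixel values
denoise = lambda img: [max(p, 10) for p in img]            # clip low-level noise
threshold = lambda img: [1 if p > 50 else 0 for p in img]  # binarize
count_foreground = lambda img: sum(img)                    # measurement step

pipeline = make_pipeline(denoise, threshold, count_foreground)
count = pipeline([5, 60, 200, 30, 90])
```

The paper's goal-oriented design then works backwards: the measurement you need (here, a count) determines the segmentation, which in turn determines the required pre-processing.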
[Conversation analysis for improving nursing communication].
Yi, Myungsun
2007-08-01
Nursing communication has become more important than ever before, because the quality of nursing services largely depends on the quality of communication in a very competitive health care environment. This article introduces ways to improve nursing communication using conversation analysis. It is a review study on conversation analysis, critically examining previous studies in nursing communication and interpersonal relationships. The study provides the theoretical background and basic assumptions of conversation analysis, which was influenced by ethnomethodology, phenomenology, and sociolinguistics. In addition, the characteristics and analysis methods of conversation analysis are illustrated in detail. Lastly, how conversation analysis can help improve communication is shown by examining research that applies conversation analysis not only to ordinary conversations but also to extraordinary or difficult conversations, such as those between patients with dementia and their professional nurses. Conversation analysis can help improve nursing communication by providing various structures, patterns and prototypes of conversation, and by suggesting specific problems and problem-solving strategies in communication.
Levels of narrative analysis in health psychology.
Murray, M
2000-05-01
The past 10-15 years have seen a rapid increase in the study of narrative across all the social sciences. It is sometimes assumed that narrative has the same meaning irrespective of the context in which it is expressed. This article considers different levels of narrative analysis within health psychology. Specifically, it considers the character of health and illness narratives as a function of the personal, interpersonal, positional and societal levels of analysis. At the personal level of analysis narratives are portrayed as expressions of the lived experience of the narrator. At the interpersonal level of analysis the narrative is one that is co-created in dialogue. The positional level of analysis considers the differences in social position between the narrator and the listener. The societal level of analysis is concerned with the socially shared stories that are characteristic of certain communities or societies. The challenge is to articulate the connections between these different levels of narrative analysis and to develop strategies to promote emancipatory narratives.
Multivariate Methods for Meta-Analysis of Genetic Association Studies.
Dimou, Niki L; Pantavou, Katerina G; Braliou, Georgia G; Bagos, Pantelis G
2018-01-01
Multivariate meta-analysis of genetic association studies and genome-wide association studies has received remarkable attention, as it improves the precision of the analysis. Here, we review, summarize and present in a unified framework methods for multivariate meta-analysis of genetic association studies and genome-wide association studies. Starting with the statistical methods used for robust analysis and genetic model selection, we present in brief univariate methods for meta-analysis, and we then scrutinize multivariate methodologies. Multivariate models of meta-analysis for single gene-disease association studies, including models for haplotype association studies, multiple linked polymorphisms and multiple outcomes, are discussed. The popular Mendelian randomization approach and special cases of meta-analysis addressing issues such as the assumption of the mode of inheritance, deviation from Hardy-Weinberg Equilibrium and gene-environment interactions are also presented. All available methods are illustrated with practical applications, and methodologies that could be developed in the future are discussed. Links for all available software implementing multivariate meta-analysis methods are also provided.
Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination
NASA Technical Reports Server (NTRS)
Groen, Frank
2010-01-01
This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.
48 CFR 15.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2014 CFR
2014-10-01
... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... obtained through market research for the same or similar items. (vii) Analysis of data other than certified...
48 CFR 15.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2013 CFR
2013-10-01
... are: I Price Analysis, II Quantitative Techniques for Contract Pricing, III Cost Analysis, IV Advanced... obtained through market research for the same or similar items. (vii) Analysis of data other than certified...
Fatigue of notched fiber composite laminates. Part 1: Analytical model
NASA Technical Reports Server (NTRS)
Mclaughlin, P. V., Jr.; Kulkarni, S. V.; Huang, S. N.; Rosen, B. W.
1975-01-01
A description is given of a semi-empirical, deterministic analysis for prediction and correlation of fatigue crack growth, residual strength, and fatigue lifetime for fiber composite laminates containing notches (holes). The failure model used for the analysis is based upon composite heterogeneous behavior and experimentally observed failure modes under both static and fatigue loading. The analysis is consistent with the wearout philosophy. Axial cracking and transverse cracking failure modes are treated together in the analysis. Cracking off-axis is handled by making a modification to the axial cracking analysis. The analysis predicts notched laminate failure from unidirectional material fatigue properties using constant strain laminate analysis techniques. For multidirectional laminates, it is necessary to know lamina fatigue behavior under axial normal stress, transverse normal stress and axial shear stress. Examples of the analysis method are given.
NASA Astrophysics Data System (ADS)
Stedman, J. D.; Spyrou, N. M.
1994-12-01
The trace element concentrations in porcine brain samples as determined by particle-induced X-ray emission (PIXE) analysis, instrumental neutron activation analysis (INAA) and particle-induced gamma-ray emission (PIGE) analysis are compared. The matrix composition was determined by Rutherford backscattering (RBS). Al, Si, P, S, Cl, K, Ca, Mn, Fe and Cd were determined by PIXE analysis; Na, K, Sc, Fe, Co, Zn, As, Br, Rb and Cs by INAA; and Na, Mg and Fe by PIGE analysis. The bulk elements C, N, O, Na, Cl and S were found by RBS analysis. Elemental concentrations are obtained using the comparator method of analysis rather than an absolute method, the validity of which is examined by comparing the elemental concentrations obtained in porcine brain using two separate certified reference materials.
An Integrated Tool for System Analysis of Sample Return Vehicles
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Maddock, Robert W.; Winski, Richard G.
2012-01-01
The next important step in space exploration is the return of sample materials from extraterrestrial locations to Earth for analysis. Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle. The analysis and design of entry vehicles is multidisciplinary in nature, requiring the application of mass sizing, flight mechanics, aerodynamics, aerothermodynamics, thermal analysis, structural analysis, and impact analysis tools. Integration of a multidisciplinary problem is a challenging task; the execution process and data transfer among disciplines should be automated and consistent. This paper describes an integrated analysis tool for the design and sizing of an Earth entry vehicle. The current tool includes the following disciplines: mass sizing, flight mechanics, aerodynamics, aerothermodynamics, and impact analysis. Python and Java languages are used for integration. Results are presented and compared with the results from previous studies.
Gait Analysis Using Wearable Sensors
Tao, Weijun; Liu, Tao; Zheng, Rencheng; Feng, Hutian
2012-01-01
Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors are divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and analysis methods, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications. PMID:22438763
Vibration signature analysis of multistage gear transmission
NASA Technical Reports Server (NTRS)
Choy, F. K.; Tu, Y. K.; Savage, M.; Townsend, D. P.
1989-01-01
An analysis is presented for multistage multimesh gear transmission systems. The analysis predicts the overall system dynamics and the transmissibility to the gear box or the enclosed structure. The modal synthesis approach of the analysis treats the uncoupled lateral/torsional modal characteristics of each stage or component independently. The vibration signature analysis evaluates the global dynamic coupling in the system. The method synthesizes the interaction of each modal component or stage with the nonlinear gear mesh dynamics and the modal support geometry characteristics. The analysis simulates transient and steady state vibration events to determine the resulting torque variations, speed changes, rotor imbalances, and support gear box motion excitations. A vibration signature analysis examines the overall dynamic characteristics of the system and the individual modal component responses. The gear box vibration analysis also examines the spectral characteristics of the support system.
The HIV Cure Research Agenda: The Role of Mathematical Modelling and Cost-Effectiveness Analysis.
Freedberg, Kenneth A; Possas, Cristina; Deeks, Steven; Ross, Anna Laura; Rosettie, Katherine L; Di Mascio, Michele; Collins, Chris; Walensky, Rochelle P; Yazdanpanah, Yazdan
The research agenda towards an HIV cure is building rapidly. In this article, we discuss the reasons for and methodological approach to using mathematical modeling and cost-effectiveness analysis in this agenda. We provide a brief description of the proof of concept for cure and the current directions of cure research. We then review the types of clinical economic evaluations, including cost analysis, cost-benefit analysis, and cost-effectiveness analysis. We describe the use of mathematical modeling and cost-effectiveness analysis early in the HIV epidemic as well as in the era of combination antiretroviral therapy. We then highlight the novel methodology of Value of Information analysis and its potential role in the planning of clinical trials. We close with recommendations for modeling and cost-effectiveness analysis in the HIV cure agenda.
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Measurement uncertainty analysis techniques applied to PV performance measurements
NASA Astrophysics Data System (ADS)
Wells, C.
1992-10-01
The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis, but uncertainty analysis, a more recent development, gives greater insight into measurement processes and into test, experiment, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Here, valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.
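The core combination step in a measurement uncertainty analysis of this kind can be sketched as a root-sum-square combination of independent standard uncertainty components with an expanded-uncertainty coverage factor. The component values and the choice k = 2 below are illustrative assumptions, not figures from the presentation:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square combination of independent standard uncertainties."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage
    when the combined error is approximately normal."""
    return k * combined_standard_uncertainty(components)

# Hypothetical PV module power measurement: irradiance, temperature, and
# instrument contributions, already expressed as standard uncertainties in watts.
components = [1.2, 0.5, 0.8]
u_c = combined_standard_uncertainty(components)   # combined standard uncertainty
U = expanded_uncertainty(components)              # half-width of the ~95% interval
```

The reported result would then be stated as "measured value ± U", which is exactly the interval estimate the abstract describes.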
The potential for meta-analysis to support decision analysis in ecology.
Mengersen, Kerrie; MacNeil, M Aaron; Caley, M Julian
2015-06-01
Meta-analysis and decision analysis are underpinned by well-developed methods that are commonly applied to a variety of problems and disciplines. While these two fields have been closely linked in some disciplines such as medicine, comparatively little attention has been paid to the potential benefits of linking them in ecology, despite reasonable expectations that benefits would be derived from doing so. Meta-analysis combines information from multiple studies to provide more accurate parameter estimates and to reduce the uncertainty surrounding them. Decision analysis involves selecting among alternative choices using statistical information that helps to shed light on the uncertainties involved. By linking meta-analysis to decision analysis, improved decisions can be made, with quantification of the costs and benefits of alternate decisions supported by a greater density of information. Here, we briefly review concepts of both meta-analysis and decision analysis, illustrating the natural linkage between them and the benefits from explicitly linking one to the other. We discuss some examples in which this linkage has been exploited in the medical arena and how improvements in precision and reduction of structural uncertainty inherent in a meta-analysis can provide substantive improvements to decision analysis outcomes by reducing uncertainty in expected loss and maximising information from across studies. We then argue that these significant benefits could be translated to ecology, in particular to the problem of making optimal ecological decisions in the face of uncertainty. Copyright © 2013 John Wiley & Sons, Ltd.
Sample size and power considerations in network meta-analysis
2012-01-01
Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327
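The "effective sample size" idea for an indirect comparison through a common comparator can be sketched as a harmonic-style combination of the two direct sample sizes; this simple formula is our illustrative reading of the concept, not necessarily the exact estimator the article develops:

```python
def effective_sample_size(n_ac, n_bc):
    """Effective sample size of an indirect A-vs-B comparison made through a
    common comparator C: the indirect evidence behaves roughly like a pairwise
    meta-analysis with this many patients (sketch under a simple
    equal-variance-per-patient assumption)."""
    return (n_ac * n_bc) / (n_ac + n_bc)

# Hypothetical example: 2000 patients in A-vs-C trials and 1000 patients in
# B-vs-C trials support an indirect A-vs-B comparison worth far fewer patients
# than either direct body of evidence.
n_eff = effective_sample_size(2000, 1000)
```

Note that the effective sample size is always smaller than the smaller of the two direct bodies of evidence, which is the intuition behind why power in indirect comparisons is easily overestimated.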
Cognitive task analysis: Techniques applied to airborne weapons training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, M.; Seamster, T.L.; Snyder, C.E.
1989-01-01
This is an introduction to cognitive task analysis as it may be used in Naval Air Systems Command (NAVAIR) training development. The focus of a cognitive task analysis is human knowledge, and its methods of analysis are those developed by cognitive psychologists. This paper explains the role that cognitive task analysis can play in training development and presents the findings from a preliminary cognitive task analysis of airborne weapons operators. Cognitive task analysis is a collection of powerful techniques that are quantitative, computational, and rigorous. The techniques are currently not in wide use in the training community, so examples of this methodology are presented along with the results. 6 refs., 2 figs., 4 tabs.
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise it is necessary to develop methods, called modal analysis, for analyzing the sound field generated by the blade machinery. Because modal analysis methods are complex, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. This work presents a model that allows single modes to be set in the channel and the generated sound field to be analyzed. A modal analysis of the sound generated by a ring array of point sound sources is performed, and a comparison of experimental and numerical modal analysis results is presented.
Adding results to a meta-analysis: Theory and example
NASA Astrophysics Data System (ADS)
Willson, Victor L.
Meta-analysis has been used as a research method to describe bodies of research data. It promotes hypothesis formation and the development of science education laws. A function overlooked, however, is the role it plays in updating research. Methods to integrate new research with meta-analysis results need explication. A procedure is presented using Bayesian analysis. Research on the correlation of science education attitudes with achievement has been published since a recent meta-analysis of the topic. The results show how new findings complement the previous meta-analysis and extend its conclusions. Additional methodological questions addressed are how studies are to be weighted, which variables are to be examined, and how often meta-analyses are to be updated.
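The Bayesian updating procedure described above can be sketched as a normal-normal, precision-weighted combination of the prior meta-analytic estimate with a new study result; the numbers below are hypothetical, not values from the study:

```python
def bayesian_update(prior_mean, prior_var, new_mean, new_var):
    """Normal-normal conjugate update: combine a prior meta-analytic estimate
    with a new study result, weighting each by its precision (1/variance)."""
    w0, w1 = 1.0 / prior_var, 1.0 / new_var
    post_var = 1.0 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * new_mean)
    return post_mean, post_var

# Hypothetical: a prior meta-analytic attitude-achievement correlation of 0.30
# (variance 0.010) updated with a new study reporting 0.50 (variance 0.010).
post_mean, post_var = bayesian_update(0.30, 0.010, 0.50, 0.010)
```

Because the two sources here have equal precision, the posterior mean lands midway between them, and the posterior variance is halved; with an imprecise new study the prior would dominate, which is exactly the weighting question the abstract raises.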
Integrated analysis of engine structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1981-01-01
The need for light, durable, fuel efficient, cost effective aircraft requires the development of engine structures which are flexible, made from advanced materials (including composites), resist higher temperatures, maintain tighter clearances and have lower maintenance costs. The formal quantification of any or several of these requires integrated computer programs (multilevel and/or interdisciplinary analysis programs interconnected) for engine structural analysis/design. Several integrated analysis computer programs are under development at Lewis Research Center. These programs include: (1) COBSTRAN-Composite Blade Structural Analysis, (2) CODSTRAN-Composite Durability Structural Analysis, (3) CISTRAN-Composite Impact Structural Analysis, (4) STAEBL-Structural Tailoring of Engine Blades, and (5) ESMOSS-Engine Structures Modeling Software System. Three other related programs, developed under Lewis sponsorship, are described.
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase amplitude correlation studies are presented.
ERIC Educational Resources Information Center
Braune, Rolf; Foshay, Wellesley R.
1983-01-01
The proposed three-step strategy for research on human information processing--concept hierarchy analysis, analysis of example sets to teach relations among concepts, and analysis of problem sets to build a progressively larger schema for the problem space--may lead to practical procedures for instructional design and task analysis. Sixty-four…
Criteria for Developing a Successful Privatization Project
1989-05-01
conceptualization and planning are required when pursuing privatization projects. In fact, privatization project proponents need to know how to...selection of projects for analysis, methods of acquiring information about these projects, and the analysis framework. Chapter IV includes the analysis. A...performed an analysis to determine common conceptual and creative approaches and lessons learned. This analysis was then used to develop criteria for
Analysis of metabolic energy utilization in the Skylab astronauts
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Skylab biomedical data regarding man's metabolic processes during extended periods of weightlessness are presented. The data were used in an integrated metabolic balance analysis which included analysis of Skylab water balance, electrolyte balance, evaporative water loss, and body composition. A theoretical analysis of energy utilization in man is presented. The results of the analysis are presented in tabular and graphic format.
Mach 14 Flow Restrictor Thermal Stress Analysis
1984-08-01
transfer analysis, thermal stress analysis, results translation from ABAQUS to PATRAN-G, and the method used to determine the heat transfer film...G, model translation into ABAQUS format, transient heat transfer analysis and thermal stress analysis input decks, results translation from ABAQUS ...TRANSLATION FROM PATRAN-G TO ABAQUS 3 ABAQUS CONSIDERATIONS 8 MATERIAL PROPERTIES OF COLUMBIUM C-103 10 USER SUBROUTINE FILM 11 TRANSIENT
ERIC Educational Resources Information Center
Çokluk, Ömay; Koçak, Duygu
2016-01-01
In this study, the number of factors obtained from parallel analysis, a method used for determining the number of factors in exploratory factor analysis, was compared to that of the factors obtained from eigenvalue and scree plot--two traditional methods for determining the number of factors--in terms of consistency. Parallel analysis is based on…
21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... processor shall have and implement a written HACCP plan whenever a hazard analysis reveals one or more food...
21 CFR 123.6 - Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Hazard analysis and Hazard Analysis Critical Control Point (HACCP) plan. 123.6 Section 123.6 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF... processor shall have and implement a written HACCP plan whenever a hazard analysis reveals one or more food...
Lenz's law and dimensional analysis
NASA Astrophysics Data System (ADS)
Pelesko, John A.; Cesky, Michael; Huertas, Sharon
2005-01-01
We show that the time it takes a magnet to fall through a nonmagnetic metallic tube may be found via dimensional analysis. The simple analysis makes this classic demonstration of Lenz's law accessible qualitatively and quantitatively to students with little knowledge of electromagnetism and only elementary knowledge of calculus. The analysis provides a new example of the power and limitations of dimensional analysis.
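The kind of scaling the abstract describes can be reconstructed dimensionally as follows; the dimensionless constant $C$ and the thin-wall assumption are our illustrative additions, not equations quoted from the paper:

```latex
% Magnet (mass M, dipole moment m) falling through a conducting tube
% (inner radius a, wall thickness w << a, conductivity sigma, length L).
% C is a dimensionless geometric constant not fixed by dimensional
% analysis alone -- an assumption of this sketch.
\begin{align*}
  F_{\text{drag}} &= C\,\frac{\mu_0^{2} m^{2} \sigma w}{a^{4}}\, v
      && \text{(combination of $\mu_0, m, \sigma, w, a$ with units of N\,s/m)}\\
  v_{t} &= \frac{M g\, a^{4}}{C \mu_0^{2} m^{2} \sigma w}
      && \text{(terminal velocity, from } F_{\text{drag}} = Mg\text{)}\\
  t_{\text{fall}} &\approx \frac{L}{v_{t}}
       = \frac{C \mu_0^{2} m^{2} \sigma w L}{M g\, a^{4}}
\end{align*}
```

One can check the first line term by term: $[\mu_0^{2} m^{2} \sigma w v / a^{4}] = \mathrm{kg\,m\,s^{-2}}$, i.e. a force, so the drag must take this form up to the geometric constant, which is the essence of the dimensional argument.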
Skills Analysis. Workshop Package on Skills Analysis, Skills Audit and Training Needs Analysis.
ERIC Educational Resources Information Center
Hayton, Geoff; And Others
This four-part package is designed to assist Australian workshop leaders running 2-day workshops on skills analysis, skills audit, and training needs analysis. Part A contains information on how to use the package and a list of workshop aims. Parts B, C, and D consist, respectively, of the workshop leader's guide; overhead transparency sheets and…
ERIC Educational Resources Information Center
DeNisi, Angelo S.; McCormick, Ernest J.
The Position Analysis Questionnaire (PAQ) is a structured job analysis procedure that provides for the analysis of jobs in terms of each of 187 job elements, these job elements being grouped into six divisions: information input, mental processes, work output, relationships with other persons, job context, and other job characteristics. Two…
Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA
Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad
2017-01-01
The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that analysis and control of the risks involved are required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, i.e. fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). The HAZOP analysis shows that the loading and unloading areas are the most sensitive areas of the plant and that the most significant danger is a fuel spill. The FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis of the FTA results shows the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the plant staff. PMID:28665325
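The quantitative FTA step, propagating basic-event probabilities through AND/OR gates up to a top event, can be sketched as below; the mini fault tree and its probabilities are hypothetical illustrations, not the case-study values:

```python
import math

def p_or(probs):
    """OR gate for independent basic events:
    1 minus the probability that none of them occurs."""
    return 1.0 - math.prod(1.0 - p for p in probs)

def p_and(probs):
    """AND gate for independent basic events: product of probabilities."""
    return math.prod(probs)

# Hypothetical mini fault tree for "fuel spill during tank-truck loading":
# a spill requires (hose rupture OR tank overfill) AND a containment failure.
p_initiator = p_or([1e-3, 5e-4])     # either initiating event
p_spill = p_and([p_initiator, 1e-2]) # ...combined with containment failure
```

Ranking the basic events by how much the top-event probability drops when each is set to zero gives the prioritization of preventive measures that the abstract attributes to the FTA results.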
Using cognitive work analysis to explore activity allocation within military domains.
Jenkins, D P; Stanton, N A; Salmon, P M; Walker, G H; Young, M S
2008-06-01
Cognitive work analysis (CWA) is frequently advocated as an approach for the analysis of complex socio-technical systems. Much of the current CWA literature within the military domain pays particular attention to its initial phases: work domain analysis and contextual task analysis. By comparison, the analysis of social and organisational constraints receives much less attention. Through the study of a helicopter mission planning system software tool, this paper describes an approach for investigating the constraints affecting the distribution of work. The paper uses this model to evaluate the potential benefits of the social and organisational analysis phase within a military context. The analysis shows that, through its focus on constraints, the approach provides a unique description of the factors influencing the social organisation within a complex domain. This approach appears to be compatible with existing approaches and serves as a validation of more established social analysis techniques. As part of the ergonomic design of mission planning systems, the social organisation and cooperation analysis phase of CWA provides a constraint-based description informing allocation of function between key actor groups. This approach is useful because it poses questions related to the transfer of information and optimum working practices.
NASA Astrophysics Data System (ADS)
Subagiyo, A.; Dwiproborini, F.; Sari, N.
2017-06-01
The RI-PNG border in Muara Tami district is located on the eastern part of Jayapura city and has agricultural potential. The past paradigm treated the border as a backyard, which caused underdevelopment in the RI-PNG border Muara Tami district, so accelerated development through the agropolitan concept is needed. The purpose of the research is to assess the physical, social, economic and border-security aspects that would support the agropolitan concept in the RI-PNG border Muara Tami district. The analytical research methods are border interaction analysis, border security analysis, land capability analysis, land availability analysis, schallogram analysis, institutional analysis, leading commodity analysis (LQ and Growth Share), agribusiness linkage system analysis, accessibility analysis and A'WOT analysis. The results show that mobilization from PNG to Muara Tami district could increase agriculture-based economic opportunities. Border security in the RI-PNG Muara Tami district is vulnerable, yet still conducive to mobilization. There are 12,977.94 ha of potential agricultural land (20.93%). Six leading commodities are identified for development: rice, watermelon, banana, coconut, areca nut and cocoa. The RI-PNG border Muara Tami district is ready enough to support the agropolitan concept, but still has problems in the social and economic aspects.
Time-localized frequency analysis of ultrasonic guided waves for nondestructive testing
NASA Astrophysics Data System (ADS)
Shin, Hyeon Jae; Song, Sung-Jin
2000-05-01
A time-localized frequency (TLF) analysis is employed for guided wave mode identification and improved guided wave applications. For the analysis of the time-localized frequency contents of digitized ultrasonic signals, TLF analysis consists of splitting the time domain signal into overlapping segments, weighting each with a Hanning window, and forming the columns of discrete Fourier transforms. The result is presented as a frequency versus time diagram showing frequency variation along the signal arrival time. To demonstrate the utility of TLF analysis, an experimental group velocity dispersion pattern obtained by TLF analysis is compared with the dispersion diagram obtained from the theory of elasticity. The sample piping is carbon steel piping used for the underground transportation of natural gas. Guided wave propagation characteristics on the piping are considered with TLF analysis and wave structure concepts. TLF analysis is used for the detection of simulated corrosion defects and the assessment of a weld joint using ultrasonic guided waves. TLF analysis has revealed that the difficulty of mode identification in multi-mode propagation could be overcome. The group velocity dispersion pattern obtained by TLF analysis agrees well with theoretical results.
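The TLF procedure described above (overlapping segments, Hanning window, columns of DFTs) is essentially a short-time Fourier transform, which can be sketched in NumPy as follows; the segment length and test tone are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def tlf(signal, fs, seg_len=256, overlap=128):
    """Time-localized frequency map: Hanning-windowed, overlapping DFTs."""
    step = seg_len - overlap
    win = np.hanning(seg_len)
    n_seg = (len(signal) - seg_len) // step + 1
    cols = [np.fft.rfft(win * signal[i * step : i * step + seg_len])
            for i in range(n_seg)]
    mag = np.abs(np.array(cols)).T            # rows: frequency, cols: time
    freqs = np.fft.rfftfreq(seg_len, 1 / fs)
    times = np.arange(n_seg) * step / fs
    return freqs, times, mag

# A 50 kHz tone sampled at 1 MHz should peak near 50 kHz in every column
fs = 1_000_000
t = np.arange(4096) / fs
f, tt, m = tlf(np.sin(2 * np.pi * 50_000 * t), fs)
```

For a dispersive guided-wave signal the ridge of `mag` traces frequency against arrival time, which is what allows the group-velocity dispersion pattern to be read off.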
Methods for assessing the stability of slopes during earthquakes-A retrospective
Jibson, R.W.
2011-01-01
During the twentieth century, several methods to assess the stability of slopes during earthquakes were developed. Pseudostatic analysis was the earliest method; it involved simply adding a permanent body force representing the earthquake shaking to a static limit-equilibrium analysis. Stress-deformation analysis, a later development, involved much more complex modeling of slopes using a mesh in which the internal stresses and strains within elements are computed based on the applied external loads, including gravity and seismic loads. Stress-deformation analysis provided the most realistic model of slope behavior, but it is very complex and requires a high density of high-quality soil-property data as well as an accurate model of soil behavior. In 1965, Newmark developed a method that effectively bridges the gap between these two types of analysis. His sliding-block model is easy to apply and provides a useful index of co-seismic slope performance. Subsequent modifications to sliding-block analysis have made it applicable to a wider range of landslide types. Sliding-block analysis provides perhaps the greatest utility of all the types of analysis. It is far easier to apply than stress-deformation analysis, and it yields much more useful information than does pseudostatic analysis.
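A minimal sketch of Newmark's sliding-block idea, assuming a one-way sliding block with a constant critical acceleration (real implementations handle two-way sliding, downslope asymmetry, and varying strength):

```python
def newmark_displacement(accel, dt, a_crit):
    """Cumulative sliding-block displacement: the block accelerates
    relative to the slope whenever ground acceleration exceeds the
    critical acceleration, and decelerates at a_crit otherwise."""
    v, d = 0.0, 0.0
    for a in accel:
        v += (a - a_crit) * dt
        v = max(v, 0.0)          # one-way model: the block cannot slide uphill
        d += v * dt
    return d

# Idealized record: a 1 s pulse at 2 m/s^2 against a 1 m/s^2 critical acceleration
dt = 0.01
record = [2.0] * 100 + [0.0] * 200
d = newmark_displacement(record, dt, a_crit=1.0)   # roughly 1 m for this pulse
```

The resulting displacement is the "useful index of co-seismic slope performance" the abstract refers to, not a literal prediction of landslide runout.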
Pavlova, Milena; Tsiachristas, Apostolos; Vermaeten, Gerhard; Groot, Wim
2009-01-01
Portfolio analysis is a business management tool that can assist health care managers to develop new organizational strategies. The application of portfolio analysis to US hospital settings has been frequently reported. In Europe however, the application of this technique has received little attention, especially concerning public hospitals. Therefore, this paper examines the peculiarities of portfolio analysis and its applicability to the strategic management of European public hospitals. The analysis is based on a pilot application of a multi-factor portfolio analysis in a Dutch university hospital. The nature of portfolio analysis and the steps in a multi-factor portfolio analysis are reviewed along with the characteristics of the research setting. Based on these data, a multi-factor portfolio model is developed and operationalized. The portfolio model is applied in a pilot investigation to analyze the market attractiveness and hospital strengths with regard to the provision of three orthopedic services: knee surgery, hip surgery, and arthroscopy. The pilot portfolio analysis is discussed to draw conclusions about potential barriers to the overall adoption of portfolio analysis in the management of a public hospital. Copyright (c) 2008 John Wiley & Sons, Ltd.
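The multi-factor scoring at the heart of such a portfolio model can be sketched as a weighted mean of factor scores placing each service on the attractiveness/strength grid; the factors, scores, and weights below are hypothetical, not those of the Dutch pilot:

```python
def portfolio_position(factors):
    """factors: list of (score on a 1-5 scale, weight) pairs.
    Returns the weighted mean score for one portfolio axis."""
    total_w = sum(w for _, w in factors)
    return sum(s * w for s, w in factors) / total_w

# Hypothetical scoring of knee surgery as a service line
attractiveness = portfolio_position([(4, 0.5), (3, 0.3), (5, 0.2)])  # market factors
strength       = portfolio_position([(2, 0.6), (4, 0.4)])            # hospital factors
```

Each service (knee surgery, hip surgery, arthroscopy) would get one (attractiveness, strength) coordinate, and the grid position suggests a strategy (invest, hold, divest).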
pyAudioAnalysis: An Open-Source Python Library for Audio Signal Analysis.
Giannakopoulos, Theodoros
2015-01-01
Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.
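The kind of short-term feature extraction the library performs can be sketched in plain NumPy. This is a generic illustration of framing plus two classic features (energy and zero-crossing rate), not pyAudioAnalysis's actual API:

```python
import numpy as np

def short_term_features(signal, fs, win_sec=0.050, step_sec=0.025):
    """Frame the signal into overlapping windows and compute two classic
    short-term features per frame: energy and zero-crossing rate."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        frame = signal[start:start + win]
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # crossings per sample
        feats.append((energy, zcr))
    return np.array(feats)   # shape: (n_frames, 2)

# A 440 Hz tone at 8 kHz: energy ~0.5, ZCR ~2*440/8000 = 0.11 per sample
fs = 8000
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
feats = short_term_features(sig, fs)
```

Classifiers and segmenters like those in the library then operate on sequences of such feature vectors rather than on the raw waveform.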
Using multiple group modeling to test moderators in meta-analysis.
Schoemann, Alexander M
2016-12-01
Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers gain access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis, a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
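The core idea, pooling within each moderator level and then comparing levels, can be sketched with a fixed-effect pooling step and a between-groups Q statistic. This is a simplification of the SEM/MLM machinery the paper describes (it tests only the group means, not the between-studies variances):

```python
import numpy as np

def pooled(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its variance."""
    w = 1 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    return est, 1 / np.sum(w)

def q_between(groups):
    """Moderator test: dispersion of group means around the grand mean.
    groups: list of (effects, variances) per moderator level."""
    ests, vars_ = zip(*(pooled(e, v) for e, v in groups))
    grand, _ = pooled(np.array(ests), np.array(vars_))
    return sum((m - grand) ** 2 / v for m, v in zip(ests, vars_))

# Two identical moderator levels should show no between-group heterogeneity
same = ([0.2, 0.3], [0.01, 0.01])
q = q_between([same, same])   # ~0: no moderator effect
```

In the multiple-group SEM/MLM formulation, the same comparison is done by constraining the mean (and optionally the variance component) to be equal across groups and comparing model fit.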
NASA Technical Reports Server (NTRS)
Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.
1995-01-01
Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input to develop each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model to be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator and the other models are given.
Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza
2018-06-01
There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or on 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.
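The multi-energy feature construction can be sketched as stacking per-VMI features into one descriptor per tumour. The first-order "texture" features and the random ROI stack below are toy stand-ins for the study's actual feature set and images:

```python
import numpy as np

energies_kev = np.arange(40, 145, 5)        # 21 VMIs: 40-140 keV in 5-keV steps

def texture_features(roi):
    """Toy first-order features for one ROI on one VMI (mean, SD, median)."""
    return [roi.mean(), roi.std(), float(np.median(roi))]

def multi_energy_vector(roi_stack):
    """Concatenate per-energy features into one descriptor per tumour;
    the single-energy variant would use only the 65 keV slice."""
    return np.concatenate([texture_features(roi_stack[i])
                           for i in range(len(energies_kev))])

rng = np.random.default_rng(0)
stack = rng.normal(size=(len(energies_kev), 16, 16))  # hypothetical ROI stack
vec = multi_energy_vector(stack)                      # 21 energies x 3 features
```

The random forest classifier then trains on such vectors; the study's result is that the longer multi-energy descriptor separates Warthin tumour from pleomorphic adenoma better than the 65-keV-only one.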
Abuabara, Allan; Baratto-Filho, Flares; Aguiar Anele, Juliana; Leonardi, Denise Piotto; Sousa-Neto, Manoel Damião
2013-01-01
The success of endodontic treatment depends on the identification of all root canals. Technological advances have facilitated this process as well as the assessment of internal anatomical variations. The aim of this study was to compare the efficacy of clinical and radiological methods in locating second mesiobuccal canals (MB2) in maxillary first molars. Fifty patients referred for endodontic treatment of their maxillary first molars were submitted to the following assessments: periapical radiographic analysis; access and clinical analysis; cone-beam computed tomography (CBCT); post-CBCT clinical analysis; clinical analysis using an operating microscope; and clinical analysis after use of Start X ultrasonic inserts in teeth with negative results in all previous analyses. Periapical radiographic analysis revealed the presence of MB2 in four (8%) teeth, clinical analysis in 25 (50%), CBCT analysis in 27 (54%) and clinical analysis following CBCT and using an operating microscope in 27 (54%) and 29 (58%) teeth, respectively. The use of Start X ultrasonic inserts allowed two additional teeth with MB2 to be detected (62%). According to Vertucci's classification, 48% of the mesiobuccal canals found were type I, 28% type II, 18% type IV and 6% type V. Statistical analysis showed no significant differences (p > 0.5) in the ability of CBCT to detect MB2 canals when compared with clinical assessment with or without an operating microscope. A significant difference (p < 0.001) was found only between periapical radiography and clinical/CBCT evaluations. Combined use of different methods increased the detection of the second canal in MB roots, but without statistical difference among CBCT, operating microscope, Start X and clinical analysis.
Time-frequency analysis : mathematical analysis of the empirical mode decomposition.
DOT National Transportation Integrated Search
2009-01-01
Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...
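One sifting pass, the building block of EMD, can be sketched with spline envelopes through the local extrema. Edge effects and the stopping criterion that decides when a sifted component counts as an intrinsic mode function are ignored in this sketch:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower
    envelopes, each built by cubic-spline interpolation through extrema."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return x - (upper + lower) / 2

# A fast oscillation riding on a slow trend: one sift recovers the oscillation
t = np.arange(1000)
s = np.sin(2 * np.pi * 10 * t / 1000)
h = sift_once(s + np.linspace(0, 2, 1000))
```

Iterating the sift and peeling off intrinsic mode functions one by one is what gives EMD its adaptive, nonlinear time-frequency decomposition of nonstationary signals.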
CADDIS Volume 4. Data Analysis: Basic Analyses
Use of statistical tests to determine if an observation is outside the normal range of expected values. Details of CART, regression analysis, use of quantile regression analysis, CART in causal analysis, simplifying or pruning resulting trees.
2001-10-25
Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18, 5, 17, 6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Crash Certification by Analysis - Are We There Yet?
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.
2006-01-01
This paper addresses the issue of crash certification by analysis. This broad topic encompasses many ancillary issues including model validation procedures, uncertainty in test data and analysis models, probabilistic techniques for test-analysis correlation, verification of the mathematical formulation, and establishment of appropriate qualification requirements. This paper will focus on certification requirements for crashworthiness of military helicopters; capabilities of the current analysis codes used for crash modeling and simulation, including some examples of simulations from the literature to illustrate the current approach to model validation; and future directions needed to achieve "crash certification by analysis."
Augmenting Qualitative Text Analysis with Natural Language Processing: Methodological Study.
Guetterman, Timothy C; Chang, Tammy; DeJonckheere, Melissa; Basu, Tanmay; Scruggs, Elizabeth; Vydiswaran, V G Vinod
2018-06-29
Qualitative research methods are increasingly being used across disciplines because of their ability to help investigators understand the perspectives of participants in their own words. However, qualitative analysis is a laborious and resource-intensive process. To achieve depth, researchers are limited to smaller sample sizes when analyzing text data. One potential method to address this concern is natural language processing (NLP). Qualitative text analysis involves researchers reading data, assigning code labels, and iteratively developing findings; NLP has the potential to automate part of this process. Unfortunately, little methodological research has been done to compare automatic coding using NLP techniques and qualitative coding, which is critical to establish the viability of NLP as a useful, rigorous analysis procedure. The purpose of this study was to compare the utility of a traditional qualitative text analysis, an NLP analysis, and an augmented approach that combines qualitative and NLP methods. We conducted a 2-arm cross-over experiment to compare qualitative and NLP approaches to analyze data generated through 2 text (short message service) message survey questions, one about prescription drugs and the other about police interactions, sent to youth aged 14-24 years. We randomly assigned a question to each of the 2 experienced qualitative analysis teams for independent coding and analysis before receiving NLP results. A third team separately conducted NLP analysis of the same 2 questions. We examined the results of our analyses to compare (1) the similarity of findings derived, (2) the quality of inferences generated, and (3) the time spent in analysis. The qualitative-only analysis for the drug question (n=58) yielded 4 major findings, whereas the NLP analysis yielded 3 findings that missed contextual elements. The qualitative and NLP-augmented analysis was the most comprehensive. 
For the police question (n=68), the qualitative-only analysis yielded 4 primary findings and the NLP-only analysis yielded 4 slightly different findings. Again, the augmented qualitative and NLP analysis was the most comprehensive and produced the highest quality inferences, increasing our depth of understanding (i.e., details and frequencies). In terms of time, the NLP-only approach was quicker than the qualitative-only approach for the drug (120 vs 270 minutes) and police (40 vs 270 minutes) questions. An approach beginning with qualitative analysis followed by qualitative- or NLP-augmented analysis took longer than one beginning with NLP for both the drug (450 vs 240 minutes) and police (390 vs 220 minutes) questions. NLP provides both a foundation to code qualitatively more quickly and a method to validate qualitative findings. NLP methods were able to identify major themes found with traditional qualitative analysis but were not useful in identifying nuances. Traditional qualitative text analysis added important details and context. ©Timothy C Guetterman, Tammy Chang, Melissa DeJonckheere, Tanmay Basu, Elizabeth Scruggs, VG Vinod Vydiswaran. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.06.2018.
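The NLP-assisted coding step can be caricatured as keyword matching against a codebook. The codes and keywords below are hypothetical, and real NLP coding uses statistical classifiers rather than exact matches; the sketch only shows why automated coding finds major themes quickly while missing contextual nuance:

```python
from collections import Counter

# Hypothetical codebook: code label -> trigger keywords
CODEBOOK = {
    "cost":         {"price", "expensive", "afford", "cost"},
    "side_effects": {"dizzy", "nausea", "tired", "side"},
}

def auto_code(responses):
    """Tally code labels across free-text responses by keyword match."""
    counts = Counter()
    for text in responses:
        words = set(text.lower().split())
        for code, keys in CODEBOOK.items():
            if words & keys:
                counts[code] += 1
    return counts

tally = auto_code(["Too expensive for me",
                   "They make me dizzy and tired"])
```

In the augmented workflow the study recommends, such automatic tallies give analysts a head start, and the human pass then supplies the details and context that keyword or classifier output misses.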
NeoAnalysis: a Python-based toolbox for quick electrophysiological data processing and analysis.
Zhang, Bo; Dai, Ji; Zhang, Tao
2017-11-13
In a typical electrophysiological experiment, especially one that includes studying animal behavior, the data collected normally contain spikes, local field potentials, behavioral responses and other associated data. In order to obtain informative results, the data must be analyzed simultaneously with the experimental settings. However, most open-source toolboxes currently available for data analysis were developed to handle only a portion of the data and did not take into account the sorting of experimental conditions. Additionally, these toolboxes require that the input data be in a specific format, which can be inconvenient to users. Therefore, the development of a highly integrated toolbox that can process multiple types of data regardless of input data format and perform basic analysis for general electrophysiological experiments is incredibly useful. Here, we report the development of a Python-based open-source toolbox, referred to as NeoAnalysis, to be used for quick electrophysiological data processing and analysis. The toolbox can import data from different data acquisition systems regardless of their formats and automatically combine different types of data into a single file with a standardized format. In cases where additional spike sorting is needed, NeoAnalysis provides a module to perform efficient offline sorting with a user-friendly interface. Then, NeoAnalysis can perform regular analog signal processing, spike-train and local-field-potential analysis, and behavioral response (e.g. saccade) detection and extraction, with several options available for data plotting and statistics. In particular, it can automatically generate sorted results without requiring users to manually sort data beforehand. In addition, NeoAnalysis can organize all of the relevant data into an informative table on a trial-by-trial basis for data visualization. Finally, NeoAnalysis supports analysis at the population level. 
With the multitude of general-purpose functions provided by NeoAnalysis, users can easily obtain publication-quality figures without writing complex codes. NeoAnalysis is a powerful and valuable toolbox for users doing electrophysiological experiments.
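One of the routine spike-train analyses such a toolbox automates, the peri-stimulus time histogram, can be sketched as follows; the window and bin size are arbitrary defaults, and NeoAnalysis's own function names may differ:

```python
import numpy as np

def psth(spike_times, event_times, window=(-0.2, 0.5), bin_size=0.01):
    """Peri-stimulus time histogram: spike rate aligned to event onsets.
    Returns bin left edges (s) and firing rates (spikes/s)."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for ev in event_times:
        rel = np.asarray(spike_times) - ev          # align spikes to this event
        counts += np.histogram(rel, bins=edges)[0]
    return edges[:-1], counts / (len(event_times) * bin_size)

# Toy data: one spike 55 ms after each of 10 events -> a single 100 spikes/s bin
events = np.arange(10) * 2.0
spikes = events + 0.055
bins, rates = psth(spikes, events)
```

Trial-by-trial tables like the ones NeoAnalysis builds are essentially the per-event `rel` arrays above, sorted by experimental condition before averaging.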
On the Choice of Variable for Atmospheric Moisture Analysis
NASA Technical Reports Server (NTRS)
Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)
2002-01-01
The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences, it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis). 
A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
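The scaling described above can be sketched directly. The Tetens-type saturation formula below is a common approximation introduced here for illustration, not necessarily the one used in the assimilation system:

```python
import numpy as np

def q_sat(T_kelvin, p_hpa=1000.0):
    """Approximate saturation specific humidity (kg/kg) over water,
    via a Tetens-type formula for saturation vapor pressure."""
    t_c = T_kelvin - 273.15
    e_s = 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))   # hPa
    return 0.622 * e_s / (p_hpa - 0.378 * e_s)

def pseudo_rh(q, T_background):
    """Scale specific humidity by the *background* saturation value.
    Because the scaling uses background (not analyzed) temperature,
    analyzed q is unchanged when only temperature is updated."""
    return q / q_sat(T_background)
```

The implementation recipe in the abstract amounts to applying `pseudo_rh` to the observed-minus-background residuals before solving the analysis equation, then multiplying the analyzed increments back by `q_sat(T_background)`.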
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Valerie A.; Ogilvie, Alistair B.
2012-01-01
This report addresses the general data requirements for reliability analysis of fielded wind turbines and other wind plant equipment. Written by Sandia National Laboratories, it is intended to help develop a basic understanding of the data needed for reliability analysis from a Computerized Maintenance Management System (CMMS) and other data systems. The report provides a rationale for why this data should be collected, a list of the data needed to support reliability and availability analysis, and specific recommendations for a CMMS to support automated analysis. Though written for reliability analysis of wind turbines, much of the information is applicable to a wider variety of equipment, analysis, and reporting needs. The 'Motivation' section of the report provides a rationale for collecting and analyzing field data for reliability analysis. The benefits of this type of effort can include increased energy delivered, decreased operating costs, enhanced preventive maintenance schedules, solutions to issues with the largest payback, and identification of early failure indicators.
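The basic reliability and availability metrics such CMMS data supports can be sketched from a list of failure events; the downtimes and observation period below are hypothetical, not the report's data:

```python
def reliability_metrics(downtimes_hours, period_hours):
    """Compute MTBF, MTTR, and availability from CMMS work-order data.
    downtimes_hours: downtime hours recorded for each failure event."""
    n = len(downtimes_hours)
    downtime = sum(downtimes_hours)
    uptime = period_hours - downtime
    mtbf = uptime / n if n else float("inf")   # mean time between failures
    mttr = downtime / n if n else 0.0          # mean time to repair
    availability = uptime / period_hours
    return mtbf, mttr, availability

# Three failures over one year of operation (8760 h)
mtbf, mttr, avail = reliability_metrics([10, 6, 8], 8760)
```

Automated analysis of this kind is only possible when the CMMS records the fields the report enumerates: failure timestamps, downtime durations, affected subsystem, and work performed.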
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
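The "delta method before pooling" strategy can be sketched for the simplest IV estimator, the Wald ratio. The study values below are hypothetical, and the covariance term of the delta expansion is dropped for brevity:

```python
def wald_ratio(beta_zy, se_zy, beta_zx, se_zx):
    """IV (Wald ratio) estimate gene-outcome / gene-exposure, with a
    first-order delta-method variance (covariance term omitted)."""
    est = beta_zy / beta_zx
    var = (se_zy**2 / beta_zx**2) + (beta_zy**2 * se_zx**2 / beta_zx**4)
    return est, var

def pool_two_stage(estimates):
    """Two-stage meta-analysis: fixed-effect inverse-variance pooling
    of per-study IV estimates computed with their delta-method variances."""
    w = [1 / v for _, v in estimates]
    return sum(wi * e for wi, (e, _) in zip(w, estimates)) / sum(w)

# Hypothetical per-study gene-outcome and gene-exposure associations
studies = [wald_ratio(0.10, 0.02, 0.25, 0.03),
           wald_ratio(0.12, 0.03, 0.30, 0.04)]
pooled_iv = pool_two_stage(studies)
```

A one-stage meta-analysis would instead pool the individual-level datasets and fit a single IV model, which is the alternative the abstract finds preferable under weak instruments.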