Examples of the use of optimisation techniques in reactor structural analysis
2003-03-01
34. Geometric optimisation (fixed architecture). Unlike the automotive sector and airframe manufacturers, most reactor components ... uses nonlinear material constitutive laws together with large-displacement assumptions. The optimisation study consists in minimising ... a simple disc, and it was decided to select three parameters that influence failure: the thickness of the disc web E1, the height L3 and the
NASA Astrophysics Data System (ADS)
Ghaddar, A.; Sinno, N.
2005-05-01
The complexity of queueing phenomena in computer and telecommunication systems requires their simulation with Markovian models for performance measurement: measuring queueing delays at routers for the computer model, and studying telephone call handling for the telephone-circuit model. Optimising the numerical methods used to solve the equations of these two models makes it possible to identify criteria for fast convergence towards the stationary states corresponding to these measurements.
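The stationary states mentioned above can be computed directly for a small birth-death model. A minimal sketch, using a hypothetical M/M/1/K queue (not the thesis's models): the generator matrix Q is assembled and the balance equations pi·Q = 0 are solved with one equation replaced by the normalisation constraint.

```python
import numpy as np

def mm1k_stationary(lam, mu, K):
    """Stationary distribution of an M/M/1/K queue (birth-death CTMC).

    Solves pi @ Q = 0 with sum(pi) = 1 by replacing one balance
    equation with the normalisation constraint."""
    Q = np.zeros((K + 1, K + 1))
    for n in range(K + 1):
        if n < K:
            Q[n, n + 1] = lam          # arrival
        if n > 0:
            Q[n, n - 1] = mu           # service completion
        Q[n, n] = -Q[n].sum()          # diagonal: rows of Q sum to zero
    A = Q.T.copy()
    A[-1, :] = 1.0                     # normalisation row
    b = np.zeros(K + 1)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

pi = mm1k_stationary(lam=0.8, mu=1.0, K=20)
mean_queue = sum(n * p for n, p in enumerate(pi))   # mean number in system
```

For this birth-death chain the result is the truncated geometric distribution pi_n proportional to (lam/mu)^n, which gives a direct check on the numerical solution.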
Design and optimisation of a composite skin for a morphing wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new aeronautical technologies. The MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born from this perspective. The project aims to design an active morphing wing that improves laminar flow and thereby reduces aircraft fuel consumption and emissions. The research carried out made it possible to design and optimise an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimisation method was developed with the objective of minimising the mass of the composite skin while ensuring that, through active control of the deformable surface, it conforms to the desired aerodynamic profiles. The optimisation process also included strength, stability and stiffness constraints on the composite skin. After optimisation, the optimised skin was simplified to ease manufacturing and to comply with the design rules of Bombardier Aeronautique. This optimisation process produced a composite skin whose shape deviations, or errors, were greatly reduced so as to match the optimised aerodynamic profiles as closely as possible. Aerodynamic analyses based on these shapes predicted substantial improvements in laminarity. A series of analytical validations was then carried out to verify the structural integrity of the composite skin, following the methods generally used by Bombardier Aeronautique. First, a comparative finite-element analysis validated that the stiffness of the morphing wing was equivalent to that of the original wing section.
The finite-element model was then looped with spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aeronautique. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
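The mass-minimisation-under-stiffness-constraint problem at the heart of the skin optimisation can be illustrated on a toy scale. A minimal sketch, assuming a hypothetical single panel with an assumed deflection model delta(t) = C/t^3 (none of these values come from the thesis), solved by a penalised coarse-to-fine grid search:

```python
# Hypothetical single-panel stand-in for the skin-mass optimisation:
# minimise mass(t) subject to a stiffness constraint delta(t) <= DELTA_MAX.
RHO, AREA = 1600.0, 0.5          # assumed CFRP density (kg/m^3), panel area (m^2)
C, DELTA_MAX = 2.0e-9, 5e-3      # assumed deflection model delta = C/t**3, limit (m)

def mass(t):
    return RHO * AREA * t        # panel mass (kg) as a function of thickness t (m)

def deflection(t):
    return C / t ** 3            # assumed deflection under the design load

def penalised_cost(t):
    # large penalty when the stiffness constraint is violated
    violation = max(0.0, deflection(t) - DELTA_MAX)
    return mass(t) + 1e6 * violation

def minimise(lo=1e-4, hi=2e-2, passes=3, n=200):
    for _ in range(passes):                  # coarse-to-fine grid refinement
        step = (hi - lo) / n
        best = min((lo + i * step for i in range(n + 1)), key=penalised_cost)
        lo, hi = max(best - step, 1e-4), best + step
    return best

t_opt = minimise()   # analytic optimum: (C / DELTA_MAX) ** (1/3), about 7.37e-3 m
```

Because mass grows with thickness while deflection shrinks, the optimum sits exactly on the constraint boundary, which gives a closed-form value to check the search against.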
Optimisation of vertical trajectories by the harmony search method
NASA Astrophysics Data System (ADS)
Ruby, Margaux
In the face of global warming, solutions to reduce CO2 emissions are urgently needed. Trajectory optimisation is one way to reduce fuel consumption during a flight. To determine the optimal trajectory of an aircraft, several algorithms have been developed. Their goal is to minimise the total cost of a flight, which is directly related to fuel consumption and flight time. Another parameter, the cost index, also enters the definition of flight cost. Fuel consumption is supplied through performance data for each flight phase. In this thesis, the phases of a complete flight are studied: climb, cruise and descent. "Step climbs", defined as 2,000 ft climbs during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic, harmony search, which combines two types of search: local search and population-based search. It is inspired by observing musicians in a concert, or more precisely by music's ability to find its best harmony, which in optimisation terms is the lowest cost. Various inputs such as the aircraft weight, the destination, the initial aircraft speed and the number of iterations must, among others, be supplied to the algorithm so that it can determine the optimal solution, defined as: [climb speed, altitude, cruise speed, descent speed]. The algorithm was developed in MATLAB and tested for several destinations and several weights for a single aircraft type.
For validation, the results obtained by this algorithm were first compared with those of an exhaustive search over all possible combinations. The exhaustive search provides the global optimum; the solution of our algorithm must therefore come as close as possible to it, to show that the algorithm yields results near the global optimum. A second comparison was made between the algorithm's results and those of the Flight Management System (FMS), an avionics system located in the cockpit that provides the route to follow in order to optimise the trajectory. The goal is to show that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
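The harmony search mechanics described above (harmony memory, memory considering rate, pitch adjustment) can be sketched compactly. The thesis uses MATLAB with real performance data; the sketch below is a Python stand-in with a toy quadratic "flight cost" whose optimum is known, purely to show the algorithm's structure:

```python
import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal harmony search: keep a memory of `hms` harmonies, improvise a
    new one each iteration, and replace the worst if the new one is better."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    mem.sort(key=cost)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                    # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        if cost(new) < cost(mem[-1]):                # replace worst harmony
            mem[-1] = new
            mem.sort(key=cost)
    return mem[0]

# Toy quadratic cost with a known optimum (illustrative values only):
# solution vector = [climb speed, altitude, cruise speed, descent speed]
target = [280.0, 35000.0, 450.0, 290.0]
bounds = [(250, 330), (29000, 41000), (400, 500), (250, 330)]
cost = lambda v: sum(((a - b) / (hi - lo)) ** 2
                     for a, b, (lo, hi) in zip(v, target, bounds))
best = harmony_search(cost, bounds)
```

In the thesis the cost function would instead be evaluated from the aircraft performance tables; the memory/improvisation loop is unchanged.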
NASA Astrophysics Data System (ADS)
Haguma, Didier
It is now established that climate change will affect water resources. The situation is of concern for the hydroelectric power sector, since water is the driving force behind this form of energy. It will be important to adapt the operating rules and/or the installations of water-resource systems in order to minimise negative impacts and/or to capitalise on the positive effects that climate change may bring. This research focuses on developing a management method for water-resource systems that takes climate projections into account, to better anticipate the impacts of a changing climate on hydropower production and to establish climate-change adaptation strategies. The study area is the Manicouagan river basin, located in central Quebec. A new approach to optimising water resources in the context of climate change is proposed. The approach treats climate seasonality and non-stationarity explicitly in order to represent the uncertainty associated with an ensemble of climate projections. It integrates climate projections into the water-resource optimisation problem for long-term management of hydro systems, and supports the development of strategies for adapting these systems to climate change. The results show that the impacts of climate change on the hydrological regime of the Manicouagan basin would be an earlier, attenuated spring flood and an increase in annual inflow volume. Adapting the system's operating rules would increase hydropower production.
Nevertheless, a loss of performance of the existing installations would be observed, owing to increased non-productive spills in the future climate. Structural adaptation strategies were analysed to increase the generating capacity and discharge capacity of certain power plants in order to improve system performance. An economic analysis made it possible to select the best adaptation measures and to determine the right time to implement them. The research results provide water-system managers with a tool to better anticipate the consequences of climate change on hydropower production, including plant efficiency, non-productive spills and the most opportune time to modify the systems. Keywords: water-resource systems, climate change adaptation, Manicouagan river
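The core trade-off here, releasing water now versus storing it against future inflows and spill losses, is the classic reservoir-operation problem. A toy deterministic sketch (tiny discrete state space, illustrative inflows, fixed head; nothing from the Manicouagan system) solved by backward dynamic programming:

```python
# Tiny deterministic stand-in for reservoir-release optimisation:
# backward dynamic programming over discrete storage levels, maximising
# energy (proportional to release), with inflow beyond capacity spilled.
def optimise_releases(inflows, s_max=4, r_max=3):
    states = range(s_max + 1)
    value = {s: 0.0 for s in states}       # terminal value: no future energy
    policy = []
    for q in reversed(inflows):            # backward induction over stages
        new_value, stage = {}, {}
        for s in states:
            best, best_r = -1.0, 0
            for r in range(min(s + q, r_max) + 1):
                s_next = min(s + q - r, s_max)     # storage above s_max spills
                v = r + value[s_next]              # energy ~ release (fixed head)
                if v > best:
                    best, best_r = v, r
            new_value[s], stage[s] = best, best_r
        value = new_value
        policy.append(stage)
    policy.reverse()                        # policy[t][s] = release at stage t
    return value, policy

inflows = [3, 1, 0, 2]                      # spring flood, summer, winter, autumn
value, policy = optimise_releases(inflows)
# value[0] = best total energy from an empty reservoir; here all inflow
# can be turbined without spill, so it equals sum(inflows)
```

The thesis extends this idea to an ensemble of climate projections with explicit seasonality and non-stationarity; the toy keeps only the storage-release recursion.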
Thermal optimisation of injection moulds built by generative (additive) processes
NASA Astrophysics Data System (ADS)
Boillat, E.; Glardon, R.; Paraschivescu, D.
2002-12-01
One of the most remarkable potentials of generative manufacturing processes, such as selective laser sintering, is their ability to produce injection moulds directly equipped with conformal cooling channels, perfectly adapted to the cavities. For the injection-moulding industry to take full advantage of this new opportunity, mould-makers need simulation software capable of evaluating the productivity and quality gains achievable with better-adapted cooling systems. Such software should also be able, where appropriate, to design the optimal cooling system in situations where the injection cavity is complex. Given the lack of available tools in this field, the aim of this article is to propose a simple model of injection moulds. This model allows different cooling strategies to be compared and can be coupled with an optimisation algorithm.
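The kind of simple mould model the article calls for can be illustrated with a lumped-capacitance cooling estimate. All numbers below are assumed for illustration (they are not the article's model): cooling time follows t = (rho·c·V / (h·A)) · ln((T_melt − T_coolant)/(T_eject − T_coolant)), and a conformal channel layout is represented only as a higher effective heat-transfer coefficient h.

```python
import math

# Lumped-capacitance sketch of part cooling in an injection mould.
rho, c = 905.0, 1900.0        # assumed polypropylene density (kg/m^3), c_p (J/kg K)
V, A = 2e-5, 1e-2             # assumed part volume (m^3) and cooled surface (m^2)
T_melt, T_eject, T_cool = 230.0, 80.0, 30.0   # temperatures (deg C)

def cooling_time(h):
    """Time to cool from T_melt to T_eject for coefficient h (W/m^2 K)."""
    tau = rho * c * V / (h * A)                       # thermal time constant
    return tau * math.log((T_melt - T_cool) / (T_eject - T_cool))

t_conventional = cooling_time(h=250.0)   # straight drilled channels (assumed h)
t_conformal = cooling_time(h=600.0)      # conformal channels, better coverage (assumed h)
```

Coupling such a model to an optimiser then amounts to searching over channel parameters that change h (and coolant temperature) to minimise cycle time.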
Comparison of parameter identification methods for an induction machine
NASA Astrophysics Data System (ADS)
Bellaaj-Mrabet, N.; Jelassi, K.
1998-07-01
Interest in genetic algorithms (GAs) is expanding rapidly; they are adaptive methods increasingly used to solve certain optimisation problems. This paper first applies a GA to the identification of induction machine parameters, and then compares its performance with classical methods such as maximum likelihood and the conventional electrotechnical method based on no-load and short-circuit tests. These methods are applied to three induction motors of different power ratings, and the results are compared against a set of criteria, allowing conclusions on the validity and performance of each method.
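The GA-based identification idea can be shown on a toy scale. The sketch below is a hypothetical stand-in, not the paper's machine model: a tiny real-coded GA fits two parameters (R, tau) of an exponential current decay to reference data, in the same way a GA would fit motor parameters to measured responses.

```python
import math
import random

def ga_identify(fitness, bounds, pop_size=40, gens=120, seed=2):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with 2-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    def clip(x, lo, hi):
        return min(max(x, lo), hi)
    for _ in range(gens):
        scored = sorted(pop, key=fitness)
        nxt = scored[:2]                                   # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(scored, 3), key=fitness)   # tournament
            p2 = min(rng.sample(scored, 3), key=fitness)
            child = [clip(a + rng.uniform(-0.1, 1.1) * (b - a), lo, hi)
                     for a, b, (lo, hi) in zip(p1, p2, bounds)]
            if rng.random() < 0.2:                         # mutation
                d = rng.randrange(len(bounds))
                lo, hi = bounds[d]
                child[d] = clip(child[d] + rng.gauss(0, 0.05 * (hi - lo)), lo, hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical identification target: i(t) = (1/R) * exp(-t/tau).
R_true, tau_true = 2.5, 0.04
ts = [k * 0.005 for k in range(40)]
ref = [(1 / R_true) * math.exp(-t / tau_true) for t in ts]
def fitness(p):
    R, tau = p
    return sum((ref[k] - (1 / R) * math.exp(-ts[k] / tau)) ** 2
               for k in range(len(ts)))
R_est, tau_est = ga_identify(fitness, bounds=[(0.5, 10.0), (0.005, 0.2)])
```

The fitness function is the squared error between reference and model responses; in the paper this role is played by the mismatch between measured and simulated machine behaviour.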
Durable concretes based on rice husk ash
NASA Astrophysics Data System (ADS)
Wilson, William
Nowadays, sustainable development has become a necessity in every sphere of activity, and particularly in the field of concrete. The development of societies inevitably involves more infrastructure; concrete is the main material used, and its considerable environmental footprint would benefit from being reduced. Industry and research are very active in this area, and the science of durable concrete is expanding rapidly. In this context, one avenue is the use of industrial or agricultural residues as supplementary cementitious materials, which partially replace highly polluting cement while producing concretes with better durability. Rice husk ash (RHA) thus shows a cementitious potential similar to the best supplementary cementitious materials currently in use, but concrete applications of this new material remain little developed to date. The present project was therefore designed to illustrate the potential of RHA, on the one hand in industrialised countries, to improve the durability of high-performance concretes (HPC) and the fresh-state properties of self-consolidating concretes (SCC); and on the other hand in developing countries, to democratise durable concretes produced with technologies adapted to local conditions. A first phase, carried out with high-quality RHA (RHAI), was devoted to applications in industrialised countries. Characterisation of the RHAI indicated a composition of 90% amorphous silica, particles slightly coarser than cement, and a very porous, absorbent microstructure. To compensate for this water absorption, optimisation of the superplasticizer type determined that the use of the
NASA Astrophysics Data System (ADS)
Comot, Pierre
The aeronautical industry is studying the possibility of using brazed joints structurally, with a view to reducing weight and cost. Developing a fast, reliable and inexpensive method to evaluate the structural integrity of such joints therefore appears indispensable. The mechanical strength of a brazed joint depends mainly on the amount of brittle phase in its microstructure. Guided ultrasonic waves can detect this type of phase when coupled with a spatio-temporal measurement, and their nature allows the inspection of joints with complex shapes. This thesis therefore focuses on developing a technique based on guided ultrasonic waves for inspecting Inconel 625 lap brazed joints with BNi-2 filler metal. First, a finite-element model of the joint was used to simulate ultrasound propagation and optimise the inspection parameters; the simulation also demonstrated the feasibility of the technique for detecting the amount of brittle phase in this type of joint. The optimised parameters are the excitation signal shape, its centre frequency and the excitation direction. The simulations showed that the energy of the ultrasonic wave transmitted through the joint, as well as the reflected energy, both extracted from the dispersion curves, were proportional to the amount of brittle phase present in the joint; the method therefore makes it possible to identify the presence or absence of a brittle phase in this type of joint. Experiments were then carried out on three typical samples with different amounts of brittle phase in the joint; to obtain such samples, different brazing times were used (1, 60 and 180 min).
For this purpose, an automated test bench was developed, allowing an analysis similar to that used in simulation. The experimental parameters were chosen in line with the optimisation performed in the simulations and after a first optimisation of the experimental procedure. Finally, the experimental results confirm the simulation results and demonstrate the potential of the developed method.
NASA Astrophysics Data System (ADS)
Corbeil Therrien, Audrey
Positron emission tomography (PET) is a valuable tool in preclinical research and medical diagnosis. The technique yields a quantitative image of specific metabolic functions through the detection of annihilation photons. Detecting these photons involves two components. First, a scintillator converts the energy of the 511 keV photon into photons in the visible spectrum. Then a photodetector converts the light into an electrical signal. Recently, single-photon avalanche diodes (SPADs) arranged in arrays have attracted much interest for PET. These arrays form sensitive, robust, compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the array and readout-electronics parameters must be optimised to reach the best performance for PET. Optimising the array quickly becomes difficult, because the various parameters interact in complex ways with the avalanche and noise-generation processes. Moreover, readout electronics for SPAD arrays are still rudimentary, and it would be profitable to analyse different readout strategies. The most economical way to address this question is to use a simulator to converge towards the configuration giving the best performance. This thesis presents the development of such a simulator. It models the behaviour of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources: thermal noise, correlated spurious triggering (afterpulsing) and optical crosstalk. The simulator also makes it possible to test and compare new readout-electronics approaches better suited to this type of detector.
Ultimately, the simulator aims to quantify the impact of photodetector parameters on energy and timing resolution, and thereby to optimise the performance of the SPAD array. For example, increasing the active-area fill factor improves performance, but only up to a point: other phenomena related to the active area, such as thermal noise, then degrade the result. The simulator allows a compromise between these two extremes. Simulations with the initial parameters show a detection efficiency of 16.7%, an energy resolution of 14.2% FWHM and a timing resolution of 0.478 ns FWHM. Finally, although aimed at PET, the proposed simulator can be adapted to other applications by changing the photon source and adjusting the performance targets. Keywords: photodetectors, single-photon avalanche diodes, semiconductors, positron emission tomography, simulation, modelling, single-photon detection, scintillators, quenching circuit, SPAD, SiPM, Geiger-mode avalanche photodiodes
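The probabilistic-modelling approach can be sketched for one of the noise sources. The toy Monte Carlo below (all parameter values assumed, and a deliberately simplified branching-process crosstalk model rather than the thesis's physics-based one) draws Poisson dark counts per integration window and lets each avalanche trigger a further cell with some probability:

```python
import random

def simulate_spad_counts(n_cells, dark_rate_hz, window_s,
                         p_crosstalk, n_frames=20000, seed=3):
    """Monte Carlo sketch of a SPAD array in the dark: Poisson primary dark
    counts, each avalanche triggering one more cell with probability
    p_crosstalk (simplified branching-process crosstalk model)."""
    rng = random.Random(seed)
    mean_primaries = n_cells * dark_rate_hz * window_s
    totals = []
    for _ in range(n_frames):
        # Poisson(mean_primaries) via unit-rate exponential inter-arrivals
        k, t = 0, rng.expovariate(1.0)
        while t < mean_primaries:
            k += 1
            t += rng.expovariate(1.0)
        total, frontier = k, k
        while frontier:                      # secondary avalanches (crosstalk)
            nxt = sum(1 for _ in range(frontier) if rng.random() < p_crosstalk)
            total += nxt
            frontier = nxt
        totals.append(total)
    return sum(totals) / n_frames

mean_counts = simulate_spad_counts(n_cells=1000, dark_rate_hz=100.0,
                                   window_s=1e-4, p_crosstalk=0.2)
# branching-process expectation: primaries / (1 - p) = 10 / 0.8 = 12.5
```

Such a model makes the fill-factor trade-off quantitative: more active area raises both the photon detection term and the dark-count term, and the simulator searches for the compromise.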
NASA Astrophysics Data System (ADS)
Antony, R.; Moliton, A.; Ratier, B.
1998-06-01
Light-emitting diodes based on the ITO/Alq3/Ca-Al structure show enhanced quantum efficiency when the Alq3 active layer is deposited by IBAD (Ion Beam Assisted Deposition): with iodine ions, the optimum (internal quantum efficiency increased by an order of magnitude, a factor of 10) is obtained for an ion energy of 100 eV.
Derimay, François; Souteyrand, Geraud; Motreff, Pascal; Rioufol, Gilles; Finet, Gerard
2017-10-13
The rePOT (proximal optimisation technique) sequence proved significantly more effective than final kissing balloon (FKB) with two drug-eluting stents (DES) in a bench test. We sought to validate efficacy experimentally in a large range of latest-generation DES. On left main fractal coronary bifurcation bench models, five samples of each of the six main latest-generation DES (Coroflex ISAR, Orsiro, Promus PREMIER, Resolute Integrity, Ultimaster, XIENCE Xpedition) were implanted on rePOT (initial POT, side branch inflation, final POT). Proximal elliptical ratio, side branch obstruction (SBO), stent overstretch and strut malapposition were quantified on 2D and 3D OCT. Results were compared to FKB with Promus PREMIER. Whatever the design, rePOT maintained vessel circularity compared to FKB: elliptical ratio, 1.02±0.01 to 1.04±0.01 vs. 1.26±0.02 (p<0.05). Global strut malapposition was much lower: 2.6±1.4% to 0.1±0.2% vs. 40.4±8.4% for FKB (p<0.05). However, only Promus PREMIER and XIENCE Xpedition achieved significantly less SBO: respectively, 5.6±3.5% and 10.0±5.3% vs. 23.5±5.7% for FKB (p<0.05). Platform design differences had little influence on the excellent results of rePOT versus FKB. RePOT optimised strut apposition without proximal elliptical deformation in the six main latest-generation DES. Thickness and design characteristics seemed relevant for optimising SBO.
Ouédraogo, Solange Odile Yugbaré; Yougbaré, Nestor; Kouéta, Fla; Dao; Ouédraogo, Moussa; Lougué, Claudine; Ludovic, Kam; Traoré, Ramata Ouédraogo; Yé, Diarra
2015-01-01
Introduction: This study analyses newborn care under the national strategy subsidising deliveries and emergency obstetric and neonatal care, put in place by the government of Burkina Faso in 2006. Methods: We conducted a descriptive and analytical study comprising a retrospective component from 1 January 2006 to 31 December 2010, covering the epidemiological and clinical parameters of hospitalised newborns, and a prospective component from 3 October 2011 to 29 February 2012, based on interviews with the newborns' caregivers and with health-service providers. Results: Hospitalisations increased by 43.65% between 2006 and 2010. The hospital neonatal mortality rate, initially 11.04%, fell by an average of 3.95% per year. The interviews covered 110 caregivers and 76 providers. Most providers (97.44%) and caregivers (88.18%) were aware of the strategy but did not know its exact definition. Providers (94.74%) reported stock-outs of medicines and medical consumables and breakdowns of laboratory and imaging equipment. Among caregivers, 89% said they were satisfied with the services offered and 72.89% found the costs affordable, but mentioned transport difficulties. Conclusion: The subsidy has improved newborn care, but optimising it would require better information and involvement of all stakeholders. PMID:26161166
NASA Astrophysics Data System (ADS)
Mariotte, F.; Sauviac, B.; Héliot, J. Ph.
1995-10-01
After a brief overview of the concept of electromagnetic chirality, this paper presents a numerical model of the effective properties of isotropic composites with metallic chiral inclusions: the permittivity, permeability and chirality parameter are computed as functions of frequency. The theoretical results are validated, step by step, against microwave measurements on chiral composites of different kinds. The application of such media to Radar Cross-Section (RCS) management and control is then discussed. Chiral inclusions appear to make impedance matching at the air-absorber interface possible and may lead to attractive shields with lower reflectivity and larger bandwidth. However, optimising the material characteristics needed to obtain a specific absorber remains a difficult task.
Fabrication of non-volatile single-electron memory by a floating-nanogate approach
NASA Astrophysics Data System (ADS)
Guilmain, Marc
Single-electron transistors (SETs) are nanometre-scale devices that control one electron at a time and therefore consume very little energy. One complementary application of SETs attracting attention is their use in memory circuits. A non-volatile single-electron memory (SEM) has the potential to operate at gigahertz frequencies, which would allow it to replace both FLASH-type non-volatile memories and DRAM-type volatile memories. A SEM chip would therefore ultimately allow the reunification of the two main types of computer memory. This thesis addresses the fabrication of non-volatile single-electron memories. The proposed fabrication process is based on the nanodamascene process developed by C. Dubuc et al. at the Universite de Sherbrooke. One advantage of this process is its compatibility with the back-end-of-line (BEOL) of CMOS circuits; it has the potential to build several layers of very dense memory circuits on top of CMOS wafers. This document presents, among other things, a single-electron memory simulator and simulation results for different structures. The optimisation of the single-electron device fabrication process and the realisation of different simple SEM architectures are discussed. Optimisations were made at several levels: electron-beam lithography, oxide etching, titanium lift-off, metallisation and CMP planarisation. Electrical characterisation allowed an in-depth study of devices with Ti/TiO2 junctions and showed that these materials are not suitable. In contrast, a SET with TiN/Al2O3 junctions was successfully fabricated and characterised at low temperature.
This demonstration shows the potential of the fabrication process and of atomic layer deposition (ALD) for fabricating single-electron memories. Keywords: single-electron transistor (SET), single-electron memory (SEM), tunnel junction, retention time, nanofabrication, electron-beam lithography, chemical-mechanical planarisation.
2005-10-01
... the optimisation of the most promising compounds for drug discovery. The CAF-MS technique is capable of locating and characterising ... this cutting-edge technique can thus steer biotechnology research through the key areas and eliminate many years of trial-and-error experimentation ... This will make it possible to rationalise concept development and refinement in fields such as bio-detection and identification
NASA Astrophysics Data System (ADS)
Meghzili, B.; Medjram, M. S.; Achour, S.
2005-05-01
Because of the drought that prevailed for a decade, the drinking-water treatment plant of the city of Skikda, Algeria, uses a mixture of surface water (dam) and groundwater (borehole) to make up the water deficit. Physico-chemical analyses of these waters show the presence of micropollutants, notably mercury, at a concentration of 0.035 mg/l in the surface water and 0.02 mg/l in the groundwater. The results also show that the organic matter concentration exceeds WHO standards for both sources. To reduce the effects of this pollution, we calculated the required doses of the various reagents on the basis of optimisation tests carried out in the laboratory. The results led us to conclude that doses of 30 to 60 mg/l of aluminium sulphate are needed for good removal of turbidity and organic matter, and that powdered activated carbon reduces the mercury content below the admissible threshold. Using an aid (quicklime) improves the results, especially for turbidity (4 mg/l). Pre-chlorination to the break point appears worthwhile to improve the flocculation stage.
NASA Astrophysics Data System (ADS)
Hentabli, Kamel
This research is part of the Active Control Technology project between the Ecole de Technologie Superieure and the aircraft manufacturer Bombardier Aeronautique. The goal is to design multivariable, robust control strategies for aircraft dynamic models. These control strategies should give the aircraft high performance and satisfy the desired handling qualities, namely good manoeuvrability, good stability margins, and damping of the phugoid and short-period motions. We first focused on LTI synthesis methods, specifically the H-infinity approach and mu-synthesis, and subsequently paid particular attention to LPV control techniques. To carry out this work, we adopted a frequency-domain approach, typically H-infinity. This approach is particularly attractive because the synthesis model is built directly from the various design-requirement specifications: these are translated into frequency-domain templates, corresponding to the input and output weightings found in classical H-infinity synthesis. We also used a linear fractional transformation (LFT) representation, judged better suited to account for the different types of uncertainty that can affect the system; this representation is moreover very appropriate for robustness analysis via mu-analysis tools. Furthermore, to optimise the trade-off between robustness and performance specifications, we opted for a two-degree-of-freedom control structure with a reference model. Finally, these techniques are illustrated on realistic applications, demonstrating the relevance and applicability of each.
Keywords: flight control, handling qualities and manoeuvrability, robust control, H-infinity approach, mu-synthesis, linear parameter-varying systems, gain scheduling, linear fractional transformation, linear matrix inequality.
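The frequency-domain templates mentioned above can be made concrete with a tiny numerical example. The sketch below (toy first-order plant and an assumed performance weight, not the thesis's aircraft models) estimates the H-infinity norm of the weighted sensitivity W1·S by a frequency sweep, the quantity that H-infinity synthesis pushes below 1 by choosing the controller:

```python
import numpy as np

def weighted_sensitivity_norm(k_gain):
    """Peak of |W1(jw) S(jw)| over a log-spaced frequency grid, i.e. a
    frequency-sweep estimate of the H-infinity norm of W1*S."""
    s = 1j * np.logspace(-3, 3, 4000)        # frequency grid (rad/s)
    P = 1.0 / (s + 1.0)                      # toy plant P(s) = 1/(s+1)
    S = 1.0 / (1.0 + P * k_gain)             # sensitivity with proportional gain
    W1 = (s / 2.0 + 1.0) / (s + 0.01)        # assumed low-frequency performance weight
    return float(np.max(np.abs(W1 * S)))

# Higher loop gain pushes |S| down at low frequency, lowering the norm:
norm_low_gain = weighted_sensitivity_norm(1.0)
norm_high_gain = weighted_sensitivity_norm(10.0)
```

In a real synthesis the controller is not a scalar gain but the output of an H-infinity algorithm; the weighted-norm evaluation, however, is exactly this computation.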
NASA Astrophysics Data System (ADS)
Le Blanc, Catherine
We present the initial study, the realisation and the characterisation of a high-intensity femtosecond laser chain. This chain can produce intensities above 10^18 W/cm2 on target, at a 10 Hz repetition rate. We first present the fundamental principles of chirped-pulse propagation: group velocity dispersion, self-phase modulation, self-focusing and gain saturation. After describing the spectroscopic properties of titanium-doped sapphire (Ti:S), we discuss near-infrared femtosecond oscillators suitable for amplification in a Ti:S medium, and describe our home-made Kerr-lens mode-locked femtosecond Ti:S oscillator. The high-intensity laser chain is based on the Chirped Pulse Amplification concept, which consists in stretching the pulse before amplification, to avoid nonlinear effects such as self-focusing or breakdown, and then recompressing it to its initial duration. We have developed two compact and efficient multipass amplifiers for femtosecond chirped-pulse amplification. With only these two devices, we obtain an amplification factor of 10^8, corresponding to a peak power of ~0.5 terawatt after compression. We analyse in detail the performance of this system and its advantages in terms of high quantum yield (0.3), flexibility and optical quality. Some spectral distortion observed on chirped pulses is simply explained by a gain-saturation model. High-dynamic-range temporal control of the pulses is crucial for interaction experiments; for this reason, we have developed a third-order sampling autocorrelator able to measure 100 fs pulses over more than 8 orders of magnitude. We compare the performance obtained with that of other femtosecond systems and analyse the best way to increase the energy, reduce the pulse duration and optimise focusing in order to reach the 10^19 W/cm2 regime.
This thesis presents the preliminary studies, construction and characterisation of a so-called "high intensity" femtosecond laser chain. The installation can deliver on-target intensities above 10^18 W/cm2 at a repetition rate of 10 shots per second. We first present the fundamental principles of stretched-pulse propagation in an amplifying medium, highlighting the main mechanisms at play in this process: group velocity dispersion, self-phase modulation, self-focusing and gain saturation. After presenting the spectroscopic characteristics of titanium-doped sapphire, we analyse the possible choices of initial source capable of emitting femtosecond pulses in the near infrared. In particular, the principle and operation of the Kerr-lens mode-locked titanium-doped sapphire oscillator that we built for this purpose are described in detail. Amplification up to powers of the order of a terawatt relies on the "CPA" concept of chirped pulse amplification. We explore here the route of multipass amplifiers, following configurations originally developed for dye amplifiers. Particular attention was paid to alignment procedures, reliability, stability and the quality of the beam's spatial profile. In addition, a model accounts very well for saturation effects in the amplifiers. The pulses obtained after compression have an energy of about 60 mJ and a duration of about 130 fs. The next part is devoted to a study of stretching and recompression problems, in an attempt to explain the reasons for this imperfect recompression. Finally, an even more crucial parameter for interaction physics on solid targets without creating a preplasma is the pulse contrast.
We therefore describe the measurement apparatus, capable of performing autocorrelations with a dynamic range exceeding 10^8, which allows a possible energy pedestal to be observed at that level. Propagation problems for ultra-intense light pulses are then addressed, in order to optimise beam focusing up to intensities of 10^18 W/cm2. In a final discussion, the performance of this installation is compared with that of systems developed elsewhere, and we carry out a more prospective study of the steps to be taken to increase the amplified energy, shorten the pulse duration and optimise the focusing, the goal being the 10^19 W/cm2 regime.
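The ~0.5 TW peak power quoted above can be checked against the compressed-pulse figures given in the abstract (about 60 mJ and 130 fs); a minimal arithmetic sketch, ignoring the exact pulse shape:

```python
# Rough peak-power estimate from the compressed-pulse figures in the abstract.
energy_j = 60e-3        # pulse energy after compression (J)
duration_s = 130e-15    # pulse duration (s)

peak_power_tw = energy_j / duration_s / 1e12  # shape factor neglected
print(f"{peak_power_tw:.2f} TW")  # 0.46 TW, consistent with the quoted ~0.5 TW
```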
NASA Astrophysics Data System (ADS)
Laksimi, Abdelouahed; Bounouas, Lahsen; Benmedakhene, Salim; Azari, Zitoun; Imad, Abdellatif
To obtain good mechanical performance from a composite material, it is important to optimise the fibre content as well as the fibre/matrix interface quality, both of which influence damage. The main objective of this study is to determine the influence of structural parameters on damage evolution in two types of glass-fibre-reinforced polypropylene composites. Alongside a classical damage-mechanics approach based on load-unload tensile tests, acoustic emission is used to detect and follow damage mechanisms during loading. Fractographic analysis supports the assumptions and conclusions of this study.
New Morphologies of Electrospun Thermosensitive Polymer Fibres
NASA Astrophysics Data System (ADS)
Sta, Marwa
This thesis presents a study on the possibility of obtaining membranes based on thermosensitive polymers with different morphologies for drug-delivery applications. The membranes were obtained by electrospinning poly(N-vinylcaprolactam) (PNVCL), a thermosensitive polymer, either alone or blended with polycaprolactone (PCL), a biodegradable polymer. The process parameters and the properties of the spinning solution were optimised in order to produce smooth, continuous, bead-free PNVCL fibres. PNVCL/PCL blend solutions were then prepared following four different preparation methods, based on distinct solvents, distilled water and chloroform, with polymer concentrations of 42 wt% and 30 wt% respectively. These solutions were electrospun using the process parameters corresponding to the best conditions found for electrospinning PNVCL. Next, ketoprofen, a hydrophobic drug, was added to the PNVCL and to the PNVCL/PCL blend before electrospinning, in order to study the ability of the PNVCL and blend fibres to retain the hydrophobic drug and to release it. Finally, core-shell fibres were obtained by coaxial electrospinning, using an aqueous PNVCL/PCL blend solution (42 wt%) for the shell and an aqueous PEG solution (30 wt%) for the core. The morphologies of the resulting membranes and of their fibres were characterised by scanning electron microscopy (SEM). The lower critical solution temperature (LCST) of these fibres, the temperature below which the polymer is soluble in water and above which it precipitates, was evaluated by differential scanning calorimetry (DSC). The encapsulation efficiency (EE) and the drug release were evaluated by UV-visible spectrophotometry.
Cross-sections of the PNVCL/PCL blend fibres and of the fibres prepared by coaxial electrospinning were characterised by high-resolution SEM in order to determine the size of the PCL particles inside the fibres and to visualise the core-shell morphology of the resulting fibres. A membrane of smooth, continuous nanofibres was obtained by optimising the electrospinning of PNVCL. The addition of PCL to the blend provided control over the LCST and the hydrophobicity of the membrane. It was also shown that the release of a hydrophobic drug can be controlled through the morphology of the PCL/PNVCL blend. Finally, it proved possible to produce core-shell fibres by coaxial electrospinning of PNVCL.
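The encapsulation efficiency (EE) evaluated above by UV-visible spectrophotometry is usually defined as the fraction of the loaded drug actually retained in the fibres; a minimal sketch with invented numbers:

```python
# Hypothetical illustration of the usual EE definition; the quantities would
# in practice come from a UV-visible calibration curve. Numbers are invented.

def encapsulation_efficiency(drug_in_fibres_mg, drug_loaded_mg):
    """EE in percent: drug retained in the fibres over drug initially loaded."""
    return 100.0 * drug_in_fibres_mg / drug_loaded_mg

print(encapsulation_efficiency(8.0, 10.0))  # 80.0 (%)
```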
NASA Astrophysics Data System (ADS)
Boyer, Sylvain
Of the 3.7 million workers in Quebec, more than 500,000 are estimated to be exposed daily to noise levels that can damage the hearing apparatus. When the ambient noise level cannot be reduced, either by modifying the noise sources or by limiting sound propagation, wearing individual hearing protectors, such as earmuffs, remains the last resort. Although regarded as a short-term solution, it is commonly used because it is inexpensive, easy to deploy and adaptable to most operations in noisy environments. Hearing protectors, however, can be both ill-suited to workers and their environment and uncomfortable, which limits wearing time and thus reduces their effective protection. To address these difficulties, a research project on hearing protection entitled "Development of tools and methods to improve and better assess workers' individual hearing protection" was launched in 2010, bringing together the Ecole de technologie superieure (ETS) and the Institut de recherche Robert-Sauve en sante et en securite du travail (IRSST). As part of this research programme, the present doctoral work focuses specifically on hearing protection using "passive" earmuff-type protectors, whose use raises three specific issues presented in the following paragraphs. The first specific issue is discomfort, caused for example by the static pressure induced by the headband clamping force, which can reduce the wearing time recommended to limit noise exposure. It should therefore be possible to give the user a comfortable protector suited to their work environment and activity. The second specific issue is the assessment of the actual protection provided by the protector.
The REAT (Real Ear Attenuation Threshold) method, often regarded as the "gold standard", is used to quantify noise reduction but generally overestimates protector performance. Field measurement techniques, such as F-MIRE (Field Measurement in Real Ear), could in future be better tools for assessing individual attenuation. While these techniques exist for earplugs, they must be adapted and improved for earmuffs, by determining the optimal location of the acoustic sensors and the individual compensation factors that relate the microphone measurement to the measurement that would have been taken at the eardrum. The third specific issue is the optimisation of earmuff attenuation to suit the individual and the work environment. Earmuff design is in fact generally based on empirical concepts and trial-and-error on prototypes. Predictive tools have so far been little studied and deserve further attention. Virtual prototyping would make it possible to optimise the design before production, speed up product development and reduce its cost. The general objective of this thesis is to address these issues by developing a model of the sound attenuation of an earmuff-type hearing protector. Because of the complex geometry of these protectors, the main modelling method chosen a priori is the finite element method (FEM). To reach this general objective, three specific objectives were established and are presented in the following three paragraphs. (Abstract shortened by ProQuest.)
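The F-MIRE principle mentioned above can be sketched as a level difference between two microphones corrected by a per-wearer compensation factor; this is a hypothetical illustration of the idea only (function name and all values invented), not the thesis's method:

```python
# Hypothetical F-MIRE-style estimate: the level difference between a
# microphone outside the earmuff and one inside the cup is corrected by an
# individual compensation factor relating the inner microphone position to
# the eardrum. All numbers are invented for illustration.

def estimated_attenuation_db(outer_db, inner_db, compensation_db):
    """Measured noise reduction plus the individual compensation factor."""
    return (outer_db - inner_db) + compensation_db

# e.g. 95 dB outside the cup, 72 dB at the inner microphone, +2 dB correction:
print(estimated_attenuation_db(95.0, 72.0, 2.0))  # 25.0 dB
```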
Mid-wave IR Signatures of CFAV Quest in the North Atlantic Summer and Winter Climates
2008-05-01
…winter, which suggests that it poses a greater risk of the ship being detected. DRDC Atlantic TM 2007-312…devote its signature-management efforts to the infrared. The ultimate objective of obtaining a means of optimising the infrared signature of ships…
NASA Astrophysics Data System (ADS)
Laurat, J.; Keller, G.; Oliveira-Huguenin, J.-A.; Fabre, C.; Coudreau, T.
2006-10-01
We generate and characterise frequency-degenerate entangled beams using an original device called a self-phase-locked Optical Parametric Oscillator. We thus obtained entangled beams for which the "inseparability" is 0.33 ± 0.02. We used the covariance-matrix formalism to determine the "non-local" transformation to apply to these beams in order to extract the maximum entanglement.
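The "inseparability" figure quoted above is meaningful relative to a threshold; the abstract does not state the criterion used, but a common convention for such continuous-variable measurements (an assumption here, with conventions varying between papers) is the Duan-Simon separability bound:

```latex
I \;=\; \tfrac{1}{2}\left[\Delta^2(x_1 + x_2) \;+\; \Delta^2(p_1 - p_2)\right] \;\ge\; 1
\qquad \text{for any separable two-mode state,}
```

so a measured value I = 0.33 ± 0.02 < 1 certifies entanglement of the two beams.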
NASA Astrophysics Data System (ADS)
Demers, Vincent
The objective of this project is to determine the cold-rolling conditions and heat-treatment temperature that maximise the functional properties of the Ti-Ni shape memory alloy. The specimens are characterised by calorimetry, optical microscopy, stress-generation and recoverable-strain measurements, and mechanical testing. For a single cycle, using a cold-work level of e = 1.5, obtained with an applied tension force FT = 0.1σy and a mineral oil, results in a straight, crack-free sample which, after annealing at 400°C, yields a nanostructured material exhibiting functional properties twice as large as those of the same material with a polygonized structure. For repeated cycling, the same rolling conditions hold, but the optimal deformation level lies between e = 0.75 and 2, depending in particular on the loading mode, the stabilisation level and the number of cycles to failure required by the application.
NASA Astrophysics Data System (ADS)
Sire, Stéphane; Marya, Surendar
This Note presents ways to improve the weld penetration of the TIG process by optimising silica application around the joint in a plain carbon steel and in aluminium alloy 5086. Whereas for plain carbon steels full coverage of the joint improves penetration, for aluminium 5086 welded with AC-TIG the presence of a blank zone around the joint in the flux coating appears to be the best solution for cosmetic, deep welds. To cite this article: S. Sire, S. Marya, C. R. Mecanique 330 (2002) 83-89.
1992-02-01
ONERA, 29 ave de la Division Leclerc, 92320 Châtillon, France; Office of Aeronautics & Space Technology (Code RM), NASA Hq, Washington DC 20546, United States.
V = [t1, t2, …, tN]: vector of thickness variables.
ΔV = [δt1, δt2, …, δtN]: vector of thickness changes.
F′ = [dF/dt1, dF/dt2, …, dF/dtN]: vector of strain derivatives.
λ′ = [dλ/dt1, dλ/dt2, …, dλ/dtN]: vector of buckling derivatives.
Then δF = F′ · ΔV and δλ = λ′ · ΔV. The linearised…
NASA Astrophysics Data System (ADS)
Aboutajeddine, Ahmed
This work considers scale-transition micromechanical models, which determine the effective properties of heterogeneous materials from their microstructure. The objective is to account for the presence of an interphase between the matrix and the reinforcement in classical micromechanical models, and to revisit the basic approximations of these models in order to treat multiphase materials. A new micromechanical model is proposed to account for a thin elastic interphase when determining effective properties. This model is built from the integral equation, Hill's interfacial operators and the Mori-Tanaka method. The expressions obtained for the overall moduli and the fields in the coating are analytical. The basic approximation of this model is subsequently improved in a new model that addresses coated inclusions with a thin or thick coating. The resolution relies on a double homogenisation carried out at the level of the coated inclusion and of the material. This new approach makes it possible to fully grasp the implications of the modelling approximations. The results obtained are then exploited in the solution of the Hashin assembly. Several classical micromechanical models of different origins are thus unified and attached, in this work, to Hashin's geometric representation. Besides allowing a full appreciation of the relevance of each model's approximation within this single picture, the correct extension of these models to multiphase materials becomes possible. Several analytical and explicit models are then proposed, based on solutions of different orders of the Hashin assembly.
One of the explicit models appears as a direct correction of the Mori-Tanaka model in the cases where the latter fails to give good results. Finally, this corrected Mori-Tanaka model is used together with Hill's operators to build a scale-transition model for materials with an elastoplastic interphase. The effective constitutive law obtained is incremental in nature and is coupled with the plasticity relation of the interphase. Simulations of mechanical tests for several properties of the plastic interphase made it possible to identify coating profiles that give the material better behaviour.
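For context, the classical Mori-Tanaka estimate that the corrected model above starts from can be written in its standard two-phase form (this is the textbook expression, under the usual assumptions of ellipsoidal inclusions with matrix stiffness C_m, inclusion stiffness C_i, volume fraction f and Eshelby tensor S; it is not the corrected model developed in the thesis):

```latex
\mathbf{C}^{\mathrm{eff}}
  = \mathbf{C}_m
  + f\,(\mathbf{C}_i - \mathbf{C}_m)\,\mathbf{A}_i
    \left[(1-f)\,\mathbf{I} + f\,\mathbf{A}_i\right]^{-1},
\qquad
\mathbf{A}_i
  = \left[\mathbf{I} + \mathbf{S}\,\mathbf{C}_m^{-1}
    (\mathbf{C}_i - \mathbf{C}_m)\right]^{-1},
```

where A_i is the dilute strain-concentration tensor of the inclusion.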
RANS and Potential Flow Coupling Algorithms
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the chosen solution must satisfy numerous criteria across many disciplines, such as structures, aerodynamics, stability and control, performance and safety, while meeting strict schedules and minimising cost. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimisation environments are being developed across the aerospace industry. Different methods, involving different levels of modelling, are needed for the different phases of project development. In the definition and preliminary-design phases, fast methods are needed to evaluate candidates efficiently. Developing methods that improve the accuracy of existing methods while keeping computational cost low provides a higher level of fidelity in the early phases of a project and thereby greatly reduces the associated risks. In aerodynamics, the development of viscous/inviscid coupling algorithms upgrades linear inviscid calculation methods into nonlinear methods that account for viscous effects. These methods make it possible to characterise the viscous flow over configurations and to predict, among other things, stall mechanisms or the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential flow method and two-dimensional viscous section data. Existing methods are implemented and their limits identified. An original method is then developed and validated.
Results on an elliptical wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data on configurations taken from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration made it possible to study how the potential flow method models high-lift surfaces, showing that camber can be taken into account solely through the viscous data.
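The viscous/inviscid coupling idea described above can be sketched as a fixed-point iteration: an inviscid (here, a crude elliptic lifting-line) induced-angle estimate is relaxed until the section lift agrees with a 2D viscous polar. Everything below (the polar, aspect ratio and relaxation factor) is an invented stand-in, not the thesis's algorithm:

```python
# Minimal viscous/inviscid coupling loop: relax an induced-angle correction
# until the 2D viscous section cl is consistent with the induced angle it
# produces. The polar and the induced-angle model are toy assumptions.
import math

def viscous_cl(alpha_deg):
    """Stand-in 2D viscous polar: thin-airfoil slope with a crude stall cap."""
    return min(2.0 * math.pi * math.radians(alpha_deg), 1.4)

def coupled_cl(alpha_geo_deg, aspect_ratio=8.0, relax=0.05, iterations=200):
    """Iterate: effective angle = geometric angle minus induced angle."""
    alpha_i = 0.0  # induced angle of attack, degrees
    cl = 0.0
    for _ in range(iterations):
        cl = viscous_cl(alpha_geo_deg - alpha_i)
        # elliptic-wing induced angle, alpha_i = cl / (pi * AR), as a stand-in
        target = math.degrees(cl / (math.pi * aspect_ratio))
        alpha_i += relax * (target - alpha_i)
    return cl

print(round(coupled_cl(8.0), 3))  # converges near cl ≈ 0.70
```

Under-relaxation (the `relax` factor) is what such schemes typically rely on for convergence near stall, where the polar is no longer linear.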
Post-Burn Contracture Scars of the Lower Limb in Children
Sankale, A.A.; Manyacka Ma Nyemb, P.; Coulibaly, N.F.; Ndiaye, A.; Ndoye, M.
2010-01-01
Summary This study presents the epidemiological, clinical and therapeutic aspects of burn sequelae of the lower limb in children, based on 42 cases collected in the paediatric surgery department of Aristide Le Dantec Hospital (Senegal). The mean age was 5 years 3 months, and the boy/girl sex ratio 1.8/1. The thermal burn was caused by a flame in 33% of cases, by a hot liquid in 21% of cases, and by embers in 21% of cases. The contracture scars involved the knee and popliteal fossa in 47% of cases and the foot in 45% of cases. They were bilateral in 21% of cases, and associated with another site in 21% of cases. As for the contracture bands, 21% underwent surgery, with a mean delay of 3 years 2 months after the burn. The surgical procedure consisted of a Z-plasty in 91% of cases, combined with a skin graft in 54% of cases. Functional rehabilitation was carried out in 54% of the operated patients. In line with the literature, our results show that optimising management requires better prevention of domestic accidents and a well-codified treatment protocol. PMID:21991202
NASA Astrophysics Data System (ADS)
Metiche, Slimane
The growing demand for poles for electricity and telecommunications networks has made it necessary to use innovative, environmentally friendly materials. Most utility poles in Canada, as elsewhere in the world, are made of traditional materials such as wood, concrete or steel. Industry and researchers have various motivations for considering other solutions, among them the length limitation of wooden poles and the vulnerability of concrete and steel poles to climatic attack. New composite poles are good candidates in this respect; however, their structural behaviour is not known, and in-depth theoretical and experimental studies are needed before large-scale deployment. An intensive research programme comprising several experimental, analytical and numerical projects is under way at the Universite de Sherbrooke to evaluate the short- and long-term behaviour of these new fibre-reinforced polymer (FRP) poles. This thesis is part of that context: our research aims to evaluate the flexural behaviour of new tapered tubular poles made of composite materials by filament winding, through a theoretical study and a series of full-scale bending tests, in order to understand the structural behaviour of these poles, optimise their design and propose a design procedure for users. The FRP poles studied in this thesis are made of epoxy resin reinforced with E-glass fibres. Each pole type consists mainly of three zones whose geometric properties (thickness, diameter) and mechanical properties differ from one zone to the next.
The difference between these properties is due to the number of layers used in each zone and to the fibre orientation of each layer. A total of twenty-three prototypes of different dimensions were tested in bending up to failure. Two types of glass fibre with different linear densities were used to evaluate the effect of fibre type on flexural behaviour. A new experimental set-up allowing all types of FRP poles to be tested was designed and built according to the recommendations of ASTM D 4923-01 and ANSI C 136.20-2005. An analytical model based on linear-elastic beam theory is proposed in this thesis. The model predicts with good accuracy the experimental load-deflection behaviour and the maximum tip deflection of FRP poles made up of several zones with different geometric and mechanical characteristics. A design procedure for FRP poles, based on the experimental results obtained in this thesis, is also proposed. The results obtained here will support the development and improvement of practical design rules for designers and manufacturers of FRP poles. The benefits of this research are both economic and technological: the results constitute a database that will contribute to the development of design standards, and hence to the optimisation of the materials used, and will serve to validate future results and theoretical models.
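A linear-elastic beam model for a tapered, multi-zone pole of the kind described above can be sketched by integrating the curvature M/(EI) along the span (unit-load method). All dimensions, moduli and zone boundaries below are invented for illustration; they are not the thesis's pole data:

```python
# Tip deflection of a cantilevered, multi-zone tapered tube under a tip load,
# by numerical integration of M(x) * m(x) / (E(x) I(x)). Invented geometry.
import math

def zone_properties(x, length):
    """Per-zone modulus (Pa) and linearly tapered thin-tube section at x (m)."""
    e_modulus = 30e9 if x < length / 2 else 25e9   # two zones, invented moduli
    d_outer = 0.30 - 0.10 * (x / length)           # taper: 0.30 m at base -> 0.20 m at tip
    t = 0.008 if x < length / 2 else 0.006         # wall thickness per zone
    d_inner = d_outer - 2 * t
    inertia = math.pi * (d_outer**4 - d_inner**4) / 64
    return e_modulus, inertia

def tip_deflection(load_n, length_m, n=2000):
    """Unit-load method: for a tip load, M(x) = m(x) * load = load * (L - x)."""
    dx = length_m / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx                 # midpoint rule
        arm = length_m - x                 # moment arm of the tip load
        e_mod, inertia = zone_properties(x, length_m)
        total += load_n * arm * arm / (e_mod * inertia) * dx
    return total

print(f"{tip_deflection(2000.0, 9.0):.3f} m")
```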
Optimisation of femtosecond white-light continuum emission between 600 nm and 800 nm
NASA Astrophysics Data System (ADS)
Ramstein, S.; Mottin, S.
2005-06-01
A spectroscopy set-up with photon time-of-flight resolution in diffuse media has been developed. It relies on a white-light continuum generated by focusing an amplified laser (830 nm, 1 kHz, 0.5 W, 170 fs) into demineralised water. In order to optimise the white-light source spectrally and in power over the 600-800 nm window, we studied the spatio-temporal shaping of the laser pulse before self-focusing in the medium. This shaping is performed spatially by changing the focal length of the focusing lens, and temporally by changing the compression ratio of the pulse. The study shows that the emitted light cone carries more power in the spectral window of interest for long focal lengths. Over the 600-800 nm window, the integrated energy yield varies from 5%, with a focal length f = 6 cm, to 15%, with a focal length f = 30 cm. Temporal shaping shows similar effects, of the same order of magnitude.
2000-09-01
…do so, if nothing else changes. …closer to the engineering of living systems. A genuine optimisation of the logistic support elements on board…often in functional trees…in-service use. …can be gathered more easily in a table cross-referenced with the various syst…
NASA Astrophysics Data System (ADS)
Blais, Mathieu
In Quebec, aluminium smelters are major consumers of electrical energy, accounting for nearly 14% of Hydro-Quebec's installed capacity. In this context, small gains in the energy efficiency of electrolysis cells could have a major impact on overall electricity consumption. The master's project described in this study addresses the following question: how can optimising the geometry of a cathode block, so as to make the current density uniform, increase the energy efficiency and service life of the aluminium cell? The primary goal of the project is to modify the geometry in order to improve the thermoelectric behaviour of the cathode blocks and thereby increase the energy efficiency of the aluminium production process. The poor distribution of current density in the cell is responsible for energy problems with negative economic and environmental impacts. This non-uniform current distribution induces premature wear of the cathode surface and reduces the magnetohydrodynamic stability of the liquid-metal pad. To quantify the impact of making the current density uniform across the cathode block, a model of a cathode block of an AP-30 cell was built and analysed by finite elements. From its thermoelectric behaviour and from experimental AP-30 cell data taken from the literature, a correlation was established between the current-density profile at the block surface and the local wear rate at the same location. This relation constitutes a predictive model of the service life of any block of the same material, given its current-density profile.
Next, a program was written that combines, in a single cost function, the economic impacts of service life, cathodic voltage drop and the use of new materials. This made it possible to evaluate the benefit of a modified block relative to the reference block. Several geometric parameters of the block were allowed to vary over a realistic domain, and the integration of a component made of a more conductive material was also studied. Using mathematical optimisation tools, an optimal block design was found. The results show that savings can be generated by modifying the block. It is also shown that making the current density uniform across the block can bring significant economic and environmental benefits to the aluminium electrolysis process. The results of this study will help industrial researchers decide whether or not it is worth investing in an experimental prototype, which is often very costly. Keywords: energy efficiency, aluminium electrolysis, cathode, thermoelectric simulation, current-density uniformisation, optimisation.
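The single cost function described above, trading cathodic voltage drop against block wear and material cost over a geometric parameter, can be illustrated with a toy objective. Every coefficient below is invented, and a plain grid search stands in for the thesis's optimisation tools:

```python
# Toy single-objective cost over one geometric parameter r in [0, 1]
# (e.g. a collector-bar insert depth ratio; the parameter and all
# coefficients are invented for illustration).

def cost(r):
    voltage_drop_cost = 1.0 / (0.5 + r)      # deeper insert -> lower voltage drop
    wear_cost = 0.4 + 1.5 * (r - 0.6) ** 2   # non-uniform current -> premature wear
    material_cost = 0.8 * r                  # more conductive material used
    return voltage_drop_cost + wear_cost + material_cost

# crude grid search in place of a real optimiser
best_r = min((i / 1000 for i in range(1001)), key=cost)
print(round(best_r, 3), round(cost(best_r), 3))
```

The point of the single cost function is exactly this trade-off: the optimum sits where the marginal voltage-drop saving no longer pays for the extra wear and material.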
NASA Astrophysics Data System (ADS)
Madidi, Fatima Zahra
Overhead power transmission and distribution lines are often exposed to various stresses. Among these, insulator pollution is one of the most important factors affecting the reliability of power transmission. Indeed, the presence of pollution on insulators, once humidified, reduces their electrical performance by promoting the appearance of flashover arcs. Such outages can sometimes have significant socio-economic impacts. The development of new coatings for these insulators can therefore be an effective way to protect them against flashover. Superhydrophobic coatings have been the subject of numerous studies in recent years. These surfaces are prepared by combining a nano-microstructured roughness with a low surface energy. Moreover, such surfaces have many applications provided they are durable and have no harmful effects on the environment. The main objective of the present study is first to develop superhydrophobic coatings, and then to study their service life and their dielectric and photocatalytic properties. A wide variety of low-surface-energy materials can be used to develop such coatings. In this research, silicone rubber (SR) is used because it offers many desirable properties, notably strong hydrophobicity, resistance to ultraviolet radiation, and good fire behaviour without releasing toxic products. The weak point of these materials, however, is the degradation of their hydrophobic properties. To improve certain properties of the silicone rubber, nanoparticles are added to the base polymer.
The coating preparation technique consists in adding titanium dioxide (TiO2) nanoparticles to the base polymer, using methods with industrial application potential. The processing parameters were optimised in order to obtain stable coatings exhibiting high water contact angles and low hysteresis. Once superhydrophobic surfaces were obtained, stability tests were carried out under accelerated ageing conditions such as immersion in aqueous solutions of various pH values and conductivities, or degradation by UV radiation. The results showed that adding titanium dioxide nanoparticles to silicone rubber improves the hydrophobicity and stability of this polymer. The superhydrophobic coatings obtained maintained their superhydrophobicity after several days of immersion in various aqueous solutions. They also exhibit good stability when exposed to UV rays, as well as mechanical stability. In addition, nanocomposite and microcomposite coatings with good dielectric properties were developed. The study of the photocatalytic properties of the superhydrophobic coatings showed that the adsorption of the reactive molecules (pollutants) on the catalyst (TiO2) surface is the key parameter of photocatalytic activity. Increasing the TiO2 concentration leads to a decrease in pollutant concentration and hence to excellent photocatalytic performance.
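The observation above, that pollutant adsorption on the TiO2 surface is the key parameter, is commonly formalised by the Langmuir-Hinshelwood rate law (the standard model for such kinetics; the abstract does not say which model the thesis used):

```latex
r \;=\; -\frac{dC}{dt} \;=\; \frac{k\,K\,C}{1 + K\,C},
```

where C is the pollutant concentration, k the intrinsic rate constant and K the adsorption equilibrium constant; at low concentration (KC ≪ 1) this reduces to apparent first-order kinetics, r ≈ kKC.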
Coherent control of the fragmentation dynamics of alkali clusters
NASA Astrophysics Data System (ADS)
Lindinger, A.; Lupulescu, C.; Le Roux, J.; Bartelt, A.; Vajda, Š.; Wöste, L.
2004-11-01
Metal clusters exhibit extraordinary properties, particularly chemical and catalytic ones, which depend strongly on their size. This behaviour makes them ideal candidates for the real-time analysis of ultrafast photo-induced processes, the ultimate goal being the execution of coherent control scenarios. We performed non-stationary multiphoton ionization experiments on small alkali clusters of various sizes and in various rovibrational electronic states, including their ground state, thereby probing their wave-packet dynamics, structural orientation, charge transfer, and dissociation. The observed processes depend strongly on the parameters of the exciting laser beam, such as its phase, amplitude, and duration; this sensitivity argues for the use of a feedback control system capable of generating the optimal pulse shapes. The spectral and temporal characteristics of these pulses reflect the properties of the system under study as well as the photochemical processes that the irradiation induces in it. We first present the vibrational dynamics of excited, bound, dissociative, and predissociative electronic states of alkali dimers and trimers. This is followed by a description of the principle of observing ground-state wave-packet dynamics by stimulated Raman pumping. Since the parameters of the excitation pulse significantly influence the weight of the different trajectories in phase space, we conducted experiments on the competing fragmentation channels of a photo-induced reaction, studying the different branchings of the ionization and fragmentation pathways of electronically excited Na2K.
The use of an evolutionary algorithm to optimize the phase and amplitude of the applied electromagnetic field made it possible to significantly influence the resulting yield of parent or fragment ions. Interesting properties can be deduced from the shape of the optimal laser pulses obtained, which reveal the molecular vibration period and which, combined with the potential curves, allow us to propose the trajectories that the optimization imposes on the wave packets. Finally, we studied how a larger variety of clusters, by contributing to the NaK fragmentation channel, influences the optimal pulse shape. Here again, the structure of the latter sheds light on the fragmentation channels taken during the control process.
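The closed-loop scheme described above can be sketched as a simple evolutionary loop: candidate pulse-shaper settings are ranked by the measured ion yield, the best are kept, and mutated copies are tried next. The `ion_yield` function below is a hypothetical stand-in for the experimental feedback signal, and the loop parameters are illustrative assumptions, not the authors' setup.

```python
import random

def ion_yield(params):
    # Hypothetical feedback signal standing in for the measured
    # parent/fragment ion ratio; peaks at a particular setting (0.3).
    return -sum((p - 0.3) ** 2 for p in params)

def evolve(n_params=8, pop_size=20, generations=60, sigma=0.1, seed=1):
    rng = random.Random(seed)
    # Random initial population of normalized pulse-shaper settings in [0, 1].
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        # "Measure" each candidate pulse and keep the best quarter.
        pop.sort(key=ion_yield, reverse=True)
        parents = pop[: pop_size // 4]
        # Children are mutated (Gaussian-perturbed, clamped) copies.
        children = []
        while len(children) < pop_size - len(parents):
            base = rng.choice(parents)
            children.append([min(1.0, max(0.0, g + rng.gauss(0.0, sigma)))
                             for g in base])
        pop = parents + children
    return max(pop, key=ion_yield)

best = evolve()
```

In the experiment the "objective" is of course not a formula but the ion signal read back from the detector; only the selection/mutation loop is algorithmic.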
NASA Astrophysics Data System (ADS)
Martinez, Nicolas
Currently, Canada, and Quebec in particular, has a multitude of remote sites whose electrification relies essentially on diesel generators, mainly because of their distance from the central electricity distribution grid. Although considered a reliable and continuous source, the use of diesel generators is becoming increasingly problematic from an energy, economic, and environmental point of view. In order to address this and to propose a more efficient, less costly, and more environmentally friendly supply method, the use of renewable energy has become indispensable. Various studies have shown that coupling these energy sources with diesel generators, forming hybrid systems, appears to be one of the best solutions. Among them, the wind-diesel hybrid system with compressed air energy storage (SHEDAC) stands out as an optimal configuration for electrifying remote sites. Indeed, several studies have highlighted the efficiency of compressed air storage, compared with other storage technologies, as a complement to a wind-diesel hybrid system. More precisely, this system consists of the following subsystems: wind turbines, diesel generators, and a compression and storage chain whose compressed air is then used to supercharge the generators. This process reduces fuel consumption while increasing the share of renewable energy in electricity production. To date, various research projects have demonstrated the efficiency of such a system and presented a variety of possible configurations. In this thesis, an energy sizing software tool is developed with the aim of standardizing the energy approach to this technology.
This tool is intended as an innovation in the field, since it is currently impossible to size a SHEDAC system with existing tools. A targeted state of the art, together with a validation of the results, was carried out in order to deliver a reliable and efficient software tool. With a view to deploying a SHEDAC system at a remote site in Nord-du-Québec, the software was then used to carry out an energy study identifying the optimal solution to implement. Finally, using the tools and results obtained previously, new operating strategies are presented to show how the system could be optimized to meet various technical constraints. The content of this thesis is presented in the form of three original articles, submitted to peer-reviewed scientific journals, and a dedicated chapter presenting the new operating strategies. They report on the work described in the preceding paragraph and support a conclusive and relevant use of SHEDAC at a remote northern site.
NASA Astrophysics Data System (ADS)
Zaag, Mahdi
The availability of accurate aircraft models is among the key elements in ensuring their improvement. These models are used to improve flight controls and to design new aerodynamic systems for morphing aircraft wings. This project consists of designing a system for identifying certain engine model parameters of the Cessna Citation X business aircraft in the cruise phase from flight tests. These tests were performed on the flight simulator designed and manufactured by CAE Inc., which holds Level D certification for flight dynamics; Level D is the highest fidelity level granted by the US Federal Aviation Administration (FAA), the civil aviation regulator. A methodology based on neural networks optimized with an algorithm called the extended great deluge is used in the design of this identification system. Several flight tests at different altitudes and Mach numbers were carried out to serve as databases for training the neural networks. The model was validated against simulator data. Despite the nonlinearity and complexity of the system, the engine parameters were predicted very well over a given flight envelope. This estimated model could be used for engine operation analyses and could support control of the aircraft during the cruise phase. Engine parameter identification could also be carried out for the climb and descent phases in order to obtain a complete model over the whole flight envelope of the Cessna Citation X (climb, cruise, descent). The method employed in this work could also be effective for building a model identifying the aerodynamic coefficients of the same aircraft, again from flight tests.
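The great deluge family of algorithms mentioned above accepts worsening candidate solutions as long as their cost stays below a gradually lowered "water level". The sketch below applies the idea to a toy one-dimensional minimization; the objective, step size, and linear level schedule are illustrative assumptions, not the thesis's actual network-training setup.

```python
import random

def great_deluge(objective, x0, steps=5000, step_size=0.1, seed=0):
    # Great deluge search: accept any candidate whose cost is below a
    # "water level" that decays linearly toward zero over the run.
    rng = random.Random(seed)
    x, cost = x0, objective(x0)
    best_x, best_cost = x, cost
    level = cost                      # initial water level = initial cost
    decay = cost / steps              # linear decay rate of the level
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        cand_cost = objective(cand)
        # Accept improvements, or any solution still under the water level.
        if cand_cost <= cost or cand_cost <= level:
            x, cost = cand, cand_cost
            if cost < best_cost:
                best_x, best_cost = x, cost
        level -= decay                # lower the level each iteration
    return best_x, best_cost

# Toy objective with its minimum (value 1.0) at x = 2.
xb, cb = great_deluge(lambda x: (x - 2.0) ** 2 + 1.0, x0=-3.0)
```

In the thesis's context the decision variables would be network weights and the cost a prediction error; the acceptance rule is what distinguishes the method from plain hill climbing.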
Long-distance air transport of severely burned patients: literature review and practical application
Leclerc, T.; Hoffmann, C.; Forsans, E.; Cirodde, A.; Boutonnet, M.; Jault, P.; Tourtier, J.-P.; Bargues, L.; Donat, N.
2015-01-01
Severely burned patients require multidisciplinary management in highly specialized centres. The scarcity of these centres often makes long-distance medical air transport necessary. However, little published data exists on such transfers. In this review, in order to optimize the management of burn patients as soon as air transport is decided upon, or even merely considered, we propose to extract simple principles from this limited literature, also drawing on the practical experience of the French Military Health Service (Service de Santé des Armées). We first describe how aeronautical constraints can affect the transport of severely burned patients aboard aircraft. We then address the medical regulation of these missions, analysing the risks associated with air transport of severe burn patients and their implications for the indications, timing, and modalities of transport. Finally, we discuss the conduct of the mission, including the preparation of equipment and consumables before the flight, patient assessment and preparation before boarding, and continued care in flight. PMID:26668564
NASA Astrophysics Data System (ADS)
Ayari-Kanoun, Asma
This thesis focuses on the development of a new approach for the localization and organization of silicon nanocrystals produced by electrochemical etching. The latter is a simple and inexpensive technique compared with other techniques commonly used for fabricating silicon nanocrystals. The idea of this work was to study the nanostructuring of thin silicon nitride layers, about 30 nm thick, to subsequently allow a periodic arrangement of the silicon nanocrystals. This pre-structuring is obtained artificially by imposing a periodic pattern using electron-beam lithography combined with plasma etching. Optimization of the lithography and plasma-etching conditions yielded arrays of 30 nm diameter holes opening onto the silicon, with good control of their morphology (size, depth, and shape). By adjusting the electrochemical etching conditions (acid concentration, etching time, and current density), we obtained ordered 2D arrays of 10 nm diameter silicon nanocrystals through these nanohole masks, with precise control of their location, the spacing between nanocrystals, and their crystalline orientation. Preliminary electrical studies on these nanocrystals revealed charging effects. These very promising results confirm the future interest of silicon nanocrystals produced by electrochemical etching for the large-scale fabrication of nanoelectronic devices. Keywords: localization, organization, silicon nanocrystals, electrochemical etching, electron-beam lithography, plasma etching, silicon nitride.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation (VEPSO) algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the current best at the objective function optimised by that swarm, which yields poor solutions for multiobjective optimisation problems. Thus, an improved VEPSO algorithm is introduced that uses nondominated solutions as the guidance for a swarm, rather than the best solution from another swarm. In this paper, the performance of the improved VEPSO algorithm is investigated using measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved VEPSO algorithm performs impressively compared with the conventional VEPSO algorithm. PMID:23737718
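The key modification can be sketched as follows: instead of taking another swarm's best particle as the social guide, each velocity update draws its guide from an archive of mutually nondominated solutions. The toy biobjective function, single scalar dimension, and uniform-random guide selection below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimisation).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, position, objectives):
    # Keep the archive as a set of mutually nondominated solutions.
    if any(dominates(o, objectives) for _, o in archive):
        return archive                      # candidate is dominated: discard
    archive = [(p, o) for p, o in archive if not dominates(objectives, o)]
    archive.append((position, objectives))
    return archive

def evaluate(x):
    # Toy biobjective problem (Schaffer-style): minimise both components;
    # the Pareto-optimal set is x in [0, 2].
    return (x ** 2, (x - 2.0) ** 2)

rng = random.Random(0)
positions = [rng.uniform(-4.0, 4.0) for _ in range(20)]
velocities = [0.0] * len(positions)
archive = []
for _ in range(100):
    for x in positions:
        archive = update_archive(archive, x, evaluate(x))
    for i, x in enumerate(positions):
        guide, _ = rng.choice(archive)      # guide drawn from the archive,
                                            # not another swarm's best particle
        velocities[i] = 0.6 * velocities[i] + 1.5 * rng.random() * (guide - x)
        positions[i] += velocities[i]
```

The archive-update rule is what gives the swarm a whole front of guides rather than a single, possibly misleading, best solution.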
Strategies to facilitate pre-certification testing for radiation robustness
NASA Astrophysics Data System (ADS)
Souari, Anis
The effects of cosmic radiation on embedded electronics have, for several decades, concerned researchers interested in the robustness of integrated circuits. Much research has been conducted in this direction, mainly for space applications, where the deployment environment is hostile. Indeed, these environments are dense with particles which, when they interact with integrated circuits, can lead to malfunction or even destruction. Moreover, radiation effects are amplified in new generations of integrated circuits, where shrinking transistor sizes and growing circuit complexity increase the probability of faults and, consequently, the need for testing. The expansion of commercial off-the-shelf (COTS) electronics and the adoption of these components for critical applications, such as avionics and space, also drive researchers to redouble their efforts to verify the reliability of these circuits. COTS parts, despite their better characteristics compared with radiation-hardened circuits, which are expensive and lag in technology node, are vulnerable to radiation. To improve the reliability of these circuits, an evaluation of their vulnerability at the various abstraction levels of the design flow is recommended. This helps designers take the necessary mitigation measures on the design at the abstraction level in question. Finally, to satisfy fault-tolerance requirements, very costly certification tests, performed by particle bombardment (protons, neutrons, etc.), are necessary. In this thesis, we are mainly interested in defining a pre-certification strategy allowing a realistic evaluation of the sensitivity of integrated circuits to radiation effects, in order to avoid sending non-robust circuits to the very costly certification phase. The circuits targeted by our work are SRAM-based field-programmable gate arrays (FPGAs), and the targeted radiation-induced fault type is the single event upset (SEU), which flips the logic state of a memory element to its complement. SRAM-based FPGAs are increasingly in demand in the aerospace community thanks to their rapid prototyping and in-field reconfiguration capabilities, but they are vulnerable to radiation, with SEUs being the most frequent faults in SRAM-type memory elements. We propose a new emulation-based fault injection approach that mimics radiation effects on the FPGA configuration memory and generates results as faithful as possible to certification test results. In its test-sequence generation procedure, this approach takes into account the difference in sensitivity of configuration memory elements in state '1' versus state '0', observed under accelerated proton-beam tests at the renowned TRIUMF laboratory, so as to mimic the fault distribution in the configuration memory. Validation experiments show that the proposed strategy is effective and generates realistic results. These results reveal that ignoring the sensitivity difference can lead to underestimating circuit sensitivity to radiation.
With the same aim of optimizing the emulation-based fault injection procedure, namely pre-certification testing, we propose a methodology to maximize the detection of critical bits (bits that cause a functional failure if they change state) for a given number of SEUs (the adopted fault model), or to maximize the accuracy of the critical-bit count estimate. To do so, the configuration bits are first classified into different sets according to their content, the resources they configure, and their criticality. Next, the sensitivity of each set is evaluated. Finally, fault injection is prioritized in the most sensitive sets. Several fault-injection optimization scenarios are proposed, and the results are compared with those of the conventional random fault-injection method. The proposed optimization methodology yields an improvement of more than two orders of magnitude. A final approach is presented that facilitates the sensitivity evaluation of the bits configuring the FPGA look-up tables (LUTs), the smallest configurable entities of the FPGA, used to implement the combinational functions of a design. It enables easy identification of LUT bits at no cost in terms of hardware usage or external tools. The proposed approach is simple and effective, offers 100% fault coverage, and is applicable to new generations of Xilinx FPGAs.
The proposed approaches help meet the requirements of this thesis and achieve its defined objectives. The realism and the maximized vulnerability estimation offered by the new approaches support the development of an effective pre-certification test strategy. Indeed, the first fault-injection approach, which considers the relative sensitivity difference of memory elements according to their content, generates results with a relative error reaching 3.1% when compared with the TRIUMF results, whereas the relative error of conventional random fault injection compared with TRIUMF can reach 75%. Moreover, applying this approach to more conventional circuits shows that 2.3 times more errors are detected compared with random fault injection. This suggests that ignoring the relative sensitivity difference in the emulation procedure can lead to underestimating the design's sensitivity to radiation. The results of the second proposed approach were also compared with random fault injection. The proposed approach, which maximizes the number of flipped critical bits, achieves a speed-up factor of 108 in the fault-injection procedure compared with the random approach. It also reduces the critical-bit count estimation error to ±1.1%, computed for a 95% confidence interval, whereas the estimation error of random fault injection for the same confidence interval can reach ±8.6%. Finally, the last proposed approach for fault injection in LUTs stands out from other approaches available in the literature by its simplicity while ensuring maximum fault coverage of 100%. Indeed, the proposed approach is independent of the external tools for identifying LUT configuration bits, which are obsolete or do not support new FPGA generations. It acts directly on the files generated by the adopted synthesis tool.
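The core idea of state-aware fault injection can be sketched as follows: bit positions are sampled for flipping with a probability weighted by the measured sensitivity of their current logic state, rather than uniformly. The 4:1 sensitivity ratio and the random configuration-memory contents below are illustrative assumptions, not TRIUMF-measured values.

```python
import random

def weighted_seu_injection(config_bits, n_faults, sens_one, sens_zero, seed=0):
    # Pick distinct bit positions to flip, weighting each bit by the
    # sensitivity of its current logic state ('1' vs '0'), so the injected
    # SEU distribution mimics the non-uniform radiation-induced one.
    rng = random.Random(seed)
    weights = [sens_one if b else sens_zero for b in config_bits]
    positions = []
    pool = list(range(len(config_bits)))
    for _ in range(min(n_faults, len(pool))):
        total = sum(weights[i] for i in pool)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i in pool:                 # roulette-wheel pick, no replacement
            acc += weights[i]
            if acc >= r:
                positions.append(i)
                pool.remove(i)
                break
    return positions

# Hypothetical configuration memory and a 4:1 state-sensitivity ratio.
_r = random.Random(42)
bits = [_r.randrange(2) for _ in range(1000)]
flips = weighted_seu_injection(bits, n_faults=50, sens_one=4.0, sens_zero=1.0)
```

With roughly balanced memory contents and this weighting, bits in state '1' end up strongly overrepresented among the injected upsets, which is exactly the distortion the uniform random method ignores.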
Detection and effective emergency department management of a low-velocity Lisfranc dislocation
Mayich, D. Joshua; Mayich, Michael S.; Daniels, Timothy R.
2012-01-01
Objective: To improve primary care physicians' ability to recognize the mechanisms and common presentations of low-velocity Lisfranc injuries, and to provide a better understanding of the role of imaging and the principles of primary care management of these injuries. Sources of information: A MEDLINE literature search was performed and the findings summarized, covering in particular anatomy and mechanisms, clinical and imaging diagnosis, and principles of management in the primary care setting. Main message: Low-velocity Lisfranc injuries result from various mechanisms, and their signs on clinical examination and imaging are very subtle. A high index of suspicion and caution in managing this type of injury are therefore required. Conclusion: By applying a few therapeutic principles to the management of low-velocity Lisfranc injuries, which are potentially devastating if not recognized at presentation, it is possible to optimize the outcome of these injuries.
NASA Astrophysics Data System (ADS)
Fareh, Fouad
Low-pressure powder injection molding (LPIM) of metallic powders is a manufacturing technique that produces parts with the complexity of cast parts but the mechanical properties of wrought parts. However, optimization of the debinding and sintering steps has so far been carried out on feedstocks whose optimal moldability has not yet been demonstrated. Understanding of the rheological properties and segregation of the feedstocks is thus very limited, which is the weak point of the LPIM process. The objective of this research project was to characterize the influence of binders on the rheological behaviour of feedstocks by measuring the viscosity and segregation of the low-viscosity feedstocks used in the LPIM process. To this end, rheological and thermogravimetric tests were conducted on 12 feedstocks. These feedstocks were prepared from spherical Inconel 718 powder (constant solids loading of 60%) and waxes, surfactants, or thickening agents. The rheological tests were used, among other things, to compute the moldability index alpha-STV of the feedstocks, while the thermogravimetric tests allowed precise evaluation of powder segregation within the feedstocks. It was shown that the three feedstocks containing paraffin wax and stearic acid exhibit higher alpha-STV indices, which is advantageous for metal injection molding (MIM), but segregate far too much for the fabricated part to achieve good mechanical characteristics. Conversely, the feedstock containing paraffin wax and ethylene-vinyl acetate, as well as the feedstock containing only carnauba wax, segregate little or not at all, but have very low alpha-STV indices: they are therefore difficult to inject.
The best compromise therefore appears to be feedstocks containing wax (paraffin, beeswax, and carnauba) with low contents of stearic acid and ethylene-vinyl acetate. Moreover, pre-existing physical laws made it possible to confirm the results of the rheological and thermogravimetric tests, and also to highlight the influence of segregation on the rheological properties of the feedstocks. These tests also showed the effect of the binder constituents and of time spent in the molten state on the intensity of segregation within the feedstocks. Feedstocks containing stearic acid segregate rapidly. Characterization of feedstocks developed for low-pressure powder injection molding must therefore use a short-duration method, to avoid segregation and to measure the flowability of these feedstocks precisely.
Technology Insertion and Management: Options for the Canadian Forces
2010-01-01
Minister of National Defence, 2010 © Her Majesty the Queen (in Right of Canada), as represented by the Minister of National Defence, 2010...suggests that a paradigm shift is under way in how militaries develop and employ the technology used with their...cost-benefit analysis based on the optimisation of technology insertion options. ii DRDC CORA TM 2010-015
Bélanger, SA; Warren, AE; Hamilton, RM; Gray, C; Gow, RM; Sanatani, S; Côté, J-M; Lougheed, J; LeBlanc, J; Martin, S; Miles, B; Mitchell, C; Gorman, DA; Weiss, M; Schachar, R
2009-01-01
Regulatory decisions and scientific literature regarding the management of attention deficit hyperactivity disorder (ADHD) raise questions about drug safety and the appropriate pre-treatment assessment for determining the suitability of pharmacotherapy. This is particularly true in the presence of structural or functional heart disease. This article analyses the available data, including peer-reviewed publications, data from the website of the United States Food and Drug Administration on adverse reactions reported in children taking stimulants, and Health Canada data on the same issue. Consensus guidelines on appropriate assessment are proposed based on input from members of the Canadian Paediatric Society, the Canadian Cardiovascular Society, and the Canadian Academy of Child and Adolescent Psychiatry, who have specific expertise and knowledge in both ADHD and paediatric cardiology. This position statement advocates a detailed history and physical examination before stimulants are prescribed, with a focus on screening for risk factors for sudden death, but it does not systematically recommend electrocardiographic screening or consultation with a cardiology specialist unless warranted by the history or physical examination. The document includes a questionnaire for identifying children potentially vulnerable to sudden death (regardless of the type of ADHD or the medications used to treat it). Although the recommendations rely on the best available evidence, the committee agrees that further research is needed to optimize the approach to this common clinical scenario.
Development of semi-self-consolidating concretes with adapted rheology for infrastructure
NASA Astrophysics Data System (ADS)
Sotomayor Cruz, Cristian Daniel
Over recent decades, Canadian and Quebec infrastructure has included many reinforced concrete structures exhibiting durability problems due to severe climatic conditions, poor structural design, material quality, the types of concrete chosen, the construction systems, or uncontrollable events. Regarding the choice of concrete for infrastructure construction, a wide range of concretes divided into two main types can be used: conventional vibrated concrete (CVC) and self-consolidating concrete (SCC). In the case of CVC, inadequate consolidation by vibration has been a recurring problem, causing structural damage. This has led to reduced durability and increased maintenance and repair costs for infrastructure. Although using SCC has advantages such as eliminating vibration, reducing labour costs, and improving the quality of structures, the initial cost of SCC compared with CVC does not yet allow its generalized use in the construction industry. This thesis presents the design of a new class of semi-self-consolidating concrete for infrastructure construction (BSAP-I) requiring minimal vibration. The aim is to find an optimal balance between the rheology and the initial cost of the new concrete, so as to give structures good structural and economic performance. The experimental program first assessed the feasibility of using BSAP-I for placing the piers of a bridge structure in Sherbrooke. In addition, a design of experiments was used to evaluate the effect of three mix-design parameters on the fresh and hardened properties of the BSAP-I mixtures.
Finally, the performance of the optimized BSAP-I mixtures was evaluated through a full characterization of their mechanical properties and durability. From this study, the results support the following conclusions: (1) A BSAP-I with 5-14 mm coarse aggregate, a water-to-binder ratio E/L = 0.37, a sand-to-coarse-aggregate ratio S/G = 0.52, and an air content of 6-9% proved workable, providing an optimal fluidity/stability balance in the fresh state, along with a level of thixotropy adequate on site, allowing optimization of the bridge-pier formwork design and yielding very acceptable surface quality for these structures. (2) The adapted L-box test method, with 2 bars and 5 seconds of vibration, characterized the filling capacity of a BSAP-I well. (3) A 2^3 factorial design produced reliable statistical models capable of predicting the fresh-state rheological properties and compressive strengths of BSAP-I mixtures with binder contents between 370 and 420 kg/m3, E/L ratios between 0.34 and 0.40, and S/G between 0.47 and 0.53. (4) Flow-time measurements T40 of a BSAP-I are very similar to those of an SCC; moreover, T40 values correlate well and linearly with the T400 values measured in the L-box. (5) At the boundary between SCC and CVC, a rheological band with a yield stress τ0 between 30 and 320 Pa and a plastic viscosity η between 10 and 140 Pa·s was identified for optimal BSAP-I design. (6) The optimized BSAP-I mixtures also performed very well in the fresh state, maintaining a good balance between rheology and stability over time when minimal vibration energy is used to initiate flow.
(7) In the hardened state, the BSAP-I mixtures performed well, showing high mechanical strengths and negligible levels of chloride-ion penetration, scaling mass loss and freeze/thaw damage. (8) Blended cements containing silica fume, slag and fly ash improved the rheological behaviour and minimised the drying shrinkage of the BSAP-I mixtures over time.
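A 2^3 factorial model of the kind mentioned in conclusion (3) can be sketched as an ordinary least-squares fit over coded factor levels. The design below is generic, and the response values are invented for illustration; they are not the thesis data.

```python
import numpy as np
from itertools import product

# Coded 2^3 factorial design: three mix-design factors (e.g. binder
# content, E/L and S/G) at low (-1) and high (+1) levels. The response
# values (slump flow, mm) are invented for illustration only.
X = np.array(list(product([-1.0, 1.0], repeat=3)))
y = np.array([520.0, 610.0, 540.0, 650.0, 500.0, 580.0, 515.0, 640.0])

# Saturated model matrix: intercept, 3 main effects, 3 two-factor
# interactions and the three-factor interaction (8 runs, 8 coefficients).
x0, x1, x2 = X[:, 0], X[:, 1], X[:, 2]
A = np.column_stack([np.ones(8), x0, x1, x2,
                     x0 * x1, x0 * x2, x1 * x2, x0 * x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(a, b, c):
    """Predicted response at coded factor levels in [-1, 1]."""
    row = np.array([1.0, a, b, c, a * b, a * c, b * c, a * b * c])
    return float(row @ coef)
```

With a saturated model the fit reproduces the eight runs exactly; dropping the higher-order interactions would leave degrees of freedom for error estimation, which is the usual practice when replicates are unavailable.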
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
A radar network can offer significant performance improvements for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold under full-power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. A novel LPI optimisation algorithm is then presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmit power allocation among the radars in the network, so that enhanced LPI performance is achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of the radar network.
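As a simplified illustration of the power-allocation idea (not the paper's GA-NP solver or its exact Schleher-factor model): if the network MI is assumed to be a sum of logarithms of per-radar SNR terms and the intercept metric a weighted sum of transmit powers, the KKT conditions give a water-filling-type allocation whose water level can be found by bisection. All numbers below are made up.

```python
import numpy as np

# Illustrative 4-radar network. Minimising total weighted transmit power
# (a proxy for the Schleher intercept factor) subject to an MI floor
# gives the water-filling rule p_i = max(0, lam/w_i - 1/g_i), with the
# water level lam chosen so that sum(log(1 + g_i p_i)) reaches mi_min.
g = np.array([1.2, 0.8, 1.5, 0.6])   # target-channel gains (assumed)
w = np.array([1.0, 0.7, 1.3, 0.9])   # per-radar interceptability weights
mi_min = 4.0                          # required mutual information (nats)

def alloc(lam):
    return np.maximum(0.0, lam / w - 1.0 / g)

def mi(p):
    return float(np.sum(np.log1p(g * p)))

lo, hi = 0.0, 1.0
while mi(alloc(hi)) < mi_min:         # grow the bracket until feasible
    hi *= 2.0
for _ in range(80):                   # bisection on the water level
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mi(alloc(mid)) < mi_min else (lo, mid)

p_opt = alloc(hi)                     # meets the MI constraint
total_power = float(np.sum(w * p_opt))
```

The actual problem in the paper is nonconvex, which is why a GA-based solver is used there; the log-sum MI model above is the assumption that makes the closed-form allocation possible.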
Nonlinearity and light-matter coupling in circuit quantum electrodynamics
NASA Astrophysics Data System (ADS)
Bourassa, Jerome
Circuit quantum electrodynamics provides a unique setting for quantum optics and quantum computing. In this architecture, where superconducting qubits built from Josephson junctions are strongly coupled to the electromagnetic field of coplanar resonators, the system dynamics resembles that of atoms in optical cavities. The versatility of superconducting circuit design makes it possible to study light-matter interaction in different regimes and ways. Several qubits can thus be coupled to a single resonator in order to entangle them. A Josephson junction can also be embedded directly in the resonator to produce a nonlinear interaction between photons. Likewise, it has been suggested that the qubit-resonator coupling could become the dominant energy scale of the system: the ultrastrong coupling regime. Although the qubit-resonator dynamics is well understood, current models fail to correctly predict the dispersive effects of the resonator on the qubits, such as the Lamb shift, the virtual exchange interaction and the relaxation time. Since there is also no general model for determining the characteristics of a nonlinear resonator, it is poorly understood how to make the nonlinearity stronger, or even whether the ultrastrong coupling regime can be physically realised in these circuits. In my thesis, I focused on modelling qubits and resonators in order to better understand light-matter interaction in circuits, with the goal of developing alternative designs for higher-performance architectures or for architectures that explore little-known interaction regimes. To this end, I developed a general analytical method for finding the exact Hamiltonian of nonlinear distributed circuits, based on Lagrangian mechanics and the normal-mode representation.
The great strength of the method lies in its detailed analytical description of the system's Hamiltonian parameters as functions of the geometry and the electromagnetic characteristics of the circuit. Not only does the formalism reconcile the quantum model with classical electromagnetism and circuit theory, it goes well beyond them by making important predictions about the nature of the interactions and about the influence of the resonator's vacuum fluctuations on the dynamics of superconducting qubits. Using realistic numerical examples compatible with current technologies, I show how simple design optimisations could greatly increase the efficiency and execution speed of quantum computations in this architecture, while also reaching unprecedented regimes of nonlinearity and light-matter coupling. By enabling a better understanding of light-matter interaction in circuits and the optimisation of architectures to reach new coupling regimes, the circuit-analysis method developed in this thesis will make it possible to test and refine our knowledge of quantum electrodynamics and quantum physics. Keywords: quantum information, quantum electrodynamics, superconductivity, electromagnetism, superconducting qubit, nonlinear resonator, ultrastrong coupling, Kerr effect.
Mountfort, Katrina; Mehran, Roxana; Colombo, Antonio; Stella, Pieter; Romaguera, Rafael; Sardella, Gennaro
2015-09-01
Although second-generation drug-eluting stents (DES) have improved outcomes in percutaneous coronary interventions (PCIs), important unmet needs remain. Two symposia at EuroPCR 2015 focused on two challenging scenarios. First, patients with diabetes mellitus (DM) generally have inferior outcomes following PCI. The Cre8™ stent (manufactured by CID Spa, a member of the Alvimedica Group) has shown unique efficacy in subpopulations of patients with DM during clinical trials, and a live case in a patient with diabetes illustrated the challenges of complex multivessel disease. Second, optimising stent selection towards devices that have demonstrated complete and early endothelialisation offers the potential to reduce the duration of dual antiplatelet therapy. The Cre8™ DES features a polymer-free platform and has been associated with low rates of in-stent thrombosis.
NASA Astrophysics Data System (ADS)
Minotti, P.; Le Moal, P.; Buchaillot, L.; Ferreira, A.
1996-10-01
The modelling of travelling-wave piezoelectric motors involves a wide variety of mechanical and physical phenomena and has therefore led to numerous approaches and models. The latter, mainly based on phenomenological and numerical (finite element method) analyses, are not suitable for current objectives oriented toward the development of efficient C.A.D. tools. An attempt is therefore made to investigate analytical approaches, in order to model theoretically the mechanical energy conversion at the stator/rotor interface. This paper is the first in a series of three articles devoted to the modelling of such rotary motors. After a short description of the operating principles specific to piezomotors, the mechanical and tribological assumptions made for the driving mechanism of the rotor are briefly described. It is then shown that the kinematic and dynamic modelling of the stator, combined with a static representation of the stator/rotor interface, provides an efficient way to calculate the loading characteristics of the driving shaft. Finally, the specifications of a new software package named C.A.S.I.M.M.I.R.E., recently developed on the basis of our earlier mechanical modelling, are described. In the last of these three papers, the theoretical simulations will be shown to agree closely with experimental data on Japanese SHINSEI motors, and the results reported in this paper will lead to the structural optimisation of future travelling-wave ultrasonic motors. C.A.S.I.M.M.I.R.E. already constitutes an effective tool for optimising future travelling-wave motors and has seen a first industrial application.
NASA Astrophysics Data System (ADS)
Ganem, G.; Dubray, B.
1998-04-01
Treatment intensification is needed to overcome the disappointing efficacy of anticancer treatments used as single modalities. The conception, comparison and optimisation of radiotherapy-chemotherapy combinations are hampered by the lack of a common tool enabling clinicians to quantify the effects of associating different treatment modalities. This difficulty is mainly due to chemotherapy (the large array of drugs and their pharmacological uncertainties), but is also a consequence of end-point multiplicity (tumour control, normal-tissue injury) and of the complexity of tumour biology.
NASA Astrophysics Data System (ADS)
Dalverny, O.; Capéraa, S.; Pantalé, O.; Sattouf, C.
2002-12-01
This article presents a methodology for identifying constitutive laws and contact laws for metallic materials under dynamic loading at high strain rates. The tests are performed with experimental set-ups fitted to a gas launcher, which delivers projectile velocities of about 350 m/s for a total mass of 30 g. The first test is a Taylor impact, corresponding to compressive mechanical loading. The second, a "conical extrusion" test, allows friction laws at high speed to be determined. The general procedure for identifying constitutive laws from dynamic tests relies on post-mortem analysis of the specimens and on correlating these experimental results with a numerical model of the tests. For both cases above, we present the optimal test configuration and the results obtained with a Levenberg-Marquardt optimisation algorithm.
Elbouti, Anass; Rafai, Mostafa; Chouaib, Naoufal; Jidane, Said; Belkouch, Ahmed; Bakkali, Hicham; Belyamani, Lahcen
2016-01-01
This study aims to describe antibiotic prescribing practices, assess their appropriateness and their compliance with usage rules, and examine the factors likely to influence them. It is a cross-sectional evaluation of antibiotic prescriptions covering 105 patients, carried out in the medical-surgical emergency department of the H.M.I.Med V hospital in Rabat over a one-month period. Data were collected with a questionnaire recording demographic and anamnestic data, medical history, known allergies, specific clinical findings, paraclinical data and the detailed antibiotic prescription. The collected data were then reviewed by a referring physician responsible for flagging any treatment errors. Among the infections that prompted antibiotic prescription, respiratory and urinary conditions ranked first; the antibiotic families most commonly used were penicillins, quinolones and cephalosporins. 74 prescriptions (70.5%) were both appropriate and compliant, versus 9 (8.6%) that were justified but not appropriate, and 6 (5.7%) judged unjustified by the referring physician owing to the absence of infection. Evaluations of medical practice are rarely conducted in healthcare institutions; this study was undertaken in that spirit, to improve the appropriateness of our antibiotic prescriptions and to optimise their compliance with current recommendations. PMID:28292124
Gordon, G T; McCann, B P
2015-01-01
This paper describes the basis of a stakeholder-based sustainable optimisation indicator (SOI) system to be developed for small-to-medium sized activated sludge (AS) wastewater treatment plants (WwTPs) in the Republic of Ireland (ROI). Key technical publications relating to best practice plant operation, performance audits and optimisation, and indicator and benchmarking systems for wastewater services are identified. Optimisation studies were developed at a number of Irish AS WwTPs and key findings are presented. A national AS WwTP manager/operator survey was carried out to verify the applied operational findings and identify the key operator stakeholder requirements for this proposed SOI system. It was found that most plants require more consistent operational data-based decision-making, monitoring and communication structures to facilitate optimised, sustainable and continuous performance improvement. The applied optimisation and stakeholder consultation phases form the basis of the proposed stakeholder-based SOI system. This system will allow for continuous monitoring and rating of plant performance, facilitate optimised operation and encourage the prioritisation of performance improvement through tracking key operational metrics. Plant optimisation has become a major focus due to the transfer of all ROI water services to a national water utility from individual local authorities and the implementation of the EU Water Framework Directive.
Algorithms and architectures for superconducting quantum computers
NASA Astrophysics Data System (ADS)
Blais, Alexandre
Since its formulation, information theory has been based, implicitly, on the laws of classical physics. Such a formulation is incomplete, however, since it does not account for quantum reality. Over the past twenty years, extending information theory to encompass purely quantum effects has attracted growing interest. Building a quantum information-processing system, a quantum computer, nevertheless presents many challenges, and this document addresses several aspects of them. We begin by presenting algorithmic concepts such as the optimisation of quantum computations and geometric quantum computation. We then turn to the design and to various aspects of the use of qubits based on Josephson junctions. In particular, a new superconducting qubit design is suggested. We also present an original approach to the interaction between qubits; this approach is very general, since it can be applied to different qubit designs. Finally, we consider the readout of superconducting flux qubits. The detector suggested here has the advantage that it can be decoupled from the qubit when no measurement is in progress.
Psychoses in the epileptic patient: a clinical approach based on a single case
Charfi, Nada; Trigui, Dorsaf; Ben Thabet, Jihène; Zouari, Nasreddine; Zouari, Lobna; Maalej, Mohamed
2014-01-01
To discuss the links between epilepsy and psychosis, the authors report the case of a 22-year-old woman treated for fronto-temporal epilepsy and referred to psychiatry for sensory and coenesthetic hallucinations and delusions of mystical content and influence, which appeared secondarily and did not improve with antiepileptic treatment. Psychotic symptoms in epileptic patients may fall within the framework of interictal, postictal or alternative psychoses. In the reported case, the psychotic symptoms were interictal and chronic; it was most likely a schizophreniform psychosis. In this type of psychosis, affective indifference and restriction of activities are rarely encountered, whereas rapid mood fluctuations are frequent. Delusional themes are quite often mystical, fuelled by auditory hallucinations and by unusual visual hallucinations. Negative symptoms are rare. Epileptic psychoses have not been identified as nosographic entities in the psychiatric classification systems (DSM-IV and ICD-10), which poses a problem for the recognition of these disorders. Collaboration between psychiatrist and neurologist is therefore necessary to better understand this complex comorbidity, avoid diagnostic errors and optimise management. PMID:25309665
An experimental methodology for evaluating the characteristics of avionics graphics platforms
NASA Astrophysics Data System (ADS)
Legault, Vincent
Within a context where the aviation industry intensifies the development of new visually appealing features and where time-to-market must be as short as possible, rapid graphics-processing benchmarking in a certified avionics environment becomes an important issue. With this work we intend to demonstrate that it is possible to deploy a high-performance graphics application on an avionics platform that uses certified graphical COTS components. Moreover, we aim to bring to the avionics community a methodology that allows developers to identify the elements needed for graphics-system optimisation, and to provide them with tools that can measure the complexity of this type of application and the resources required to scale a graphics system properly to their needs. As far as we know, no graphics-performance profiling tool dedicated to critical embedded architectures has been proposed. We therefore implemented a specialised benchmarking tool as an appropriate and effective solution to this problem. Our solution resides in extracting the key graphics specifications from an inherited application and using them afterwards in a 3D image-generation application.
Optimisation of active suspension control inputs for improved vehicle handling performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Kasać, Josip; Tseng, H. Eric; Hrovat, Davor
2016-11-01
Active suspension is commonly considered within the framework of vertical vehicle dynamics control aimed at improving ride comfort. This paper uses a collocation-type control-variable optimisation tool to investigate the extent to which the fully active suspension (FAS) application can be broadened to the task of vehicle handling/cornering control. The optimisation approach is first applied to FAS-only actuator configurations and three types of double-lane-change manoeuvres. The optimisation results are used to gain insight into the different control mechanisms FAS uses to improve handling performance in terms of path-following error reduction. For the same manoeuvres, the FAS performance is compared with that of different active-steering and active-differential actuators. The optimisation study is finally extended to combined FAS and active front- and/or rear-steering configurations, to investigate whether their complementary control authorities (over the vertical and lateral vehicle dynamics, respectively) can further improve handling performance.
NASA Astrophysics Data System (ADS)
Savard, Stephane
The first studies of antennas based on high-critical-temperature superconductors emitting an electromagnetic pulse with frequency content in the terahertz range date back to 1996. A superconducting antenna consists of a micro-bridge patterned in a superconducting thin film through which a DC current is applied. A visible laser beam focused on the micro-bridge drives the superconductor into a non-equilibrium state in which Cooper pairs are broken. Through the relaxation of the excess quasiparticles and the eventual re-formation of superconducting pairs, we can study the nature of superconductivity. Analysing the temporal kinetics of the electromagnetic field emitted by such a superconducting terahertz antenna has proved useful for qualitatively describing its characteristics as functions of operating parameters such as applied current, temperature and excitation power. Understanding the non-equilibrium state is the key to understanding how high-critical-temperature superconducting terahertz antennas work. With the ultimate aim of understanding this non-equilibrium state, we needed a method and a model to extract, more systematically, the intrinsic properties of the material composing the terahertz antenna from its emission characteristics. We developed a procedure to calibrate the time-domain spectrometer using proton-bombarded (H+) GaAs terahertz antennas as emitter and detector. Once the set-up was calibrated, we inserted a YBa2Cu3O7-delta dipole emitting antenna. A model with exponential rise and decay functions is used to fit the spectrum of the electromagnetic field of the YBa2Cu3O7-delta antenna, which allows us to extract the intrinsic properties of the material.
To confirm the validity of the chosen model, we measured the intrinsic properties of the same YBa2Cu3O7-delta sample with the visible-pump/terahertz-probe technique, which also gives access to the characteristic times governing the non-equilibrium evolution of this material. In the best-case scenario, these characteristic times should match those evaluated by modelling the antennas. Good control of the growth parameters of the superconducting thin films and of device fabrication allowed us to produce terahertz emitting antennas with excellent characteristics in terms of emission bandwidth (typically 3 THz), exploitable for time-domain spectroscopy applications. The model developed and retained for fitting the terahertz spectrum describes the characteristics of the superconducting antenna well over all operating parameters. However, the link with the pump-probe technique when comparing intrinsic properties is not direct, even though both techniques show that the carrier relaxation time increases near the critical temperature. The pump-probe data indicate that the measured relaxation time depends on the probe frequency, which complicates matching the intrinsic properties between the two techniques. Likewise, the relaxation time extracted from the terahertz antenna spectrum increases on approaching the critical temperature (Tc) of YBa2Cu3O7-delta. The temperature dependence of the relaxation time follows a power law in the inverse of the superconducting gap with exponent 5, i.e. 1/Delta^5(T). The work presented in this thesis provides a better description of the characteristics of high-critical-temperature superconducting antennas and relates them to the intrinsic properties of the material composing them.
In addition, this thesis presents the parameters to adjust, such as the applied current, the pump power and the operating temperature, in order to optimise the emission and performance of these superconducting antennas, in particular to maximise their frequency span for terahertz spectroscopy applications. Several of the results obtained nonetheless highlight the difficulty of describing the non-equilibrium state and the need to develop a theory for the superconductor YBa2Cu3O7-delta.
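The reported temperature dependence can be written compactly. Combining it with a BCS-like gap closure near Tc is an assumption added here for illustration, not a result of the thesis:

\[
\tau_r(T) \propto \frac{1}{\Delta^5(T)}, \qquad
\Delta(T) \approx \Delta_0\,(1 - T/T_c)^{1/2}
\;\Rightarrow\;
\tau_r(T) \propto (1 - T/T_c)^{-5/2},
\]

i.e. under that mean-field assumption the extracted relaxation time would diverge as (1 - T/Tc)^(-5/2) on approaching the critical temperature, consistent with the qualitative increase near Tc observed by both techniques.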
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed method than for the PSO-optimised controller or any non-optimised backstepping controller.
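Since the paper benchmarks against PSO, a minimal global-best PSO sketch is given below. The synchronisation cost is a hypothetical scalar surrogate error model standing in for the full master-slave Colpitts simulation, and the gain bounds and swarm hyperparameters are assumptions.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` over a box using a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())

def sync_cost(gains, e0=1.0, dt=1e-3, steps=2000):
    """Hypothetical surrogate: integrated squared synchronisation error of
    a scalar error model de/dt = -k1*e + k2*sin(e), integrated by Euler."""
    k1, k2 = gains
    e, acc = e0, 0.0
    for _ in range(steps):
        e += dt * (-k1 * e + k2 * np.sin(e))
        acc += e * e * dt
    return acc

bounds = np.array([[0.0, 20.0], [0.0, 5.0]])          # assumed gain ranges
best_gains, best_cost = pso(sync_cost, bounds)
```

In the paper, the same loop structure applies, with the cost evaluated by simulating the controlled slave oscillator against the master and the search performed by SSO rather than PSO.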
Determination of the cutting speed in machining using neural networks
NASA Astrophysics Data System (ADS)
Amor, Noureddine; Noureddine, Ali; Kherfane, Riad Lakhdar
2018-02-01
In machining by chip removal, it is necessary to know such elements as the geometry to be obtained, the material to be machined, the type of operation, the machine tool, the cutting tool, the depth of cut, the feed and the cutting speed. The last three, which are quantifiable, are determined using tables, charts, dedicated software or CAD/CAM systems, which offer a wide range of choices but lack transparency and flexibility. The contribution of this article is to apply artificial-intelligence techniques based on artificial neural networks (ANNs) to the development of a decision system for choosing cutting parameters. To model the cutting speed, we use an ANN with a backpropagation algorithm. Experimental values taken from a source chart are used to build and train the ANN to estimate the cutting speed, using a number of influencing parameters as inputs. The validity of the results shows that this method can be applied successfully and that its use in machining can help optimise cutting conditions through a more precise and faster choice of cutting speed.
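A minimal sketch of the approach: one hidden tanh layer trained by plain batch backpropagation. The machining data are synthetic stand-ins for chart values (the input ranges, the target relation, the layer size and the learning rate are all assumptions), but the training loop is the standard backpropagation the article refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for chart data: cutting speed (m/min) assumed to fall
# with material hardness (HB), depth of cut (mm) and feed (mm/rev).
X = rng.uniform([100, 0.5, 0.05], [300, 4.0, 0.4], (200, 3))
y = (400.0 - 0.8 * X[:, 0] - 25.0 * X[:, 1] - 180.0 * X[:, 2])[:, None]

# Normalise inputs and target for stable training.
Xm, Xs = X.mean(0), X.std(0)
ym, ys = y.mean(), y.std()
Xn, yn = (X - Xm) / Xs, (y - ym) / ys

# One hidden tanh layer, linear output, batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    H = np.tanh(Xn @ W1 + b1)            # forward pass
    pred = H @ W2 + b2
    err = pred - yn                      # output-layer error
    gW2 = H.T @ err / len(Xn); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)     # backpropagate through tanh
    gW1 = Xn.T @ dH / len(Xn); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(x):
    """Cutting speed estimate for raw (hardness, depth, feed) inputs."""
    h = np.tanh(((x - Xm) / Xs) @ W1 + b1)
    return (h @ W2 + b2) * ys + ym
```

In practice the training set would be the digitised chart values, and a held-out subset would be used to check generalisation before trusting the predicted speeds.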
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, a key scenario in vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the possibility of system-level failure remains acceptable. This process, referred to as possibility-based design optimisation, integrates the fuzzy finite element analysis applied for uncertainty treatment in crash simulations. It is the counterpart of reliability-based design optimisation, which is used in a probabilistic context with statistically defined parameters (variabilities).
NASA Astrophysics Data System (ADS)
Wang, Congsi; Wang, Yan; Wang, Zhihai; Wang, Meng; Yuan, Shuai; Wang, Weifeng
2018-04-01
It is well known that calculating and reducing the radar cross section (RCS) of an active phased array antenna (APAA) is both difficult and complicated, and balancing radiating and scattering performance while reducing the RCS remains an open problem. This paper therefore develops a coupled structure and scattering array-factor model of the APAA, based on the phase errors of the radiating elements generated by structural distortion and installation errors of the array. To obtain optimal radiating and scattering performance, an integrated optimisation model is built that optimises the installation height of all radiating elements along the normal direction of the array; the particle swarm optimisation method is adopted, with the gain loss and the scattering array factor as the fitness function. Simulations indicate that the proposed coupled model and integrated optimisation method can effectively decrease the RCS while simultaneously guaranteeing the necessary radiating performance, demonstrating significant application value in the engineering design and structural evaluation of APAAs.
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations applying the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Given local DRL for infants and chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed decreasing the kVp and applying the automatic control exposure. To assess the image quality, an analysis of high-contrast resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborns and chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality remain to be needed to ensure the diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful to detect radiation protection problems and to perform optimisation procedures in paediatric conventional imaging without undue delay, as ED requires.
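Local DRLs are conventionally set at the 75th percentile of the dose-indicator distribution for a given examination type. A minimal sketch under that convention follows; the entrance surface air kerma samples and the reference level are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical entrance surface air kerma samples (µGy) for one
# examination type (e.g. newborn chest AP); values are illustrative only.
esak = np.array([55, 62, 48, 90, 71, 66, 84, 59, 77, 102, 95, 74])

# Local DRL: third quartile of the dose distribution (the usual convention).
local_drl = np.percentile(esak, 75)

# Flag a need for optimisation when the local DRL exceeds a published
# reference value (here a made-up EC-style DRL of 80 µGy).
ec_drl = 80.0
needs_optimisation = local_drl > ec_drl
excess_pct = 100.0 * (local_drl - ec_drl) / ec_drl
```

After an optimisation campaign (e.g. lower kVp with automatic exposure control), recomputing `local_drl` on post-change studies quantifies the achieved dose reduction, which is how the 54% figure above would be obtained.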
NASA Astrophysics Data System (ADS)
Minard, Benoit
Nowadays, the noise generated by aircraft has become a major development concern in aeronautics. Numerous studies are therefore carried out in the field, and a first approach is to model this noise numerically so as to substantially reduce design costs. In this context, an engine manufacturer asked the Université de Sherbrooke, and more specifically the acoustics group of the Université de Sherbrooke (GAUS), to develop a tool for computing the propagation of acoustic waves in nacelles and for studying installation effects. This prediction tool lets them run studies to optimise the acoustic treatments ("liners") and the geometry of the nacelles for work on the nacelle interior, and engine-positioning and design studies for installation effects. The objective of this master's project was therefore to continue the work of [gousset, 2011] on using a ray-tracing method to study the installation effects of aircraft engines. Improving the code and its speed, reliability and generality were the main objectives. The code can handle acoustically treated surfaces ("liners"), can account for edge diffraction, and can be used for studies in complex environments such as aircraft nacelles. The developed code works in 3D and proceeds in three steps: (1) computation of the initial beams (division of a sphere or half-sphere, meshing of the geometry's surfaces); (2) propagation of the beams through the environment under study: computation of all the characteristics of the converging rays (amplitude, phase, number of reflections, ...);
(3) reconstruction of the pressure field at one or more points in space from the converging rays (summation of the contributions of each ray): coherent summation. The code (GA3DP) accounts for wall surface treatments, source directivity, atmospheric attenuation and first-order diffraction. It was validated against several methods, such as the image-source method, modal analysis and the boundary element method. A Matlab module was created specifically for the study of installation effects and integrated into the existing code at Pratt & Whitney Canada.
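The coherent summation of step (3) can be sketched as follows. This is a minimal free-field illustration with a single scalar reflection coefficient; it omits atmospheric attenuation, directivity and diffraction, and is not the GA3DP implementation.

```python
import numpy as np

def coherent_sum(amplitudes, path_lengths, freq, n_reflections, refl_coeff, c=343.0):
    """Coherent summation of converging-ray contributions at a receiver point.
    Each ray carries an amplitude (spreading loss), a phase from its path
    length, and a wall-reflection factor applied once per reflection."""
    k = 2 * np.pi * freq / c                         # acoustic wavenumber
    phases = np.exp(-1j * k * np.asarray(path_lengths))
    walls = np.asarray(refl_coeff, dtype=complex) ** np.asarray(n_reflections)
    return np.sum(np.asarray(amplitudes) * walls * phases)   # complex pressure

# Two rays: a direct path and one wall bounce off a soft (liner-like) surface
p = coherent_sum([1.0, 0.5], [1.0, 1.4], freq=1000.0,
                 n_reflections=[0, 1], refl_coeff=0.8)
print(abs(p))
```

Because the summation is coherent, the two contributions can interfere constructively or destructively depending on the path-length difference.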
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which fully active suspension (FAS) can be applied to improve vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw-rate-error reduction indices. The FAS control performance is compared with that of the standard ESC system, an optimal active brake system and a combined FAS and ESC configuration. Second, the optimisation approach is applied to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and of limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility compared with the ESC system, and that it can reduce the braking distance by up to 5% in distinctly non-uniform friction conditions.
NATO Human Resources (Manpower) Management (Gestion des ressources humaines (effectifs) de l’OTAN)
2012-02-01
performance management, reward and pay management, and staff motivation. Consequently, human resources management must ... importance over time. Human resources, originally responsible for hiring, dismissal, payroll and the management of ... the effects of performance appraisal systems;
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers have been identified as predictors of success in acrobatics. The purpose of the present study was to evaluate the relative contribution of these parameters to performance across expertise-based and optimisation-based improvements. The counter movement forward in flight (CMFIF) was chosen for the contrast between how accessible it is to attempt and how complex it is to master. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment multibody model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state contributed to performance only in novice recorded trials. The contribution of moment of inertia to performance increased from novice recorded to novice optimised, advanced recorded and advanced optimised trials. The contribution of momentum transfer to the trunk during flight prevailed in all recorded trials. Although optimisation decreased the transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique.
Conversely, improved aerial technique was the main driver of performance gains in advanced gymnasts. For both groups, reduction of the moment of inertia should be a focus. The method proposed in this article could be generalised to any aerial skill learning investigation. PMID:28422954
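The stepwise multiple linear regression used to rank the contribution of release, inertia and transfer parameters can be sketched as a greedy forward selection. The data and variable names below are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection: at each step add the predictor that most
    reduces the residual sum of squares of an ordinary least-squares fit."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_terms:
        best, best_rss = None, np.inf
        for j in remaining:
            cols = chosen + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])   # intercept + terms
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
        remaining.remove(best)
    return [names[j] for j in chosen]

# Synthetic trials: performance depends mostly on "inertia", weakly on "transfer_arms"
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
y = 2.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=60)
print(forward_stepwise(X, y, ["release", "inertia", "transfer_trunk", "transfer_arms"]))
```

The order in which predictors enter the model is the ranking of their relative contributions.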
Processing and characterisation of self-hardening parts produced with new master alloys
NASA Astrophysics Data System (ADS)
Bouchemit, Arslane Abdelkader
The self-hardenability (sinter hardenability) of powder metallurgy (PM) steels makes it possible to obtain parts with an as-quenched microstructure (martensite and/or bainite) directly during cooling at the exit of the sintering furnace (industrial sintering: 10 to 45 °C/min [550 to 350 °C]). Among other benefits, this eliminates the austenitising and quenching heat treatments (in water: ≈ 2700 °C/min, or in oil: ≈ 1100 °C/min [550 to 350 °C] [17]) generally required after sintering to obtain a martensitic microstructure. The manufacturing process is thus simplified and cheaper, and the part distortion caused by the rapid cooling of a quench is avoided. Moreover, oil baths are no longer needed, which makes the process safer and more environmentally friendly. The main parameters governing self-hardenability are the cooling rate and the chemical composition of the steel. Nowadays, forced-convection cooling systems combined with industrial furnaces provide high cooling rates at the furnace exit (60 to 300 °C/min [550 to 350 °C]) [18, 19]. In addition, the critical cooling rate that induces the formation of the quench structure is strongly influenced by the chemical composition of the steel: the more alloyed the steel (up to a certain limit), the lower this critical cooling rate. Molybdenum, nickel and copper are the elements usually used in PM. Manganese and chromium are cheaper and have a stronger effect on self-hardenability; despite this, they are rarely used because of their susceptibility to oxidation and the loss of compressibility caused by manganese.
The main objective of this project is to develop self-hardening mixes by adding master alloys (MA: MA1, MA2 and MA4) highly alloyed with manganese (5 to 15 wt%) and chromium (5 to 15 wt%) and containing a large amount of carbon (≈ 4 wt%), developed by Ian Bailon-Poujol during his master's work [20]. The high carbon content of these master alloys protects the oxidation-prone alloying elements during every step of the process: in the liquid bath during melting and water atomisation, during milling, and during the sintering of parts containing these master alloys. Previously, Ian Bailon-Poujol had studied the milling of some water-atomised master alloys and had begun the development of self-hardening mixes as well as diffusion studies of the alloying elements. For this project, the development of the self-hardening mixes involved optimising every step of the processing to obtain the best possible mix properties before sintering (flow, green strength...) and after sintering (hardness, microstructure...), both for the master alloys water-atomised by Ian Bailon-Poujol and for a master alloy of similar chemistry that was gas-atomised. (Abstract shortened by ProQuest.).
Matungulu, Charles Matungulu; Kandolo, Simon Ilunga; Mukengeshayi, Abel Ntambue; Nkola, Angèle Musau; Mpoyi, Dorcas Ilunga; Mumba, Sylvie Katanga; Kabamba, Julie Ndayi; Cowgill, Karen; Kaj, Françoise Malonga
2015-01-01
Introduction Increasing contraceptive prevalence is an objective shared by all actors in programmes that aim to reduce maternal and infant mortality, improve adolescent reproductive health, fight HIV/AIDS and sexually transmitted infections (STIs), promote family well-being and slow population growth. The objective of this study was to determine the modern contraceptive prevalence and identify the factors associated with the use of contraceptive methods in the Mumbunda health zone. Methods A cross-sectional analytical study was conducted among married women aged 15 to 49, from May to June 2014. Using a pre-tested and validated questionnaire, we collected data by interview on sociodemographic and obstetric characteristics and on contraceptive practice. Data were analysed with SPSS version 21. Results In total, 500 women were included in this study; their mean age was 27.9 ± 6.1 years. Modern contraceptive prevalence was 27.6%. Attitude (aOR = 4.79; 95% CI: 1.59-14.43; p < 0.001), level of knowledge of contraceptive methods (aOR = 1.87; 95% CI: 1.22-2.87; p < 0.001) and partner support (aOR = 1.87; 95% CI: 1.22-2.87; p < 0.001) were significantly associated with the use of modern contraceptive methods. Conclusion Any effort to increase contraceptive prevalence should target attitude, level of knowledge of methods and partner support in order to optimise the use of modern contraception in the Mumbunda health zone (HZ). PMID:26977237
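The reported associations are adjusted odds ratios from a multivariable model. As a simpler sketch, a crude odds ratio with a Woolf-type 95% confidence interval can be computed from a 2x2 table; the counts below are illustrative, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: contraceptive use among women with vs without partner support
print(odds_ratio_ci(60, 100, 78, 262))
```

An adjusted OR additionally controls for the other covariates via logistic regression; the crude version shown here ignores confounding.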
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
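A common building block of such ensemble methods is the co-association (consensus) matrix, whose (i, j) entry is the fraction of base clusterings that group points i and j together. This sketch shows the generic construction, not the MOCC-specific agreement-separation criterion.

```python
import numpy as np

def coassociation(labelings):
    """Build the co-association matrix from several base clusterings:
    entry (i, j) is the fraction of clusterings that put points i and j
    in the same cluster. A final consensus clustering can then be obtained
    by cutting or re-clustering this matrix."""
    labelings = np.asarray(labelings)        # shape: (n_clusterings, n_points)
    n = labelings.shape[1]
    M = np.zeros((n, n))
    for labels in labelings:
        M += (labels[:, None] == labels[None, :])   # 1 where co-clustered
    return M / len(labelings)

# Three base clusterings of four points (e.g. from different k-means runs)
runs = [[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]]
print(coassociation(runs))
```

Pairs with a co-association near 1 are grouped consistently by all base algorithms, which is what makes the consensus more stable than any single run.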
Lee, Linda; Heckman, George; Molnar, Frank J.
2015-01-01
Abstract Objective To help family physicians better recognise frailty and its implications for the care of older patients. Sources of information PubMed-MEDLINE was searched from 1990 to 2013 for English-language articles using the following groups of key words and MeSH headings: frail elderly, frail, frailty; aged, geriatrics, geriatric assessment, health services for the aged; primary health care, community health services and family practice. Main message Frailty is common, particularly among older people with complex chronic diseases such as heart failure and chronic obstructive pulmonary disease. New evidence demonstrates the importance of frailty as a predictor of adverse outcomes in older people. Although there is currently no consensus on the best ways to assess and diagnose frailty in primary care, individual markers of frailty, such as slow gait speed, offer promising practical means of screening for it. Detecting frailty in primary care may offer an opportunity to slow its progression through proactive interventions such as exercise. Recognising frailty can guide appropriate counselling and anticipatory preventive measures for patients when medical interventions are being considered. It can also help identify coexisting conditions that contribute to and influence frailty, and optimise their management. Future research should focus on identifying practical and effective means of assessing and appropriately managing these vulnerable patients at the primary care level. Conclusion Despite its importance, the concept of frailty has received little attention in family medicine.
Frailty can easily go unnoticed because its manifestations are often subtle and slowly progressive, and it can therefore be mistaken for normal ageing. Physician training has emphasised specific medical diseases rather than overall vulnerability. For primary care physicians, recognising frailty can help them counsel patients and family members appropriately about the risks of medical interventions.
The Dark Energy Survey Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Morganson, E.; Gruendl, R. A.; Menanteau, F.; Carrasco Kind, M.; Chen, Y.-C.; Daues, G.; Drlica-Wagner, A.; Friedel, D. N.; Gower, M.; Johnson, M. W. G.; Johnson, M. D.; Kessler, R.; Paz-Chinchón, F.; Petravick, D.; Pond, C.; Yanny, B.; Allam, S.; Armstrong, R.; Barkhouse, W.; Bechtol, K.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Buckley-Geer, E.; Covarrubias, R.; Desai, S.; Diehl, H. T.; Goldstein, D. A.; Gruen, D.; Li, T. S.; Lin, H.; Marriner, J.; Mohr, J. J.; Neilsen, E.; Ngeow, C.-C.; Paech, K.; Rykoff, E. S.; Sako, M.; Sevilla-Noarbe, I.; Sheldon, E.; Sobreira, F.; Tucker, D. L.; Wester, W.; DES Collaboration
2018-07-01
The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a ∼5000 deg2 survey of the southern sky in five optical bands (g, r, i, z, Y) to a depth of ∼24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g, r, i, z) over ∼27 deg2. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
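Coaddition to maximise imaging depth is typically an inverse-variance weighted pixel average; the sketch below assumes that standard scheme (the exact DES weighting may differ) on toy aligned exposures.

```python
import numpy as np

def coadd(images, variances):
    """Inverse-variance weighted coaddition of aligned exposures.
    `variances` holds one scalar noise variance per exposure (per-pixel
    variance maps would broadcast the same way). Each output pixel is a
    weighted mean; the output variance is the reciprocal of the summed
    weights, which is why stacking increases depth."""
    images = np.asarray(images, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    w = w.reshape((-1,) + (1,) * (images.ndim - 1))   # broadcast over pixels
    stack = np.sum(w * images, axis=0) / np.sum(w, axis=0)
    var_out = 1.0 / np.sum(w, axis=0)
    return stack, var_out

# Two toy 2x2 exposures of the same field, the second noisier than the first
imgs = [np.full((2, 2), 10.0), np.full((2, 2), 12.0)]
stack, var = coadd(imgs, variances=[1.0, 3.0])
print(stack[0, 0], var[0, 0])
```

The noisier exposure is down-weighted by a factor of three, and the stacked variance is lower than either input's, which is the point of coaddition.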
Almén, Anja; Båth, Magnus
2016-06-01
The overall aim of the present work was to develop a conceptual framework for managing radiation dose in diagnostic radiology with the intention of supporting optimisation. An optimisation process was first derived, and the framework for managing radiation dose, based on that process, was then outlined. The optimisation process starts from four stages: providing equipment, establishing methodology, performing examinations and ensuring quality, and comprises a series of activities and actions at these stages. The current system of diagnostic reference levels is an activity in the last stage, ensuring quality; it is thus a reactive activity that only partially engages the core activity of the radiology department, performing examinations. Three reference dose levels (possible, expected and established) were assigned to the first three stages of the optimisation process, excluding ensuring quality. A reasonably achievable dose range is also derived, indicating an acceptable deviation from the established dose level; a reasonable radiation dose for a single patient lies within this range. The suggested framework for managing radiation dose should be regarded as one part of the optimisation process, which comprises a variety of complementary activities of which managing radiation dose is only one. This emphasises the need for a holistic approach integrating the optimisation process into different clinical activities.
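The idea of an established dose level with a reasonably achievable range around it can be sketched as a simple classification. The 25% band width below is an illustrative assumption; the paper derives the range rather than fixing a universal width.

```python
def dose_status(dose, established, tolerance=0.25):
    """Classify a single-patient dose against the established reference level,
    using a 'reasonably achievable' band of +/- `tolerance` around it.
    (Illustrative band width; not a value from the paper.)"""
    lo, hi = established * (1 - tolerance), established * (1 + tolerance)
    if dose < lo:
        return "below range - check that image quality is still diagnostic"
    if dose > hi:
        return "above range - investigate equipment and technique"
    return "within reasonably achievable range"

print(dose_status(1.3, established=1.0))
```

Note that falling below the range is also flagged: in an optimisation framework, an unusually low dose can signal degraded image quality rather than good practice.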
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GAs is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs well and that CAC10-GA outperforms AC10-GA.
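For reference, the classic real-coded arithmetic crossover (the basis that CAC compounds; the paper's exact CAC form is not reproduced here) paired with uniform mutation looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def arithmetic_crossover(p1, p2):
    """Classic real-coded arithmetic crossover: the two children are
    complementary convex combinations of the parents."""
    a = rng.random()
    return a * p1 + (1 - a) * p2, (1 - a) * p1 + a * p2

def uniform_mutation(x, low, high, rate=0.1):
    """Uniform mutation: each gene is reset uniformly in [low, high]
    with probability `rate`."""
    mask = rng.random(x.shape) < rate
    return np.where(mask, rng.uniform(low, high, x.shape), x)

p1, p2 = np.array([0.0, 1.0]), np.array([1.0, 0.0])
c1, c2 = arithmetic_crossover(p1, p2)
print(c1 + c2)   # the two children sum (up to rounding) to p1 + p2
```

Because the children are convex combinations, they always stay inside the segment between the parents, which is why compounding such combinations (as CAC does) can widen the explored region.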
NASA Astrophysics Data System (ADS)
Kaliszewski, M.; Mazuro, P.
2016-09-01
A simulated annealing optimisation method for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder when bent to fit it. An FEM analysis method for an arbitrary piston ring geometry is implemented in ANSYS. The demanded pressure function (based on formulae presented by A. Iskra) and the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the simulated annealing method to the piston ring optimisation task is proposed and visualised, and difficulties that may lead to a lack of convergence are presented. An example of an unsuccessful optimisation performed in APDL is discussed, and a possible line of further improvement of the optimisation is proposed.
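The simulated annealing loop itself is generic: accept worse candidates with probability exp(-delta/T) and cool T geometrically. The sketch below replaces the FEM-based ring objective with a simple test function, so all numeric values here are assumptions.

```python
import math
import random

random.seed(4)

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000):
    """Generic simulated annealing on a scalar design variable. In the paper
    the objective would be the FEM-computed deviation from the demanded
    contact-pressure curve; here it is a stand-in test function."""
    x, fx, t = x0, objective(x0), t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)       # random neighbour
        fc = objective(cand)
        # Always accept improvements; accept worse moves with prob exp(-d/T)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling                                  # geometric cooling
    return x, fx

x, fx = simulated_annealing(lambda v: (v - 2.0) ** 2, x0=10.0)
print(round(x, 2))
```

Early on, the high temperature lets the search escape local minima; as T shrinks, the walk settles into the best basin found, which is why a poorly tuned cooling schedule is a typical cause of the non-convergence the abstract mentions.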
The Dark Energy Survey Image Processing Pipeline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morganson, E.; et al.
The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
A critical study of the management of 159 elderly patients seen in psychiatric consultation
Ben Thabet, Jihène; Ammar, Yousra; Charfi, Nada; Zouari, Lobna; Zouari, Nasreddine; Gaha, Lotfi; Maalej, Mohamed
2014-01-01
Introduction Population ageing is associated with an increased prevalence of age-related morbidity. Psychotropic prescription in the elderly is increasingly frequent in institutions, doses are increasingly high, and polypharmacy is common. We set out to describe therapeutic practice for elderly patients consulting in psychiatry and to compare it with the latest recommendations in the field. Methods This retrospective, descriptive study included subjects aged 60 or over who consulted in psychiatry for the first time at the Hédi Chaker University Hospital in Sfax in 2010 or 2011. Results We collected 159 records. The mean age was 73. Dementia and mood disorders were the most frequent diagnoses. Therapeutically, polytherapy combining at least two psychotropics of different classes was prescribed in 55.9% of cases. In 60.3% of subjects, treatment was started directly at full dose. No record mentioned psychotherapeutic care. Conclusion The management of the patients in our study did not comply with the recommendations, notably regarding drug combinations, dose titration and the combination of psychotherapy with pharmacotherapy. Informing physicians and raising their awareness of the specific needs of the elderly would help optimise the care provided to them, including in psychiatry. PMID:25120873
Optimisation in radiotherapy. III: Stochastic optimisation algorithms and conclusions.
Ebert, M
1997-12-01
This is the final article in a three-part examination of optimisation in radiotherapy. The previous articles established the bases and form of the radiotherapy optimisation problem and examined certain types of optimisation algorithm, namely those that perform some form of ordered search of the solution space (mathematical programming) and those that attempt to find the closest feasible solution to the inverse planning problem (deterministic inversion). The current paper examines algorithms that search the space of possible irradiation strategies by stochastic methods. The resulting iterative search methods move about the solution space by sampling random variates, which gradually become more constricted as the algorithm converges upon the optimal solution. This paper also discusses the implementation of optimisation in radiotherapy practice.
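The class of methods described, sampling random variates that gradually become more constricted, can be illustrated with a greedy random search whose sampling width shrinks each iteration. The objective is a toy stand-in, not a real treatment-planning system.

```python
import numpy as np

rng = np.random.default_rng(5)

def shrinking_random_search(objective, x0, sigma0=1.0, shrink=0.99, iters=1500):
    """Iterative stochastic search: sample candidates from a Gaussian variate
    around the incumbent and gradually constrict the sampling width, so the
    search narrows as it converges (the behaviour the review describes)."""
    x, fx, sigma = np.asarray(x0, dtype=float), objective(x0), sigma0
    for _ in range(iters):
        cand = x + rng.normal(scale=sigma, size=x.shape)
        fc = objective(cand)
        if fc < fx:                      # greedy: keep only improvements
            x, fx = cand, fc
        sigma *= shrink                  # constrict the random variates
    return x, fx

# Toy "beam weights" problem: match a prescribed dose vector
target = np.array([1.0, 0.5, 0.25])
x, fx = shrinking_random_search(lambda w: np.sum((w - target) ** 2), [0.0, 0.0, 0.0])
print(fx)
```

Simulated annealing differs from this sketch mainly in also accepting some worsening moves early on, which helps escape local minima in non-convex planning objectives.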
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward. It is a challenge to solve a CFD-based design problem, which is usually high-dimensional with multiple objectives and constraints. An integrated architecture for CFD-based design optimisation is therefore desirable, yet our review of existing work found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or at the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation.
To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and the developed algorithms deal successfully and efficiently with a design optimisation involving over 200 design variables.
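The data-level integration between optimiser and CFD solver can be sketched as a population evaluation dispatched to a worker pool. The objective below is an analytic stand-in; in the architecture this call would write a case file, launch the external solver and parse the resulting force coefficients.

```python
from concurrent.futures import ThreadPoolExecutor

def cfd_objective(design):
    """Stand-in for a call into an external CFD solver. Launching and waiting
    on an external solver process is I/O-bound, so a thread pool is a
    reasonable dispatch layer for this sketch."""
    return sum((d - 0.3) ** 2 for d in design)   # toy drag-like proxy

def evaluate_population(population):
    """Data-level integration point: the optimiser hands a population of
    candidate designs to a pool of workers and collects their fitnesses."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(cfd_objective, population))

pop = [[0.1, 0.2], [0.3, 0.3], [0.5, 0.6]]
scores = evaluate_population(pop)
print(scores)
```

Because each CFD evaluation is independent, population-based optimisers (such as the two intelligent algorithms the paper embeds) parallelise naturally at exactly this boundary.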
NASA Astrophysics Data System (ADS)
Fall, H.; Charon, W.; Kouta, R.
2002-12-01
In recent decades, significant activity worldwide has been directed at active control. The goal of this research has essentially been to improve the performance, reliability and safety of systems, notably in the case of structures subjected to random vibrations. Substantial work has been devoted to the use of "smart materials" as sensors and actuators. This article analyses the reliability of mechanical systems by studying actuator and sensor failures, and demonstrates the effect of these failures on system stability and performance. Design methodologies are reviewed, and numerical examples based on the control of a panel under dynamic loading illustrate the proposed method.
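The effect of an actuator failure on closed-loop stability can be illustrated with a small state-space check: zero out the failed actuator's contribution and test the eigenvalues. The model and gains below are illustrative, not the paper's panel example.

```python
import numpy as np

# State-space model x' = A x + B u with state feedback u = -K x.
# The open-loop mode has negative damping (flutter-like), so feedback
# damping from BOTH actuators is needed to stabilise it.
A = np.array([[0.0, 1.0],
              [-4.0, 0.4]])           # unstable lightly damped mode
B = np.array([[0.0, 0.0],
              [1.0, 1.0]])            # two actuators acting on the same mode
K = np.array([[0.0, 0.3],
              [0.0, 0.3]])            # rate-feedback gains (illustrative)

def stable(A_cl):
    """Closed loop is stable iff all eigenvalues have negative real part."""
    return bool(np.all(np.linalg.eigvals(A_cl).real < 0))

healthy = A - B @ K                              # both actuators working
failed = A - B @ np.diag([1.0, 0.0]) @ K         # actuator 2 produces no output
print(stable(healthy), stable(failed))
```

With both actuators the closed loop is stable; losing one removes half the feedback damping and the residual negative open-loop damping destabilises the mode, which is the kind of failure effect the article quantifies.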
NASA Astrophysics Data System (ADS)
Convert, Laurence
New radiotracers are continually being developed to improve diagnostic efficiency in molecular imaging, mainly in positron emission tomography (PET) and single-photon emission computed tomography (SPECT), in the fields of oncology, cardiology and neurology. Before they can be used in humans, these radiotracers must be characterised in small animals, mainly rats and mice. This requires drawing and analysing many blood samples (radioactivity measurement, plasma separation, separation of chemical species), which is a major challenge in rodents because of their very small blood volume (~1.4 ml for a mouse). Solutions providing partial analysis are reported in the literature, but none performs all the operations in a single system. The present research is part of a broader project to develop a microfluidic system for complete real-time blood analysis for the characterisation of new PET and SPECT radiotracers. A requirements specification was first drawn up, setting quantitative and qualitative criteria for each function of the chip. The microfluidic detection function was then developed. A review of prior work combining microfluidics and radioactivity detection showed that no existing solution met the project's criteria. Among the available technologies, microchannels in KMPR resist fabricated on p-i-n semiconductor detectors were identified as a suitable technological solution for the project. p-i-n detectors were then fabricated using a standard process. The encouraging performance obtained led to the launch of a master's project for their optimisation.
In parallel, the work continued with commercial detectors in the form of undiced wafers. A first device integrating KMPR channels on these wafers validated the concept, demonstrating the strong potential of these technological choices and encouraging further development along this path, including planned animal experiments. Prolonged use of the channels with undiluted blood is, however, particularly demanding for artificial materials. Albumin passivation considerably increased the blood compatibility of the KMPR resist. The initial concept, including channel passivation, was then optimised and integrated into a complete measurement system with all the control electronics and software. The final system was validated in small animals with a known radiotracer. This work constitutes the first demonstration of a high-efficiency microfluidic detector for PET and SPECT. This first building block of a broader project is already an innovative tool in itself that will increase the efficiency of developing more specific diagnostic tools, mainly for oncology, cardiology and neurology. Keywords: molecular imaging, positron emission tomography (PET), single-photon emission computed tomography (SPECT), microfluidics, radioactivity detector, KMPR, p-i-n diodes, haemocompatibility.
The downsides of losing weight
Bosomworth, N. John
2012-01-01
Abstract Objective To explore why long-term weight loss usually fails and to assess the consequences of various weight trajectories, including stability, loss and gain. Sources of information Studies evaluating weight parameters in the population are mostly observational. Level I evidence has been published assessing the influence of weight interventions on mortality and quality of life. Main message Only a small percentage of people who wish to lose weight succeed in doing so durably. Mortality is lowest in the high-normal-weight and overweight categories. The safest weight trajectory is weight stability with optimisation of physical and metabolic fitness. Mortality is shown to be lower in people with obesity-related comorbidities if they lose weight, and health-related quality of life is better in obese people who lose weight. In contrast, weight loss in an otherwise healthy obese person is associated with increased mortality. Conclusion Weight loss is advisable only for people with obesity-related comorbidities. Healthy obese people who want to lose weight should be informed that doing so may carry risks. A strategy that results in a stable body mass index with optimised physical and metabolic fitness, whatever the weight, is the safest weight-related intervention option.
Isotopic micro generators 1 volt; Les microgenerateurs radioisotopiques 1 volt (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bomal, R.; Devin, B.; Delaquaize, P.
1969-07-01
Various configurations for electrical isotopic generators in the milliwatt range are investigated; these generators are not of the classical thermoelectric type. The four following energy conversion methods are examined: thermionic, thermo-photovoltaic, radio-voltaic and wired thermoelectric. The calculations were conducted with a view not to the best energy conversion efficiency but to the need to reach a 1 volt output voltage directly, at an electrical power of 0.2 to 1 mW. High-temperature {sup 238}Pu sources (T above 1000 °C) are insulated by multilayer thermal insulation of the moly/alumina type. Optimised designs are given for methods 1 and 2 above. The thermionic converter is attractive for its compactness, and the wired thermoelectric converter is cheap, simple and rugged; neither method, however, allows the output voltage to be extended far above 1 volt. The TPV and RV converters can be designed for multi-volt applications. The radio-voltaic converter is about 1 per cent efficient, but irradiation defects induced in the semiconductor by high-energy radiation can strongly limit the lifetime of the generator. Isotope source technology is the determining factor for these micro generators. (author)
Analysis of the energy interactions between an arena and its refrigeration system
NASA Astrophysics Data System (ADS)
Seghouani, Lotfi
This thesis is part of a strategic project on arenas funded by NSERC (the Natural Sciences and Engineering Research Council of Canada), whose main goal is to develop a numerical tool capable of estimating and optimising energy consumption in ice arenas and curling rinks. Our work follows on from that of Daoud et al. (2006, 2007), who developed a transient 3D model (AIM) of the Camilien Houde arena in Montreal that computes the heat fluxes through the building envelope as well as the temperature and humidity distributions over a typical meteorological year. In particular, it computes the heat fluxes through the ice sheet due to convection, radiation and condensation. We first developed a model of the structure beneath the ice (BIM) that accounts for its 3D geometry, the different layers, transient effects, ground heat gains below and around the arena under study, and the brine inlet temperature in the concrete slab. The BIM was then coupled to the AIM. In a second step, we developed a quasi-steady-state model of the refrigeration system (REFSYS) for the arena under study, based on a combination of thermodynamic relations, heat transfer correlations and relations derived from data available in the manufacturer's catalogue. Finally, the AIM+BIM and the REFSYS were coupled within the TRNSYS software environment. Several parametric studies were undertaken to evaluate the effects of climate, brine temperature, ice thickness, etc. on the arena's energy consumption, and some strategies for reducing this consumption were studied.
The considerable heat recovery potential at the condensers, which can reduce the energy required by the arena's ventilation system, was highlighted. Keywords: arena, refrigeration system, energy consumption, energy efficiency, ground conduction, annual performance.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important for real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. The proposed RMPC then optimises both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. The numerical examples reflect the enlarged feasible region and the improved performance of the proposed design.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The optimised methods' results are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared with an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
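The granularisation loop described above (propose partition boundaries, score them by classification accuracy, accept or reject) can be sketched with simulated annealing. A minimal sketch, assuming a toy accuracy function in place of the survey data and rough-set classifier, which are not reproduced here; all names and constants are illustrative.

```python
import math
import random

def toy_accuracy(cuts):
    # Stand-in for the rough-set classification accuracy: peaks at 1.0
    # when the partition boundaries sit at 0.25, 0.5 and 0.75.
    ideal = [0.25, 0.5, 0.75]
    return 1.0 - sum(abs(c - i) for c, i in zip(sorted(cuts), ideal))

def anneal_partitions(accuracy, n_cuts=3, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    cuts = [rng.random() for _ in range(n_cuts)]
    cur_acc = accuracy(cuts)
    best, best_acc = cuts[:], cur_acc
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = [min(1.0, max(0.0, c + rng.gauss(0, 0.05))) for c in cuts]
        acc = accuracy(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if acc > cur_acc or rng.random() < math.exp((acc - cur_acc) / t):
            cuts, cur_acc = cand, acc
            if acc > best_acc:
                best, best_acc = cand[:], acc
    return best, best_acc

cuts, acc = anneal_partitions(toy_accuracy)
```

The accept-worse-moves rule is what distinguishes SA from plain hill climbing: early on, high temperature lets the search escape poor local partitions; as the schedule cools it becomes greedy.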
NiP black: towards the use of a blacker-than-black treatment against stray light
NASA Astrophysics Data System (ADS)
Mazuray, L.; Petilon, JF.
2017-11-01
NiP black is a porous nickel-phosphorus alloy with an exceptional absorption coefficient of 0.998, yielding a factor of 10 to 20 improvement in scattering over the best black paints used on the baffles and mounts of optical instruments. The industrialisation of this process is the subject of a collaboration between CNES, SODERN and MMS to best meet the requirements of various space instruments. A low level of scattering from the baffles and mounts of optical instruments is a key element of low stray light and, consequently, of detection, imaging and analysis performance. Tests carried out in 1998 and 1999 on the sun shields of high-precision sun sensors mounted on MMS telecom platforms confirmed the excellent sensor performance obtained with this treatment. NiP black is part of a broader MMS effort to improve the stray-light performance of optical instruments. We present NiP black and the performance achieved on sensor sun shields mounted on MMS platforms, as well as its future potential.
2005-04-01
alain.leger@fr.thalesgroup.com THALES Aerospace, Rue Toussaint Catros, 33187 Le Haillan, FRANCE. ABSTRACT The earphone shells of the TOPOWL helmet have...already been the subject of a study aimed at optimising their hearing protection within the strict mass and volume budgets allowed. The present...audio techniques (pp. 17-1 – 17-14). RTO Meeting Proceedings RTO-MP-HFM-123, Paper 17. Neuilly-sur-Seine, France: RTO. Available on the site
Seth, Ashok; Gupta, Sajal; Pratap Singh, Vivudh; Kumar, Vijay
2017-09-01
Final stent dimensions remain an important predictor of restenosis, target vessel revascularisation (TVR) and subacute stent thrombosis (ST), even in the drug-eluting stent (DES) era. Stent balloons are usually semi-compliant and thus even high-pressure inflation may not achieve uniform or optimal stent expansion. Post-dilatation with non-compliant (NC) balloons after stent deployment has been shown to enhance stent expansion and could reduce TVR and ST. Based on supporting evidence and in the absence of large prospective randomised outcome-based trials, post-dilatation with an NC balloon to achieve optimal stent expansion and maximal luminal area is a logical technical recommendation, particularly in complex lesion subsets.
Cross-linking of lignocellulosic fibres
NASA Astrophysics Data System (ADS)
Landrevy, Christel
To confront the economic crisis, paper companies are developing value-added papers. The goal of this project is to improve current techniques for cross-linking the lignocellulosic fibres of paper pulp in order to produce a stronger paper. During traditional cross-linking reactions, many intra-fibre bonds form, which works against the anticipated improvement of the physical properties of the paper or material produced. To avoid the formation of these intra-fibre bonds, groups that cannot react with one another must be grafted onto the fibres. Cross-linking the fibres by a "click chemistry" reaction, the copper-catalysed Huisgen cycloaddition between an azide and a terminal alkyne (CuAAC), was one of the solutions found to address this problem. Moreover, adapting this reaction to aqueous media could favour its industrial use. The study undertaken in this project aims to optimise the CuAAC reaction and the intermediate reactions (propargylation, tosylation and azidation) on kraft pulp in aqueous media. To this end, the reactions were first adapted to aqueous media on microcrystalline cellulose to verify their feasibility, then transferred to kraft pulp, and the influence of various parameters, such as reaction time and reagent quantities, was studied. In a second phase, the properties conferred on the paper by the reactions were assessed through a series of optical and physical paper tests. Keywords: click chemistry, Huisgen, CuAAC, propargylation, tosylation, azidation, cellulose, kraft pulp, aqueous media, paper.
Andrighetto, Luke M; Stevenson, Paul G; Pearson, James R; Henderson, Luke C; Conlan, Xavier A
2014-11-01
In-silico optimised two-dimensional high performance liquid chromatographic (2D-HPLC) separations of a model methamphetamine seizure sample are described, where an excellent match between simulated and real separations was observed. Targeted separation of model compounds was completed with significantly reduced method development time. This separation was completed in the heart-cutting mode of 2D-HPLC, where C18 columns were used in both dimensions, taking advantage of the selectivity difference between methanol and acetonitrile as mobile phases. This method development protocol is most significant when optimising the separation of chemically similar compounds, as it eliminates potentially hours of trial-and-error injections to identify the optimised experimental conditions. After only four screening injections, the gradient profile for both 2D-HPLC dimensions could be optimised via simulations, ensuring the baseline resolution of the diastereomers ephedrine and pseudoephedrine in 9.7 min. Depending on which diastereomer is present, the potential synthetic pathway can be categorised.
Requirements analysis for a hardware, discrete-event, simulation engine accelerator
NASA Astrophysics Data System (ADS)
Taylor, Paul J., Jr.
1991-12-01
An analysis of a general Discrete Event Simulation (DES) executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with the message passing of required simulation data between processing nodes of the hypercube architecture. A behavioural description of a general DES hardware accelerator, written in the IEEE standard VHSIC Hardware Description Language (VHDL), is presented. The behavioural description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronisation protocol.
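The event-queue core that such a coprocessor would accelerate can be sketched in a few lines. This is a minimal sequential event loop over a time-ordered priority queue, not the hypercube message passing or the conservative synchronisation protocol analysed above; the `ping` example is illustrative.

```python
import heapq

class DES:
    """Minimal discrete-event simulation core: a time-ordered event queue."""
    def __init__(self):
        self._queue = []
        self._seq = 0          # tie-breaker so same-time events stay FIFO
        self.now = 0.0

    def schedule(self, delay, action):
        # Events are (timestamp, sequence, callback) tuples in a binary heap.
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until=float("inf")):
        # Repeatedly pop the earliest event and advance simulated time to it.
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

log = []
sim = DES()
def ping(s):
    log.append(("ping", s.now))
    if s.now < 3:
        s.schedule(1.0, ping)   # events may schedule further events

sim.schedule(0.0, ping)
sim.run()
```

The profiling result quoted above (message passing dominating run time) arises because, in the distributed version, each `schedule` targeting an event owned by another node becomes an inter-node message rather than a local heap push.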
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where the objectives correspond to the scenarios, although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimising the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities, the corresponding robust solution may be expressed by the optimisation of an appropriately combined mean and tail mean criterion, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop robust solution concepts, implementable via linear programming, related to risk-averse optimisation criteria.
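The tail mean criterion itself is straightforward to compute. A minimal sketch, assuming cost-type scenario outcomes (larger is worse) and a simple convex weighting for the combined mean/tail-mean criterion; the weights and data are illustrative, not the paper's LP formulation.

```python
import numpy as np

def tail_mean(outcomes, beta):
    """Mean of the worst beta-fraction of scenario outcomes (costs: larger is worse)."""
    x = np.sort(np.asarray(outcomes, dtype=float))[::-1]   # worst outcomes first
    k = max(1, int(np.ceil(beta * x.size)))                # size of the tail
    return x[:k].mean()

def combined_criterion(outcomes, beta, lam):
    """Weighted mix of the overall mean and the tail mean, as used for
    robustness under interval-bounded scenario probabilities."""
    x = np.asarray(outcomes, dtype=float)
    return lam * x.mean() + (1 - lam) * tail_mean(x, beta)

costs = [2.0, 5.0, 3.0, 9.0, 1.0]   # one cost per scenario
```

With beta = 1 the tail mean reduces to the ordinary mean, and with beta → 0 it approaches the worst-case criterion, which is the spectrum the abstract describes.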
Optimising the Parallelisation of OpenFOAM Simulations
2014-06-01
UNCLASSIFIED. Optimising the Parallelisation of OpenFOAM Simulations. Shannon Keough, Maritime Division, Defence Science and Technology Organisation, DSTO-TR-2987. ABSTRACT The OpenFOAM computational fluid dynamics toolbox allows parallel computation of...performance of a given high performance computing cluster with several OpenFOAM cases, running using a combination of MPI libraries and corresponding MPI
NASA Astrophysics Data System (ADS)
Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa
2017-08-01
The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing
(KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features, such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 v4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to a hybrid MPI/OpenMP parallel mode in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512-bit-wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve performance and parallel scalability. These optimisations greatly improved GNAQPMS performance, and they also work well for the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51× faster on KNL and 2.77× faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5% more efficient in power consumption than the CPU platform. The optimisations also enabled much greater parallel scalability on both the CPU cluster and the KNL cluster, which scaled to 40 CPU nodes and 30 KNL nodes with parallel efficiencies of 70.4% and 42.2%, respectively.
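Optimisation (2), keeping the wide vector units busy, can be illustrated outside the model's compiled source with a toy numpy analogue: a per-cell scalar loop versus an array expression that the runtime maps onto SIMD lanes. The rate expression is invented for illustration and is not the CBM-Z chemistry.

```python
import numpy as np

def reaction_rates_loop(conc, k):
    # Scalar loop: one rate evaluation per grid cell (poor use of wide VPUs).
    out = np.empty_like(conc)
    for i in range(conc.size):
        out[i] = k * conc[i] * conc[i]      # toy second-order rate term
    return out

def reaction_rates_vec(conc, k):
    # Vectorised form: the whole grid is processed with array operations,
    # which lower to SIMD instructions over the 512-bit vector units.
    return k * conc * conc

conc = np.linspace(0.0, 1.0, 1000)
assert np.allclose(reaction_rates_loop(conc, 0.3), reaction_rates_vec(conc, 0.3))
```

In the Fortran/C source the same effect is obtained with stride-1 inner loops and compiler vectorisation pragmas rather than a library call, but the principle, one instruction applied across many cells, is identical.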
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion and metaheuristic optimisation algorithms, singular BVPs can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of BVPs. The scheme uses the generational distance metric to evaluate the quality of the approximate solutions against exact solutions (i.e. as an error-evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solving of singular BVPs.
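The scheme can be sketched on a regular (non-singular) toy BVP: a Fourier sine trial solution satisfies the boundary conditions by construction, and a simple greedy metaheuristic (plain hill climbing here, standing in for the PSO, water cycle and harmony search optimisers named above) minimises the squared residual at collocation points. The test problem y'' = -π² sin(πx), y(0) = y(1) = 0, has exact solution y = sin(πx), so the first Fourier coefficient should converge to 1.

```python
import math
import random

PI = math.pi
XS = [j / 20 for j in range(1, 20)]            # interior collocation points

def residual_norm(a):
    """Weighted-residual objective for y'' = -pi^2 sin(pi x), y(0)=y(1)=0,
    with the Fourier trial solution y(x) = sum_k a[k] sin((k+1) pi x)."""
    total = 0.0
    for x in XS:
        ypp = sum(-a[k] * ((k + 1) * PI) ** 2 * math.sin((k + 1) * PI * x)
                  for k in range(len(a)))
        total += (ypp + PI ** 2 * math.sin(PI * x)) ** 2
    return total

def hill_climb(n_terms=3, steps=4000, seed=1):
    rng = random.Random(seed)
    a = [0.0] * n_terms
    best = residual_norm(a)
    for _ in range(steps):
        cand = [c + rng.gauss(0, 0.05) for c in a]
        err = residual_norm(cand)
        if err < best:                          # greedy acceptance
            a, best = cand, err
    return a, best

a, err = hill_climb()
```

The sine basis absorbs the boundary conditions, so no constraint-handling penalty is needed in this toy; for the singular problems of the paper the boundary conditions enter as constraints instead.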
Analysis of the car body stability performance after coupler jack-knifing during braking
NASA Astrophysics Data System (ADS)
Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng
2018-06-01
This paper aims to improve car body stability by optimising locomotive parameters when coupler jack-knifing occurs during braking. In order to prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests were developed to analyse the influence of the secondary suspension and the secondary lateral stopper on car body stability during braking. According to the simulation and test results, increasing the secondary lateral stiffness helps limit the car body yaw angle during braking; however, it seriously degrades the dynamic performance of the locomotive. For the secondary lateral stopper, its lateral stiffness and free clearance have a significant influence on improving car body stability, with less effect on the dynamic performance of the locomotive. An optimised measure was proposed and adopted on the test locomotive: the lateral stiffness of the secondary lateral stopper was increased to 7875 kN/m, while its free clearance was decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, in practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable parameters for the secondary lateral stopper can improve the car body stability and the running safety of a heavy haul locomotive.
NASA Astrophysics Data System (ADS)
Vincent, Jean-Baptiste
This Master's thesis is part of a multidisciplinary optimisation project initiated by the Consortium for Research and Innovation in Aerospace in Quebec (CRIAQ); this project is about designing and manufacturing a morphing wing demonstrator. The morphing design adopted in this project is based on an airfoil thickness variation applied to the upper skin. This morphing shifts the position of the laminar-to-turbulent boundary layer transition on top of the wing, and the position of this transition area leads to significant changes in the aerodynamic performance of the wing. The study presented here focuses on the design of the conventional aileron actuation system and on the characterisation of the high-sensitivity differential pressure sensors installed on the upper skin in order to determine the laminar-to-turbulent transition position. Furthermore, the study covers the data acquisition system for the morphing wing structural test validation. The aileron actuation system is based on a linear actuator driven by a brushless motor; the component choice is presented as well as the command method, and both a static validation and a wind tunnel validation are presented. The pressure sensor characterisation is performed by installing three of these high-sensitivity differential pressure sensors in a known two-dimensional airfoil. This study goes through the process of determining the sensor positions needed to observe the transition area, using a statistical approach based on computational fluid dynamics (CFD). The validation of the laminar-to-turbulent transition position is carried out with a series of wind tunnel tests. A structural test was executed in order to validate the wing structure. This Master's thesis presents the data acquisition system for the microstrain measurements installed inside the morphing wing; a hardware and software architecture description is developed and presented, as well as the practical results.
Electronic Messaging for the 90s (Les Messageries Electroniques des Annees 90)
1993-05-01
Sizing of electric actuators supplied at variable frequency from a low-voltage source (Dimensionnement des actionneurs électriques alimentés à fréquence variable sous faible tension)
NASA Astrophysics Data System (ADS)
Biedinger, J.-M.; Vilain, J.-P.
1999-09-01
In Part I we presented a multidisciplinary analysis model predicting the functional connections between the design variables and the electromagnetic, electrical and thermal performances of a brushless permanent magnet motor. In this paper we elaborate a design methodology for electrical motors supplied from a variable-frequency low-voltage source. The objective is to take the influence of the inverter's dynamics into account from the beginning of the design, for the same reasons as the electromechanical and thermal constraints. The procedure is based on a Sequential Quadratic Programming optimisation method. Two techniques are used to account for the influence of the inverter: the first performs the performance analysis with the multidisciplinary model; the second treats the inverter's current reference as a supplementary optimisation variable for the control of the design. Optimisation difficulties linked to the chopping of the converter are discussed in connection with a sensitivity analysis of the torque with respect to the inverter's current reference, and a method is proposed to ensure the robustness of the procedure in the presence of the converter. The method has been applied to the design of a permanent magnet brushless DC motor used in the propulsion system of an electric scooter; the evolution of the optimal design with the level of complexity of the analysis model is demonstrated.
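The SQP-driven sizing loop can be illustrated with SciPy's SLSQP implementation on a deliberately crude toy problem; the mass and torque expressions below are invented scaling laws for illustration, not the paper's multidisciplinary model, and the bounds are arbitrary.

```python
from scipy.optimize import minimize

def mass(x):
    r, L = x
    return r * r * L                 # volume proxy for active mass

def torque(x):
    r, L = x
    return 10.0 * r * r * L          # crude T ~ r^2 L scaling for a PM machine

# Minimise mass subject to a minimum-torque requirement; SLSQP is a
# Sequential (Least-Squares) Quadratic Programming method, the same
# family of deterministic optimiser named in the abstract.
res = minimize(mass, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.1, 2.0), (0.1, 2.0)],
               constraints=[{"type": "ineq", "fun": lambda x: torque(x) - 5.0}])
```

At the optimum the torque constraint is active, which mirrors the usual situation in machine sizing: the active-constraint set, not the unconstrained optimum, determines the design.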
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on the intelligent activities of honey bee swarms; some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is also shown to be simpler than that of the PSO algorithm.
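A minimal PSO of the kind used for such parameter extraction can be sketched by fitting a toy two-parameter model to synthetic "measured" data; the model, the swarm constants and the data are all illustrative, not the Pennsylvania surface-potential MOSFET model.

```python
import random

def sse(params, xs, ys):
    # Sum of squared errors between modelled and "measured" data.
    a, b = params
    return sum((a * x * x + b - y) ** 2 for x, y in zip(xs, ys))

def pso(objective, n_particles=20, iters=200, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # per-particle best positions
    pbest_f = [objective(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # Inertia + cognitive (pbest) + social (gbest) velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < objective(g):
                    g = pos[i][:]
    return g

xs = [0.1 * k for k in range(10)]
ys = [2.0 * x * x + 1.0 for x in xs]          # synthetic "measured" data
best = pso(lambda p: sse(p, xs, ys))
```

The real extraction problem differs only in the objective: `sse` is evaluated against measured device characteristics through the surface-potential model rather than a closed-form polynomial.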
NASA Astrophysics Data System (ADS)
Valentin, Olivier
According to the World Health Organization, the number of workers exposed daily to noise levels harmful to their hearing rose from 120 million in 1995 to 250 million in 2004. Even though noise reduction at the source should always be preferred, the solution widely used against workplace noise remains individual hearing protection. Unfortunately, workers do not always wear their hearing protectors, because it is difficult to provide a protector whose effective attenuation level is appropriate to an individual's work environment. Moreover, occluding the ear canal alters speech perception, creating discomfort that prompts workers to remove their protectors. Both problems exist because current methods for measuring the occlusion effect and attenuation are limited. Objective measurements based on in-ear microphone measurements do not account for direct sound transmission to the cochlea by bone conduction, while subjective measurements at the hearing threshold are biased by the low-frequency masking effect induced by physiological noise. The main objective of this doctoral thesis is to improve the measurement of the attenuation and occlusion effect of in-ear hearing protectors. The general approach consists of: (i) verifying whether the attenuation of hearing protectors can be measured by recording multiple auditory steady-state responses (ASSRs) with and without the protector (protocol 1), (ii) adapting this methodology to measure the occlusion effect induced by wearing in-ear protectors (protocol 2), and (iii) validating each protocol through measurements on human subjects.
The results of protocol 1 show that ASSRs can be used to measure hearing protector attenuation objectively: the results obtained at 500 Hz and 1 kHz show that the attenuation measured from ASSRs is roughly equivalent to the attenuation computed by the REAT method, as expected, since the masking effect induced by physiological noise is relatively negligible at these frequencies. The results of protocol 2 show that ASSRs can also be used to measure the occlusion effect objectively: the occlusion effect measured from ASSRs at 500 Hz is higher than that computed by the subjective hearing-threshold method, as expected, since below 1 kHz the low-frequency masking effect of physiological noise biases the subjective results by overestimating low-frequency hearing thresholds when protectors are worn. However, the results obtained at 250 Hz contradict expectations. From a scientific standpoint, this thesis produced two innovative methods for objectively measuring the attenuation and occlusion effect of in-ear hearing protectors by electroencephalography. From an occupational health and safety standpoint, the advances presented in this thesis could help design better-performing hearing protectors.
Indeed, if these two objective methods were standardised for characterising in-ear hearing protectors, they could make it possible: (i) to better assess the real effectiveness of hearing protection and (ii) to provide a measure of the discomfort induced by occluding the ear canal when protectors are worn. Providing a hearing protector whose real effectiveness suits the work environment and whose comfort is optimised would, in the long term, improve workers' conditions by minimising the risk of damage to their hearing. The work perspectives proposed at the end of this thesis mainly consist of: (i) applying the two methods over a wider frequency range, (ii) exploring the intra-individual variability of each method, (iii) comparing the results of both methods with those obtained by the Microphone in Real Ear (MIRE) method, and (iv) verifying the compatibility of each method with all types of hearing protectors. In addition, for the ASSR-based occlusion-effect method, a complementary study is needed to resolve the contradiction observed at 250 Hz.
NASA Astrophysics Data System (ADS)
Carrier, Jean-Francois
Single-walled carbon nanotubes (C-SWNT) are a recent class of nanomaterials, first reported in 1991. The interest they attract stems from their many outstanding properties: their mechanical strength is among the stiffest known, and they conduct electricity and heat in an unmatched manner. Moreover, C-SWNTs promise to become a new class of molecular platform, serving as attachment sites for reactive groups. The promises of this particular nanomaterial are numerous; the question today is how to realise them. Induction thermal plasma synthesis stands out for the quality of its products, its productivity and its low operating costs. However, recent research has highlighted exposure risks related to the use of cobalt as a synthesis catalyst; eliminating or replacing it has become an important concern. Four alternative recipes were tested in order to find a safer alternative to the baseline recipe, a ternary catalytic mixture of nickel, cobalt and yttrium oxide. The first essentially replaces the cobalt mass fraction with nickel, which was already present in the baseline recipe. The three others contain new catalysts, replacing Co, that have appeared in several scientific studies in recent years: zirconium dioxide (ZrO2), manganese dioxide (MnO2) and molybdenum (Mo). The method consists in vaporising the solid feedstock in a cooled-wall, high-frequency (3 MHz) plasma reactor.
After passing through the plasma, the flow enters a so-called "growth" section, thermally insulated with graphite, in order to maintain a temperature range favourable to C-SWNT synthesis. The final product is then collected on porous metal filters once the system is shut down. First, a thermodynamic analysis computed with the FactSage software shed light on the state of the various products and reactants throughout their passage in the system. It revealed the compositional similarity between the liquid phase of the baseline ternary catalytic mixture and that of the binary mixture of nickel and yttrium oxide. Next, an energy balance analysis, using a data acquisition system, established that the operating conditions of the five tested samples were similar. In total, the final product was characterised by six different methods: thermogravimetric analysis, X-ray diffraction, high-resolution scanning electron microscopy (HRSEM), transmission electron microscopy (TEM), Raman spectroscopy, and specific surface area measurement (BET). These analyses consistently showed that the molybdenum-based mixture produced the poorest product quality, followed in increasing order by the MnO2-based and ZrO2-based mixtures. The cobalt-based reference mixture ranked second in quality; the best was the binary mixture with a doubled proportion of nickel. These results demonstrate that a high-performing alternative to cobalt exists for synthesising single-walled carbon nanotubes by induction thermal plasma.
That alternative is a binary catalytic mixture of nickel and yttrium oxide. It is suggested that the weaker performance of the less effective alternative recipes could be explained by the fixed thermal profile of the reactor, which may favour certain mixtures at the expense of others with different thermodynamic properties. The set-up, the equipment and the operating parameters could be adjusted to these catalysts in order to optimise the synthesis. Keywords: single-walled carbon nanotubes, induction thermal plasma, cobalt, nickel, zirconium dioxide, manganese dioxide, molybdenum, yttrium trioxide, carbon black.
Lu, Jia-Yang; Cheung, Michael Lok-Man; Huang, Bao-Tian; Wu, Li-Li; Xie, Wen-Jia; Chen, Zhi-Jian; Li, De-Rui; Xie, Liang-Xi
2015-01-01
To assess the performance of a simple optimisation method for improving target coverage and organ-at-risk (OAR) sparing in intensity-modulated radiotherapy (IMRT) for cervical oesophageal cancer. For 20 selected patients, clinically acceptable original IMRT plans (Original plans) were created, and two optimisation methods were adopted to improve the plans: 1) a base dose function (BDF)-based method, in which the treatment plans were re-optimised based on the original plans, and 2) a dose-controlling structure (DCS)-based method, in which the original plans were re-optimised by assigning additional constraints for hot and cold spots. The Original, BDF-based and DCS-based plans were compared with regard to target dose homogeneity, conformity, OAR sparing, planning time and monitor units (MUs). Dosimetric verifications were performed and delivery times were recorded for the BDF-based and DCS-based plans. The BDF-based plans provided significantly superior dose homogeneity and conformity compared with both the DCS-based and Original plans. The BDF-based method further reduced the doses delivered to the OARs by approximately 1-3%. The re-optimisation time was reduced by approximately 28%, but the MUs and delivery time were slightly increased. All verification tests were passed and no significant differences were found. The BDF-based method for the optimisation of IMRT for cervical oesophageal cancer can achieve significantly better dose distributions with better planning efficiency at the expense of slightly more MUs.
2001-01-01
2001-06-01
Buffet Active Control - Experimental and Numerical Results, by C. Despré, D. Caruana, A. Mignosi, O. Reberga, M. Corrège, H. Gassot, J.C...Park and S. Menon
An Experimental Examination of the Relationship Between Chemiluminescent Light Emissions and Heat-Release Rate Under Non-Adiabatic...D.A. Santavicca
An Experimental Study on Actively Controlled Dump Combustors, by K. Yu, K.J. Wilson, T.P. Parr and K.C. Schadow
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods based on first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of the vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching normal operating velocity, and motions that do or do not reach the transitional phase. These variants were used to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance of the proposed methods compared with other optimisation algorithms for an actual deep-cut design.
VLSI Technology for Cognitive Radio
NASA Astrophysics Data System (ADS)
VIJAYALAKSHMI, B.; SIDDAIAH, P.
2017-08-01
One of the most challenging tasks in cognitive radio is making the spectrum sensing scheme efficient enough to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation targets the area and power performance of the spectrum sensor. The VLSI structure of the optimised flexible spectrum sensor is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption compared with the flexible spectrum sensing scheme. All results are tabulated and compared. Our model opens up a new scheme for optimised and effective spectrum sensing.
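The energy detection scheme the abstract builds on reduces to a very small computation: average the squared magnitude of the received samples and compare against a threshold. The following Python sketch is illustrative only; the threshold value and signal model are assumptions, not taken from the paper, whose contribution is the optimised VLSI filter structure.

```python
import numpy as np

def energy_detect(samples, threshold):
    """Classic energy detector: declare the channel occupied when the
    mean squared magnitude of the received samples exceeds a threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold

# Illustrative use: pure noise vs. noise plus a strong sinusoid.
rng = np.random.default_rng(0)
n = 1024
noise = rng.normal(0, 1, n)
signal = 3 * np.sin(2 * np.pi * 0.1 * np.arange(n)) + rng.normal(0, 1, n)

print(energy_detect(noise, threshold=2.0))   # noise energy ~1, below threshold
print(energy_detect(signal, threshold=2.0))  # ~5.5, above threshold
```

In practice the threshold is chosen from the noise variance and a target false-alarm probability; here it is simply a hand-picked constant.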
Error discrimination of an operational hydrological forecasting system at a national scale
NASA Astrophysics Data System (ADS)
Jordan, F.; Brauchli, T.
2010-09-01
The use of operational hydrological forecasting systems is recommended for hydropower production as well as flood management. However, forecast uncertainties can be significant and lead to poor decisions such as false alarms and inappropriate reservoir management of hydropower plants. To improve forecasting systems, it is important to discriminate between the different sources of uncertainty. To achieve this, past predictions can be reanalysed to provide information about the structure of the global uncertainty. To discriminate between uncertainty due to the numerical weather model and uncertainty due to the rainfall-runoff model, simulations assuming a perfect weather forecast must be carried out. This contribution presents a spatial analysis of the weather uncertainties and their influence on the river discharge prediction of several river basins where an operational forecasting system exists. The forecast is based on the RS 3.0 system [1], [2], which also runs the open Internet platform www.swissrivers.ch [3]. The uncertainty related to the hydrological model is compared with the uncertainty related to the weather prediction. A comparison between several weather prediction models [4] at different lead times is also presented. The results highlight significant potential for improvement in both forecasting components: the hydrological rainfall-runoff model and the numerical weather prediction models. The hydrological processes must be accurately represented during the model calibration procedure, while the weather prediction models suffer from a systematic spatial bias. REFERENCES [1] Garcia, J., Jordan, F., Dubois, J. & Boillat, J.-L. 2007. "Routing System II, Modélisation d'écoulements dans des systèmes hydrauliques", Communication LCH n° 32, Ed. Prof. A. Schleiss, Lausanne [2] Jordan, F. 2007.
Modèle de prévision et de gestion des crues - optimisation des opérations des aménagements hydroélectriques à accumulation pour la réduction des débits de crue, thèse de doctorat n° 3711, Ecole Polytechnique Fédérale, Lausanne [3] Keller, R. 2009. "Le débit des rivières au peigne fin", Revue Technique Suisse, N°7/8 2009, Swiss engineering RTS, UTS SA, Lausanne, p. 11 [4] Kaufmann, P., Schubiger, F. & Binder, P. 2003. Precipitation forecasting by a mesoscale numerical weather prediction (NWP) model : eight years of experience, Hydrology and Earth System
Aarab, Chadya; Elghazouani, Fatima; Aalouane, Rachid; Rammouz, Ismail
2015-01-01
Progress in the treatment of schizophrenia has so far not radically changed the importance of patients' adherence to their medication. The objective of this work was to identify risk factors for treatment discontinuation in a Moroccan sample of patients with schizophrenia. This cross-sectional study was conducted at the university psychiatric centre of Fez over a period of 4 months. Included patients had schizophrenia or schizoaffective disorder and were divided into two groups, adherent and non-adherent. Adherence was assessed using a hetero-questionnaire comprising a list of reasons for treatment discontinuation with yes/no answers, together with the MARS scale (Medication Adherence Rating Scale). Statistical analysis was performed with the Epi Info software, version 3.5.1. A total of 164 participants were recruited: 112 patients non-adherent to their treatment (cases) and 52 adherent patients (controls). The mean age was 31 years, with a male predominance. The risk factors for therapeutic non-adherence were: young age, male sex, being single, and addictive disorders. The main reasons for treatment discontinuation were frequent changes of physician, lack of information about the illness, poor insight and the side effects of antipsychotics. The group of patients with schizophrenia non-adherent to their pharmacological treatment had a high MARS score in 80% of cases. These findings should be taken into account by care staff in order to optimise treatment adherence in patients suffering from schizophrenia. PMID:26161196
Identification des objets et detection de leur alignement en utilisant la technologie RFID
NASA Astrophysics Data System (ADS)
Rahma, Zayoud
Nowadays, motor vehicles are essential to our daily lives, hence the need to supply them with fuel. Refuelling can bring certain drawbacks, such as queues, intermittent fuel availability and fraud. The queuing and availability problems are easily solved by going to another nearby fuel station, if one is available; the problem of fraud is harder to address. Hence our solution, which consists in developing an intelligent fuel-supply management system to remedy this fraud problem. For safety reasons, spark risks in the fuel environment must be avoided; in particular, no electrically powered system should be used near the pump, the hose or the vehicle's fuel tank. We chose RFID (Radio Frequency IDentification) technology and opted for passive tags, since semi-passive and active tags contain an electric battery and are considerably more expensive. A motor vehicle is identified by a passive RFID tag affixed above the neck of its fuel tank. Two additional RFID tags are placed on the nozzle so that fuel flow is authorised only when the three tags are aligned. This work was requested by an oil company operating an international chain of fuel stations. It consists in designing, through research, the required system, focusing on optimising the topology of the antennas and tags so that the system deems the tags aligned when the nozzle spout is inserted into the tank neck, and consequently authorises fuel dispensing.
In all other cases, the system must deem that there is no alignment, and the fuel flow is therefore not authorised. Keywords: RFID, identification, localisation, alignment, fraud, fuel station.
Kassem, Abdulsalam M; Ibrahim, Hany M; Samy, Ahmed M
2017-05-01
The objective of this study was to develop and optimise a self-nanoemulsifying drug delivery system (SNEDDS) of atorvastatin calcium (ATC) to improve its dissolution rate and, eventually, its oral bioavailability. Ternary phase diagrams were constructed on the basis of solubility and emulsification studies. The composition of ATC-SNEDDS was optimised using a Box-Behnken optimisation design. The optimised ATC-SNEDDS was characterised for various physicochemical properties, and pharmacokinetic, pharmacodynamic and histological studies were performed in rats. The optimised ATC-SNEDDS gave a droplet size of 5.66 nm, a zeta potential of -19.52 mV and a t90 of 5.43 min, and completely released ATC within 30 min irrespective of the pH of the medium. The area under the curve of the optimised ATC-SNEDDS in rats was 2.34-fold higher than that of an ATC suspension. Pharmacodynamic studies revealed a significant reduction in the serum lipids of rats with fatty liver, and photomicrographs showed improvement in hepatocyte structure. In this study, we confirmed that ATC-SNEDDS would be a promising approach for improving the oral bioavailability of ATC.
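The Box-Behnken design used above to optimise the SNEDDS composition is a standard three-level response-surface design. A minimal sketch of how its runs are generated in coded units follows; it is purely illustrative (the function name and centre-run count are assumptions), not the authors' software.

```python
from itertools import combinations, product

def box_behnken(k, center_runs=3):
    """Box-Behnken design for k factors in coded units: for each pair of
    factors, take all (-1/+1, -1/+1) combinations with the remaining
    factors held at 0, then append centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]
    return runs

design = box_behnken(3)
print(len(design))  # 3 pairs x 4 sign combinations + 3 centre runs = 15
```

Each run would then be mapped from coded levels back to actual formulation amounts before fitting the quadratic response model.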
NASA Astrophysics Data System (ADS)
Hazwan, M. H. M.; Shayfull, Z.; Sharif, S.; Nasir, S. M.; Zainal, N.
2017-09-01
In the injection moulding process, quality and productivity are notably important and must be controlled for each product type produced. Quality is measured as the extent of warpage of the moulded parts, while productivity is measured as the duration of the moulding cycle time. To control quality, many researchers have introduced various optimisation approaches that have been proven to enhance the quality of the moulded part. To improve the productivity of the injection moulding process, some researchers have proposed conformal cooling channels, which have been proven to reduce the moulding cycle time. This paper therefore presents an alternative optimisation approach, Response Surface Methodology (RSM) with Glowworm Swarm Optimisation (GSO), applied to a moulded part with straight-drilled and with conformal cooling channels. This study examined the warpage of the moulded parts before and after the optimisation for both cooling channel types. A front panel housing was selected as the specimen, and the performance of the proposed optimisation approach was analysed on conventional straight-drilled cooling channels and on Milled Groove Square Shape (MGSS) conformal cooling channels by simulation analysis using Autodesk Moldflow Insight (AMI) 2013. Based on the results, melt temperature is the most significant factor contributing to warpage for the straight-drilled cooling channels, whose warpage was reduced by 39.1% after optimisation, while cooling time is the most significant factor for the MGSS conformal cooling channels, whose warpage was reduced by 38.7% after optimisation. In addition, the findings show that applying the optimisation to the conformal cooling channels offers better quality and productivity of the moulded part.
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, usually used to suppress multiple access interference, find it difficult to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO), integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO). Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable to the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) a global random switching mechanism to monitor the choice between the ACO and GHS; and (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
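The GHS variants discussed above all build on the basic Harmony Search improvisation step: memory consideration, pitch adjustment and random selection. The sketch below shows that baseline step only, with assumed parameter values (HMCR, PAR, bandwidth); it does not reproduce the GHSACO switching or pheromone mechanisms.

```python
import random

def improvise(harmony_memory, bounds, hmcr=0.9, par=0.3, bw=0.05):
    """One Harmony Search improvisation: for each decision variable,
    either pick a value from the harmony memory (prob. hmcr), possibly
    pitch-adjusted by +/- bw (prob. par), or draw a fresh random value."""
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            x = random.choice(harmony_memory)[i]          # memory consideration
            if random.random() < par:
                x += random.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
        else:
            x = random.uniform(lo, hi)                    # random selection
        new.append(min(max(x, lo), hi))                   # clamp to bounds
    return new

# Minimise f(x) = sum(x_i^2) with a tiny HS loop (illustrative only).
random.seed(1)
bounds = [(-5, 5)] * 2
f = lambda v: sum(x * x for x in v)
hm = sorted(([random.uniform(lo, hi) for lo, hi in bounds]
             for _ in range(5)), key=f)
for _ in range(500):
    cand = improvise(hm, bounds)
    if f(cand) < f(hm[-1]):      # replace the worst harmony
        hm[-1] = cand
        hm.sort(key=f)
print(f(hm[0]))  # best objective value, close to 0
```

GHS differs mainly in the pitch-adjustment step, which borrows the best harmony's value instead of a random neighbour.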
Analyse experimentale des performances d'une batterie au lithium pour l'aeronautique
NASA Astrophysics Data System (ADS)
Bonnin, Romain
This thesis aims to identify and study the performance required for a lithium battery to be used in the aeronautics sector. Within the scope of our research, we propose a test procedure to analyse and determine whether a lithium battery can be installed in an aircraft. To address the performance analysis, a study of the functions demanded by the aircraft and of pre-existing standards is carried out. Following this step, we design a test bench. Once the test bench is completed, we test a lithium battery that is supposed to have all the technical characteristics required for installation in an aircraft. These tests allow us to give an opinion on the use of lithium batteries in the field of aeronautics.
Topology optimisation for natural convection problems
NASA Astrophysics Data System (ADS)
Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe; Sigmund, Ole
2014-12-01
This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach for designing heat sink geometries cooled by natural convection and micropumps powered by natural convection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garillon, Brice
We present in this report our results for the exclusive electroproduction of f0(980) and f2(1270) off the proton. The data were taken during the e1-6 experiment (2001-2002) with the CLAS detector at Jefferson Laboratory, using a 5.754 GeV beam and a liquid hydrogen target. We have measured for the first time the reduced differential cross sections for these two processes, in the kinematical region 1.5 < Q2 < 4.33 GeV2 and 0.15 < xB < 0.55. We propose an interpretation of our results according to a Regge-based model. An alternative analysis of the data in terms of partial wave amplitudes, as well as in terms of moments of the decay angular distributions, has also been attempted. Finally, we have performed the calibration of the photomultipliers of the Central Neutron Detector (CND), to be installed in the CLAS12 detector. The CND has been optimised for the study of the n-DVCS process (Deeply Virtual Compton Scattering off the neutron).
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. The thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking-sequence optimisation of laminated composite plates for maximising the critical buckling temperature, using a powerful meta-heuristic called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work is to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with results reported in previously published works using other algorithms, which shows the efficiency of FA in stacking-sequence optimisation of laminated composite structures.
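The flashing-attraction move at the heart of the firefly algorithm can be sketched as follows. The parameter values and the toy objective are illustrative assumptions, not the stacking-sequence problem of the paper (which works on discrete ply angles).

```python
import math
import random

def firefly_step(fireflies, brightness, beta0=1.0, gamma=1.0, alpha=0.1):
    """One sweep of the firefly algorithm: each firefly moves towards every
    brighter one with attractiveness beta0*exp(-gamma*r^2), plus a small
    random walk scaled by alpha. `brightness` maps a position to fitness
    (higher is brighter)."""
    for i in range(len(fireflies)):
        for j in range(len(fireflies)):
            if brightness(fireflies[j]) > brightness(fireflies[i]):
                r2 = sum((a - b) ** 2
                         for a, b in zip(fireflies[i], fireflies[j]))
                beta = beta0 * math.exp(-gamma * r2)
                fireflies[i] = [
                    xi + beta * (xj - xi) + alpha * (random.random() - 0.5)
                    for xi, xj in zip(fireflies[i], fireflies[j])
                ]
    return fireflies

# Illustrative run: maximise -(x^2 + y^2), whose optimum sits at the origin.
random.seed(0)
swarm = [[random.uniform(-4, 4) for _ in range(2)] for _ in range(8)]
bright = lambda p: -(p[0] ** 2 + p[1] ** 2)
initial_best = max(map(bright, swarm))
for _ in range(60):
    swarm = firefly_step(swarm, bright)
best = max(swarm, key=bright)
```

Note that the brightest firefly never moves during a sweep, so the best brightness in the swarm never decreases.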
Intrinsic mechanical properties and strengthening methods in inorganic crystalline materials
NASA Astrophysics Data System (ADS)
Mecking, H.; Hartig, Ch.; Seeger, J.
1991-06-01
The paper deals with strength and fracture in metals, ceramics and intermetallic compounds. The emphasis is on the interrelation between microstructure and macroscopic behavior and how the concepts for alloy design are mirroring this interrelationship. The three materials classes are distinguished by the physical nature of the atomic bonding forces. In metals metallic bonding predominates which causes high ductility but poor strength. Accordingly material development concentrates on production of microstructures which optimize the yield strength without unacceptable loss in ductility. In ceramics covalent bonding prevails which results in high hardness and high elastic stiffness but at the same time extreme brittleness. Contrary to the metal-ease material development aims at a kind of pseudo ductility in order to rise the fracture toughness to sufficiently high levels. In intermetallic phases the atomic bonds are a mixture of metallic and covalent bonding where depending on the alloying system the balance between the two contributions may be quite different. Accordingly the properties of intermetallics are in the range between metals and ceramics. By a variety of microstructural measures their properties can be changed in direction. either towards metallic or ceramic behavior. General rules for alloy design are not available, rather every system demands very specific experience since properties depend to a considerable part on intrinsic properties of lattice defects such as dislocations, antiphase boundaries, stacking faults and grain boundaries. Cet article traite de la résistance et de la fracture des métaux, des céramiques et des composés intermétalliques. L'accent est mis sur les correspondances entre la microstructure et le comportement macroscopique ainsi que sur la façon dont de tels concepts se reflètent dans la création de nouveaux alliages. C'est la nature des forces de liaisons qui distingue chaque type de matériaux. 
Dans les métaux, les liaisons métalliques dominent, ce qui entraîne une grande ductilité mais une médiocre résistance. En conséquence, dans le développement de nouveaux matériaux on cherche préférentiellement à produire des microstructures qui optimisent la résistance élastique sans perte inacceptable de ductilité. Dans les céramiques, les liaisons covalentes prédominent; ceci entraîne une dureté élevée, une grande rigidité, mais en même temps une extrême fragilité. Au contraire des métaux, le développement de ces matériaux vise à obtenir une pseudoductilité afin d'amener la tenacité à des niveaux suffisamment élevés. Dans les phases intermétalliques les liaisons atomiques correspondent à un mélange de liaisons métalliques et covalentes. La contribution de chacune d'entre elles varie en fonction du système allié. En conséquence, les propriétés des intermétalliques se situent entre celles des métaux et des céramiques. Par divers changements microstructuraux des propriétés peuvent être déplacées pour se rapprocher d'un comportement de type métallique ou de type céramique. Donner des règles générales pour la création de nouveaux alliages n'est pas possible car chaque système demande à être testé, les propriétés dépendent en effet, pour une part considérable, des propriétés intrinsèques des défauts de réseau comme les dislocations, les parois d'antiphase ou les joints de grains.
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 Brayton cycles integrated with a solar receiver, which provides the heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed, and optimum operating conditions were obtained using a multi-objective thermodynamic optimisation. Four different sets, each comprising two objective parameters, were considered individually, and each multi-objective optimisation was performed using the Non-dominated Sorting Genetic Algorithm. The effects of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency were analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached its maximum value between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
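The Non-dominated Sorting Genetic Algorithm used above ranks candidate operating points by Pareto dominance. A minimal sketch of the dominance test and front extraction follows, using made-up (exergy efficiency, thermal efficiency) pairs rather than the paper's data.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative trade-off: (exergy efficiency, thermal efficiency) pairs.
pts = [(0.60, 0.40), (0.55, 0.45), (0.50, 0.30), (0.58, 0.42)]
print(pareto_front(pts))  # [(0.6, 0.4), (0.55, 0.45), (0.58, 0.42)]
```

NSGA-II adds crowding-distance sorting and elitist selection on top of this basic ranking; only the dominance core is shown here.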
Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan
2015-05-15
The design of sewer system control is a complex task given the large size of sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller for sewer systems. Such a controller aims to keep the system close to optimal performance through an optimal selection of controlled variables. Optimal performance was defined by a two-stage optimisation (stochastic and deterministic) that accounts both for the overflow during the current rain event and for the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and are expected to contribute to advances in the operation and control of sewer systems.
Awab, Almahdi; Elahmadi, Brahim; Lamkinsi, Tarik; El Moussaoui, Rachid; El Hijri, Ahmed; Azzouzi, Abderrahim; Alilou, Mustapha
2013-01-01
Introduction: The reported incidence of postoperative respiratory complications (PRCs) varies widely, from 5% to more than 50%, depending on the diagnostic criteria used in individual studies. Major PRCs after abdominal aortic surgery carry substantial morbidity and mortality, reaching up to 36%, as well as longer hospital stays and higher costs. To improve our perioperative management of aortic surgery, we conducted a study to establish the epidemiological profile and identify the risk factors for respiratory complications in our setting. Methods: This is a retrospective cohort study covering January 2007 to December 2011 and including all patients operated on for aortic disease in the central operating theatre of the Ibn Sina hospital in Rabat, Morocco. Results: One hundred and twenty-five patients were included in our study; 24 were operated on for abdominal aortic aneurysm and 101 for aortoiliac occlusive disease. In our series, 22 patients (17.6%) presented a major respiratory complication: reintubation and mechanical ventilation in 4.8% of cases, difficulty weaning from mechanical ventilation in 3.2%, pneumonia in 4%, acute respiratory distress syndrome (ARDS) in 4%, and the need for bronchial fibroaspiration in 1.6%. In univariate analysis, age, COPD with grade 3 or 4 dyspnoea, an abnormal preoperative pulmonary function test, an advanced stage (III or IV) of aortoiliac occlusive disease, and surgical reintervention were statistically associated with the occurrence of a postoperative respiratory complication.
In multivariate analysis, only an abnormal preoperative pulmonary function test was an independent risk factor for postoperative respiratory complications in our series, with an odds ratio (OR) of 11.5, a 95% confidence interval (CI) of 1.6 to 85.2 and p = 0.016. Conclusion: To reduce the incidence of major PRCs in our population, it appears necessary to act on the factors we consider modifiable, in particular improving baseline respiratory status through preoperative respiratory preparation, integrated into a genuine rehabilitation programme combining exercise training, incentive physiotherapy and optimisation of the usual therapies. PMID:23504435
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
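The ɛ-constraint scalarisation used above (minimise one index subject to a bound on the other, then sweep the bound) can be illustrated on a toy bi-objective problem; the objective functions and candidate grid below are purely illustrative, not aircraft dynamics, and the adaptive bisection of the bound is replaced by a fixed sweep:

```python
def eps_constraint(f1, f2, xs, eps):
    """Minimise f1 over candidate points xs subject to f2(x) <= eps."""
    feasible = [x for x in xs if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

f1 = lambda x: x * x             # first objective (minimised)
f2 = lambda x: (x - 2) ** 2      # second objective (bounded by eps)
xs = [i / 100 for i in range(301)]   # candidate grid on [0, 3]

# Sweeping the bound eps traces out the Pareto front point by point
pareto = []
for eps in (0.0, 0.25, 1.0, 4.0):
    x = eps_constraint(f1, f2, xs, eps)
    pareto.append((f1(x), f2(x)))
```

Each tightening of `eps` trades the bounded objective against the minimised one, which is exactly the mechanism the adaptive bisection variant automates when choosing where to place the next bound.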
Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella
2003-03-01
The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of Panel Test results. A first Panel Test was performed to choose the best essence among four proposed to the consumers; the chosen essence was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components of the bubble bath (the primary surfactant, the essence, the hydrating agent and the colouring agent) was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second Panel Test, in which the consumers were asked to evaluate the samples from the experimental design. The results were then treated by Principal Component Analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a leaner one. The final target, i.e. the optimisation of the formulation for each segment, was reached by calculating regression models relating the subjective evaluations given by the Panel to the compositions of the samples. The regression models allowed the identification of the best formulations for the two segments of the market.
Optimisation of oxygen ion transport in materials for ceramic membrane devices.
Kilner, J A
2007-01-01
Oxygen transport in ceramic oxide materials has received much attention over the past few decades. Much of this interest has stemmed from the desire to construct high-temperature electrochemical devices for energy conversion, an example being the solid oxide fuel cell. To achieve high performance in these devices, insight is needed into how to obtain optimum performance from the functional components such as the electrolytes and electrodes. This includes the optimisation of oxygen transport through the crystal lattice of electrode and electrolyte materials and across the homogeneous (grain boundary) and heterogeneous interfaces that exist in real devices. Strategies for the optimisation of these quantities are discussed, and current problems in the characterisation of interfacial transport are explored.
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To this end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex optimisation process. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material able to withstand a given set of external in-plane loads. The optimisation has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
Development of a controller for a river hydrokinetic turbine, and its optimisation
NASA Astrophysics Data System (ADS)
Tetrault, Philippe
In step with the development of renewable energy, this study lays out the theoretical groundwork for the principles required for the proper operation and implementation of a river hydrokinetic turbine. The problem posed by this new type of device is first presented. The electrical machine used in the application, a permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing along the way the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations so that the available modulation strategies can be understood. A brief history of these strategies is given before focusing on space vector modulation, the strategy used in this application. The various modules are assembled in a Matlab simulation to confirm their correct operation and to compare the simulation results with the theoretical calculations. Several algorithms for tracking and holding an optimal operating point are presented. The behaviour of the river is studied to gauge the magnitude of the disturbances the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.
Superconductors under alternating current
NASA Astrophysics Data System (ADS)
Lacaze, A.; Laumond, Y.
1991-02-01
The first superconducting wires usable with alternating currents appeared in 1983. Since then, the understanding of the electromagnetic phenomena governing the stability and losses of multifilamentary superconductors with ultrafine filaments in AC use has improved considerably. Improvements in performance and in the manufacturing process now make it possible to produce, on an industrial scale, wires containing up to nearly one million filaments of 140 nm diameter, with levels of AC losses and stability unequalled to date.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O'Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole for the computational investigation of electromagnetic wave propagation within the body. The model parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimates or by large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissue. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and outperforms all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissue, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
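A multi-pole Debye model evaluates the complex relative permittivity as eps(w) = eps_inf + sum_k d_eps_k / (1 + j*w*tau_k) + sigma / (j*w*eps0); fitting means searching for the pole parameters that reproduce measured spectra. A minimal Python sketch of the forward model (all parameter values below are illustrative, not the fitted tissue values from the paper):

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, poles, sigma):
    """Complex relative permittivity of a multi-pole Debye model.

    poles: list of (delta_eps, tau_seconds) pairs.
    sigma: static ionic conductivity in S/m (0 to omit the term).
    """
    w = 2 * math.pi * freq_hz
    eps = complex(eps_inf)
    for d_eps, tau in poles:
        eps += d_eps / (1 + 1j * w * tau)   # one relaxation pole
    if sigma:
        eps += sigma / (1j * w * EPS0)      # conductivity contribution
    return eps

# Illustrative three-pole evaluation at 1 GHz (not measured data)
eps = debye_permittivity(1e9, 4.0, [(40.0, 8e-12), (5.0, 5e-11), (100.0, 1e-9)], 0.5)
```

An optimiser (such as the paper's two-stage genetic algorithm) would minimise the misfit between this forward model and measured permittivity over the 500 MHz to 20 GHz band.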
Data encryption standard ASIC design and development report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Perry J.; Pierson, Lyndon George; Witzke, Edward L.
2003-10-01
This document describes the design, fabrication, and testing of the SNL Data Encryption Standard (DES) ASIC. This device was fabricated in Sandia's Microelectronics Development Laboratory using 0.6 {micro}m CMOS technology. The SNL DES ASIC was modeled using VHDL, then simulated, and synthesized using Synopsys, Inc. software and finally IC layout was performed using Compass Design Automation's CAE tools. IC testing was performed by Sandia's Microelectronic Validation Department using a HP 82000 computer aided test system. The device is a single integrated circuit, pipelined realization of DES encryption and decryption capable of throughputs greater than 6.5 Gb/s. Several enhancements accommodate ATM or IP network operation and performance scaling. This design is the latest step in the evolution of DES modules.
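DES is a 16-round Feistel network, which is what makes a one-round-per-stage hardware pipeline natural. The Python sketch below is a generic Feistel cipher with an invented round function, not real DES; it shows the structural property such an ASIC exploits: decryption is the same datapath with the subkey order reversed:

```python
MASK32 = 0xFFFFFFFF

def toy_round(left, right, subkey):
    """One Feistel round with a toy (non-DES) round function."""
    f = ((right * 31 + 7) ^ subkey) & MASK32
    return right, left ^ f

def feistel_encrypt(block, subkeys):
    """Encrypt a 64-bit block; in hardware, each round is one pipeline stage."""
    left, right = (block >> 32) & MASK32, block & MASK32
    for k in subkeys:
        left, right = toy_round(left, right, k)
    return (right << 32) | left        # final half-swap, as in DES

def feistel_decrypt(block, subkeys):
    # Identical datapath, subkeys applied in reverse order
    return feistel_encrypt(block, list(reversed(subkeys)))

subkeys = [(0x9E3779B9 * (i + 1)) & MASK32 for i in range(16)]
ct = feistel_encrypt(0x0123456789ABCDEF, subkeys)
pt = feistel_decrypt(ct, subkeys)      # recovers the plaintext
```

Because each round only reads the previous round's halves, sixteen independent blocks can be in flight at once, one per stage, which is how a pipelined implementation reaches multi-Gb/s throughput.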
Using XML to encode TMA DES metadata
Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul
2011-01-01
Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs. PMID:21969921
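A minimal example of the kind of XML encoding and parsing discussed above; the element names are hypothetical, not taken from the actual TMA DES DTD, and real schema validation would require an XML Schema processor, which the Python standard library does not provide:

```python
import xml.etree.ElementTree as ET

# Hypothetical TMA-DES-like fragment (element names are illustrative only)
xml_doc = """<tma_des>
  <block>
    <slide>
      <anatomic_site>colon</anatomic_site>
      <stain>H&amp;E</stain>
    </slide>
  </block>
</tma_des>"""

root = ET.fromstring(xml_doc)
# Because everything is XML, generic tools can query it without format-specific code
sites = [e.text for e in root.iter("anatomic_site")]
```

This uniformity is the point the authors make: once data, syntax description (XML Schema) and semantics (RDF/XML) are all XML, one parser stack handles the whole specification.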
Evaluation of a supervision grid for cutaneous leishmaniasis laboratories in Morocco
El Mansouri, Bouchra; Amarir, Fatima; Hajli, Yamina; Fellah, Hajiba; Sebti, Faiza; Delouane, Bouchra; Sadak, Abderrahim; Adlaoui, El Bachir; Rhajaoui, Mohammed
2017-01-01
Introduction: The aim was to evaluate a standardised laboratory control grid for leishmaniasis diagnosis as a new supervision tool. Methods: A pilot trial was carried out in seven provincial laboratories in four Moroccan provinces, following the evolution of their performance every two years between 2006 and 2014. The study details the situation of the provincial laboratories before and after the implementation of the supervision grid. In total, twenty-one grids were analysed. Results: In 2006, the results clearly showed inadequate laboratory performance: training needs (41.6%), staff performing skin sampling (25%), shortage of materials and reagents (65%), non-compliant document management and premises (85%). Various corrective actions were taken by the National Reference Laboratory for Leishmaniasis (LNRL) during the study period. By 2014, the LNRL had recorded a marked improvement in laboratory performance: the needs in training, sampling quality, and supply of materials and reagents had been met, and effective coordination had been established between the LNRL and the provincial laboratories. Conclusion: This shows the effectiveness of the grid as a high-quality supervision tool, and as a cornerstone of any progress to be achieved in leishmaniasis control programmes. PMID:29187922
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
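The cohort-level objective optimised above, the AUROC, equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting one half). A small self-contained Python sketch of that identity, on toy labels and scores rather than registry data:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney rank-sum identity: the fraction of
    positive-negative pairs ranked correctly (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]              # toy outcomes (1 = event, e.g. ECABG)
scores = [0.9, 0.4, 0.5, 0.3, 0.1]    # toy model risk scores
a = auroc(labels, scores)             # 5 of 6 pairs ranked correctly
```

Optimising this pairwise-ranking quantity directly over the training cohort, instead of per-example classification loss, is the distinction the study draws against conventional SVM training.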
Nutrition in children with neurological impairment
2009-01-01
Malnutrition, whether undernutrition or overnutrition, is common in children with neurological impairment. Energy requirements are difficult to define in this heterogeneous population, and information on what constitutes normal growth in these children is lacking. Non-nutritional factors can influence growth, but nutritional factors, such as insufficient caloric intake, excessive nutrient losses and abnormal energy metabolism, also contribute to growth failure in these children. Malnutrition is associated with significant morbidity, while nutritional rehabilitation improves overall health. Nutritional support should be an integral part of the management of children with neurological impairment and should aim to improve not only nutritional status but also the quality of life of patients and their families. When considering a nutritional intervention, oromotor dysfunction, gastro-oesophageal reflux and pulmonary aspiration must be taken into account, and a multidisciplinary team should be involved. Children vulnerable to nutritional problems should be identified early and their nutritional status assessed at least once a year, and more often in infants and young children or in children at risk of malnutrition. Oral intake should be optimised when it is safe, but enteral feeding should be initiated in children whose oromotor dysfunction causes marked aspiration or who cannot maintain adequate nutritional status through oral intake alone. Nasogastric tube feeding should be reserved for short-term interventions; when prolonged nutritional intervention is required, gastrostomy should be considered.
Antireflux measures should be reserved for children with significant gastro-oesophageal reflux. The patient's response to the nutritional intervention should be monitored closely to avoid excessive weight gain after enteral feeding is started, and paediatric formulas should be preferred in order to avoid micronutrient deficiencies. PMID:20592968
NASA Astrophysics Data System (ADS)
Ayoub, Simon
The electricity distribution and transmission grid is being modernised in several countries, including Canada. The new generation of this grid, known as the smart grid, enables among other things the automation of generation, of distribution and of load management on the customer side. At the same time, smart home appliances fitted with a communication interface for smart grid applications are beginning to appear on the market. These smart appliances could form a virtual community to optimise their consumption in a distributed fashion. Distributed management of these smart loads requires communication between a large number of electrical devices, a significant challenge if the cost of infrastructure and maintenance is to be kept down. In this thesis, two distinct systems were designed: a peer-to-peer communication system, called Ring-Tree, enabling communication between a large number of nodes (up to the order of a million), such as communicating electrical appliances, and a distributed load-management technique for the electrical grid. The Ring-Tree communication system includes a new network topology that had never been defined or exploited before, together with algorithms for creating, operating and maintaining the network. It is simple enough to be implemented on controllers attached to devices such as water heaters, storage heaters, electric vehicle charging stations and so on. It uses no centralised server (or barely one: only when a node wants to join the network), offering a distributed solution that can be deployed with no infrastructure other than the controllers on the targeted devices.
Finally, a response time of a few seconds to reach the entire network can be obtained, which is sufficient for the intended applications. The communication protocols rely on a transport protocol, which can be one of those used on the Internet, such as TCP or UDP. To validate the distributed control technique and the Ring-Tree communication system, a simulator was developed, and a water-heater model was integrated into it as an example load. Simulation of a community of smart water heaters showed that the load-management technique, combined with energy storage in thermal form, can produce, without affecting user comfort, a variety of consumption profiles, including a uniform profile corresponding to a load factor of 100%. Keywords: Distributed Algorithm, Demand Response, Electrical Load Management, M2M (Machine-to-Machine), P2P (Peer-to-Peer), Smart Electrical Grid, Ring-Tree, Smart Grid
Evolving aerodynamic airfoils for wind turbines through a genetic algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI
2017-01-01
Nowadays, genetic algorithms stand out for airfoil optimisation thanks to the virtues of their mutation and crossover operators. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are the maximisation of both aerodynamic efficiency and lift coefficient and the minimisation of the drag coefficient. The algorithm shows great improvements in computational cost as well as high performance, obtaining within a few iterations airfoils optimised for Mexico City's specific wind conditions, starting from generic wind turbine airfoils designed for higher Reynolds numbers.
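Arithmetic crossover blends two parent vectors with a random weight instead of swapping gene segments, which suits continuous shape parameters. A minimal genetic algorithm in this style (the sphere function stands in for the aerodynamic objective; all settings are illustrative, not those of the paper):

```python
import random

def arithmetic_crossover(a, b, rng):
    """Blend parents componentwise: child = w*a + (1-w)*b, w ~ U(0,1)."""
    w = rng.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(x, rng, scale=0.1):
    """Add small Gaussian noise to every gene."""
    return [v + rng.gauss(0.0, scale) for v in x]

def ga_minimise(fitness, dim, rng, pop_size=30, generations=60):
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]    # truncation selection keeps the best half
        children = [mutate(arithmetic_crossover(rng.choice(elite),
                                                rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=fitness)

rng = random.Random(7)
best = ga_minimise(lambda x: sum(v * v for v in x), dim=2, rng=rng)
```

In the airfoil setting, the gene vector would encode shape parameters and the fitness would come from an aerodynamic evaluation (lift, drag, efficiency) rather than this analytic test function.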
NASA Astrophysics Data System (ADS)
Aliouane, T.; Bouzid, D.; Belkhir, N.; Bouzid, S.; Herold, V.
2005-05-01
The manufacture of optical glass components requires high-precision finishing processes, given the importance attached to their quality. During the polishing of optical glasses, the polishing pad is a key element with a direct impact on the performance of the optical components: it not only serves as a carrier for the abrasive grains but must also transmit the pressure to them. Knowledge of its properties, chiefly mechanical, is essential for obtaining an optimal surface finish on optical components intended to fulfil very precise functions in high-performance optical instruments. In this study, we found that the properties of polyurethane pads, such as hardness, elastic modulus and density, change during polishing. This change affects the surface finish of the optical glass through microstructural changes at the pad surface (pore distribution and size) and, consequently, through the quantity of abrasive (cerium oxide) held in the pores, which influences the amount of glass removed and the surface finish of the component. On the basis of the results obtained, the pad was shown to undergo very significant modifications, which considerably affect its polishing efficiency.
Schram, Carrie A.
2012-01-01
Objective: To review the diagnosis of patients with atypical cystic fibrosis (CF). Sources of information: A comprehensive search was carried out in MEDLINE (1950 to the third week of May 2009), MEDLINE In-Process and Other Non-Indexed Citations and Cases (1950 to the third week of 2009) and EMBASE (1980 to the fourth week of March 2009). The Cystic Fibrosis Canada website and its most recent patient data registry report were also consulted. Main message: Atypical CF is a milder form of CF associated with mutations of the cystic fibrosis transmembrane conductance regulator gene. Instead of showing the classic symptoms, people with atypical CF may have mild dysfunction of a single organ system, and may or may not have elevated sweat chloride concentrations. Atypical CF is a highly variable disorder affecting different organ systems to different degrees. A patient's symptoms may also fluctuate over time; furthermore, certain clinical signs and symptoms involving the respiratory, gastrointestinal, endocrine-metabolic and genitourinary systems should alert physicians to the possibility of CF. Patients with atypical CF are hospitalised less often during childhood than those with classic CF, and the diagnosis can go undetected for many years, sometimes even into adulthood. Conclusion: Although patients with atypical CF have a longer life expectancy than those with the classic form, the long-term outcomes for many people with the atypical form are unknown. It is important to counsel patients about the possibility of future manifestations of the disease.
Educating patients about CF can help them understand their symptoms, adapt their lifestyle to optimise their health, reduce the incidence of complications, and receive family planning counselling when needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluquet, Alain
This thesis studies electron identification techniques in the D0 experiment at the Fermi laboratory near Chicago. The first chapter recalls some of the physics motivations of the experiment: jet physics, electroweak physics and top-quark physics. The D0 detector is described in detail in the second chapter. The third chapter studies the electron identification algorithms (trigger, reconstruction, filters) and their performance. The fourth chapter is devoted to the transition radiation detector (TRD) built by the Department of Astrophysics, Particle Physics, Nuclear Physics and Associated Instrumentation of Saclay; it presents its principle, its calibration and its performance. Finally, the last chapter describes the method developed for analysing data with the TRD and illustrates its use with a few examples: jets mimicking electrons, and the top-quark search.
Plastic photovoltaic solar cells: challenges and prospects
NASA Astrophysics Data System (ADS)
Sicot, L.; Dumarcher, V.; Raimond, P.; Rosilio, C.; Sentein, C.; Fiorini, C.
2002-04-01
After detailing the operation of a plastic photovoltaic cell and the photovoltaic parameters used to characterise its efficiency, a state of the art of cell fabrication technologies is presented. Ways of improving the performance of organic photovoltaic cells are then illustrated through the study of devices developed at the Laboratoire Composants Organiques (LCO) of CEA Saclay.
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on comparing PAB-D configurations obtained by varying some of the main design parameters: the well diameter, the distance between two consecutive passive wells, and the distance between two consecutive well lines. A cost analysis for each configuration was carried out to identify the best performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, to identify the best configuration for remediating the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added; this showed a 40% reduction in total remediation costs from using the optimised PAB-D.
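The search described above — sweeping a few design parameters and costing each feasible layout — can be sketched as a small exhaustive comparison. The cost coefficients, plume dimensions, and feasibility check below are illustrative placeholders, not values or criteria from the study:

```python
from itertools import product

# Hypothetical unit costs; the real study used site-specific cost analysis.
COST_PER_WELL = 120.0   # drilling + adsorbent, per metre of well diameter
COST_PER_LINE = 400.0   # fixed cost per well line

def pab_d_cost(diameter, well_spacing, line_spacing,
               plume_width=40.0, plume_length=60.0):
    """Total cost of one discontinuous-barrier configuration."""
    wells_per_line = int(plume_width // well_spacing) + 1
    n_lines = int(plume_length // line_spacing) + 1
    return n_lines * (COST_PER_LINE + wells_per_line * COST_PER_WELL * diameter)

def captures_plume(diameter, well_spacing):
    # Stub criterion standing in for the transport simulation:
    # wells must be close enough relative to their diameter.
    return well_spacing <= 2.5 * diameter

# Enumerate candidate configurations and keep the cheapest feasible one.
candidates = product([0.6, 0.8, 1.0],     # well diameter (m)
                     [1.0, 1.5, 2.0],     # spacing between passive wells (m)
                     [10.0, 15.0, 20.0])  # spacing between well lines (m)
feasible = [(pab_d_cost(d, ws, ls), d, ws, ls)
            for d, ws, ls in candidates if captures_plume(d, ws)]
best_cost, best_d, best_ws, best_ls = min(feasible)
print(best_d, best_ws, best_ls, round(best_cost, 1))
```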
NASA Astrophysics Data System (ADS)
Li, Guiqiang; Zhao, Xudong; Jin, Yi; Chen, Xiao; Ji, Jie; Shittu, Samson
2018-06-01
Geometrical optimisation is a valuable way to improve the efficiency of a thermoelectric element (TE). In a hybrid photovoltaic-thermoelectric (PV-TE) system, the photovoltaic (PV) and thermoelectric (TE) components have a relatively complex relationship; because of their coupled effects, geometrical optimisation of the TE element alone may not be sufficient to optimise the entire PV-TE hybrid system. In this paper, we introduce a parametric optimisation of the geometry of the thermoelectric element footprint for a PV-TE system. A uni-couple TE model was built for the PV-TE using the finite element method and temperature-dependent thermoelectric material properties. Two types of PV cells were investigated, and the performance of the PV-TE with different TE element lengths and footprint areas was analysed. The results showed that, regardless of the TE element's length and footprint area, the maximum power output occurs when A_n/A_p = 1. This finding is useful, as it provides a reference whenever PV-TE optimisation is investigated.
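The A_n/A_p = 1 result can be rationalised with a much simpler lumped model than the paper's finite element one: at matched load, thermocouple power is (α ΔT)²/4R, and for symmetric n- and p-leg resistivities the internal resistance R is minimised when the two footprint areas are equal. The material numbers below are arbitrary assumptions for the sketch:

```python
def max_power(ratio, total_area=2e-6, rho=1e-5,
              alpha=2e-4, dT=50.0, length=1e-3):
    """Matched-load power of one n/p couple vs. area ratio A_n/A_p.

    Assumes identical resistivity rho for both legs (our simplification);
    then R = rho*L/A_n + rho*L/A_p is smallest when A_n = A_p.
    """
    a_n = total_area * ratio / (1 + ratio)
    a_p = total_area - a_n
    R = rho * length / a_n + rho * length / a_p
    return (2 * alpha * dT) ** 2 / (4 * R)   # (alpha_pn * dT)^2 / 4R

ratios = [0.25, 0.5, 1.0, 2.0, 4.0]
best = max(ratios, key=max_power)   # power peaks at A_n/A_p = 1
print(best)
```

Under asymmetric leg properties the optimum shifts to A_n/A_p = sqrt(rho_n/rho_p); the paper's temperature-dependent materials evidently stay close enough to symmetric for the ratio-one result to hold.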
3D printed fluidics with embedded analytic functionality for automated reaction optimisation
Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D
2017-01-01
Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols gave image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
NASA Astrophysics Data System (ADS)
Astley, R. J.; Sugimoto, R.; Mustafi, P.
2011-08-01
Novel techniques are presented to reduce noise from turbofan aircraft engines by optimising the acoustic treatment in engine ducts. The application of Computational Aero-Acoustics (CAA) to predict acoustic propagation and absorption in turbofan ducts is reviewed and a critical assessment of performance indicates that validated and accurate techniques are now available for realistic engine predictions. A procedure for integrating CAA methods with state of the art optimisation techniques is proposed in the remainder of the article. This is achieved by embedding advanced computational methods for noise prediction within automated and semi-automated optimisation schemes. Two different strategies are described and applied to realistic nacelle geometries and fan sources to demonstrate the feasibility of this approach for industry scale problems.
A Bayesian Approach for Sensor Optimisation in Impact Identification
Mallardo, Vincenzo; Sharif Khodaei, Zahra; Aliabadi, Ferri M. H.
2016-01-01
This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data is represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence. PMID:28774064
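The two key ingredients — a genetic algorithm over sensor combinations, and an objective that folds in the chance of a malfunctioning sensor — can be caricatured in a few dozen lines. The plate geometry, surrogate error measure, failure probability, and GA settings below are all invented for illustration; the paper's actual objective is Bayesian and meta-model based:

```python
import random

random.seed(1)

# Candidate sensor sites on a plate (x, y) and impact events to localise.
SITES = [(x, y) for x in range(5) for y in range(5)]
IMPACTS = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(30)]
K = 4            # number of sensors to place
P_FAIL = 0.1     # assumed probability that any one sensor is down

def coverage_error(combo, impacts):
    # Stub surrogate: mean distance from each impact to its nearest sensor.
    total = 0.0
    for ix, iy in impacts:
        total += min(((ix - sx) ** 2 + (iy - sy) ** 2) ** 0.5
                     for sx, sy in combo)
    return total / len(impacts)

def fitness(combo):
    # Expected error including single-sensor failures, echoing the paper's
    # idea of including sensor malfunction in the objective (weights are ours).
    err = (1 - P_FAIL) * coverage_error(combo, IMPACTS)
    for i in range(len(combo)):
        err += (P_FAIL / len(combo)) * coverage_error(
            combo[:i] + combo[i + 1:], IMPACTS)
    return err

def ga(pop_size=20, generations=40):
    pop = [random.sample(SITES, K) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # minimise expected error
        parents = pop[: pop_size // 2]        # elitist survival
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = list(dict.fromkeys(a + b))[:K]     # crossover, unique sites
            if random.random() < 0.3:                  # mutation
                child[random.randrange(K)] = random.choice(SITES)
            children.append(child if len(set(child)) == K
                            else random.sample(SITES, K))
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print(sorted(best), round(fitness(best), 3))
```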
Gladman, John; Buckell, John; Young, John; Smith, Andrew; Hulme, Clare; Saggu, Satti; Godfrey, Mary; Enderby, Pam; Teale, Elizabeth; Longo, Roberto; Gannon, Brenda; Holditch, Claire; Eardley, Heather; Tucker, Helen
2017-01-01
Introduction To understand the variation in performance between community hospitals, our objectives are: to measure the relative performance (cost efficiency) of rehabilitation services in community hospitals; to identify the characteristics of community hospital rehabilitation that optimise performance; to investigate the current impact of community hospital inpatient rehabilitation for older people on secondary care and the potential impact if community hospital rehabilitation was optimised to best practice nationally; to examine the relationship between the configuration of intermediate care and secondary care bed use; and to develop toolkits for commissioners and community hospital providers to optimise performance. Methods and analysis Four linked studies will be performed. Study 1: cost efficiency modelling will apply econometric techniques to data sets from the National Health Service (NHS) Benchmarking Network surveys of community hospital and intermediate care. This will identify community hospitals' performance and estimate the gap between high and low performers. Analyses will determine the potential impact if the performance of all community hospitals nationally was optimised to best performance, and examine the association between community hospital configuration and secondary care bed use. Study 2: a national community hospital survey gathering detailed cost data and efficiency variables will be performed. Study 3: in-depth case studies of three community hospitals (two high performing, one low performing) will be undertaken. Case studies will gather routine hospital and local health economy data. Ward culture will be surveyed. Content and delivery of treatment will be observed. Patients and staff will be interviewed. Study 4: co-designed web-based quality improvement toolkits for commissioners and providers will be developed, including indicators of performance and the gap between local and best community hospitals performance.
Ethics and dissemination Publications will be in peer-reviewed journals, reports will be distributed through stakeholder organisations. Ethical approval was obtained from the Bradford Research Ethics Committee (reference: 15/YH/0062). PMID:28242766
NASA Astrophysics Data System (ADS)
Ratnadewi; Pramono Adhie, Roy; Hutama, Yonatan; Saleh Ahmar, A.; Setiawan, M. I.
2018-01-01
Cryptography is a method used to create secure communication by manipulating messages during transmission so that only the intended party can know their content. Two of the most commonly used cryptographic methods for protecting transmitted messages, especially text, are the DES and 3DES algorithms. This research explains the DES and 3DES methods and their use for securing data stored in smart cards operating in an NFC-based communication system. It covers how DES and 3DES protect data, and the software engineering of an application written in C++ to realise and test the performance of both methods when writing encrypted data to smart cards and reading and decrypting data from them. The execution time for writing data to and reading data from the smart card is shorter with DES than with 3DES.
Subspace Compressive GLRT Detector for MIMO Radar in the Presence of Clutter.
Bolisetti, Siva Karteek; Patwary, Mohammad; Ahmed, Khawza; Soliman, Abdel-Hamid; Abdel-Maguid, Mohamed
2015-01-01
The problem of optimising the target detection performance of MIMO radar in the presence of clutter is considered. The increased false alarm rate caused by clutter returns is known to seriously degrade the target detection performance of a radar detector, especially under low-SNR conditions. In this paper, a mathematical model is proposed to optimise the target detection performance of a MIMO radar detector in the presence of clutter. The number of samples that a radar target detector must process determines the processing burden incurred in achieving a given detection reliability. While the Subspace Compressive GLRT (SSC-GLRT) detector is known to give optimised radar target detection performance with reduced computational complexity, it suffers a significant deterioration in target detection performance in the presence of clutter. We provide evidence that the proposed mathematical model for the SSC-GLRT detector outperforms existing detectors in the presence of clutter. Performance analyses of the existing detectors and the proposed SSC-GLRT detector for MIMO radar in the presence of clutter are provided.
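The core of any GLRT-style detector — a likelihood-ratio statistic compared against a threshold, with detection and false-alarm rates traded off through that threshold — can be illustrated for the simplest case of a known one-dimensional signal subspace. This toy Monte Carlo sketch is ours, not the paper's SSC-GLRT formulation, and the steering vector, sample count, and threshold are arbitrary:

```python
import math
import random

random.seed(7)
N = 64                                           # samples per decision
steer = [math.cos(0.3 * n) for n in range(N)]    # known signal direction

def glrt_stat(y):
    # GLRT for a 1-D subspace: fraction of the received energy lying
    # along `steer`; normalising by total energy makes the statistic
    # invariant to the (unknown) noise power.
    sy = sum(a * b for a, b in zip(steer, y))
    ss = sum(a * a for a in steer)
    yy = sum(a * a for a in y)
    return (sy * sy) / (ss * yy)

def trial(amplitude):
    y = [amplitude * s + random.gauss(0, 1) for s in steer]
    return glrt_stat(y)

threshold = 0.15
h1 = sum(trial(1.0) > threshold for _ in range(500)) / 500  # target present
h0 = sum(trial(0.0) > threshold for _ in range(500)) / 500  # noise/clutter only
print(h1, h0)
```

Raising the threshold pushes the false-alarm rate h0 down at the cost of detection rate h1; subspace-compressive variants aim to keep this trade-off favourable while processing fewer samples.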
Dynamic least-cost optimisation of wastewater system remedial works requirements.
Vojinovic, Z; Solomatine, D; Price, R K
2006-01-01
In recent years, there has been increasing concern over wastewater system failure and the identification of an optimal set of remedial works requirements. Several methodologies have been developed and applied in asset management activities by water companies worldwide, but often with limited success. To fill the gap, several research projects have explored algorithms for optimising remedial works requirements, but mostly for drinking water supply systems; very limited work has been carried out on wastewater assets. Major deficiencies of commonly used methods lie in one or more of the following aspects: inadequate representation of system complexity, incorporation of a dynamic model into the decision-making loop, the choice of an appropriate optimisation technique, and experience in applying that technique. This paper addresses these issues and discusses a new approach to the optimisation of wastewater system remedial works requirements. It is proposed that the search for the optimum is performed by a global optimisation tool (with various random search algorithms) while system performance is simulated by a hydrodynamic pipe network model. The work of assembling the required elements and developing appropriate interface protocols between the two tools - to decode potential remedial solutions into the pipe network model and to calculate the corresponding scenario costs - is currently underway.
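The proposed loop — a global random-search optimiser proposing remedial plans, with a network model scoring each one — can be caricatured as follows. The "hydrodynamic model" here is a stand-in returning a made-up flood-damage penalty, not a real pipe solver, and all costs are invented:

```python
import random

random.seed(3)
PIPES = 12
UPGRADE_COST = [random.uniform(10, 50) for _ in range(PIPES)]  # capital cost per pipe

def hydrodynamic_penalty(plan):
    # Stand-in for the pipe-network simulation: each un-upgraded pipe
    # contributes an assumed expected flood-damage cost.
    return sum(30.0 for upgraded in plan if not upgraded)

def total_cost(plan):
    # Whole-life cost of a remedial plan: capital spend plus residual risk.
    capital = sum(c for c, u in zip(UPGRADE_COST, plan) if u)
    return capital + hydrodynamic_penalty(plan)

def random_search(iterations=2000):
    # Pure random search: each candidate plan is a yes/no upgrade decision
    # per pipe; every evaluation calls the (simulated) network model.
    best = [random.random() < 0.5 for _ in range(PIPES)]
    for _ in range(iterations):
        cand = [random.random() < 0.5 for _ in range(PIPES)]
        if total_cost(cand) < total_cost(best):
            best = cand
    return best

plan = random_search()
print(plan, round(total_cost(plan), 1))
```

Real studies replace `hydrodynamic_penalty` with a hydrodynamic simulation run per candidate, which is why the choice and efficiency of the global optimiser matter so much.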
Stent Polymers: Do They Make a Difference?
Rizas, Konstantinos D; Mehilli, Julinda
2016-06-01
The necessity of polymers on drug-eluting stent (DES) platforms is dictated by the need of an adequate amount and optimal release kinetic of the antiproliferative drugs for achieving ideal DES performance. However, the chronic vessel wall inflammation related to permanent polymer persistence after the drug has been eluted might trigger late restenosis and stent thrombosis. Biodegradable polymers have the potential to avoid these adverse events. A variety of biodegradable polymer DES platforms have been clinically tested, showing equal outcomes with the standard-bearer permanent polymer DES within the first year of implantation. At longer-term follow-up, promising lower rates of stent thrombosis have been observed with the early generation biodegradable polymer DES platforms compared to first-generation DES. Whether this safety benefit still persists with newer biodegradable polymer DES generations against second-generation permanent polymer DES needs to be explored. © 2016 American Heart Association, Inc.
Xu, Ran-Fang; Sun, Min-Xia; Liu, Juan; Wang, Hong; Li, Xin; Zhu, Xue-Zhu; Ling, Wan-Ting
2014-08-01
Utilizing diethylstilbestrol (DES)-degrading bacteria to biodegrade DES is among the most reliable techniques for cleaning up DES pollutants in the environment. However, little information has been available to date on the isolation of DES-degrading bacteria and their DES removal performance. A novel bacterium capable of degrading DES was isolated from the activated sludge of a wastewater treatment plant. Based on its morphology, physiochemical characteristics, and 16S rDNA sequence analysis, the strain was identified as Serratia sp. The strain is aerobic, and it degraded 68.3% of DES (50 mg·L⁻¹) after culturing for 7 days at 30 °C and 150 r·min⁻¹ in shaking flasks. The optimal conditions for DES biodegradation by the obtained strain were 30 °C, 40-60 mg·L⁻¹ DES, pH 7.0, 5% inoculation volume, no added NaCl, and 10 mL of liquid medium in a 100 mL flask.
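The reported removal (68.3% in 7 days) implies a degradation rate that is easy to back out if one assumes first-order kinetics — an assumption of ours, since the abstract does not state the kinetic model:

```python
import math

def first_order_k(fraction_removed, days):
    """Rate constant k (per day) assuming first-order decay:
    C(t) = C0 * exp(-k t), so k = -ln(1 - removed) / t."""
    return -math.log(1 - fraction_removed) / days

k = first_order_k(0.683, 7)      # ~0.164 per day for 68.3% removal in 7 days
half_life = math.log(2) / k      # ~4.2 days
print(round(k, 4), round(half_life, 2))
```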
2011-06-01
... key performance [measures] and modelling of aircraft availability. • Maintenance/support management concepts and technologies for ... the use of maintenance concepts and advanced technologies to improve aircraft availability and reduce life-cycle cost. The team ... air forces. In line with the work carried out by the AVT panel, a workshop
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
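The core move — splitting one objective function into components and optimising those components as separate objectives with a Pareto-based algorithm — can be shown on a toy bit-string problem. The even/odd-index split below is an arbitrary stand-in for a genuine elementary landscape decomposition, and the (mu+1) archive loop is a deliberately minimal substitute for NSGA-II/SPEA2:

```python
import random

random.seed(5)
N = 16
W = [random.uniform(0, 1) for _ in range(N)]   # per-bit weights (toy problem)

# Decompose f(x) = sum of selected weights into two components,
# each treated as an independent objective to maximise.
def f1(x):
    return sum(w for i, (w, b) in enumerate(zip(W, x)) if b and i % 2 == 0)

def f2(x):
    return sum(w for i, (w, b) in enumerate(zip(W, x)) if b and i % 2 == 1)

def dominates(a, b):
    # Pareto dominance for maximisation over the two components.
    return (f1(a) >= f1(b) and f2(a) >= f2(b)
            and (f1(a) > f1(b) or f2(a) > f2(b)))

# Tiny evolutionary loop keeping a non-dominated archive.
archive = [[random.random() < 0.5 for _ in range(N)]]
for _ in range(3000):
    child = random.choice(archive)[:]
    i = random.randrange(N)
    child[i] = not child[i]                    # single-bit mutation
    if not any(dominates(a, child) for a in archive):
        archive = [a for a in archive if not dominates(child, a)] + [child]

best = max(archive, key=lambda x: f1(x) + f2(x))
print(round(f1(best) + f2(best), 3), len(archive))
```

Because f1 + f2 recovers the original single objective, the best archive member doubles as a solution to the original problem — the mechanism the multi-objectivisation literature exploits.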
Yin, Yan; Lin, Congxing; Zhang, Ivy; Fisher, Alexander V; Dhandha, Maulik; Ma, Liang
2015-01-01
The fate of mouse uterine epithelial progenitor cells is determined between postnatal days 5 and 7. Around this critical time window, exposure to an endocrine disruptor, diethylstilbestrol (DES), can profoundly alter uterine cytodifferentiation. We have shown previously that the homeodomain transcription factor MSX-2 plays an important role in DES responsiveness in the female reproductive tract (FRT). Mutant FRTs exhibited a much more severe phenotype when treated with DES, accompanied by gene expression changes that are dependent on Msx2. To better understand the role that MSX-2 plays in the uterine response to DES, we performed a global gene expression profiling experiment in mice lacking Msx2. By comparing this result to our previously published microarray data from wild-type mice, we extracted genes commonly and differentially regulated in the two genotypes. In so doing, we identified potential downstream targets of MSX-2, as well as genes whose regulation by DES is modulated through MSX-2. Discovery of these genes will lead to a better understanding of how DES, and possibly other endocrine disruptors, affects reproductive organ development. PMID:26457333
NASA Astrophysics Data System (ADS)
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, on PSO and the discrete firefly algorithm, and on a hybrid of error back-propagation and a genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
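GSA itself is compact enough to sketch: agents attract one another with forces weighted by fitness-derived masses, under a gravitational "constant" that decays over time. The sketch below minimises a toy sphere function rather than training a classifier, and all parameter values are our own choices:

```python
import random

random.seed(11)
DIM, AGENTS, ITERS = 3, 12, 60

def sphere(x):                      # toy fitness to minimise
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(AGENTS)]
vel = [[0.0] * DIM for _ in range(AGENTS)]
best_x, best_f = None, float("inf")

for t in range(ITERS):
    fit = [sphere(p) for p in pos]
    for p, f in zip(pos, fit):      # track the best position ever evaluated
        if f < best_f:
            best_x, best_f = p[:], f
    lo, hi = min(fit), max(fit)
    # Normalised masses: fitter agents are heavier (standard GSA scheme).
    m = [(hi - f) / (hi - lo + 1e-12) for f in fit]
    M = [mi / (sum(m) + 1e-12) for mi in m]
    G = 10.0 * (1 - t / ITERS)      # gravitational constant decays over time
    for i in range(AGENTS):
        for d in range(DIM):
            acc = 0.0
            for j in range(AGENTS):
                if i != j:
                    dist = sum((a - b) ** 2
                               for a, b in zip(pos[i], pos[j])) ** 0.5
                    # Agent i's own mass cancels when acceleration F/M_i is taken.
                    acc += (random.random() * G * M[j]
                            * (pos[j][d] - pos[i][d]) / (dist + 1e-12))
            vel[i][d] = random.random() * vel[i][d] + acc
    for i in range(AGENTS):         # move all agents after forces are computed
        for d in range(DIM):
            pos[i][d] += vel[i][d]

print(round(best_f, 4))
```

In the paper's setting the sphere function would be replaced by the classifier's training error, with real-valued GSA tuning the weights and BGSA selecting the architecture bits.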
Schmidt, Wolfram; Lanzer, Peter; Behrens, Peter; Brandt-Wunderlich, Christoph; Öner, Alper; Ince, Hüseyin; Schmitz, Klaus-Peter; Grabow, Niels
2018-01-08
Drug-eluting stents (DES) compared to bare metal stents (BMS) have shown superior clinical performance, but are considered less suitable in complex cases. Most studies do not distinguish between DES and BMS with respect to their mechanical performance. The objective was to obtain mechanical parameters for direct comparison of BMS and DES. In vitro bench tests evaluated crimped stent profiles, crossability in stenosis models, elastic recoil, bending stiffness (crimped and expanded), and scaffolding properties. The study included five pairs of BMS and DES each with the same stent platforms (all n = 5; PRO-Kinetic Energy, Orsiro: BIOTRONIK AG, Bülach, Switzerland; MULTI-LINK 8, XIENCE Xpedition: Abbott Vascular, Temecula, CA; REBEL Monorail, Promus PREMIER, Boston Scientific, Marlborough, MA; Integrity, Resolute Integrity, Medtronic, Minneapolis, MN; Kaname, Ultimaster: Terumo Corporation, Tokyo, Japan). Statistical analysis used pooled variance t tests for pairwise comparison of BMS with DES. Crimped profiles in BMS groups ranged from 0.97 ± 0.01 mm (PRO-Kinetic Energy) to 1.13 ± 0.01 mm (Kaname) and in DES groups from 1.02 ± 0.01 mm (Orsiro) to 1.13 ± 0.01 mm (Ultimaster). Crossability was best for low profile stent systems. Elastic recoil ranged from 4.07 ± 0.22% (Orsiro) to 5.87 ± 0.54% (REBEL Monorail) including both BMS and DES. The bending stiffness of crimped and expanded stents showed no systematic differences between BMS and DES neither did the scaffolding. Based on in vitro measurements BMS appear superior to DES in some aspects of mechanical performance, yet the differences are small and not class uniform. The data provide assistance in selecting the optimal system for treatment and assessment of new generations of bioresorbable scaffolds. not applicable.
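The pooled-variance t test used for the pairwise BMS-vs-DES comparisons is a standard two-sample formula. A minimal sketch with stdlib tools only; the n = 5 profile measurements below are invented to match the reported order of magnitude, not the study's raw data:

```python
import math
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic with pooled variance (equal-variance assumption).

    sp^2 = ((na-1)*sa^2 + (nb-1)*sb^2) / (na + nb - 2), df = na + nb - 2.
    """
    na, nb = len(sample_a), len(sample_b)
    sp2 = (((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b))
           / (na + nb - 2))
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (mean(sample_a) - mean(sample_b)) / se

# Illustrative crimped-profile measurements in mm (assumed values).
bms = [0.97, 0.98, 0.96, 0.97, 0.97]
des = [1.02, 1.03, 1.01, 1.02, 1.02]
t = pooled_t(bms, des)
print(round(t, 2))   # large |t| on 8 degrees of freedom
```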
Static Aeroelastic Effects on High Performance Aircraft
1987-06-01
... in reference 9. In addition to the usual flight-mechanics sensors, the aircraft is instrumented with several hundred strain gauges ... described in §2.3.5, and the analysis as a whole make it possible to compute the gauge responses as a function of X, i.e. ar(X); the derivation process ... transonic. Note: the same identification technique based on gauge responses applies (more simply) to wind-tunnel tests, for the
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With the continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
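The dynamic-matrix-control backbone of the method — predict over a horizon from step-response coefficients, then pick the least-squares optimal input move — can be shown for a single SISO loop without the fractional-order, PID, or distributed layers. The step-response vector and tuning below are assumed, not taken from the paper:

```python
# DMC sketch: P-step-ahead prediction, one optimised move per sample.
P, LAM = 5, 0.1
A = [0.2, 0.5, 0.75, 0.9, 1.0]          # assumed step-response coefficients

def step_coeff(i):
    # a_{i+1}; the model settles at its final coefficient (unit gain here).
    return A[i] if i < len(A) else A[-1]

def output(du_hist, k):
    # Superposition of all past input moves through the step-response model.
    return sum(step_coeff(k - j - 1) * du
               for j, du in enumerate(du_hist) if j < k)

def dmc_move(du_hist, k, r):
    # Minimise sum_i (r - free_i - a_i*du)^2 + LAM*du^2 over the horizon;
    # the optimum is num/den in closed form for a single move.
    num = den = 0.0
    for i in range(1, P + 1):
        free = output(du_hist, k + i)    # predicted output with no new move
        num += step_coeff(i - 1) * (r - free)
        den += step_coeff(i - 1) ** 2
    return num / (den + LAM)

du_hist = []
for k in range(30):
    du_hist.append(dmc_move(du_hist, k, 1.0))   # track setpoint r = 1.0
y_final = output(du_hist, 30)
print(round(y_final, 4))
```

In the paper, each subsystem runs such a controller on its own step-response vectors (derived from the Oustaloup-approximated fractional model), and the subsystems exchange predicted interactions until a Nash equilibrium of moves is reached.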
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size, which retained the range, or 'chemical space' of the key descriptors to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel results in the best models for the majority of data sets, and they exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models and that models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
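Two of the cheaper strategies evaluated, grid search and random search, differ only in how they spend a fixed evaluation budget over the hyperparameter space. The sketch below uses a stand-in quality surface with one peak instead of refitting a Gaussian Process per point, and the search ranges are arbitrary:

```python
import random

random.seed(2)

def model_quality(log_lengthscale, log_noise):
    # Stand-in for a GP marginal-likelihood surface (assumed single peak);
    # a real study would refit the GP for each hyperparameter pair.
    return -((log_lengthscale - 0.5) ** 2 + 2.0 * (log_noise + 1.0) ** 2)

def grid_search(steps=8):
    # 8 x 8 lattice over [-3, 3]^2: 64 evaluations at fixed points.
    grid = [-3 + 6 * i / (steps - 1) for i in range(steps)]
    return max((model_quality(l, n), l, n) for l in grid for n in grid)

def random_search(budget=64):
    # Same budget, points drawn uniformly at random.
    return max((model_quality(l, n), l, n)
               for l, n in ((random.uniform(-3, 3), random.uniform(-3, 3))
                            for _ in range(budget)))

g = grid_search()
r = random_search()
print(round(g[0], 4), round(r[0], 4))
```

Neither strategy exploits gradients or priors; the study's point is that more informed schemes (e.g. hyper-priors) tend to win, especially as data sets shrink.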
Breuer, Christian; Lucas, Martin; Schütze, Frank-Walter; Claus, Peter
2007-01-01
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created for application within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled from 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, well matched to the requirements of heterogeneous gas-phase catalysis - especially for applications technically run in honeycomb structures - the multi-channel monolith reactor is used to evaluate catalyst performance. From a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. Combined with further restrictions on preparation and pre-treatment, a primary screening can be conducted that promises results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals.
Optimisation of SIW bandpass filter with wide and sharp stopband using space mapping
NASA Astrophysics Data System (ADS)
Xu, Juan; Bi, Jun Jian; Li, Zhao Long; Chen, Ru shan
2016-12-01
This work presents a substrate integrated waveguide (SIW) bandpass filter with a wide and sharp stopband, which differs from filters with a direct input/output coupling structure. Higher-order modes in the SIW cavities are used to generate finite transmission zeros for improved stopband performance. The design of SIW filters requires full-wave electromagnetic simulation and extensive optimisation; if a full-wave solver is used for optimisation, the design process is very time consuming. The space mapping (SM) approach is called upon to alleviate this problem. In this case, the coarse model is optimised using an equivalent-circuit-based representation of the structure for fast computation, while verification of the design is completed with an accurate fine-model full-wave simulation. A fourth-order filter with a passband of 12.0-12.5 GHz was fabricated on a single-layer Rogers RT/Duroid 5880 substrate. The return loss is better than 17.4 dB in the passband and the rejection is more than 40 dB in the stopband, which spans 2 to 11 GHz and 13.5 to 17.3 GHz, demonstrating wide-stopband performance.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting performance requirements from different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing the resource energy cost.
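A queueing model of the kind described can be grounded in the classic M/M/m formulas: given a tier's arrival rate and per-VM service rate, the Erlang C formula gives the probability of queueing, from which the smallest VM count meeting a mean-response-time SLA follows by search. The traffic and SLA numbers below are illustrative, not from the paper:

```python
import math

def erlang_c(m, offered):
    """Probability an arrival must wait in an M/M/m queue (offered = lambda/mu)."""
    if offered >= m:
        return 1.0                   # unstable: everyone waits
    s = sum(offered ** k / math.factorial(k) for k in range(m))
    last = offered ** m / (math.factorial(m) * (1 - offered / m))
    return last / (s + last)

def mean_response(m, lam, mu):
    """Mean response time: queueing delay plus mean service time."""
    wq = erlang_c(m, lam / mu) / (m * mu - lam)
    return wq + 1 / mu

def vms_for_sla(lam, mu, target):
    """Smallest stable VM count whose mean response time meets the SLA."""
    m = max(1, math.ceil(lam / mu))
    while m < 1000:
        if lam / mu < m and mean_response(m, lam, mu) <= target:
            return m
        m += 1
    raise ValueError("SLA unreachable")

# Web tier (assumed): 180 req/s arriving, each VM serves 20 req/s,
# SLA of 0.1 s mean response time.
m = vms_for_sla(180.0, 20.0, 0.1)
print(m)
```

Repeating this per tier, with the SLA budget split across tiers, yields the per-tier VM counts that the paper's optimisation then trades off against provider profit.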
rPM6 parameters for phosphorus- and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing these two elements. The two sets of parameters were optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species containing either a phosphorus or a sulphur atom, in comparison with the original UPM6 method and spin-unrestricted density functional theory (UDFT). The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and can cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which agrees much more closely with UDFT results for both geometric and electronic properties.
Merchaoui, Irtyah; Chouchène, Asma; Bouanène, Ines; Chaari, Néila; Zrafi, Wassim; Henchi, Adnène; Akrout, Mohamed; Amri, Charfeddine
2017-01-01
Introduction: Career dissatisfaction among occupational physicians (OPs) can affect their performance and the quality of the services they provide. The objective of our study was to assess the job satisfaction of field OPs across all occupational medicine groupings (GMT) in Tunisia and to identify its determinants. Methods: This was a national cross-sectional study of the OPs of 22 GMTs, based on the validated SAPHORA-JOB questionnaire. Results: 58% of GMT OPs were dissatisfied with their career. Career satisfaction was statistically associated with the number of companies covered (p=0.016), work organisation (p=0.010), perception of the profession (p=0.011), salary (p<10-3) and information on current regulations (p=0.047). Conclusion: Harmonisation of the salary scales and career grades of GMT OPs, based on a revision of the legislative texts, is indicated. Improving work organisation and working conditions could foster fulfilment at work and better services. PMID:28819472
Nonlinear Dynamical Model of a Soft Viscoelastic Dielectric Elastomer
NASA Astrophysics Data System (ADS)
Zhang, Junshi; Chen, Hualing; Li, Dichen
2017-12-01
Actuated by alternating stimulation, dielectric elastomers (DEs) exhibit complicated nonlinear vibration, implying a potential application as dynamic electromechanical actuators. As in any vibrational system, the dynamic properties of a DE system are significantly affected by its geometrical dimensions. In this article, a nonlinear dynamical model is derived to investigate the effects of geometry on the dynamic properties of viscoelastic DEs. DEs with square and arbitrary rectangular geometries are considered, and the effects of tensile forces on the dynamic performance of rectangular DEs with comparatively small and large dimensions are explored. Phase paths and Poincaré maps are used to detect the periodicity of the nonlinear vibrations, and the resonance characteristics of DEs incorporating geometrical effects are also investigated. The results indicate that the dynamic properties of DEs, including deformation response, vibrational periodicity and resonance, are tuned as the geometrical dimensions vary.
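A Poincaré (stroboscopic) map samples the state once per drive period; if successive samples settle onto a single point, the response is period-1. The sketch below applies the idea to a generic damped driven oscillator, a deliberately simple stand-in for the viscoelastic DE membrane model, whose equations are far more involved.

```python
# Stroboscopic (Poincare) sampling of a periodically driven oscillator.
# A generic damped driven linear oscillator stands in for the DE membrane
# model; the sampling idea is the same: record (x, v) once per drive period
# and watch whether the points settle.
import math

ZETA, OMEGA0, F, OMEGA = 0.2, 1.0, 1.0, 1.2   # illustrative parameters

def deriv(t, x, v):
    a = F * math.cos(OMEGA * t) - 2 * ZETA * OMEGA0 * v - OMEGA0 ** 2 * x
    return v, a

def rk4_step(t, x, v, h):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

T = 2 * math.pi / OMEGA          # drive period
steps = 200                      # integration steps per period
t, x, v = 0.0, 1.0, 0.0
poincare = [(x, v)]
for _ in range(40):              # integrate over 40 drive periods
    for _ in range(steps):
        x, v = rk4_step(t, x, v, T / steps)
        t += T / steps
    poincare.append((x, v))

gap = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
first, last = gap(poincare[0], poincare[1]), gap(poincare[-2], poincare[-1])
print(first, last)   # the Poincare points collapse to a fixed point: period-1
```

For a nonlinear DE model the same sampling would reveal period-doubled or chaotic regimes as clusters of 2, 4, ... points or a scattered cloud.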
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, as well as on the design experience and experimental results accumulated by companies. However, this methodology can prove costly in the case of failure; even when it succeeds, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the use of meridional (throughflow) calculation for the preliminary design of mixed-flow turbomachines and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method at the core of the proposed design process is presented first, beginning with its theoretical framework. Since the meridional calculation remains fundamentally iterative, the computational process is also presented, including the numerical methods employed to solve the fundamental equations. The meridional code written during the master's project is validated against a meridional algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachinery design methodology developed in this study is then presented as a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed blade design, and finally a 3D numerical analysis for validation and fine optimisation of the geometry.
The meridional calculation results are also compared with the simulation results for the 3D geometry in order to validate the use of meridional calculation as a preliminary sizing tool.
Cultural-based particle swarm for dynamic optimisation problems
NASA Astrophysics Data System (ADS)
Daneshyari, Moayed; Yen, Gary G.
2012-07-01
Many practical optimisation problems involve uncertainties; a significant number belong to the dynamic optimisation problem (DOP) category, in which the fitness function changes over time. In this study, we propose a cultural-based particle swarm optimisation (PSO) to solve DOPs. A cultural framework is adopted that incorporates the required information from the PSO into five sections of the belief space, namely situational, temporal, domain, normative and spatial knowledge. The stored information is used to detect changes in the environment and assists the response to change through diversity-based repulsion among particles and migration among swarms in the population space; it also helps select the leading particles at three levels: personal, swarm and global. Comparison of the proposed heuristic over several difficult dynamic benchmark problems demonstrates better or equal performance with respect to most other selected state-of-the-art dynamic PSO heuristics.
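A minimal sketch of two dynamic-PSO ingredients mentioned here, change detection by re-evaluating a stored best and a diversity-restoring response, using a plain PSO on a shifting 2-D sphere function; the paper's cultural belief-space machinery is not reproduced.

```python
# Minimal dynamic-PSO sketch (not the paper's cultural framework): detect an
# environment change by re-evaluating the stored best, then respond by
# re-randomising part of the swarm to restore diversity. All numbers are
# illustrative.
import random

random.seed(1)
DIM, N, W, C1, C2, LO, HI = 2, 20, 0.7, 1.5, 1.5, -5.0, 5.0

def make_fitness(centre):
    return lambda p: sum((p[i] - centre[i]) ** 2 for i in range(DIM))

def run(fit, swarm, vel, pbest, gbest, iters):
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(DIM):
                vel[i][d] = (W * vel[i][d]
                             + C1 * random.random() * (pbest[i][d] - p[d])
                             + C2 * random.random() * (gbest[d] - p[d]))
                p[d] = min(HI, max(LO, p[d] + vel[i][d]))
            if fit(p) < fit(pbest[i]):
                pbest[i] = p[:]
            if fit(p) < fit(gbest):
                gbest = p[:]
    return gbest

swarm = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in swarm]

fit = make_fitness((1.0, 1.0))
gbest = min(swarm, key=fit)[:]
gbest = run(fit, swarm, vel, pbest, gbest, 100)
old_best = fit(gbest)

fit = make_fitness((3.0, -2.0))        # the environment changes
if abs(fit(gbest) - old_best) > 1e-9:  # change detected via re-evaluation
    for i in range(N // 2):            # diversity response: re-seed half the swarm
        swarm[i] = [random.uniform(LO, HI) for _ in range(DIM)]
        pbest[i] = swarm[i][:]
    gbest = min(pbest, key=fit)[:]
gbest = run(fit, swarm, vel, pbest, gbest, 100)
print(fit(gbest))   # small again after re-convergence
```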
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world engineering design problems can be formulated as constrained optimisation problems (COPs), and swarm intelligence techniques are a good approach to solving them. In this paper an efficient shrinking-hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. SHPSO is designed so that each particle moves under the influence of shrinking hyperspheres, and a parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems, with an exhaustive statistical and graphical comparison of the results. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with the state-of-the-art algorithms.
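The abstract does not spell out its parameter-free constraint handling; one widely used parameter-free scheme is Deb's feasibility rules, sketched below on a toy constrained problem. In a PSO, this comparator would replace the plain objective comparison in the pbest/gbest updates.

```python
# A common parameter-free constraint-handling scheme (Deb's feasibility
# rules), shown as a sketch since the abstract does not give its exact
# mechanism: feasible beats infeasible, feasible ties break on the objective,
# infeasible ties break on total violation.
# Toy problem: minimise f(x) = x^2 subject to g(x) = 1 - x <= 0.

def objective(x):
    return x * x

def violation(x):
    return max(0.0, 1.0 - x)   # amount by which g(x) <= 0 is broken

def better(a, b):
    """True if candidate a should be preferred over candidate b."""
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:
        return objective(a) < objective(b)   # both feasible: lower objective
    if va == 0.0 or vb == 0.0:
        return va == 0.0                     # feasibility wins
    return va < vb                           # both infeasible: lower violation

print(better(1.5, 2.0))   # True: both feasible, 1.5^2 < 2^2
print(better(2.0, 0.5))   # True: feasible beats infeasible
print(better(0.5, 0.2))   # True: smaller constraint violation
```

The scheme needs no penalty weights, which is what "parameter-free" usually means in this context.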
Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.
Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N
2016-07-01
In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.
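A zeroth-order motivation for such optimised shapes, under the crude assumption of plug flow (mean velocity u = Q/w; the paper instead optimises against the full 2D/3D flow field): a constant centreline strain rate requires u to grow linearly with x, hence a hyperbolic width profile.

```python
# Hyperbolic-contraction sketch (assumption: plug flow, u = Q / w; the
# paper's optimisation solves the full 2D/3D flow instead). For a constant
# strain rate edot, u must grow linearly, so w(x) = Q / (u_in + edot * x).

Q, U_IN, EDOT, L, N = 1.0, 1.0, 10.0, 0.5, 101   # arbitrary consistent units

xs = [L * i / (N - 1) for i in range(N)]
w = [Q / (U_IN + EDOT * x) for x in xs]          # hyperbolic channel width
u = [Q / wi for wi in w]                          # recovered mean velocity

# numerical strain rate du/dx along the channel
dx = xs[1] - xs[0]
strain = [(u[i + 1] - u[i]) / dx for i in range(N - 1)]
print(min(strain), max(strain))   # both ~= EDOT: homogeneous extension
```

In a real microchannel the no-slip walls and finite depth distort this picture, which is exactly why the authors optimise the geometry numerically rather than using the plug-flow hyperbola directly.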
Topology Optimisation of Wideband Coaxial-to-Waveguide Transitions
NASA Astrophysics Data System (ADS)
Hassan, Emadeldeen; Noreland, Daniel; Wadbro, Eddie; Berggren, Martin
2017-03-01
To maximize the matching between a coaxial cable and rectangular waveguides, we present a computational topology optimisation approach that decides for each point in a given domain whether to hold a good conductor or a good dielectric. The conductivity is determined by a gradient-based optimisation method that relies on finite-difference time-domain solutions to the 3D Maxwell's equations. Unlike previously reported results in the literature for this kind of problem, our design algorithm can efficiently handle tens of thousands of design variables, which allows novel conceptual waveguide designs. We demonstrate the effectiveness of the approach by presenting optimised transitions with reflection coefficients lower than -15 dB over more than a 60% bandwidth, both for right-angle and end-launcher configurations. The performance of the proposed transitions is cross-verified with commercial software, and one design case is validated experimentally.
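Two standard topology-optimisation ingredients can be sketched without the FDTD solver: a density filter that regularises the design field and a smoothed Heaviside projection that pushes each cell towards good dielectric (0) or good conductor (1). This 1-D sketch is generic, not the authors' scheme.

```python
# Generic topology-optimisation ingredients in 1-D (the paper's 3-D
# FDTD-based gradients are beyond a snippet): a neighbourhood density filter,
# then a smoothed Heaviside projection that pushes each design cell towards
# "good dielectric" (0) or "good conductor" (1).
import math
import random

random.seed(0)
rho = [random.random() for _ in range(50)]   # raw design variables in [0, 1]

def density_filter(rho, radius=2):
    """Simple moving-average filter over a cell neighbourhood."""
    out = []
    for i in range(len(rho)):
        nbrs = rho[max(0, i - radius): i + radius + 1]
        out.append(sum(nbrs) / len(nbrs))
    return out

def project(rho, beta=16.0, eta=0.5):
    """Smoothed Heaviside projection; larger beta -> more nearly 0/1."""
    denom = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return [(math.tanh(beta * eta) + math.tanh(beta * (r - eta))) / denom
            for r in rho]

phys = project(density_filter(rho))
near_binary = sum(1 for r in phys if r < 0.1 or r > 0.9)
print(near_binary, "of", len(phys), "cells are nearly 0/1")
```

In the full method, the physical densities would set a material interpolation (here, the local conductivity) inside the solver, and beta is typically ramped up over the optimisation so the final layout is essentially binary.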
Optimal design and operation of a photovoltaic-electrolyser system using particle swarm optimisation
NASA Astrophysics Data System (ADS)
Sayedin, Farid; Maroufmashat, Azadeh; Roshandel, Ramin; Khavas, Sourena Sattari
2016-07-01
In this study, hydrogen generation is maximised by optimising the size and the operating conditions of an electrolyser (EL) directly connected to a photovoltaic (PV) module at different irradiances. Due to the variations of the maximum power points of the PV module during a year and the complexity of the system, a nonlinear approach is considered. A mathematical model has been developed to determine the performance of the PV/EL system. The optimisation methodology presented here is based on the particle swarm optimisation algorithm. By this method, for a given number of PV modules, the optimal size and operating conditions of a PV/EL system are achieved. The approach can be applied to different sizes of PV systems, various ambient temperatures and different locations with various climatic conditions. The results show that for the given location and PV system, the energy transfer efficiency of the PV/EL system can reach up to 97.83%.
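The core of direct PV/EL coupling can be illustrated with hypothetical device curves: the operating point is the intersection of the PV current-voltage curve and the electrolyser stack curve, and sizing (here, the number of series cells) moves that intersection towards the PV maximum power point.

```python
# Matching sketch (all device numbers are hypothetical; the paper's PV and
# electrolyser models are far more detailed). A diode-style PV curve is
# intersected with a linear electrolyser stack curve; scanning the number of
# series cells shows how sizing drives the direct-coupled operating point
# towards the PV maximum power point.
import math

ISC, VOC, A = 5.0, 22.0, 1.5           # hypothetical PV module parameters

def i_pv(v):                           # PV current-voltage curve
    return ISC * (1.0 - math.exp((v - VOC) / A))

def i_el(v, n):                        # n series cells: V = n*(1.48 + 0.25*I)
    return max(0.0, (v / n - 1.48) / 0.25)

def operating_point(n):
    lo, hi = 0.0, VOC                  # bisection: i_pv falls, i_el rises with v
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if i_pv(mid) > i_el(mid, n):
            lo = mid
        else:
            hi = mid
    v = 0.5 * (lo + hi)
    return v * i_pv(v)                 # power delivered to the electrolyser

p_mpp = max(v * i_pv(v) for v in [i * 0.001 for i in range(22001)])
best_n = max(range(1, 13), key=operating_point)
print(best_n, operating_point(best_n) / p_mpp)   # transfer efficiency near 1
```

The paper's PSO plays the role of this brute-force scan, over many more design variables and irradiance conditions at once.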
What We Did Last Summer: Depicting DES Data to Enhance Simulation Utility and Use
NASA Technical Reports Server (NTRS)
Elfrey, Priscilla; Conroy, Mike; Lagares, Jose G.; Mann, David; Fahmi, Mona
2009-01-01
At Kennedy Space Center (KSC), an important use of Discrete Event Simulation (DES) addresses ground operations of missions to space. DES allows managers, scientists and engineers to assess the number of missions KSC can complete on a given schedule within different facilities, model the effects of various configurations of resources, and detect possible problems or unwanted situations. For fifteen years, DES has supported KSC efficiency, cost savings and improved safety and performance. The dense and abstract DES data, however, prove difficult to comprehend and, NASA managers realized, are subject to misinterpretation, misunderstanding and even misuse. In summer 2008, KSC developed and implemented a NASA Exploration Systems Mission Directorate (ESMD) project based on the premise that visualization could enhance NASA's understanding and use of DES.
Epidemiology of acute flaccid paralysis (AFP) and performance of the surveillance system in Mauritania from 2008 to 2012
Doumtsop, Jean Gérard Tatou; Khalef, Ishagh; Diakite, Med Lemine Brahim; Boubker, Naouri
2014-01-01
Introduction: The Regional Commission for the Certification of Poliomyelitis Eradication for Africa (CRCA), meeting in Brazzaville from 8 to 10 October 2007, declared Mauritania "free of poliomyelitis". Objective: To describe the epidemiology of AFP (acute flaccid paralysis) and to evaluate the performance indicators of the surveillance system for the 2008-2012 period following this declaration. Methods: Data from the epidemiological surveillance service were cleaned and analysed with Epi-Info version 3.4.3 (CDC Atlanta). Results: 319 AFP cases were notified, a mean incidence of 4.61/100,000 children under 15 years of age per year. The distribution of cumulative cases by month shows heavy notification of AFP cases from February to July and following the 2009 epidemic, whereas the incidence of confirmed cases was highest between November and February. The mean age was 4 years (SD ±4 years) and 77.4% were aged ≤5 years. 18 AFP cases were confirmed as wild poliovirus (WPV), 6 in 2009 and 12 in 2010, all imported; their mean age was 3.4 years (SD ±2.6 years), with 44.4% girls and 55.5% boys, a proportion close to that of non-polio AFP cases (45.1% versus 54.9%). 61% of polio AFP cases had received at most one dose of oral polio vaccine (OPV), against 7.4% of non-polio AFP cases; no gender difference was observed among AFP cases having received at most one dose (sex ratio = 16/17 = 0.94). Fever was present in 90% of non-polio AFP cases against 85% of WPV cases, and progressed within 3 days in all WPV cases and in 82.7% of non-polio AFP cases. The hospitalisation rate was 13.6% for non-polio AFP cases against 89% for polio AFP cases.
In both groups, the limbs most often affected were, first, one of the two lower limbs alternately (46.8%); then both lower limbs at once (22.2%); third, one upper and one lower limb asymmetrically (5.3%); and fourth, all four limbs (4.6%). The final diagnosis of the non-polio AFP cases was not reported during the study period; however, 2 cases were classified as compatible, and one case of vaccine-associated paralytic poliomyelitis was reported in 2012 in a 3-year-old girl who had received her first dose of OPV. Regarding the performance of the surveillance indicators, 4 indicators did not reach the expected targets. The percentage of samples arriving at the national laboratory within 3 days of collection remained below target, varying between 40% and 70%, as did the proportion of health districts notifying at least one case (50 to 68%). The percentage of samples in which a non-polio enterovirus was isolated fluctuated, with values of 0.0% in 2008 and 2010 and 1.9% in 2011. The 60th-day follow-up examination covered only cases whose sample was judged inadequate by the laboratory. Conclusion: The AFP cases notified in Mauritania from 2008 to 2012 present a classical clinical picture and are notified with a regular peak at the start of the lean season (March-July), suggesting an ecological link with changes in climate and diet. The performance of the AFP surveillance indicators needs improvement, notably: 1) understanding of the criteria for judging sample quality; 2) laboratory testing and interpretation of results; 3) determination of the final diagnosis for each non-polio AFP case; 4) recording of information in the database. PMID:25469198
NASA Astrophysics Data System (ADS)
Kamli, Emna
High-frequency radars (HFR) measure surface ocean currents with a range of up to 200 kilometres and a resolution on the order of one kilometre. This study aims to characterise the performance of HFRs, in terms of spatial coverage, for measuring surface currents in the partial presence of sea ice. To do so, current measurements from two CODAR-type radars on the south shore of the lower St. Lawrence estuary, and from a WERA-type radar on the north shore, taken during the winter of 2013, were used. First, the mean daily area of the zone where currents are measured by each radar was compared with the energy of the Bragg waves computed from the raw acceleration data provided by a buoy moored in the area covered by the radars. The CODAR coverage depends on the Bragg energy density, whereas the WERA coverage is practically insensitive to it. A fetch model called GENER was forced by the wind speed predicted by Environment Canada's GEM model to estimate the significant wave height and the modal period of the waves. From these parameters, the Bragg wave energy density was evaluated during the winter using the theoretical Bretschneider spectrum. These results establish the normal coverage of each radar in the absence of sea ice. The sea-ice concentration, predicted by the Canadian operational ice-ocean forecasting system, was averaged over the different wind fetches according to the mean daily direction of the waves predicted by GENER. Second, the relationship between the ratio of the daily coverages observed during the winter of 2013 to the normal coverages of each radar, on the one hand, and the mean daily sea-ice concentration, on the other, was established.
The coverage ratio decreases with increasing sea-ice concentration for both radar types, but at an ice concentration of 20% the WERA coverage is reduced by 34% whereas the CODAR coverage is reduced by 67%. The empirical relationships established between HFR coverage and environmental parameters (wind and sea ice) will make it possible to predict the coverage that HFRs installed in other regions subject to the seasonal presence of sea ice could provide.
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modelling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially during El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time.
These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
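The receding-horizon procedure (i)-(v) can be skeletonised as follows, with a toy deterministic reservoir and a grid search standing in for the ENSO-conditioned scenario generator and the MOGA; all numbers are invented.

```python
# Skeleton of the receding-horizon procedure (i)-(v): optimise releases over
# a 9-step horizon, implement only the first one, update the state, roll
# forward. A toy deterministic reservoir and a grid search stand in for the
# paper's scenario generator and MOGA.

CAP, DEMAND = 100.0, 8.0
R_MIN, R_MAX = 0.0, 20.0

def simulate(storage, releases, inflows):
    """Deficit-squared cost plus a spill penalty over the horizon."""
    cost = 0.0
    for r, q in zip(releases, inflows):
        storage = storage + q - r
        spill = max(0.0, storage - CAP)
        storage = min(max(storage, 0.0), CAP)
        cost += max(0.0, DEMAND - r) ** 2 + 5.0 * spill
    return cost

def best_first_release(storage, forecast):
    """Grid search over constant release policies (stand-in for the MOGA)."""
    grid = [R_MIN + i * 0.5 for i in range(int((R_MAX - R_MIN) / 0.5) + 1)]
    return min(grid, key=lambda r: simulate(storage, [r] * len(forecast), forecast))

storage, releases = 50.0, []
inflows = [6.0, 4.0, 12.0, 15.0, 9.0, 3.0, 2.0, 5.0, 7.0, 11.0, 13.0, 6.0]
for t in range(len(inflows) - 9):
    forecast = inflows[t:t + 9]            # stands in for ENSO-driven scenarios
    r = best_first_release(storage, forecast)
    releases.append(r)                      # implement only the first release
    storage = min(max(storage + inflows[t] - r, 0.0), CAP)

print(releases, round(storage, 1))
```

Step (iii) of the paper additionally appends the base policy on a synthetic tail scenario so that long-term costs are felt inside each short-horizon optimisation; the toy omits that tail.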
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed for the 99Mo/99mTc and 68Ge/68Ga radionuclide generators as case studies. Optimisation methods for the daughter nuclide build-up versus stand-by time and/or specific activity, using mean progress functions, were developed to increase the performance of radionuclide generators. As a result of this optimisation, the separation of the daughter nuclide from its parent should be performed at a defined optimal time to avoid deterioration in the specific activity of the daughter nuclide and wasted stand-by time of the generator, while the daughter nuclide yield is maintained at a reasonably high level. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and effectively used in the practice of generator production and utilisation. A method of "early elution schedule" was also developed to increase the daughter nuclide production yield and specific radioactivity, thus saving the cost of the generator and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, combined with a recently developed integrated elution-purification-concentration system for radionuclide generators, are the most suitable way to operate the generator effectively, on the basis of economic use and of a quality and specific activity suited to the intended purpose of the produced daughter radionuclides. All these features benefit the economic use of the generator, the quality of labelling/scans, and the cost of nuclear medicine procedures.
In addition, a new method of quality-control protocol set-up for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit on impure radionuclide activity, and its measurement certainty, with respect to optimising the decay/measurement time and the product sample activity used for quality control. The optimisation ensures certainty in the measurement of the specific impure radionuclide and avoids wasting useful amounts of the valuable purified/concentrated daughter nuclide product. This process is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
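The build-up/decay trade-off behind the optimal separation time can be made concrete with the Bateman equation for a 99Mo/99mTc generator. The half-lives and the 87.5% branching of 99Mo decays to 99mTc are standard values; the optimal time computed here is the textbook activity maximum, not the authors' mean-progress-function criterion.

```python
# Parent/daughter build-up sketch for a 99Mo/99mTc generator (standard
# half-lives; 0.875 branching of 99Mo decays to 99mTc). The optimal elution
# time maximises daughter activity: t* = ln(ld / lp) / (ld - lp).
import math

T_HALF_MO, T_HALF_TC, BRANCH = 66.0, 6.01, 0.875     # hours, fraction
lp, ld = math.log(2) / T_HALF_MO, math.log(2) / T_HALF_TC

def tc_activity(t, a_mo0=1.0):
    """99mTc activity at time t after a complete elution (Bateman equation)."""
    return BRANCH * a_mo0 * ld / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

t_star = math.log(ld / lp) / (ld - lp)
print(round(t_star, 1))          # ~22.9 h between elutions for peak yield
```

Eluting earlier than t* trades some yield for fresher (higher specific activity) 99mTc, which is the tension the abstract's "early elution schedule" exploits.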
The development of response surface pathway design to reduce animal numbers in toxicity studies
2014-01-01
Background This study describes the development of Response Surface Pathway (RSP) design, assesses its performance and effectiveness in estimating LD50, and compares RSP with Up and Down Procedures (UDPs) and Random Walk (RW) design. Methods A basic 4-level RSP design was used on 36 male ICR mice given intraperitoneal doses of Yessotoxin. Simulations were performed to optimise the design. A k-adjustment factor was introduced to ensure coverage of the dose window and calculate the dose steps. Instead of using equal numbers of mice on all levels, the number of mice was increased at each design level. Additionally, the binomial outcome variable was changed to multinomial. The performance of the RSP designs and a comparison of UDPs and RW were assessed by simulations. The optimised 4-level RSP design was used on 24 female NMRI mice given Azaspiracid-1 intraperitoneally. Results The in vivo experiment with basic 4-level RSP design estimated the LD50 of Yessotoxin to be 463 μg/kgBW (95% CI: 383–535). By inclusion of the k-adjustment factor with equal or increasing numbers of mice on increasing dose levels, the estimate changed to 481 μg/kgBW (95% CI: 362–566) and 447 μg/kgBW (95% CI: 378–504 μg/kgBW), respectively. The optimised 4-level RSP estimated the LD50 to be 473 μg/kgBW (95% CI: 442–517). A similar increase in power was demonstrated using the optimised RSP design on real Azaspiracid-1 data. The simulations showed that the inclusion of the k-adjustment factor, reduction in sample size by increasing the number of mice on higher design levels and incorporation of a multinomial outcome gave estimates of the LD50 that were as good as those with the basic RSP design. Furthermore, optimised RSP design performed on just three levels reduced the number of animals from 36 to 15 without loss of information, when compared with the 4-level designs. Simulated comparison of the RSP design with UDPs and RW design demonstrated the superiority of RSP. 
Conclusion Optimised RSP design reduces the number of animals needed. The design converges rapidly on the area of interest and is at least as efficient as both the UDPs and RW design. PMID:24661560
An Optimisation Procedure for the Conceptual Analysis of Different Aerodynamic Configurations
2000-06-01
G. Lombardi, G. Mengali, Department of Aerospace Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy; F. Beux, Scuola Normale Superiore ... obtain engines, gears and various systems; their weights and centre-of-gravity positions ... configurations with improved performances with respect to a ... design parameters have been arranged for cruise: payload, velocity, range, cruise height, engine ... The optimisation process includes the following steps:
Factors Influencing Manual Performance in Cold Water Diving
2008-04-01
... show that "numbness" had no effect on dexterity at a depth of 40 m (p > 0.05). In water at 25 °C, gloves made of ... tactile sensitivity and dexterity itself. Performance (p) was assessed at depths of 0.4 m and 40 m, with ... communication and display equipment means that divers' ability to use complex control systems is an important factor in ...
NASA Astrophysics Data System (ADS)
Sciambi, A.; Pelliccione, M.; Bank, S. R.; Gossard, A. C.; Goldhaber-Gordon, D.
2010-09-01
We propose a probe technique capable of performing local low-temperature spectroscopy on a two-dimensional electron system (2DES) in a semiconductor heterostructure. Motivated by predicted spatially-structured electron phases, the probe uses a charged metal tip to induce electrons to tunnel locally, directly below the tip, from a "probe" 2DES to a "subject" 2DES of interest. We test this concept with large-area (nonscanning) tunneling measurements, and predict a high spatial resolution and spectroscopic capability, with minimal influence on the physics in the subject 2DES.
Adaptability in Coalition Teamwork (Faculte d’adaptation au travail d’ equipe en coalition)
2008-04-01
... and tools are needed for the rapid development of effective multicultural teams to ensure mission success, these teams being ... The main findings of the 30 theoretical and research presentations were as follows: • training tools (games, simulations ... among military personnel; • feedback on the morale and performance of teams in operations is an instrument that is particularly ...
Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo
2013-07-01
The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
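At base case, the two-year comparison described above reduces to cost-minimization arithmetic, since effectiveness is assumed equivalent. A minimal Python sketch using the rounded per-person averages quoted in the abstract (the function name is ours, not the study's):

```python
# Cost-minimization sketch: with equivalent effectiveness assumed, the
# decision reduces to the incremental cost of one strategy over the other.
# Inputs are the rounded two-year average costs quoted above (2011 KRW).
COST_DES = 11_065_528  # won/person over two years with drug-eluting stents
COST_BMS = 9_647_647   # won/person over two years with bare-metal stents

def incremental_cost(cost_a: int, cost_b: int) -> int:
    """Extra cost per person of strategy A relative to strategy B."""
    return cost_a - cost_b

extra = incremental_cost(COST_DES, COST_BMS)
print(f"DES exceeds BMS by {extra:,} won/person over two years")
```

(The difference of the rounded averages is 1 417 881 won; the abstract's 1 417 882 reflects rounding before subtraction.)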
Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J
2017-01-01
Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke footwear which is personalised. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research, performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). ANOVA analysis was then used to understand the effect of two design features and descriptive statistics used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced ( p < 0.001). The group-optimised design incorporated an apex at 52% of shoe length, a 20° rocker angle and an apex angle of 95°. With this design 71-81% of peak pressures were below the 200 kPa threshold, both in the full cohort of individuals with diabetes and also in the neuropathic subgroup. Importantly, only small increases (<5%) in this proportion were observed when participants wore footwear which was individually selected. 
In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient by patient basis in a low risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure reducing footwear.
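The group-optimised versus individually-selected comparison above can be sketched as a selection over a participants-by-designs matrix of peak pressures. A hedged Python illustration: the 200 kPa threshold is the study's target, but the data and function names are invented here:

```python
# Compare one "group-optimised" design for everyone against the best
# design per participant, scoring by peak pressures under 200 kPa.
from typing import List

THRESHOLD_KPA = 200.0

def group_optimised_index(pressures: List[List[float]]) -> int:
    """Index of the design keeping the most participants under threshold."""
    n_designs = len(pressures[0])
    scores = [sum(1 for row in pressures if row[d] < THRESHOLD_KPA)
              for d in range(n_designs)]
    return scores.index(max(scores))

def per_participant_success(pressures: List[List[float]]) -> int:
    """Participants under threshold when each wears their own best design."""
    return sum(1 for row in pressures if min(row) < THRESHOLD_KPA)

# rows: participants; columns: designs (illustrative peak pressures, kPa)
data = [[210.0, 195.0, 188.0],
        [205.0, 199.0, 202.0],
        [190.0, 185.0, 215.0]]
best = group_optimised_index(data)
print(best, per_participant_success(data))
```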
Analysis and optimisation of a mixed fluid cascade (MFC) process
NASA Astrophysics Data System (ADS)
Ding, He; Sun, Heng; Sun, Shoujun; Chen, Cheng
2017-04-01
A mixed fluid cascade (MFC) process comprising three refrigeration cycles has great capacity for large-scale LNG production, which consumes a great amount of energy. Any performance enhancement of the liquefaction process will therefore significantly reduce energy consumption. The MFC process is simulated and analysed using the proprietary software Aspen HYSYS. The effects of feed gas pressure, LNG storage pressure, water-cooler outlet temperature, different pre-cooling regimes, and liquefaction and sub-cooling refrigerant composition on MFC performance are investigated and presented. The excellent numerical calculation ability and user-friendly interface of MATLAB™ are combined with the powerful thermo-physical property package of Aspen HYSYS, and a genetic algorithm is then invoked to optimise the MFC process globally. After optimisation, the unit power consumption can be reduced to 4.655 kW h/kmol or 4.366 kW h/kmol when the compressor adiabatic efficiency is 80% or 85%, respectively. Additionally, to improve the process further with regard to its thermodynamic efficiency, configuration optimisation is conducted for the MFC process and several configurations are established. By analysing heat transfer and thermodynamic performance, the configuration entailing a pre-cooling cycle with three pressure levels and liquefaction and sub-cooling cycles with one pressure level is identified as the most efficient and thus optimal: its unit power consumption is 4.205 kW h/kmol. The mechanism responsible for the weak performance of the suggested liquefaction cycle configuration lies in the unbalanced distribution of cold energy over the liquefaction temperature range.
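The global optimisation step described above can be sketched as a plain genetic algorithm minimising a black-box objective. In the study the objective is the flowsheet's unit power consumption evaluated by Aspen HYSYS; here a simple quadratic stands in, and all parameters are illustrative:

```python
# Minimal elitist genetic algorithm: selection, averaging crossover,
# Gaussian mutation. The quadratic objective is a hypothetical stand-in
# for a process-simulator evaluation.
import random

random.seed(0)

def objective(x):
    # stand-in for the simulator; minimum at x = (0.3, 0.3, 0.3, 0.3)
    return sum((xi - 0.3) ** 2 for xi in x)

def ga_minimise(dim=4, pop_size=30, generations=60):
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 + random.gauss(0, 0.05)  # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = ga_minimise()
print(objective(best))  # small residual near the optimum
```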
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
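A toy version of the hierarchical idea: exhaustively optimise small groups of variables with the rest frozen, then repeat with larger groups. The Ising-ring cost below is our illustration, not one of the paper's benchmarks:

```python
# Hierarchical group optimisation on a ferromagnetic Ising ring:
# cost = -sum_{edges} J * s_a * s_b, minimised at all spins aligned.
import itertools
import random

random.seed(1)

def cost(spins, couplings):
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

def optimise_group(spins, group, couplings):
    """Exhaustively optimise one small group with the rest held fixed."""
    best_assign, best_cost = None, float("inf")
    for assign in itertools.product([-1, 1], repeat=len(group)):
        trial = list(spins)
        for i, s in zip(group, assign):
            trial[i] = s
        c = cost(trial, couplings)
        if c < best_cost:
            best_assign, best_cost = assign, c
    for i, s in zip(group, best_assign):
        spins[i] = s

n = 8
couplings = {(i, (i + 1) % n): 1.0 for i in range(n)}   # ring of 8 spins
spins = [random.choice([-1, 1]) for _ in range(n)]
for size in (2, 4):                     # hierarchy: small groups, then larger
    for start in range(0, n, size):
        optimise_group(spins, list(range(start, start + size)), couplings)
print(cost(spins, couplings))
```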
NASA Astrophysics Data System (ADS)
Azadeh, A.; Foroozan, H.; Ashjari, B.; Motevali Haghighi, S.; Yazdanparast, R.; Saberi, M.; Torki Nejad, M.
2017-10-01
ISs and ITs play a critical role in large, complex gas corporations. Many factors, such as human, organisational and environmental factors, affect an IS in an organisation, so investigating IS success is a complex problem. Moreover, because of the competitive business environment and the high volume of information flow in organisations, new issues such as resilient ISs and successful customer relationship management (CRM) have emerged. A resilient IS provides sustainable delivery of information to internal and external customers. This paper presents an integrated approach to enhance and optimise the performance of each component of a large IS based on CRM and resilience engineering (RE) in a gas company. This performance enhancement can help ISs perform business tasks efficiently. The data are collected from standard questionnaires and then analysed by data envelopment analysis, with the optimal mathematical programming approach selected. The selected model is validated and verified by the principal component analysis method. Finally, CRM and RE factors are identified as influential factors through sensitivity analysis for this particular case study. To the best of our knowledge, this is the first study of performance assessment and optimisation of a large IS combining RE and CRM.
Analysis of phospholipid fatty acids (PLFA) to characterize microbial communities in aquifers
NASA Astrophysics Data System (ADS)
Green, Christopher T.; Scow, Kate M.
This paper reviews published applications of lipid-based biochemical techniques for characterizing microbial communities in aquifers and other deep subsurface habitats. These techniques, such as phospholipid fatty acid (PLFA) analysis, can provide information on a variety of microbial characteristics, such as biomass, physiology, taxonomic and functional identity, and overall community composition. In addition, multivariate statistical analysis of lipid data can relate spatial or temporal changes in microbial communities to environmental factors. The use of lipid-based techniques in the study of groundwater microbiology is advantageous because they do not require culturing and can provide quantitative data on entire communities. However, combined effects of physiological and phylogenetic changes on the lipid composition of a community can confound interpretation of the data, and many questions remain about the validity of various lipid techniques. Despite these caveats, lipid-based research has begun to show trends in community composition in contaminated and pristine aquifers that contribute to our understanding of groundwater microbial ecology and have potential for use in optimization of bioremediation of groundwater pollutants.
NASA Astrophysics Data System (ADS)
McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; O'Sullivan, Créidhe; Gradziel, Marcin; Doherty, Stephen; Huggard, Peter G.; Polegro, Arturo; van der Vorst, Maarten
2016-05-01
In order to investigate the origins of the Universe, it is necessary to carry out full-sky surveys of the temperature and polarisation of the Cosmic Microwave Background (CMB) radiation, the remnant of the Big Bang. Missions such as COBE and Planck have previously mapped the CMB temperature; however, in order to further constrain evolutionary and inflationary models, it is necessary to measure the polarisation of the CMB with greater accuracy and sensitivity than before. Missions undertaking such observations require large arrays of feed horn antennas to feed the detector arrays. Corrugated horns provide the best performance; however, owing to the large number required (circa 5000 in the case of the proposed COrE+ mission), such horns are prohibitive in terms of thermal, mechanical and cost limitations. In this paper we consider the optimisation of an alternative smooth-walled piecewise-conical profiled horn, using the mode-matching technique alongside a genetic algorithm. The technique is optimised to return a suitable design using efficient modelling software and standard desktop computing power. A design is presented showing a directional beam pattern and low levels of return loss, cross-polar power and sidelobes, as required by future CMB missions. This design is manufactured and the measured results are compared with simulation, showing excellent agreement and meeting the required performance criteria. The optimisation process described here is robust and can be applied to many other applications where specific performance characteristics are required, with the user simply defining the beam requirements.
Optimisation techniques in vaginal cuff brachytherapy.
Tuncel, N; Garipagaoglu, M; Kizildag, A U; Andic, F; Toy, A
2009-11-01
The aim of this study was to explore whether an in-house dosimetry protocol and optimisation method can produce a homogeneous dose distribution in the target volume, and how often optimisation is required in vaginal cuff brachytherapy. Treatment planning was carried out for 109 fractions in 33 patients who underwent high-dose-rate iridium-192 (Ir-192) brachytherapy using Fletcher ovoids. Dose prescription and normalisation were performed to catheter-oriented lateral dose points (dps) within a range of 90-110% of the prescribed dose. The in-house vaginal apex point (Vk), alternative vaginal apex point (Vk'), International Commission on Radiation Units and Measurements (ICRU) rectal point (Rg) and bladder point (Bl) doses were calculated. Time-position optimisations were made considering dps, Vk and Rg doses. The intention was to keep the Vk dose above 95% and the Rg dose below 85% of the prescribed dose. Target dose homogeneity, optimisation frequency and the relationship between prescribed dose, Vk, Vk', Rg and ovoid diameter were investigated. The mean target dose was 99+/-7.4% of the prescription dose. Optimisation was required in 92 out of 109 (83%) fractions. Ovoid diameter had a significant effect on Rg (p = 0.002), Vk (p = 0.018), Vk' (p = 0.034), minimum dps (p = 0.021) and maximum dps (p < 0.001). Rg, Vk and Vk' doses with 2.5 cm diameter ovoids were significantly higher than with 2 cm and 1.5 cm ovoids. Catheter-oriented dose point normalisation provided a homogeneous dose distribution, with a 99+/-7.4% mean dose within the target volume, requiring time-position optimisation.
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. First, the interval-valued fuzzy preferences are decomposed into a series of precise and evenly distributed preference vectors (reference directions) over the objectives to be optimised, on the basis of a uniform design strategy. The preference information is then further incorporated into the preference vectors using the boundary intersection approach, and the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems, each corresponding to a decomposed preference vector. Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
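The decomposition step can be sketched for two objectives: an interval-valued weight preference becomes a series of evenly spaced weight vectors, each defining one scalarised sub-problem. A weighted Tchebycheff scalarisation is shown as it is the common MOEA/D choice; the paper's exact scalarisation and function names may differ:

```python
# Decompose a preference interval on the first objective's weight into
# evenly distributed two-objective weight vectors (w1, 1 - w1).
def decompose_interval(lo: float, hi: float, n_vectors: int):
    """Evenly spaced weight vectors with w1 spanning [lo, hi]."""
    step = (hi - lo) / (n_vectors - 1)
    return [(lo + i * step, 1.0 - (lo + i * step)) for i in range(n_vectors)]

def tchebycheff(f, weights, ideal):
    """Weighted Tchebycheff scalarisation, one sub-problem per weight vector."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

vectors = decompose_interval(0.2, 0.6, 5)
print(vectors[0], vectors[-1])   # first and last vectors span the interval
print(tchebycheff((1.0, 2.0), vectors[0], (0.0, 0.0)))
```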
Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA
NASA Astrophysics Data System (ADS)
Chandra, Abhijit; Chattopadhyay, Sudipta
2015-01-01
In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.
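The signed powers-of-two encoding at the heart of multiplier-less design can be illustrated with a greedy encoder: each coefficient becomes a short sum of ±2^(−k) terms, so multiplication reduces to shifts and adds, and hardware cost can be counted from the number of terms. This greedy sketch is ours, not the paper's genetic encoding:

```python
# Greedy signed powers-of-two (SPT) approximation of filter coefficients,
# plus a simple adder-count style hardware cost.
def to_spt(value: float, wordlength: int = 8, max_terms: int = 3):
    """Approximate `value` by at most `max_terms` terms sign * 2**-k."""
    terms, residual = [], value
    for _ in range(max_terms):
        if residual == 0:
            break
        k = min(range(wordlength + 1),
                key=lambda k: abs(abs(residual) - 2.0 ** -k))
        sign = 1 if residual > 0 else -1
        terms.append((sign, k))
        residual -= sign * 2.0 ** -k
    return terms

def spt_value(terms):
    return sum(sign * 2.0 ** -k for sign, k in terms)

def hardware_cost(all_terms):
    """Adders needed: one fewer than the nonzero terms of each coefficient."""
    return sum(max(len(t) - 1, 0) for t in all_terms)

coeffs = [0.40625, -0.09375]            # illustrative coefficient values
encoded = [to_spt(c) for c in coeffs]
print(encoded, [spt_value(t) for t in encoded], hardware_cost(encoded))
```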
A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges
Asgari, B.; Osman, S. A.; Adnan, A.
2014-01-01
Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
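The pretensioning problem behind both the unit load method and the proposed approach can be illustrated with a tiny influence-matrix model: each unit cable tension produces a known deck-deflection pattern, and the pretensions are solved so that the dead-load deflection is cancelled. All numbers below are hypothetical:

```python
# Two cables, two deck control points: solve influence @ tensions = -dead_load
# so the combined deflection returns the deck to its target profile.
def solve_2x2(a, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return x0, x1

# influence[i][j]: deflection at control point i per unit tension in cable j
influence = [[-2.0, -0.5],
             [-0.5, -1.5]]
dead_load_deflection = [30.0, 20.0]      # downward deflections to cancel (mm)

tensions = solve_2x2(influence, [-d for d in dead_load_deflection])
residual = [sum(influence[i][j] * tensions[j] for j in range(2))
            + dead_load_deflection[i] for i in range(2)]
print(tensions, residual)   # residual near zero: deflection cancelled
```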
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
General strategic bidding has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and maximise the profit. This is computationally complex, and researchers have therefore adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex with the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using a group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14-bus as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
NASA Astrophysics Data System (ADS)
Mebarki, Fouzia
This study investigates the possibility of using thermoplastic-matrix composite materials for electrical applications such as ignition-system supports in automotive engines. We are particularly interested in composites based on recycled polyethylene terephthalate (PET). Conventional insulators such as PET cannot satisfy all the requirements. The introduction of reinforcements such as glass fibres and mica can improve the mechanical characteristics of these materials. However, this improvement may be accompanied by a reduction in electrical properties, especially as these materials must operate under very severe thermal and electrical stresses. In order to estimate the service life of these insulators, accelerated ageing tests were carried out at a frequency of 300 Hz over a temperature range from room temperature to 140 °C. The high-temperature study will make it possible to determine the service temperature of the candidate materials. Dielectric breakdown tests were carried out on a large number of samples according to ASTM D-149, the standard for measuring the dielectric strength of solid insulators. These tests made it possible to detect problematic samples and to verify the quality of these solid insulators. The knowledge acquired through this analysis served to predict the in-service performance of the materials and will enable the company Groupe Lavergne to improve its existing formulations and subsequently develop a material with electrical and thermal properties suitable for this type of application.
2015-02-01
…personnel" (SET-158 RTG) identify and assess potential person-detection technologies that improve security by … growing … of robust, high-performance remote detection systems to ensure surveillance and acquisition of … low- and high-resolution cameras. In the second phase, approaches to sensor phenomenology and processing of the …
Integrating Occupational Characteristics into Human Performance Models: IPME Versus ISMAT Approach
2009-08-01
… a generic human performance modelling environment called the Integrated Performance Modelling Environment (IPME). This project explored the use of the … occupational groups in human performance models: the IPME approach and the ISMAT approach. By Christy Lorenzen; RDDC RC 2009-059; R & D … a commercially available discrete-event simulation application used to develop models that simulate human performance and …
2002-04-01
… configuration associated with the HSCT program was analyzed in terms of inlet unstart and the effect of the regurgitated shock wave. Inlet start is a … heavily loaded take-off or dog-fight phases of flight. Less critical issues, such as thrust loss during supersonic operations, may also appear. From the …
NASA Astrophysics Data System (ADS)
Zhang, Junshi; Chen, Hualing; Li, Dichen
2018-02-01
Subject to an AC voltage, dielectric elastomers (DEs) undergo nonlinear vibration, implying potential applications as soft dynamical actuators and robots. In this article, a theoretical model is derived using Lagrange's equation to investigate the dynamic performance of DEs, taking into account three internal properties of polymer chains: crosslinks, entanglements, and finite deformations. Numerical calculations are employed to describe the dynamic response, stability, periodicity, and resonance properties of DEs. It is observed that the frequency and nonlinearity of the dynamic response are tuned by the internal properties of DEs. Phase paths and Poincaré maps are used to detect the stability and periodicity of the nonlinear vibrations of DEs, and demonstrate that transitions between aperiodic and quasi-periodic vibrations may occur as the three internal properties vary. The resonance of DEs involving the three internal properties of polymer chains is also investigated.
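The kind of periodicity analysis described (Poincaré sampling of a driven nonlinear oscillator) can be sketched with a Duffing-like stand-in for the DE membrane's equation of motion; all parameters are illustrative, not the article's:

```python
# Integrate x'' + c x' + k x + b x^3 = F cos(w t) with semi-implicit Euler
# and sample a Poincare section once per drive period to probe periodicity.
import math

def simulate(omega=1.2, steps_per_period=2000, periods=200,
             damping=0.2, stiffness=1.0, cubic=1.0, force=0.3):
    dt = (2.0 * math.pi / omega) / steps_per_period
    x, v, t = 0.1, 0.0, 0.0
    samples = []
    for p in range(periods):
        for _ in range(steps_per_period):
            a = (force * math.cos(omega * t)
                 - damping * v - stiffness * x - cubic * x ** 3)
            v += a * dt          # semi-implicit Euler step
            x += v * dt
            t += dt
        if p >= periods // 2:    # discard the transient half of the run
            samples.append((x, v))
    return samples

pts = simulate()
spread = max(max(abs(px - pts[0][0]), abs(pv - pts[0][1])) for px, pv in pts)
print(f"Poincare-section spread: {spread:.2e}")  # tiny spread -> period-1 motion
```

A period-1 vibration collapses to a single Poincaré point; period doubling or aperiodic motion would show up as a widening spread of points.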
Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine
2016-01-01
To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring a percutaneous coronary intervention in France, using a recent meta-analysis including second-generation DES. A cost-effectiveness analysis was performed in the French National Health Insurance setting. Effectiveness estimates were taken from a meta-analysis of 76 randomised trials totalling 117 762 patient-years. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of thresholds for willingness to pay per year without major cardiac event gained. Deterministic and probabilistic sensitivity analyses were performed. Base-case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to a reduction in second revascularisations and the absence of myocardial infarction and stent thrombosis. These results are robust to uncertainty in one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the DES price decrease and the development of new-generation DES, and taking recent meta-analysis results into account, DES can now be considered cost-effective regardless of selective indication in France, according to European recommendations.
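A three-state Markov cohort model of the sort described can be sketched in a few lines: a distribution over states is propagated cycle by cycle while expected costs and event-free years accumulate. The transition probabilities and costs below are invented for illustration, not the study's French-setting inputs:

```python
# Three-state Markov cohort model: 0 = event-free, 1 = post-major-cardiac-
# event, 2 = dead (absorbing). Annual cycles over a 5-year horizon.
def run_markov(transition, costs, start, cycles):
    """Accumulate expected cost and event-free years over `cycles` cycles."""
    state = list(start)                  # cohort distribution over states
    total_cost, event_free_years = 0.0, 0.0
    for _ in range(cycles):
        event_free_years += state[0]
        total_cost += sum(s * c for s, c in zip(state, costs))
        state = [sum(state[i] * transition[i][j] for i in range(len(state)))
                 for j in range(len(state))]
    return total_cost, event_free_years

transition = [[0.92, 0.06, 0.02],       # illustrative annual probabilities
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]]
costs = [500.0, 3000.0, 0.0]            # illustrative cost per state per cycle
cost, efy = run_markov(transition, costs, start=[1.0, 0.0, 0.0], cycles=5)
print(round(cost, 2), round(efy, 3))
```

Running the model once per strategy (DES vs BMS inputs) and differencing the outputs yields the incremental cost and event-free years that feed the ICER.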
Prevalence of dry eye syndrome after a three-year exposure to a clean room.
Cho, Hyun A; Cheon, Jae Jung; Lee, Jong Seok; Kim, Soo Young; Chang, Seong Sil
2014-01-01
To measure the prevalence of dry eye syndrome (DES) among clean room (relative humidity ≤1%) workers from 2011 to 2013. Three annual DES examinations were completed in 352 clean room workers aged 20-40 years who were working at a secondary battery factory. Each examination comprised the tear-film break-up test (TFBUT), Schirmer's test I, slit-lamp microscopic examination, and the McMonnies questionnaire. DES grades were measured using the Delphi approach. The annual examination results were analyzed using a general linear model and post-hoc analysis with repeated-measures ANOVA (Tukey). Multiple logistic regression was performed using the examination results from 2013 (dependent variable) to analyze the effect of years spent working in the clean room (independent variable). The prevalence of DES among these workers was 14.8% in 2011, 27.1% in 2012, and 32.8% in 2013. The TFBUT and McMonnies questionnaire showed that DES grades worsened over time. Multiple logistic regression analysis indicated that the odds ratio for having dry eyes was 1.130 (95% CI 1.012-1.262) according to the findings of the McMonnies questionnaire. This 3-year trend suggests that the increased prevalence of DES was associated with longer working hours. To decrease the prevalence of DES, employees should be assigned reasonable working hours with shift assignments that include appropriate break times. Workers should also wear protective eyewear, subdivide their working process to minimize exposure, and use preservative-free eye drops.
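The quoted odds ratio and confidence interval follow the usual logistic-regression relationships: OR = exp(β) and a Wald 95% CI of exp(β ± 1.96·SE). A sketch with a hypothetical standard error chosen only to roughly reproduce the reported interval, not recomputed from the study's data:

```python
# Odds ratio and Wald confidence interval from a logit coefficient.
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, CI lower, CI upper) for coefficient `beta` with SE `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

beta = math.log(1.130)   # coefficient implied by the reported OR
se = 0.056               # hypothetical standard error, for illustration
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```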
Photometric redshift analysis in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Sánchez, C.; Carrasco Kind, M.; Lin, H.; Miquel, R.; Abdalla, F. B.; Amara, A.; Banerji, M.; Bonnett, C.; Brunner, R.; Capozzi, D.; Carnero, A.; Castander, F. J.; da Costa, L. A. N.; Cunha, C.; Fausti, A.; Gerdes, D.; Greisel, N.; Gschwend, J.; Hartley, W.; Jouvel, S.; Lahav, O.; Lima, M.; Maia, M. A. G.; Martí, P.; Ogando, R. L. C.; Ostrovski, F.; Pellegrini, P.; Rau, M. M.; Sadeh, I.; Seitz, S.; Sevilla-Noarbe, I.; Sypniewski, A.; de Vicente, J.; Abbot, T.; Allam, S. S.; Atlee, D.; Bernstein, G.; Bernstein, J. P.; Buckley-Geer, E.; Burke, D.; Childress, M. J.; Davis, T.; DePoy, D. L.; Dey, A.; Desai, S.; Diehl, H. T.; Doel, P.; Estrada, J.; Evrard, A.; Fernández, E.; Finley, D.; Flaugher, B.; Frieman, J.; Gaztanaga, E.; Glazebrook, K.; Honscheid, K.; Kim, A.; Kuehn, K.; Kuropatkin, N.; Lidman, C.; Makler, M.; Marshall, J. L.; Nichol, R. C.; Roodman, A.; Sánchez, E.; Santiago, B. X.; Sako, M.; Scalzo, R.; Smith, R. C.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Uddin, S. A.; Valdés, F.; Walker, A.; Yuan, F.; Zuntz, J.
2014-12-01
We present results from a study of the photometric redshift performance of the Dark Energy Survey (DES), using the early data from a Science Verification period of observations in late 2012 and early 2013 that provided science-quality images for almost 200 sq. deg. at the nominal depth of the survey. We assess the photometric redshift (photo-z) performance using about 15 000 galaxies with spectroscopic redshifts available from other surveys. These galaxies are used, in different configurations, as a calibration sample, and photo-z's are obtained and studied using most of the existing photo-z codes. A weighting method in a multidimensional colour-magnitude space is applied to the spectroscopic sample in order to evaluate the photo-z performance with sets that mimic the full DES photometric sample, which is on average significantly deeper than the calibration sample due to the limited depth of spectroscopic surveys. Empirical photo-z methods using, for instance, artificial neural networks or random forests, yield the best performance in the tests, achieving core photo-z resolutions σ68 ˜ 0.08. Moreover, the results from most of the codes, including template-fitting methods, comfortably meet the DES requirements on photo-z performance, thereby providing an excellent precedent for future DES data sets.
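The σ68 core-resolution metric quoted above can be computed from the scaled photo-z residuals. A minimal sketch on synthetic data, assuming the common percentile-based definition (the exact DES convention may differ slightly):

```python
import numpy as np

def sigma_68(z_phot, z_spec):
    """Half-width of the central 68% of the scaled residual distribution
    dz = (z_phot - z_spec) / (1 + z_spec), a common photo-z core-resolution
    metric."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    lo, hi = np.percentile(dz, [16.0, 84.0])
    return 0.5 * (hi - lo)

# Synthetic check: Gaussian scatter of 0.08 in dz should give sigma_68 ~ 0.08.
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.2, 1.2, 20000)
z_phot = z_spec + 0.08 * (1.0 + z_spec) * rng.standard_normal(20000)
result = sigma_68(z_phot, z_spec)
```

The (1 + z) scaling makes the metric comparable across redshift, which is why photo-z requirements are usually quoted in these units.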
Fabrication and optimisation of a fused filament 3D-printed microfluidic platform
NASA Astrophysics Data System (ADS)
Tothill, A. M.; Partridge, M.; James, S. W.; Tatam, R. P.
2017-03-01
A 3D-printed microfluidic device was designed and manufactured using a low cost (2000) consumer grade fused deposition modelling (FDM) 3D printer. FDM printers are not typically used for, nor typically capable of, producing the fine detailed structures required for microfluidic fabrication. However, in this work, the optical transparency of the device was improved through manufacturing optimisation to the point that optical colorimetric assays can be performed in a 50 µl device. A colorimetric enzymatic cascade assay was optimised using glucose oxidase and horseradish peroxidase for the oxidative coupling of aminoantipyrine and chromotropic acid to produce a blue quinoneimine dye with a broad absorbance peaking at 590 nm for the quantification of glucose in solution. For comparison, the assay was run in standard 96-well plates with a commercial plate reader. The results show the accurate and reproducible quantification of 0-10 mM glucose solution using a 3D-printed microfluidic optical device with performance comparable to that of a plate reader assay.
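Quantifying glucose from a colorimetric assay of this kind typically reduces to fitting a calibration line of absorbance against known standards and inverting it for unknowns. A hedged sketch with hypothetical calibration readings (not data from the paper):

```python
import numpy as np

# Hypothetical calibration data: absorbance at 590 nm vs glucose (mM).
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
absorb = np.array([0.02, 0.21, 0.40, 0.59, 0.78, 0.97])

# Least-squares line A = m*c + b (Beer-Lambert response assumed linear
# over this range).
m, b = np.polyfit(conc, absorb, 1)

def glucose_from_absorbance(a):
    """Invert the calibration line to estimate concentration (mM)."""
    return (a - b) / m

estimate = glucose_from_absorbance(0.50)
```

In practice replicate standards and a goodness-of-fit check would precede inversion, but the core calculation is this single linear fit.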
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning and residual compensation techniques. The main characteristic of this scheme lies in the parity-space-identification-based simultaneous tracking control and residual compensation. The approach consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation, triggered by the residual generator, to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is further constructed to link these four functional units. Detailed analysis and proofs are subsequently given to explain the guaranteed FTOTC performance of the proposed approach. Finally, a simulated case study is provided to verify its effectiveness.
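For linear discrete-time dynamics, the optimised tracking policy that Q-learning schemes of this kind estimate model-free has a model-based fixed point: the discrete-time algebraic Riccati equation. A sketch of that baseline by value iteration on a hypothetical double-integrator plant (the dynamics and weights are illustrative assumptions, not the paper's example):

```python
import numpy as np

# Discrete-time LQR via Riccati value iteration -- the fixed point that
# model-free Q-learning schemes for linear systems converge towards.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # assumed toy double-integrator dynamics
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                     # state weighting
R = np.array([[1.0]])             # input weighting

P = np.zeros((2, 2))
for _ in range(500):
    # K = (R + B'PB)^{-1} B'PA, then P <- Q + A'P(A - BK)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop A - BK should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The data-driven scheme in the paper avoids needing A and B explicitly; this model-based iteration is only the reference solution such a scheme approximates.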
Detached Eddy Simulation of Flap Side-Edge Flow
NASA Technical Reports Server (NTRS)
Balakrishnan, Shankar K.; Shariff, Karim R.
2016-01-01
Detached Eddy Simulation (DES) of flap side-edge flow was performed with a wing and half-span flap configuration used in previous experimental and numerical studies. The focus of the study is the unsteady flow features responsible for the production of far-field noise. The simulation was performed at a Reynolds number (based on the main wing chord) of 3.7 million. Reynolds-Averaged Navier-Stokes (RANS) simulations were performed as a precursor to the DES, and their results match previous experimental and RANS results closely. Although the present DES has not yet reached statistical stationarity, some unsteady features of the developing flap side-edge flowfield are presented. The final paper is expected to present statistically stationary results, including comparisons of surface pressure spectra with experimental data.
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The article sets out to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach is introduced with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, to detect anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operates over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
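The particle swarm paradigm referenced here can be sketched in a few lines. This is a generic global-best PSO on a toy objective, not the instance-selection variant of the article; the inertia and acceleration constants are common textbook choices:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, seed=1):
    """Minimal global-best particle swarm optimiser: each particle is pulled
    towards its personal best and the swarm's global best, with inertia."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))       # positions
    v = np.zeros((n, dim))                     # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, f(g)

# Sphere function as a stand-in objective; in the article the objective
# would score a candidate training-instance subset instead.
best, val = pso(lambda p: float(np.sum(p ** 2)))
```

For the IDS application, each particle would encode a candidate subset of training instances and `f` would score the resulting decision tree; the swarm dynamics are unchanged.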
Scientific Approach for Optimising Performance, Health and Safety in High-Altitude Observatories
NASA Astrophysics Data System (ADS)
Böcker, Michael; Vogt, Joachim; Nolle-Gösser, Tanja
2008-09-01
The ESO coordinated study “Optimising Performance, Health and Safety in High-Altitude Observatories” is based on a psychological approach using a questionnaire for data collection and assessment of high-altitude effects. During 2007 and 2008, data from 28 staff and visitors involved in APEX and ALMA were collected and analysed, and the first results of the study are summarised here. While there is a great deal of information about biomedical changes at high altitude, relatively few studies have focussed on psychological changes, for example with respect to the performance of mental tasks, safety consciousness and emotions. Both biomedical and psychological changes are relevant factors in occupational safety and health. The results of the questionnaire on safety, health and performance issues demonstrate that the working conditions at high altitude are less detrimental than expected.
Performance benchmark of LHCb code on state-of-the-art x86 architectures
NASA Astrophysics Data System (ADS)
Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.
2015-12-01
For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
Modelling of enterobacterial loads to the Baie des Veys (Normandy, France).
Lafforgue, Michel; Gerard, Laure; Vieillard, Celine; Breton, Marguerite
2018-06-01
The Baie des Veys (Normandy, France) has abundant stocks of shellfish (oyster and cockle farms). Water quality in the bay is affected by pollutant inputs from a 3500 km² watershed, notably occasional episodes of contamination by faecal coliforms. In order to characterise enterobacterial loads and develop a plan of action to improve the quality of seawater and shellfish in the bay, a two-stage modelling procedure was adopted. This focused on Escherichia coli and included a catchment model describing E. coli releases and the transport and die-off of these bacteria up to the coast. The output from this model then served as input for a marine model used to determine the concentration of E. coli in seawater. A total of 60 scenarios were tested, including different wind, tidal, rainfall and temperature conditions and accidental pollution events, for both current situations and future scenarios. The modelling results highlighted the impact of rainfall on E. coli loadings to the sea, as well as the effects of sluice gates and tidal cycles, which dictated the use of an hourly timescale for the modelling process. The coupled models also made it possible to identify the origin of the enterobacteria found in shellfish harvesting areas, both in terms of the contributing watercourses and the sources of contamination of those watercourses. The tool can accordingly be used to optimise remedial action.
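The die-off term in such catchment models is commonly parameterised by T90, the time for a tenfold reduction in viable E. coli. A minimal sketch (the initial load and T90 value below are illustrative assumptions, not calibrated figures from the study):

```python
import numpy as np

def die_off(c0, hours, t90):
    """First-order bacterial die-off expressed via T90, the time for a
    90% reduction: C(t) = C0 * 10**(-t / T90)."""
    return c0 * 10.0 ** (-np.asarray(hours, dtype=float) / t90)

# Hypothetical load of 1e6 E. coli per 100 ml with an assumed T90 of 24 h:
# one decade of concentration is lost per T90 elapsed.
c = die_off(1e6, [0.0, 24.0, 48.0], t90=24.0)
```

In the full model T90 would itself vary with temperature, salinity and light, which is one reason the coupled models need an hourly timescale.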
NASA Astrophysics Data System (ADS)
Munk, David J.; Kipouros, Timoleon; Vio, Gareth A.; Steven, Grant P.; Parks, Geoffrey T.
2017-11-01
Recently, the study of microfluidic devices has gained much interest in various fields from biology to engineering. In the constant development cycle, the need to optimise the topology of the interior of these devices, where there are two or more optimality criteria, is always present. In this work, twin physical situations are considered, whereby optimal fluid mixing in the form of vorticity maximisation is accompanied by the requirement that the casing in which the mixing takes place has the best structural performance in terms of the greatest specific stiffness. In the steady state of mixing this also means that the stresses in the casing are as uniform as possible, thus giving the desired operating life with minimum weight. The ultimate aim of this research is to couple two key disciplines, fluids and structures, into a topology optimisation framework that shows fast convergence for multidisciplinary optimisation problems. This is achieved by developing a bi-directional evolutionary structural optimisation algorithm that is directly coupled to the Lattice Boltzmann method, used for simulating the flow in the microfluidic device, for the objectives of minimum compliance and maximum vorticity. The need to explore larger design spaces and to produce innovative designs makes meta-heuristic algorithms, such as genetic algorithms, particle swarms and Tabu searches, less efficient for this task. The multidisciplinary topology optimisation framework presented in this article is shown to increase the stiffness of the structure from the datum case and produce physically acceptable designs. Furthermore, the topology optimisation method outperforms a Tabu search algorithm in designing the baffle to maximise the mixing of the two fluids.
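With two or more optimality criteria, candidate designs are compared by Pareto dominance: a design is kept only if no other design is at least as good in every objective and strictly better in at least one. A minimal non-dominated filter sketch (the objective pairs are made-up values, with both columns to be minimised):

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points, assuming all objectives are
    minimised. A point is dominated if some other point is <= in every
    objective and < in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy (compliance, negated vorticity) pairs: lower is better in both.
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(objs)
```

Here the third design is dominated by the second, so the front keeps the first, second and fourth designs.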
Changes in the tear film and ocular surface from dry eye syndrome.
Johnson, Michael E; Murphy, Paul J
2004-07-01
Dry eye syndrome (DES) refers to a spectrum of ocular surface diseases with diverse and frequently multiple aetiologies. The common feature of the various manifestations of DES is an abnormal tear film. Tear film abnormalities associated with DES are tear deficiency, owing to insufficient supply or excessive loss, and anomalous tear composition. These categorizations are artificial, as in reality both often coexist. DES disrupts the homeostasis of the tear film with its adjacent structures, and adversely affects its ability to perform essential functions such as supporting the ocular surface epithelium and preventing microbial invasion. In addition, whatever the initial trigger, moderate and severe DES is characterized by ocular surface inflammation, which in turn becomes the cause and consequence of cell damage, creating a self-perpetuating cycle of deterioration. Progress has been made in our understanding of the aetiology and pathogenesis of DES, and these advances have encouraged a proliferation of therapeutic options. This article aims to amalgamate prevailing ideas of DES development, and to assist in that, relevant aspects of the structure, function, and production of the tear film are reviewed. Additionally, a synopsis of therapeutic strategies for DES is presented, detailing treatments currently available, and those in development.
ERIC Educational Resources Information Center
Montane, Angelica; Chesterfield, Ray
2005-01-01
This document summarizes the results obtained by the AprenDes project in 2004, the project's first year of implementation. It provides the principal findings on program performance from a baseline in May 2004 to the end of the school year (late October 2004). Progress on a number of project objectives related to decentralized school- and…
Singh, Narendra P; Singh, Udai P; Nagarkatti, Prakash S; Nagarkatti, Mitzi
2012-11-01
Prenatal exposure to diethylstilbestrol (DES) is known to cause altered immune functions and increased susceptibility to autoimmune disease in humans. In the current study, we investigated the effect of prenatal exposure to DES on thymocyte differentiation involving apoptotic pathways. Prenatal DES exposure caused thymic atrophy, apoptosis, and up-regulation of Fas and Fas ligand (FasL) expression in thymocytes. To examine the mechanism underlying DES-mediated regulation of Fas and FasL, we performed luciferase assays using T cells transfected with luciferase reporter constructs containing full-length Fas or FasL promoters. There was significant luciferase induction in the presence of Fas or FasL promoters after DES exposure. Further analysis demonstrated the presence of several cis-regulatory motifs on both Fas and FasL promoters. When DES-induced transcription factors were analyzed, estrogen receptor element (ERE), nuclear factor κB (NF-κB), nuclear factor of activated T cells (NF-AT), and activator protein-1 motifs on the Fas promoter, as well as ERE, NF-κB, and NF-AT motifs on the FasL promoter, showed binding affinity with the transcription factors. Electrophoretic mobility-shift assays were performed to verify the binding affinity of cis-regulatory motifs of Fas or FasL promoters with transcription factors. There was a shift in the mobility of probes (ERE or NF-κB2) of both Fas and FasL in the presence of nuclear proteins from DES-treated cells, and the shift was specific to DES because these probes failed to shift their mobility in the presence of nuclear proteins from vehicle-treated cells. Together, the current study demonstrates that prenatal exposure to DES triggers significant alterations in apoptotic molecules expressed on thymocytes, which may affect T-cell differentiation and cause long-term effects on the immune functions.
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Heck, V.
2014-09-01
Groundwater systems in arid coastal regions are particularly at risk due to limited potential for groundwater replenishment and increasing water demand caused by a continuously growing population. To ensure the sustainable management of such regions, we developed a new simulation-based integrated water management system. The management system unites process modelling with artificial intelligence tools and evolutionary optimisation techniques for managing both the water quality and the water quantity of a strongly coupled groundwater-agriculture system. Due to the large number of decision variables, a decomposition approach is applied to separate the original large optimisation problem into smaller, independent optimisation problems which finally allow for faster and more reliable solutions. It consists of an analytical inner optimisation loop to achieve the most profitable agricultural production for a given amount of water, and an outer simulation-based optimisation loop to find the optimal groundwater abstraction pattern. The behaviour of farms is described by crop-water production functions, and the aquifer response, including the seawater interface, is simulated by an artificial neural network. The methodology is applied exemplarily to the south Batinah region, Oman, which is affected by saltwater intrusion into a coastal aquifer system due to excessive groundwater withdrawal for irrigated agriculture. Due to contradicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-objective optimisation is performed which can provide sustainable solutions for water and agricultural management over long-term periods at farm and regional scales with respect to water resources, the environment, and socio-economic development.
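The analytical inner loop described above maximises farm profit for a fixed water quota. Under concave crop-water production functions, allocating each marginal unit of water to the crop with the highest marginal return is optimal. A sketch with hypothetical profit functions (the crop names and coefficients are invented for illustration, not the study's calibrated functions):

```python
import numpy as np

# Hypothetical concave crop-water production (profit) functions:
# profit as a function of applied water w, with diminishing returns.
crops = {
    "crop_a": lambda w: 10.0 * np.sqrt(w),
    "crop_b": lambda w: 6.0 * np.sqrt(w),
}

def allocate(total, step=1.0):
    """Greedy allocation: give each unit of water to the crop with the
    highest marginal profit. Optimal here because profits are concave."""
    alloc = {c: 0.0 for c in crops}
    for _ in range(int(total / step)):
        gains = {c: f(alloc[c] + step) - f(alloc[c]) for c, f in crops.items()}
        best = max(gains, key=gains.get)
        alloc[best] += step
    return alloc

a = allocate(100.0)
```

Equating marginal products predicts roughly a 25:9 split in favour of the higher-value crop, which the greedy loop approaches up to the step size.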
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
The bogie suspension system of high-speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, a robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, and yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and that the probability of failure is small for parameter uncertainties with COVs up to 0.1.
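Perturbing a design parameter around its nominal value with a lognormal distribution of prescribed COV requires converting (mean, COV) into the (μ, σ) of the underlying normal. A sketch (the nominal stiffness value is an illustrative assumption, not one of the study's Pareto optima):

```python
import numpy as np

def lognormal_params(mean, cov):
    """(mu, sigma) of the underlying normal for a lognormal variable with
    the given mean and coefficient of variation (std/mean)."""
    sigma2 = np.log(1.0 + cov ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

# Perturb a hypothetical nominal stiffness of 1.2e6 N/m with COV = 0.1.
mu, sigma = lognormal_params(1.2e6, 0.1)
rng = np.random.default_rng(42)
samples = rng.lognormal(mu, sigma, 100_000)
mean_ratio = samples.mean() / 1.2e6        # should be ~1.0
cov_est = samples.std() / samples.mean()   # should be ~0.1
```

The -σ²/2 correction in μ is what keeps the sampled mean at the nominal value; omitting it biases every sampled parameter upward.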
Varela, P; Belo, J H; Quental, P B
2016-11-01
The design of the in-vessel antennas for the ITER plasma position reflectometry diagnostic is very challenging due to the need to cope both with the space restrictions inside the vacuum vessel and with the high mechanical and thermal loads during ITER operation. Here, we present the work carried out to assess and optimise the design of the antenna. We show that the blanket modules surrounding the antenna strongly modify its characteristics and need to be considered from the early phases of the design. We also show that it is possible to optimise the antenna performance, within the design restrictions.
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Field testing is currently unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation of protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraints, the task interrelations, and the processor and bus sub-models from the upper and lower layers (application, data link and physical). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
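The CAN case study rests on fixed-priority bus arbitration: whenever the bus goes idle, the pending frame with the lowest identifier wins. A simplified non-preemptive scheduling sketch (the frame IDs, release times and unit transmission time are illustrative, not the DSPN model itself):

```python
def can_schedule(messages, bus_time=1.0):
    """Fixed-priority non-preemptive scheduling sketch: at each bus-idle
    instant, the queued frame with the lowest identifier (highest CAN
    priority) wins arbitration. messages = [(identifier, release_time), ...].
    Returns {identifier: completion_time}."""
    pending = sorted(messages, key=lambda m: m[1])
    t, finish = 0.0, {}
    while pending:
        # Frames already released; if none, jump to the earliest release.
        ready = [m for m in pending if m[1] <= t] or [min(pending, key=lambda m: m[1])]
        winner = min(ready, key=lambda m: m[0])   # lowest ID wins arbitration
        t = max(t, winner[1]) + bus_time
        finish[winner[0]] = t
        pending.remove(winner)
    return finish

# Three frames released together: ID 0x10 beats 0x20 beats 0x30.
finish = can_schedule([(0x30, 0.0), (0x10, 0.0), (0x20, 0.0)])
```

A DSPN captures the same behaviour with timed and immediate transitions, plus the stochastic release patterns that a closed-form sketch like this omits.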
Hierarchical Discrete Event Supervisory Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Yasar, Murat; Tolani, Devendra; Ray, Asok; Shah, Neerav; Litt, Jonathan S.
2004-01-01
This paper presents a hierarchical application of Discrete Event Supervisory (DES) control theory for intelligent decision and control of a twin-engine aircraft propulsion system. A dual layer hierarchical DES controller is designed to supervise and coordinate the operation of two engines of the propulsion system. The two engines are individually controlled to achieve enhanced performance and reliability, necessary for fulfilling the mission objectives. Each engine is operated under a continuously varying control system that maintains the specified performance and a local discrete-event supervisor for condition monitoring and life extending control. A global upper level DES controller is designed for load balancing and overall health management of the propulsion system.
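The core mechanism of DES supervisory control can be sketched as a state-feedback map that disables controllable events leading to forbidden states, while never disabling uncontrollable ones. A toy example (the states and events are invented for illustration, not the engine model of the paper):

```python
# Minimal discrete-event supervisor sketch: the supervisor observes the
# plant state and disables controllable events whose successor state is
# forbidden; uncontrollable events can never be disabled.
PLANT = {                                # hypothetical engine-state automaton
    ("idle", "start"): "running",
    ("running", "overload"): "hot",      # uncontrollable event
    ("running", "stop"): "idle",
    ("hot", "start"): "damage",          # forbidden successor
    ("hot", "cool"): "idle",
}
CONTROLLABLE = {"start", "stop", "cool"}
FORBIDDEN = {"damage"}

def enabled(state):
    """Events the supervisor allows in `state`."""
    ok = []
    for (s, e), nxt in PLANT.items():
        if s != state:
            continue
        if e in CONTROLLABLE and nxt in FORBIDDEN:
            continue                     # supervisor disables this event
        ok.append(e)
    return sorted(ok)
```

In the hierarchical setting of the paper, a supervisor like this sits above the continuous engine controllers, and an upper-level supervisor coordinates the two engine-level ones.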
A novel sleep optimisation programme to improve athletes' well-being and performance.
Van Ryswyk, Emer; Weeks, Richard; Bandick, Laura; O'Keefe, Michaela; Vakulin, Andrew; Catcheside, Peter; Barger, Laura; Potter, Andrew; Poulos, Nick; Wallace, Jarryd; Antic, Nick A
2017-03-01
To improve well-being and performance indicators in a group of Australian Football League (AFL) players via a six-week sleep optimisation programme. Prospective intervention study following observations suggestive of reduced sleep and excessive daytime sleepiness in an AFL group. Athletes from the Adelaide Football Club were invited to participate if they had played AFL senior-level football for 1-5 years, or if they had excessive daytime sleepiness (Epworth Sleepiness Scale [ESS] >10). An initial education session explained normal sleep needs and how to achieve increased sleep duration and quality. Participants (n = 25) received ongoing feedback on their sleep, and a mid-programme education and feedback session. Sleep duration, quality and related outcomes were measured during week one and at the conclusion of the six-week intervention period using sleep diaries, actigraphy, the ESS, the Pittsburgh Sleep Quality Index, the Profile of Mood States, the Training Distress Scale, the Perceived Stress Scale and the Psychomotor Vigilance Task. Sleep diaries demonstrated an increase in total sleep time of approximately 20 min (498.8 ± 53.8 to 518.7 ± 34.3 min; p < 0.05) and a 2% increase in sleep efficiency (p < 0.05). There was a corresponding increase in vigour (p < 0.001) and decrease in fatigue (p < 0.05). Improvements in measures of sleep efficiency, fatigue and vigour indicate that a sleep optimisation programme may improve athletes' well-being. More research is required into the effects of sleep optimisation on athletic performance.
Assessing Vulnerability of Biometric Technologies for Identity Management Applications
2011-10-01
...of vulnerability and of the analysis of the relationship between system performance and the security strength of the function. In October 2010, IBG-Canada... Government of Canada agencies and departments need information on the performance, vulnerabilities and effectiveness of solutions... neglect human elements that can influence performance, security and the
The Impact of New Guidance and Control Systems on Military Aircraft Cockpit Design.
1981-08-01
...reduction of instrument-panel area and of the complexity of human/machine interfaces in high-performance combat aircraft... it should be noted that, in the current state of the art, a speech-recognition machine has no intrinsic performance of its own. Its performance... The main dialogue device was a cathode-ray-tube console with a keyboard. The vocabulary comprised 119 words, extracted from
Thermal characterisation of cooling modules for concentrated photovoltaics
NASA Astrophysics Data System (ADS)
Collin, Louis-Michel
To make solar-cell technology cost-effective, operating and manufacturing costs must be reduced. The photovoltaic materials used have an appreciable impact on the final price per unit of energy produced. One technology under development concentrates the light onto the solar cells in order to reduce this quantity of material. However, concentrating the light raises the cell temperature and thereby lowers its efficiency, so the cell must be cooled effectively. The thermal load to be removed from the cell passes through the receiver, the component that physically supports the cell. The receiver transmits the heat flux from the cell to a cooling system; the receiver/cooling-system assembly is called the cooling module. The receiver surface is usually larger than that of the cell, so the heat spreads laterally within the receiver as it passes through it. This spreading provides a larger effective area, reducing the apparent thermal resistance of the downstream thermal interfaces and of the cooling system. At present, no installation or method appears to exist for characterising the thermal performance of receivers. This project presents a new characterisation technique for determining the thermal spreading of the receiver within a cooling module. Performance indices are derived from thermal resistances measured experimentally on the modules. A characterisation platform was built to measure these performance criteria experimentally: it injects a controlled heat flux onto a localised zone of the upper surface of the receiver, replacing the heat flux normally supplied by the cell.

A cooling system is installed on the opposite surface of the receiver to remove the injected heat. The results also highlight the importance of thermal interfaces and the advantage of spreading the heat in the metallic layers before conducting it through the dielectric layers of the receiver. Receivers of several compositions were characterised, demonstrating that the tools developed can quantify thermal spreading capability. The repeatability of the platform was assessed by analysing the spread of repeated measurements on selected samples; the platform demonstrates a precision and reproducibility of ±0.14 °C/W. This work provides tools for receiver design by proposing a measurement that allows the thermal impact of receivers integrated into a cooling module to be compared and evaluated. Keywords: solar cell, photovoltaics, heat transfer, concentration, thermal resistances, characterisation platform, cooling
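The performance indices described reduce to apparent thermal resistances, i.e. temperature rise per watt of injected heat. A minimal sketch (the temperatures and power are hypothetical readings, not measurements from the thesis):

```python
def thermal_resistance(t_hot_c, t_cold_c, power_w):
    """Apparent thermal resistance in deg C/W between the heated zone on the
    receiver and the coolant, from a controlled injected heat flux."""
    return (t_hot_c - t_cold_c) / power_w

# Hypothetical reading: 62.5 deg C at the injection zone, 25.0 deg C coolant,
# 50 W injected. The quoted platform repeatability is +/- 0.14 deg C/W.
r = thermal_resistance(62.5, 25.0, 50.0)
```

Comparing this resistance for heated zones of different sizes is what reveals the lateral spreading capability of a given receiver stack.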
Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Litt, Jonathan (Technical Monitor); Ray, Asok
2004-01-01
This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
Prevalence of Dry Eye Syndrome after a Three-Year Exposure to a Clean Room
2014-01-01
Objective To measure the prevalence of dry eye syndrome (DES) among clean room (relative humidity ≤1%) workers from 2011 to 2013. Methods Three annual DES examinations were performed in full on 352 clean-room workers aged 20–40 years who were working at a secondary battery factory. Each examination comprised the tear-film break-up test (TFBUT), Schirmer’s test I, slit-lamp microscopic examination, and the McMonnies questionnaire. DES grades were assigned using the Delphi approach. The annual examination results were analyzed using a general linear model and post-hoc analysis with repeated-measures ANOVA (Tukey). Multiple logistic regression was performed using the examination results from 2013 (dependent variable) to analyze the effect of years spent working in the clean room (independent variable). Results The prevalence of DES among these workers was 14.8% in 2011, 27.1% in 2012, and 32.8% in 2013. The TFBUT and McMonnies questionnaire showed that DES grades worsened over time. Multiple logistic regression analysis indicated that the odds ratio for having dry eyes was 1.130 (95% CI 1.012–1.262) according to the findings of the McMonnies questionnaire. Conclusions This 3-year trend suggests that the increased prevalence of DES was associated with longer working hours. To decrease the prevalence of DES, employees should be assigned reasonable working hours with shift assignments that include appropriate break times. Workers should also wear protective eyewear, subdivide their working process to minimize exposure, and use preservative-free eye drops. PMID:25339991
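For readers unfamiliar with the statistic reported above: an odds ratio per unit of exposure relates to the fitted logistic-regression coefficient as OR = exp(β), and odds scale multiplicatively with exposure. A small illustrative sketch (the variable names are hypothetical, not from the study's analysis code):

```python
import math

# OR per unit of exposure = exp(beta) for logistic-regression coefficient beta.
def odds_ratio(beta):
    return math.exp(beta)

beta_per_year = math.log(1.130)                  # coefficient implied by the reported OR
or_per_year = odds_ratio(beta_per_year)          # 1.130 per extra year
or_per_3_years = odds_ratio(3 * beta_per_year)   # odds multiply across years
```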
Molecular Biology and Prevention of Endometrial Cancer
2009-07-01
us time to complete the study. Aim 2: To analyze vaginal and cervical adenocarcinomas that have arisen in women exposed to DES in utero, for...therapy. Methods: 1) Oligonucleotide microarray analysis was performed on a panel of endometrial cancers. 2) A subset of adenocarcinoma cases...from the International DES Registry (IDESR) was analyzed for MSI. 3) A case-control study of the CASH database was performed to evaluate the
Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Potential means of instituting this decrease include attacking the issues of removing, refurbishing, and replacing the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel and equipment. One tool for visualizing this relationship is a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine whether the engine is meeting its requirements and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes to bring about a decrease in turnaround time and costs.
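One of the trades mentioned above, parallel versus serial maintenance, can be caricatured in a few lines. The task names and durations below are invented for illustration and are not taken from the SSME Processing Model:

```python
# Toy turnaround trade: the same set of maintenance tasks scheduled
# serially (one crew) versus fully in parallel (idealised, one crew each).
tasks_hr = {"remove": 8.0, "inspect": 5.0, "refurbish": 12.0, "reinstall": 3.0}

serial_hr = sum(tasks_hr.values())    # tasks performed back to back
parallel_hr = max(tasks_hr.values())  # limited only by the longest task
```

A real DES adds precedence constraints (an engine cannot be reinstalled before refurbishment), limited crews and facilities, and stochastic durations, which is exactly what makes the bottleneck analysis described above non-trivial.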
Acoustic Resonator Optimisation for Airborne Particle Manipulation
NASA Astrophysics Data System (ADS)
Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian
Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel-plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator; an optimised design requires careful consideration of the effects of layer thickness and material properties. The frequency-dependent acoustic attenuation is also considered in this study, leading to an optimum operational frequency range. Finally, experiments demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.
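As a rough aid to the resonator discussion above: in a standing-wave levitator, particle traps (pressure nodes) sit half a wavelength apart. A back-of-envelope sketch, with assumed values for the sound speed in air and the drive frequency (neither is stated in the abstract):

```python
# Trap spacing in a parallel-plate standing-wave resonator:
# nodes are separated by half the acoustic wavelength, lambda = c / f.
def node_spacing_mm(speed_of_sound_m_s, frequency_hz):
    wavelength_m = speed_of_sound_m_s / frequency_hz
    return wavelength_m / 2 * 1000.0

# assumed: air at ~343 m/s, driven at 40 kHz -> traps a few mm apart
spacing = node_spacing_mm(343.0, 40e3)
```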
An equipment for Rayleigh scattering of Mössbauer radiation
NASA Astrophysics Data System (ADS)
Enescu, S. E.; Bibicu, I.; Zoran, V.; Kluger, A.; Stoica, A. D.; Tripadus, V.
1998-07-01
A personal-computer-driven apparatus designed for room-temperature Rayleigh scattering of Mössbauer radiation experiments is described. The performance of the system was tested using, as scatterers, crystals with different mosaic spreads: lithium fluoride (LiF) and pyrolytic graphite (C). The equipment, suitable for any kind of Mössbauer scattering experiment, permits low and adjustable horizontal divergences of the incident beam.
1998-09-01
(Fragment: tabulated aerosol size-distribution instrumentation, PMS CSASP-100/200 probes mounted at 7-15 m on the pier; the remainder of this entry is not recoverable from the source.)
Nanostructured coatings for the protection of metals in marine environments
NASA Astrophysics Data System (ADS)
Brassard, Jean-Denis
The objective of this research is to verify that a superhydrophobic material can reduce ice adhesion and accumulation while retaining good anticorrosion properties. To test this assertion, three families of new micro- and nanostructured coatings, identified by the letters A, B and C, were developed so that their icephobic effectiveness could be determined in relation to the contact angle specific to each structure obtained. All coatings were optimised to maximise both the contact angle and the adhesion to the substrate. The three optimised coatings are as follows. Coating A was developed for application on galvanised steel: the microroughness is that of the electrodeposited surface zinc layer, and the nanoroughness is that created by the nanostructured copolymerised silicone film. An optimal zinc electrodeposition time of 10 min was retained, which maximises the contact angle at 155° when the surface is coated with a 100 nm silicone film. Coating B was developed for application on an aluminium alloy: the microroughness is the granular microstructure obtained by etching the aluminium in an HCl bath, and the nanoroughness comes from the same nanostructured copolymerised silicone film. The optimal etching time is 8 minutes, giving the highest contact angle, 154°, when coated with the same 100 nm silicone film used for coating A. Coating C was developed to be applied to any degreased aluminium or steel substrate: its micro- and nanoroughness are created by aggregates of hydrophobised ZnO nanoparticles mixed with silicone, sprayed onto a primer composed of silicone and polymethylhydrosiloxane.
The result is a rigid composite in which the silane-coated ZnO nanoparticles are embedded in the nanostructured network of copolymerised silicone. Coating C produced in this way met the ASTM substrate-adhesion standard but is not superhydrophobic, remaining merely hydrophobic with a reduced contact angle of 123°. All the optimised coatings show a much higher corrosion resistance than their substrate, in both polarisation and electrochemical impedance measurements. However, no relationship between hydrophobicity and corrosion resistance was established. It was only determined that the chemical nature of the interface (its electrical insulating properties, its hydrophobicity and the presence of air) and its thickness are the two essential aspects to respect for good anticorrosion properties. The three coatings A, B and C were evaluated for the reduction of ice adhesion and accumulation in the icing environments encountered in Arctic seas. First came superhydrophobic coating B, with a contact angle of 154°, an ice adhesion reduction factor (ARF) on aluminium of 14, and 5% less accumulated ice. Second came superhydrophobic coating A, with a contact angle of 155° and an ARF of 6, which was not evaluated for accumulation; third came hydrophobic coating C, with a reduction factor of 3, which did not reduce the accumulated ice mass. Two mechanisms were found to govern ice adhesion on the developed coatings: 1. an anchoring effect, reduced by the presence of hydrophobic nanoroughness covering microroughness capable of adsorbing air; and 2.
the effect of the quantity of air adsorbed in the interface structure: for equal contact angles, more air in the microstructure (hence a higher Rrms) would lower the adhesion. The fact that coating B has a higher ARF than coating A, despite a contact angle within 1° of it, is explained by its greater microroughness height Rrms. The sea-spray accumulation tests showed that the superhydrophobic coating accumulated less ice than the hydrophobic and hydrophilic ones; this reduction can be explained by supercooled water droplets rolling off without adhering, owing to the superhydrophobic nature of the surface. (Abstract shortened by ProQuest.)
Optimisation of flight dynamic control based on many-objectives meta-heuristic: a comparative study
NASA Astrophysics Data System (ADS)
Bureerat, Sujin; Pholdee, Nantiwat; Radpukdee, Thana
2018-05-01
Development of many-objective meta-heuristics (MnMHs) is a topic of current interest, since real-world optimisation problems usually involve many objectives. However, most MnMHs have been developed and tested on standard benchmark functions, and their use in real applications is rare. In this work, MnMHs are therefore applied to the optimisation design of flight dynamic control. The design problem is posed as finding control gains that minimise the control effort, the spiral root, the damping-in-roll root and the sideslip-angle deviation, while maximising the damping ratio of the dutch-roll complex pair, the dutch-roll frequency, and the bank angle at pre-specified times of 1 second and 2.8 seconds, subject to several constraints based on Military Specifications (1969) requirements. Several established MnMHs are used to solve the problem and their performances are compared. This work thus investigates the performance of several MnMHs for flight control; the results obtained will serve as a baseline for future development of flight dynamics and control.
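At the core of any many-objective meta-heuristic is the Pareto-dominance comparison between candidate designs. A minimal sketch, purely illustrative, with all objectives taken as minimised:

```python
# Pareto dominance: a dominates b if a is no worse in every objective
# and strictly better in at least one (all objectives minimised here).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

d1 = dominates((1.0, 2.0, 3.0), (1.0, 2.5, 3.0))  # True: better in one, equal elsewhere
d2 = dominates((1.0, 2.0), (0.5, 3.0))            # False: a trade-off, neither dominates
```

With eight objectives, as in the flight-control problem above, most candidate pairs are mutually non-dominated, which is why MnMHs need additional selection pressure beyond plain dominance ranking.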
NASA Astrophysics Data System (ADS)
Malko, Daniel; Lopes, Thiago; Ticianelli, Edson A.; Kucernak, Anthony
2016-08-01
The effect of the ionomer-to-carbon (I/C) ratio on the performance of single-cell polymer electrolyte fuel cells is investigated for three different types of non-precious-metal cathodic catalysts. Polarisation curves as well as impedance spectra are recorded at different potentials in the presence of argon or oxygen at the cathode and hydrogen at the anode. It is found that an optimised ionomer content is a key factor for improving the performance of the catalyst. Non-optimal ionomer loading can be assessed from two different features of the impedance spectra, so this observation could be used as a diagnostic to determine the ideal ionomer content and distribution in newly developed catalyst electrodes. An electrode morphology based on an inhomogeneous resistance distribution within the porous structure is suggested to explain the observed phenomena. The effects of back-pressure and relative humidity on this feature are also investigated and support the above hypothesis. We give a simple flowchart to aid optimisation of electrodes with the minimum number of trials.
Roosta, Mostafa; Ghaedi, Mehrorang; Daneshfar, Ali
2014-10-15
A novel approach, ultrasound-assisted reverse-micelle dispersive liquid-liquid microextraction (USA-RM-DLLME) followed by high performance liquid chromatography (HPLC), was developed for the selective determination of acetoin in butter. The melted butter sample was diluted with n-hexane and homogenised with Triton X-100. Subsequently, 400 μL of distilled water was added and the microextraction was accelerated by 4 min of sonication. After 8.5 min of centrifugation, the sedimented (surfactant-rich) phase was withdrawn with a microsyringe and injected into the HPLC system for analysis. The influence of the effective variables was optimised using a Box-Behnken design (BBD) combined with a desirability function (DF). Under optimised experimental conditions, the calibration graph was linear over the range 0.6-200 mg L(-1). The detection limit of the method was 0.2 mg L(-1) and the coefficient of determination was 0.9992. The relative standard deviations (RSDs) were less than 5% (n=5), while the recoveries were in the range 93.9-107.8%. Copyright © 2014. Published by Elsevier Ltd.
The SOLEIL metrology facility
NASA Astrophysics Data System (ADS)
Idir, M.; Brochet, S.; Delmotte, A.; Lagarde, B.; Mercere, P.; Moreno, T.; Polack, F.; Thomasset, M.
2006-12-01
The SOLEIL metrology facility aims to create, at the SOLEIL synchrotron, a platform consisting of: a beamline using synchrotron radiation (so-called at-wavelength metrology) and an associated metrology laboratory (so-called "classical" metrology). Both types of metrology are indispensable to support instrumental research in X-ray and XUV optics. This metrology facility will serve not only the needs of the groups responsible for equipping the SOLEIL synchrotron with optics and detectors, but also the preparation, testing and commissioning of the experimental stations, which already concerns a broad user community. From its commissioning, it will also be widely open to the whole scientific community concerned with X-ray and XUV instrumentation in the Ile de France region, in France, and even in Europe if demand continues to grow faster than supply in this field. Beamline, at-wavelength metrology: the beamline will be equipped with several stations for measuring, over most of the spectrum covered by the synchrotron, the photometric parameters that characterise optical elements, such as surface reflectivity, grating diffraction efficiency, surface scattering, the efficiency of X-ray and XUV detectors, and absolute calibration. The installation can also be used to develop the instruments and diagnostics needed to characterise X-ray beams (intensity, size, degree of coherence, polarisation, etc.). Classical metrology: the metrology of optical surfaces has become a critical necessity for the laboratories and industries that use X-ray and XUV photons (synchrotrons, laser centres, etc.).
Indeed, advances in the computation and design of optical systems for these wavelengths (microfocusing optics, monochromators, imaging diagnostics) mean that the performance of these instruments is now limited by fabrication imperfections of the optical components. The metrology of optical surfaces is therefore an imperative necessity for all actors in the field, who must carry out the appropriate checks. This pressure also bears on the means used to perform these measurements, since current measurement uncertainties, notably concerning surface figure, are far from negligible with respect to the required tolerances. It is therefore essential to improve the measurement instruments and obtain significant gains in accuracy. Work is under way in the metrology laboratory to develop, alongside commercial instruments, prototype instruments based on original concepts (surface-profile and angle measurements). In this article we detail the technical choices made for the METROLOGY and TESTS beamline and its expected performance, and we describe the metrology laboratory, giving examples of recently tested optics.
Community Environmental Response Facilitation Act (CERFA) Report. Fort Des Moines, Des Moines, Iowa
1994-04-01
and bacteria in drinking water exceed recommended limits as a result of contaminated wells or infiltration of agricultural wastewater and runoff...inventory was performed as part of the Environmental Investigation. The inventory identified chemicals ranging from household cleaners to antiperspirant to
Kievit, Wietske; van Herwaarden, Noortje; van den Hoogen, Frank Hj; van Vollenhoven, Ronald F; Bijlsma, Johannes Wj; van den Bemt, Bart Jf; van der Maas, Aatke; den Broeder, Alfons A
2016-11-01
A disease activity-guided dose optimisation strategy of adalimumab or etanercept (TNFi (tumour necrosis factor inhibitors)) has shown to be non-inferior in maintaining disease control in patients with rheumatoid arthritis (RA) compared with usual care. However, the cost-effectiveness of this strategy is still unknown. This is a preplanned cost-effectiveness analysis of the Dose REduction Strategy of Subcutaneous TNF inhibitors (DRESS) study, a randomised controlled, open-label, non-inferiority trial performed in two Dutch rheumatology outpatient clinics. Patients with low disease activity using TNF inhibitors were included. Total healthcare costs were measured and quality adjusted life years (QALY) were based on EQ5D utility scores. Decremental cost-effectiveness analyses were performed using bootstrap analyses; incremental net monetary benefit (iNMB) was used to express cost-effectiveness. 180 patients were included, and 121 were allocated to the dose optimisation strategy and 59 to control. The dose optimisation strategy resulted in a mean cost saving of -€12 280 (95 percentile -€10 502; -€14 104) per patient per 18 months. There is an 84% chance that the dose optimisation strategy results in a QALY loss with a mean QALY loss of -0.02 (-0.07 to 0.02). The decremental cost-effectiveness ratio (DCER) was €390 493 (€5 085 184; dominant) of savings per QALY lost. The mean iNMB was €10 467 (€6553-€14 037). Sensitivity analyses using 30% and 50% lower prices for TNFi remained cost-effective. Disease activity-guided dose optimisation of TNFi results in considerable cost savings while no relevant loss of quality of life was observed. When the minimal QALY loss is compensated with the upper limit of what society is willing to pay or accept in the Netherlands, the net savings are still high. NTR3216; Post-results. Published by the BMJ Publishing Group Limited. 
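The incremental net monetary benefit (iNMB) reported above combines the cost difference and the QALY difference through a willingness-to-pay threshold. A sketch of that arithmetic; the threshold value used here is an assumption for illustration, not the threshold applied in the study:

```python
# iNMB = wtp * delta_QALY - delta_cost (positive favours the new strategy).
# Here the dose-optimisation strategy saves money but loses a little quality of life.
def inmb(delta_cost, delta_qaly, wtp):
    return wtp * delta_qaly - delta_cost

# per patient over 18 months: ~12,280 euro saved, ~0.02 QALY lost,
# with an assumed willingness-to-pay of 80,000 euro per QALY
net_benefit = inmb(delta_cost=-12280.0, delta_qaly=-0.02, wtp=80000.0)  # 10680.0
```

The savings dominate because the monetised QALY loss (0.02 x 80,000 = 1,600 euro) is far smaller than the cost saving, which is the qualitative conclusion the trial reaches.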
DISCRETE EVENT SIMULATION OF OPTICAL SWITCH MATRIX PERFORMANCE IN COMPUTER NETWORKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present an application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly; this may arise if the system under study does not yet exist, or if the cost of studying it directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly large and complex, sophisticated DES tool chains have become available for both commercial and academic research; some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST to simulate the multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
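The discrete-event paradigm described above reduces to processing a time-ordered event queue. A toy kernel of that kind, written here as a sketch and not as OMNEST's actual API:

```python
import heapq

# Minimal discrete-event loop: events are (timestamp, label) pairs popped
# in time order. A real simulator would schedule new events while handling one.
def run(events):
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, label = heapq.heappop(queue)
        log.append((t, label))
    return log

trace = run([(2.0, "packet_out"), (0.5, "packet_in"), (1.0, "switch")])
```

Regardless of insertion order, the trace comes out sorted by timestamp, which is the invariant every DES engine maintains.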
Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios
2018-05-02
Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with empirical, unstructured growth-kinetics models. Although elaborate systems-biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional expression kinetics of the promoters. The GRN model informed the formulation of the growth-kinetics model, replacing empirical and unstructured Monod kinetics. The predictive capability of the GRN-GK framework, and its potential as a systematic tool for optimal bioprocess design, was demonstrated by predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from them. Notably, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhancements in pollutant removal and biomass formation, respectively, compared with the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach to model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
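For context, the unstructured Monod kinetics that the GRN-informed model replaces has the form μ = μ_max · S / (K_s + S), relating specific growth rate to substrate concentration. A one-function sketch with illustrative parameter values (not fitted values from the paper):

```python
# Monod growth kinetics: specific growth rate mu as a function of
# substrate concentration S, with illustrative mu_max and Ks.
def monod(s, mu_max=0.5, ks=2.0):
    return mu_max * s / (ks + s)

half_rate = monod(2.0)   # at S = Ks the rate is exactly mu_max / 2
```

The paper's point is that a single saturation curve like this cannot capture mixed-substrate regulation (m-xylene plus toluene), which is why the GRN layer is needed.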
Drug-eluting stents to prevent stent thrombosis and restenosis.
Im, Eui; Hong, Myeong-Ki
2016-01-01
Although first-generation drug-eluting stents (DES) have significantly reduced the risk of in-stent restenosis, they have also increased the long-term risk of stent thrombosis. This safety concern directly triggered the development of new-generation DES, with innovations in stent platforms, polymers, and anti-proliferative drugs. Stent platform materials have evolved from stainless steel to cobalt- or platinum-chromium alloys with an improved strut design. Drug-carrying polymers have become biocompatible or biodegradable, and even polymer-free DES were introduced. New limus-family drugs (such as everolimus, zotarolimus or biolimus) were adopted to enhance stent performance. As a result, these new DES demonstrated superior vascular healing responses in intracoronary imaging studies and lower rates of stent thrombosis in actual patients. Recently, fully bioresorbable stents (scaffolds) have been introduced, and their applications are expanding. In this article, the important concepts and clinical results of new-generation DES and bioresorbable scaffolds are described.
STF optimisation of single-bit CT ΣΔ modulators based on scaled filter coefficients
NASA Astrophysics Data System (ADS)
Widemann, C.; Zorn, C.; Brückner, T.; Ortmanns, M.; Mathis, W.
2012-09-01
This paper addresses the signal transfer behaviour of single-bit continuous-time (CT) ΣΔ modulators, focusing on peaking of the signal transfer function (STF). This effect can degrade the performance and the stability of the overall system, since STF peaking amplifies signals outside the signal band. A new approach to reducing the peaking is presented, based on optimising the system dynamics by systematically adjusting the filter coefficients of the modulator. Using an example system, it is shown that the approach can be used to alter the transfer behaviour of the modulator relative to the original system: either the system performance can be improved without introducing STF peaking, or the STF peaking can be reduced without strongly affecting the system performance.
Hussain, Khalil K; Akhtar, Mahmood H; Kim, Moo-Hyun; Jung, Dong-Keun; Shim, Yoon-Bo
2018-06-30
The analytical performance of a multi-enzyme-loaded single electrode sensor (SES) and a dual electrode sensor (DES) was compared for the detection of adenosine and its metabolites. The SES was fabricated by covalent binding of three enzymes, adenosine deaminase (ADA), purine nucleoside phosphorylase (PNP), and xanthine oxidase (XO), along with hydrazine (Hyd), onto a functionalized conducting polymer [2,2:5,2-terthiophene-3-(p-benzoic acid)] (pTTBA). The enzyme-reaction electrode of the DES was fabricated by covalent binding of ADA and PNP onto pTTBA coated on Au nanoparticles, and the detection electrode of the DES by covalent binding of XO and Hyd onto pTTBA coated on porous Au. Owing to the 3.5-fold higher amount of immobilized enzymes and Hyd on the DES than on the SES, and the lower Michaelis constant (Km) of the DES (28.7 µM) compared with the SES (36.1 µM), the sensitivity of the DES was significantly enhanced (8.2-fold). The dynamic range obtained using the DES was from 0.5 nM to 120.0 µM, with detection limits of 1.43 ± 0.02 nM, 0.76 ± 0.02 nM, and 0.48 ± 0.01 nM for adenosine (AD), inosine (IN), and hypoxanthine (Hypo), respectively. Further, the DES was coupled with an electrochemical-potential-modulated microchannel for the separation and simultaneous detection of AD, IN, and Hypo in the extracellular matrix of cancerous (A549) and non-cancerous (Vero) cells. The sensor probe confirms a higher basal level of extracellular AD and its metabolites in cancer cells compared to normal cells. In addition, the effect of dipyridamole on released adenosine in A549 cells was investigated. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.
2018-05-01
The design of an optimal network of atmospheric monitoring stations for observing carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse-modelling solution. Two candidate optimisation methods were assessed: an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison with the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, its solution had only fractionally lower uncertainty reduction than the GA's, at only a quarter of the computational resources used by the smallest GA configuration. The GA solution set showed more inconsistency when the number of iterations or the population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances under which the GA might outperform the IO. The first considered an established network, where the optimisation was required to add a further five stations to an existing five-member network. In the second, the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction.
These results suggest that, for the network design problem, resources would be better spent on improving the prior estimates of the flux uncertainties than on running a complex evolutionary optimisation algorithm. The authors recommend, if time and computational resources allow, that multiple optimisation techniques be used as part of a comprehensive suite of sensitivity tests when performing such an optimisation exercise. This will provide a selection of best solutions which can be ranked based on their utility and practicality.
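The IO routine discussed above is a greedy, deterministic strategy: repeatedly add the single station that most improves the design score until the network reaches the target size. A skeleton of that idea; the score function below is a toy stand-in, not the paper's posterior-uncertainty metric:

```python
# Greedy incremental network design: at each step, add the candidate
# station that maximises the score of the augmented network.
def incremental_design(candidates, score, size):
    network = []
    for _ in range(size):
        best = max((c for c in candidates if c not in network),
                   key=lambda c: score(network + [c]))
        network.append(best)
    return network

# toy score: favour networks whose stations are spread out on a line
score = lambda net: min((abs(a - b) for a in net for b in net if a != b),
                        default=0) + max(net)
chosen = incremental_design([0, 1, 5, 9], score, 2)
```

Because each choice is locally optimal, the routine can miss the global optimum, which is exactly the behaviour the comparison with the GA quantifies.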
Better powder diffractometers. II—Optimal choice of U, V and W
NASA Astrophysics Data System (ADS)
Cussen, L. D.
2007-12-01
This article presents a technique for optimising constant wavelength (CW) neutron powder diffractometers (NPDs) using conventional nonlinear least squares methods. This is believed to be the first such design optimisation for a neutron spectrometer. The validity of this approach and discussion should extend beyond the Gaussian element approximation used, and also to instruments using different radiation, such as X-rays. This approach could later be extended to include vertical and perhaps horizontal focusing monochromators, and probably to other types of instruments such as three-axis spectrometers. It is hoped that this approach will help in comparisons of CW and time-of-flight (TOF) instruments. Recent work showed that many different beam element combinations can give identical resolution on CW NPDs and presented a procedure to find these combinations and an "optimum" choice of detector collimation. Those results enable the previous redundancy in the description of instrument performance to be removed and permit a least squares optimisation of design. New inputs are needed and are identified as the sample plane spacing (dS) of interest in the measurement. The optimisation requires a "quality factor", QPD, chosen here as minimising the worst Bragg peak separation ability over some measurement range (dS) while maintaining intensity. Any other QPD desired could be substituted. It is argued that high resolution and high intensity powder diffractometers (HRPDs and HIPDs) should have similar designs adjusted by a single scaling factor. Simulated comparisons are described suggesting significant improvements in performance for CW HIPDs. Optimisation with unchanged wavelength suggests improvements by factors of about 2 for HRPDs and 25 for HIPDs. A recently quantified design trade-off between the maximum line intensity possible and the degree of variation of angular resolution over the scattering angle range leads to efficiency gains at short wavelengths.
This in turn leads in practice to another trade-off between this efficiency gain and losses at short wavelength due to technical effects. The exact gains from varying wavelength depend on the details of the short wavelength technical losses. Simulations suggest that the total potential PD performance gains may be very significant: factors of about 3 for HRPDs and more than 90 for HIPDs.
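The U, V and W of the title are the coefficients of the standard Caglioti resolution function for CW powder diffractometers, FWHM^2(theta) = U tan^2(theta) + V tanh(theta) is not the form; the correct form is U tan^2(theta) + V tan(theta) + W. The paper optimises the instrument design itself; as a minimal, illustrative aside, here is how U, V and W can be recovered from measured peak widths by least squares (all data values below are synthetic and hypothetical):

```python
import numpy as np

# Caglioti resolution function: FWHM^2(theta) = U*tan^2(theta) + V*tan(theta) + W
theta = np.radians(np.array([10., 20., 30., 40., 50., 60.]))
U_true, V_true, W_true = 0.02, -0.01, 0.015          # hypothetical coefficients
fwhm_sq = U_true * np.tan(theta)**2 + V_true * np.tan(theta) + W_true

# The model is linear in (U, V, W), so an ordinary least-squares fit suffices.
A = np.column_stack([np.tan(theta)**2, np.tan(theta), np.ones_like(theta)])
(U, V, W), *_ = np.linalg.lstsq(A, fwhm_sq, rcond=None)
```

Because the model is linear in the coefficients, no iterative nonlinear solver is needed for this fitting step; the nonlinear least squares in the article concerns the design optimisation, not this fit.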
Poder, Thomas G; Erraji, Jihane; Coulibaly, Lucien P; Koffi, Kouamé
2017-01-01
Drug-eluting stents (DESs) were considered a ground-breaking technology promising to eradicate restenosis and the necessity to perform multiple revascularization procedures subsequent to percutaneous coronary intervention. Soon after DESs were released on the market, however, there were reports of a potential increase in mortality and of early or late thrombosis. In addition, DESs are far more expensive than bare-metal stents (BMSs), which has led to their limited use in many countries. The technology has improved over the last few years with the second generation of DESs (DES-2). Moreover, costs have come down and an improved safety profile with decreased thrombosis has been reported. The objective was to perform a cost-benefit analysis of DES-2s versus BMSs in the context of a publicly funded university hospital in Quebec, Canada. A systematic review of meta-analyses was conducted between 2012 and 2016 to extract data on clinical effectiveness. The clinical outcome of interest for the cost-benefit analysis was target-vessel revascularization (TVR). Cost units are those used in the Quebec health-care system. The cost-benefit analysis was based on a 2-year perspective. Deterministic and stochastic models (discrete-event simulation) were used, and various risk factors of reintervention were considered. DES-2s are much more effective than BMSs with respect to TVR rate ratio (i.e., 0.29 to 0.62 in more recent meta-analyses). DES-2s seem to cause fewer deaths and in-stent thromboses than BMSs, but results are rarely significant, with the exception of the cobalt-chromium everolimus DES. The rate ratio of myocardial infarction is systematically in favor of DES-2s and very often significant. Despite the higher cost of DES-2s, fewer reinterventions can lead to substantial savings (i.e., -$479 to -$769 per patient). Moreover, the higher a patient's risk of reintervention, the higher the savings associated with the use of DES-2s.
Despite the higher purchase cost of DES-2s compared to BMSs, generalizing their use, in particular for patients at high risk of reintervention, should enable significant savings.
Thermal Performance Analysis of Solar Collectors Installed for Combisystem in the Apartment Building
NASA Astrophysics Data System (ADS)
Žandeckis, A.; Timma, L.; Blumberga, D.; Rochas, C.; Rošā, M.
2012-01-01
The paper focuses on the application of a wood-pellet and solar combisystem for space heating and hot-water preparation in apartment buildings under the climate of Northern Europe. A pilot project has been implemented in the city of Sigulda (N 57° 09.410, E 024° 52.194), Latvia. The system was designed and optimised using TRNSYS, a dynamic simulation tool, and the pilot project was continuously monitored. The heat transfer fluid flow rate and the influence of the inlet temperature on the performance of the solar collectors were analysed. The thermal performance of the solar collector loop was studied using a direct method. A multiple regression analysis was carried out in STATGRAPHICS Centurion 16.1.15 to identify the operational and weather parameters with the strongest influence on collector performance. The parameters to be used for the system's optimisation have been evaluated.
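A multiple regression of collector performance on operating and weather variables, of the kind described above, can be sketched with the common steady-state efficiency model eta = eta0 - a1*(Tm - Ta)/G (the EN 12975 form). The data below are synthetic and the coefficient values hypothetical; this is not the study's fitted model:

```python
import numpy as np

# Synthetic operating data for a flat-plate collector
rng = np.random.default_rng(0)
G = rng.uniform(300.0, 1000.0, 50)      # solar irradiance, W/m^2
dT = rng.uniform(0.0, 60.0, 50)         # mean fluid minus ambient temperature, K
# "Measured" efficiency from a hypothetical collector (eta0=0.75, a1=4.5) + noise
eta = 0.75 - 4.5 * dT / G + rng.normal(0.0, 0.005, 50)

# Multiple regression: recover eta0 and a1 by ordinary least squares
A = np.column_stack([np.ones_like(G), -dT / G])
(eta0, a1), *_ = np.linalg.lstsq(A, eta, rcond=None)
```

The fitted a1 (the heat-loss coefficient) is exactly the kind of parameter a regression on monitored plant data can expose as the dominant influence on collector performance.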
NASA Astrophysics Data System (ADS)
Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique
2017-05-01
Optimisation in changing environments is a challenging research topic, since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches to dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, owing to the difficulties associated with controlling and measuring such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first performs exploration while the second is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
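The alternating two-population scheme can be sketched in a few lines: two equally sized populations, one mutating aggressively (exploration) and one finely (exploitation), swapping roles each generation, with a memory of the best solutions seen. This is a simplified sketch on a static sphere function, not the authors' algorithm or their dynamic benchmarks:

```python
import random

def evolve(fitness, dim=2, pop_size=10, generations=200, seed=1):
    """Minimise `fitness` with two populations that alternate between
    exploration (large mutations) and exploitation (small mutations)."""
    rng = random.Random(seed)
    new = lambda: [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    pop_a = [new() for _ in range(pop_size)]
    pop_b = [new() for _ in range(pop_size)]
    memory = []                                  # memory of past best solutions

    def step(pop, sigma):
        pop.sort(key=fitness)                    # best first (minimisation)
        elite = pop[: len(pop) // 2]             # keep the better half
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(elite)]
                    for _ in elite]
        return elite + children

    for gen in range(generations):
        # Roles alternate each generation in a regular pattern.
        explorer, exploiter = (pop_a, pop_b) if gen % 2 == 0 else (pop_b, pop_a)
        explorer[:] = step(explorer, sigma=1.0)
        exploiter[:] = step(exploiter, sigma=0.05)
        memory.append(min(pop_a + pop_b, key=fitness))
    return min(memory, key=fitness)

best = evolve(lambda x: sum(g * g for g in x))
```

In the dynamic setting the memory would additionally seed the populations after an environment change; here it only tracks the best-ever solution.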
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activity of the brain. It is contaminated by other biological signals, called artefacts, such as the cardiac signal (electrocardiogram), signals generated by eye movements/eye blinks (electrooculogram) and muscular activity (electromyogram). Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of ANFIS optimised. The Artificial Immune System (AIS) algorithm is used to optimise the parameters of ANFIS (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than plain ANFIS at removing artefacts from the EEG signal. Furthermore, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plots and convergence time are used to analyse the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
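AIS optimisers of this kind are typically clonal-selection loops: the best antibodies are cloned, and clones are mutated with a step size that grows as affinity falls. A minimal sketch on a toy objective standing in for the ANFIS parameter fitting (population sizes, step sizes and the objective are all hypothetical):

```python
import random

def clonal_selection(fitness, dim=2, pop=8, gens=60, clones=4, seed=3):
    """Minimise `fitness` with a basic clonal-selection loop, the
    mechanism at the core of AIS-style parameter tuning."""
    rng = random.Random(seed)
    antibodies = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        antibodies.sort(key=fitness)             # best (lowest) first
        pool = []
        for rank, ab in enumerate(antibodies[: pop // 2]):
            # Better antibodies get smaller mutation steps.
            sigma = 0.1 * (rank + 1)
            for _ in range(clones):
                pool.append([g + rng.gauss(0.0, sigma) for g in ab])
        # Elitist replacement: keep the best `pop` of parents + clones.
        antibodies = sorted(antibodies + pool, key=fitness)[:pop]
    return antibodies[0]

best = clonal_selection(lambda x: sum(g * g for g in x))
```

The "suitable selection processes" that distinguish IAIS from plain AIS would modify how the cloning pool above is chosen.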
Discrete Event Supervisory Control Applied to Propulsion Systems
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Shah, Neerav
2005-01-01
The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory controller, implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower-level controllers. This structure can be expanded to control large, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers, each representing a module of the overall propulsion system, to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
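The supervisory-control idea can be illustrated with a toy automaton. Uncontrollable events (such as an over-temperature excursion) cannot be disabled, so the supervisor must also steer the plant away from any state from which an uncontrollable event leads somewhere unsafe. The plant model below is entirely hypothetical, not the paper's engine simulation:

```python
# Plant automaton: state -> {event: next_state}. Hypothetical engine states.
PLANT = {
    "cruise":     {"throttle_up": "high_power", "throttle_down": "low_power"},
    "high_power": {"throttle_down": "cruise", "overtemp": "damage"},
    "low_power":  {"throttle_up": "cruise"},
    "damage":     {},
}
CONTROLLABLE = {"throttle_up", "throttle_down"}   # "overtemp" is uncontrollable
SAFE = {"cruise", "high_power", "low_power"}

def bad_states(plant, safe, controllable):
    """Backward fixpoint: a state is bad if it is unsafe, or if some
    uncontrollable event can drive it into a bad state."""
    bad = {s for s in plant if s not in safe}
    changed = True
    while changed:
        changed = False
        for s, trans in plant.items():
            if s in bad:
                continue
            if any(e not in controllable and t in bad for e, t in trans.items()):
                bad.add(s)
                changed = True
    return bad

def supervisor(plant, state, bad, controllable):
    """Enable only events that keep the plant out of bad states;
    uncontrollable events can never be disabled."""
    return [e for e, t in plant[state].items()
            if e not in controllable or t not in bad]

BAD = bad_states(PLANT, SAFE, CONTROLLABLE)
```

Here the supervisor disables `throttle_up` from `cruise`: although `high_power` is itself safe, the uncontrollable `overtemp` event could then cause damage.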
Basic Instrumentation of a Low Speed Axial Compressor
NASA Astrophysics Data System (ADS)
Blidi, Sami; Miton, Hubert
1995-07-01
Flow modelling in axial compressors must rely on experimental results that provide a better understanding of the aerodynamic phenomena and are suitable for validating numerical codes. To this end, a low-speed axial-compressor test bed was developed and instrumented at L.E.M.F.I. for detailed exploration of the flow in a four-stage machine characterised by a small spacing between blade rows. This paper first gives brief descriptions of the geometrical characteristics of the compressor, the control system of the test bed, and the instrumentation. The means used to explore the flow are then discussed. Finally, typical results of the global and local performance measurements are presented and briefly analysed. This work made it possible to instrument the L.E.M.F.I. four-stage axial-compressor test bed and to obtain the steady and unsteady characteristics of the flow between the blade rows using five-hole pressure probes and hot-film probes.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and datasets. Typically, data must be processed to extract useful features for LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features and the Gaussian Mixture Model (GMM), culminating in the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model for classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of the input-to-hidden-layer weights. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One optimisation approach for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated on LID datasets created from eight different languages. The results showed a clear superiority in performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared with 95.00% for SA-ELM LID.
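The ELM's appeal is that only the output layer is trained, and in closed form; the randomly drawn input weights are exactly what SA-ELM and ESA-ELM try to choose better. A minimal numpy sketch on a toy task (the data, layer sizes and seeds are all illustrative, not the paper's LID setup):

```python
import numpy as np

def elm_train(X, y, hidden=20, seed=0):
    """Single-hidden-layer ELM: random input weights (the part that
    SA-ELM / ESA-ELM optimise), output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))     # random, untrained
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary task: label is the sign of the first feature.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (y > 0.5))
```

An AIS- or SA-style wrapper would search over the seed/weight matrix `W` (and `b`) to maximise validation accuracy, instead of accepting a single random draw as above.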
NASA Astrophysics Data System (ADS)
Fournier, Marie-Claude
A characterisation of the atmospheric emissions from in-service stationary sources fired with natural gas and light fuel oil was conducted at the targeted facilities of sites no. 1 and no. 2. The characterisation and the theoretical emission calculations at the facilities of sites no. 1 and no. 2 give results below the regulatory limits for normal operating conditions in winter, and therefore at higher energy demand. Consequently, at lower energy demand, the contaminant levels in the atmospheric emissions could also be below the applicable municipal and provincial regulations. In view of a new provincial regulation, whose terms have been under discussion since 2005, it would be desirable for the owner of the targeted infrastructure to take part in the exchanges with the Quebec Ministere du Developpement Durable, de l'Environnement et des Parcs (MDDEP). Indeed, even if the principle of acquired rights would make it possible to avoid being subject to the new regulation, applying such a principle is not consistent with sustainable development. The advanced age of the facilities studied calls for rigorous maintenance planning to ensure optimal combustion conditions for each fuel type; combustion tests on a regular basis are therefore recommended. To support the monitoring and evaluation of the environmental performance of the stationary sources, a tool to assist in managing environmental information was developed. In this context, further development of such a tool would facilitate not only the work of the staff assigned to the annual inventories but also communication among the various stakeholders, both within and between establishments.
This tool would also be a good way to raise staff awareness of their energy consumption and of their role in the fight against polluting emissions and greenhouse gases. Moreover, the main function of this type of tool is to generate dynamic reports that can be adapted to specific needs. The coherent partitioning of the information, combined with modular development, opens the prospect of applying the tool to other types of activities. In that case, the task is to identify what is shared with the existing modules and to plan the specific development activities following the same approach as the one presented in this document.
2013-11-01
...the rationale for collecting data on the core variables in order to support these efforts in...common...used by the member nations, and a core set of indicators complementing the Measures of Performance (MOP) and Measures of Effectiveness (MOE)...
Inhibition of DES-induced DNA adducts by diallyl sulfide: implications in liver cancer prevention.
Green, Mario; Thomas, Ronald; Gued, Lisa; Sadrud-Din, Sakeenah
2003-01-01
Diethylstilbestrol (DES) is known to cause cancer in humans and animals. Diallyl sulfide (DAS), a component of garlic, has been shown to prevent various types of cancer, presumably via metabolic modulation. Previously, we have demonstrated that DAS prevents the oxidation and reduction of DES in vitro. We hypothesize that DAS will inhibit the metabolism of DES in vivo, thus preventing the formation of DES-induced DNA adducts. To test this hypothesis, five groups of five male Sprague-Dawley rats were treated as follows: the control received 0.5 ml of corn oil daily for four days. The second group received 50 mg/kg DAS daily for four days. The third group received 50 mg/kg DAS daily for four days followed by 150 mg/kg DES on day five. The fourth group received 400 mg/kg DAS on day five followed by 150 mg/kg DES. The fifth group received 150 mg/kg DES on day five. All of the rats were sacrificed on day five, 4 h after DES treatment. DNA was isolated from the liver and analyzed by 32P-post-labeling for DNA adducts. The in vitro study was performed utilizing four reactions described as follows: the control reaction contained 200 microg DNA, microsomes (346 microg protein/ml), and 10 mM DES; no oxidation co-factor (cumene hydroperoxide) was added. The second reaction, a complete oxidation system, contained 200 microg DNA, microsomes (346 microg protein/ml), 30 mM cumene hydroperoxide, and 10 mM DES. The third reaction contained 200 microg DNA, microsomes (346 microg protein/ml), 30 mM cumene hydroperoxide, 50 mM DAS, and 10 mM DES. The fourth reaction contained 200 microg DNA, microsomes (346 microg protein/ml), 30 mM cumene hydroperoxide, 100 mM DAS, and 10 mM DES. All of the in vitro reactions were buffered with 100 mM KPO4, pH 7.4, and incubated for 30 min at 37 degrees C. DNA was extracted and analyzed by 32P-post-labeling. We found that DAS inhibited the formation of DES-induced DNA adducts in a dose-dependent fashion.
We have shown that DES-induced DNA adducts were inhibited in rats that received DAS pre-treatment and co-treatment with DES. These results suggest that DAS directly inhibits the metabolism of DES thus preventing the formation of DNA adducts. In addition to directly inhibiting the metabolism of DES, DAS appears to alter the expression of the metabolic machinery such that DES-induced adducts are inhibited. The inhibition of DES-induced DNA adducts by DAS may prevent the initiation of estrogen-induced cancer.
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representing the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of using DES to implement more complex models, and the circumstances in which such benefits are most likely.
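A discrete event simulation for this kind of cost comparison can be very small: a per-patient event queue in which reintervention events arrive stochastically over the evaluation horizon. The sketch below, in the spirit of the stent analyses above, uses purely hypothetical costs and event rates, not any study's actual cost units:

```python
import heapq
import random

def simulate_patient(rng, stent_cost, tvr_rate, reintervention_cost, horizon=2.0):
    """Accumulate one patient's cost over a 2-year horizon; target-vessel
    revascularizations arrive as a Poisson process at `tvr_rate`/year."""
    cost = stent_cost
    events = [(rng.expovariate(tvr_rate), "tvr")]   # (time, kind) priority queue
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "tvr":
            cost += reintervention_cost
            heapq.heappush(events, (t + rng.expovariate(tvr_rate), "tvr"))
    return cost

rng = random.Random(42)
n = 2000
mean_cost = lambda price, rate: sum(
    simulate_patient(rng, price, rate, reintervention_cost=8000)
    for _ in range(n)) / n

# Hypothetical inputs: the newer stent costs more up front but has a lower TVR rate.
bms_cost = mean_cost(800, 0.20)
des2_cost = mean_cost(2000, 0.05)
```

With these illustrative numbers the expected totals are roughly 800 + 8000·0.20·2 vs 2000 + 8000·0.05·2, so the dearer stent comes out cheaper overall; a heap-based queue like this generalises directly to multiple competing event types (death, thrombosis, reintervention).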
Li, Qiang; Tong, Zichuan; Wang, Lefeng; Zhang, Jianjun; Ge, Yonggui; Wang, Hongshi; Li, Weiming; Xu, Li; Ni, Zhuhua
2013-01-01
Introduction: with long-term follow-up, whether biodegradable polymer drug-eluting stents (DES) are effective and safe in primary percutaneous coronary intervention (PCI) remains a controversial issue. This study aims to assess the long-term efficacy and safety of DES in PCI for ST-segment elevation myocardial infarction (STEMI). Material and methods: a prospective, randomized single-blind study with 3-year follow-up was performed to compare biodegradable polymer DES with durable polymer DES in 332 STEMI patients treated with primary PCI. The primary end point was major adverse cardiac events (MACE) at 3 years after the procedure, defined as the composite of cardiac death, recurrent infarction, and target vessel revascularization. The secondary end points included in-segment late luminal loss (LLL) and binary restenosis at 9 months, and cumulative stent thrombosis (ST) event rates up to 3 years. Results: the rates of the primary and secondary end points, including major adverse cardiac events, in-segment late luminal loss, binary restenosis, and cumulative thrombotic events, were comparable between biodegradable polymer DES and durable polymer DES in these 332 STEMI patients treated with primary PCI at 3 years. Conclusions: biodegradable polymer DES has similar efficacy and safety profiles at 3 years compared with durable polymer DES in STEMI patients treated with primary PCI. PMID:24482648
Rodriguez-Nogales, J M; Garcia, M C; Marina, M L
2006-02-03
A perfusion reversed-phase high performance liquid chromatography (RP-HPLC) method has been designed to allow rapid (3.4 min) separations of maize proteins with high resolution. Several factors, such as extraction conditions, temperature, detection wavelength, and the type and concentration of ion-pairing agent, were optimised. A fine optimisation of the gradient elution was also performed by applying experimental design. Commercial maize products for human consumption (flours, precooked flours, fried snacks and extruded snacks) were characterised for the first time by perfusion RP-HPLC, and their chromatographic profiles allowed a differentiation among products related to the different technological processes used for their preparation. Furthermore, applying discriminant analysis makes it possible to group the samples according to the technological process undergone by the maize products, with correct prediction for 92% of the samples.
NASA Astrophysics Data System (ADS)
Dubey, M.; Chandra, H.; Kumar, Anil
2016-02-01
A thermal model for the performance evaluation of a gas turbine cogeneration system with reheat is presented in this paper. The Joule-Brayton cogeneration reheat cycle has been optimised on the basis of the total useful energy rate (TUER), and the efficiency at maximum TUER is determined. The variation of the maximum dimensionless TUER, and of the efficiency at maximum TUER, with the cycle temperature ratio has also been analysed. The results show that the dimensionless maximum TUER and the corresponding thermal efficiency decrease as the power-to-heat ratio increases, and that the inclusion of reheat significantly improves the overall performance of the cycle. From a thermodynamic performance point of view, this methodology may be quite useful in the selection and comparison of combined energy production systems.
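The kind of cycle optimisation described can be illustrated, under strong simplifications (ideal gas, no reheat or cogeneration terms), by the classical Joule-Brayton result: the dimensionless net work w/(cp·T1) = tau·(1 - 1/x) - (x - 1), where tau = T3/T1 is the cycle temperature ratio and x the isentropic compression temperature ratio, peaks at x = sqrt(tau). A grid search recovers this analytic optimum:

```python
import math

def net_work(x, tau):
    """Dimensionless net work of an ideal Joule-Brayton cycle:
    w/(cp*T1) = tau*(1 - 1/x) - (x - 1), with x = r_p**((gamma-1)/gamma)."""
    return tau * (1.0 - 1.0 / x) - (x - 1.0)

def best_x(tau, step=0.001):
    """Simple grid search for the work-maximising temperature ratio."""
    xs = [1.0 + i * step for i in range(int((tau - 1.0) / step))]
    return max(xs, key=lambda x: net_work(x, tau))

tau = 4.0
x_opt = best_x(tau)   # analytic optimum is sqrt(tau) = 2
```

A TUER objective adds a useful-heat term to `net_work` and shifts the optimum; the reheat cycle of the paper adds further terms, but the optimisation structure is the same.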
Stress and Thermal Stress Compensation in Quartz SAW Devices
1991-08-01
Laboratoire de Physique et Metrologie des Oscillateurs: E. Bigler, D. Hauden, S. Ballandras. Approved for public release; distribution unlimited.
2001-05-01
...audio-visual aids. Rapid correction methods for the pilot's performance capacity: psychosomatic self-management; rational psychotherapy; music therapy; central nervous system (CNS) electro-tranquilization; sauna; hydrotherapy; manual therapy; recreational therapy (active rest)...
Validating Human Performance Models of the Future Orion Crew Exploration Vehicle
NASA Technical Reports Server (NTRS)
Wong, Douglas T.; Walters, Brett; Fairey, Lisa
2010-01-01
NASA's Orion Crew Exploration Vehicle (CEV) will provide transportation for crew and cargo to and from destinations in support of the Constellation Architecture Design Reference Missions. Discrete Event Simulation (DES) is one of the design methods NASA employs to model crew performance for the CEV. During the early development of the CEV, NASA and its prime Orion contractor Lockheed Martin (LM) sought an effective low-cost method for developing and validating human performance DES models. This paper focuses on the method developed while creating a DES model for the CEV Rendezvous, Proximity Operations, and Docking (RPOD) task to the International Space Station. Our approach to validation was to attack the problem from several fronts. First, we began the development of the model early in the CEV design stage. Second, we adhered strictly to M&S development standards. Third, we involved the stakeholders, NASA astronauts, subject matter experts, and NASA's modeling and simulation development community throughout. Fourth, we applied standard and easy-to-conduct methods to ensure the model's accuracy. Lastly, we reviewed the data from an earlier human-in-the-loop RPOD simulation that had different objectives, which provided an additional means to estimate the model's confidence level. The results revealed that a majority of the DES model was a reasonable representation of the current CEV design.
Energy efficiency in membrane bioreactors.
Barillon, B; Martin Ruel, S; Langlais, C; Lazarova, V
2013-01-01
Energy consumption remains the key factor for the optimisation of the performance of membrane bioreactors (MBRs). This paper presents the results of the detailed energy audits of six full-scale MBRs operated by Suez Environnement in France, Spain and the USA based on on-site energy measurement and analysis of plant operation parameters and treatment performance. Specific energy consumption is compared for two different MBR configurations (flat sheet and hollow fibre membranes) and for plants with different design, loads and operation parameters. The aim of this project was to understand how the energy is consumed in MBR facilities and under which operating conditions, in order to finally provide guidelines and recommended practices for optimisation of MBR operation and design to reduce energy consumption and environmental impacts.
Prediction of the hardness profile of AISI 4340 steel heat-treated by laser
NASA Astrophysics Data System (ADS)
Maamri, Ilyes
Surface heat treatments are processes intended to give the core and the surface of mechanical parts different properties. They improve wear and fatigue resistance by hardening critical surface zones through short, localized thermal input. Among the processes distinguished by their surface power density, laser surface heat treatment offers fast, localized and precise thermal cycles while limiting the risk of unwanted distortion. The mechanical properties of the hardened zone obtained by this process depend on the physicochemical properties of the material being treated and on several process parameters. To exploit the capabilities of this process adequately, strategies must be developed to control and tune the parameters so as to produce the desired characteristics of the hardened surface accurately, without resorting to the classic long and costly trial-and-error process. The objective of the project is therefore to develop models for predicting the hardness profile in the laser heat treatment of AISI 4340 steel parts. To understand the behaviour of the process and evaluate the effects of the various parameters on treatment quality, a sensitivity study was conducted based on a structured experimental design combined with proven statistical analysis techniques. The results of this study identified the most relevant variables to use for modelling. Following this analysis, and in order to build a first model, two modelling techniques were considered: multiple regression and neural networks. Both techniques led to models of acceptable quality, with an accuracy of about 90%.
To improve the performance of the neural-network models, two new approaches based on a geometric characterization of the hardness profile were considered. Unlike the first models, which predict the hardness profile from the process parameters alone, the new models combine the same parameters with geometric attributes of the hardness profile to reflect treatment quality. The resulting models show that this strategy leads to very promising results.
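The first of the two modelling techniques mentioned, multiple regression, can be sketched with an ordinary least-squares fit. The two predictors and their coefficients below are synthetic stand-ins for the real process parameters, invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic process parameters (e.g. normalized laser power and scanning speed)
X = rng.uniform(0.0, 1.0, size=(200, 2))
true_coef = np.array([3.0, -1.5])   # invented ground-truth coefficients
y = X @ true_coef + 0.5             # hardness-like response (noise-free here)

# Fit y = X b + c by least squares on an augmented design matrix
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noise-free data the fit recovers the generating coefficients exactly; on real hardness measurements the residuals would quantify the roughly 90% accuracy the abstract reports.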
Dry Eye Syndrome After Proton Therapy of Ocular Melanomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@gmail.com; Maschi, Celia; Lanteri, Sara
Purpose: To investigate whether proton therapy (PT) performs safely in superotemporal melanomas, in terms of risk of dry-eye syndrome (DES). Methods and Materials: Tumor location, DES grade, and dose to ocular structures were analyzed in patients undergoing PT (2005-2015) with 52 Gy (prescribed dose, not accounting for biologic effectiveness correction of 1.1). Prognostic factors of DES and severe DES (sDES, grades 2-3) were determined with Cox proportional hazard models. Visual acuity deterioration and enucleation rates were compared by sDES and tumor locations. Results: Median follow-up was 44 months (interquartile range, 18-60 months). Of 853 patients (mean age, 64 years), 30.5% had temporal and 11.4% superotemporal tumors. Five-year incidence of DES and sDES was 23.0% (95% confidence interval [CI] 19.0%-27.7%) and 10.9% (95% CI 8.2%-14.4%), respectively. Multivariable analysis showed a higher risk for sDES in superotemporal (hazard ratio [HR] 5.82, 95% CI 2.72-12.45) and temporal tumors (HR 2.63, 95% CI 1.28-5.42), age ≥70 years (HR 1.90, 95% CI 1.09-3.32), distance to optic disk ≥5 mm (HR 2.71, 95% CI 1.52-4.84), ≥35% of retina receiving 12 Gy (HR 2.98, 95% CI 1.54-5.77), and eyelid rim irradiation (HR 2.68, 95% CI 1.49-4.80). The same risk factors were found for DES. Visual acuity deteriorated more in patients with sDES (0.86 ± 1.10 vs 0.64 ± 0.98 logMAR, P=.034) but not between superotemporal/temporal and other locations (P=.890). Enucleation rates were independent of sDES (P=.707) and tumor locations (P=.729). Conclusions: Severe DES was more frequent in superotemporal/temporal melanomas. Incidence of vision deterioration and enucleation was no higher in patients with superotemporal melanoma than in patients with tumors in other locations. Tumor location should not contraindicate PT.
de la Torre Hernández, José M; Rumoroso, José R; Ojeda, Soledad; Brugaletta, Salvatore; Cascón, José D; Ruisánchez, Cristina; Sánchez Gila, Joaquín; Roa, Jessica; Tizón, Helena; Gutiérrez, Hipólito; Larman, Mariano; García Camarero, Tamara; Pinar, Eduardo; Díaz, José F; Pan, Manuel; Morillas Bueno, Miren; Oyonarte, José M; Ruiz Guerrero, Luis; Ble, Mireia; Rubio Patón, Ramón; Arnold, Román; Echegaray, Kattalin; de la Morena, Gonzalo; Sabate, Manel
2018-05-01
Bioresorbable vascular scaffolds (BVS) have the potential to restore vasomotion but the clinical implications are unknown. We sought to evaluate angina and ischemia in the long term in patients treated with BVS and metallic drug-eluting stents (mDES). Multicenter study including patients with 24 ± 6 months of uneventful follow-up, in which stress echocardiography was performed and functional status was assessed by the Seattle Angina Questionnaire (SAQ). The primary endpoint was a positive result in stress echocardiography. The study included 102 patients treated with BVS and 106 with mDES. There were no differences in the patients' baseline characteristics. Recurrent angina was found in 18 patients (17.6%) in the BVS group vs 25 (23.5%) in the mDES group (P = .37), but SAQ results were significantly better in the BVS group (angina frequency 96.0 ± 8.0 vs 89.2 ± 29.7; P = .02). Stress echocardiography was positive in 11/92 (11.9%) of BVS patients vs 9/96 (9.4%) of mDES patients (P = .71), and angina was induced in 2/102 (1.9%) vs 7/106 (6.6%) (P = .18), respectively, but exercise performance was better in the BVS group even in those with positive tests (exercise duration 9.0 ± 2.0 minutes vs 7.7 ± 1.8 minutes; P = .02). A propensity score matching analysis yielded similar results. The primary endpoint was similar in both groups. In addition, recurrent angina was similar in patients with BVS and mDES. The better functional status, assessed by means of SAQ and exercise performance, detected in patients receiving BVS should be confirmed in further studies. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Moreno, Raul; Martin-Reyes, Roberto; Jimenez-Valero, Santiago; Sanchez-Recalde, Angel; Galeote, Guillermo; Calvo, Luis; Plaza, Ignacio; Lopez-Sendon, Jose-Luis
2011-04-01
The use of drug-eluting stents (DES) in unfavourable patients has been associated with higher rates of clinical complications and stent thrombosis, and because of that, concerns have been raised about the use of DES in high-risk settings. This study sought to demonstrate that the clinical benefit of DES increases as the risk profile of the patients increases. A meta-regression analysis from 31 randomized trials that compared DES and bare-metal stents, including overall 12,035 patients, was performed. The relationship between the clinical benefit of using DES (number of patients to treat [NNT] to prevent one episode of target lesion revascularization [TLR]) and the risk profile of the population (rate of TLR in patients allocated to bare-metal stents) in each trial was evaluated. The clinical benefit of DES increased as the risk profile of each study population increased: NNT for TLR = 31.1 - 1.2 (TLR for bare-metal stents); p<0.001. The use of DES was safe regardless of the risk profile of each study population, since the effect of DES on mortality, myocardial infarction, and stent thrombosis was not adversely affected by the risk profile of each study population (95% confidence interval for the β value 0.09 to 0.11, -0.12 to 0.19, and -0.03 to -0.15 for mortality, myocardial infarction, and stent thrombosis, respectively). The clinical benefit of DES increases as the risk profile of the patients increases, without affecting safety. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
High sensitivity pH sensing on the BEOL of industrial FDSOI transistors
NASA Astrophysics Data System (ADS)
Rahhal, Lama; Ayele, Getenet Tesega; Monfray, Stéphane; Cloarec, Jean-Pierre; Fornacciari, Benjamin; Pardoux, Eric; Chevalier, Celine; Ecoffey, Serge; Drouin, Dominique; Morin, Pierre; Garnier, Philippe; Boeuf, Frederic; Souifi, Abdelkader
2017-08-01
In this work we demonstrate the use of Fully Depleted Silicon On Insulator (FDSOI) transistors as pH sensors with a 23 nm silicon nitride sensing layer built in the Back-End-Of-Line (BEOL). The back end process to deposit the sensing layer and fabricate the electrical structures needed for testing is detailed. A series of tests employing different pH buffer solutions has been performed on transistors of different geometries, controlled via the back gate. The main findings show a shift of the drain current (ID) as a function of the back gate voltage (VB) when different pH buffer solutions are probed in the range of pH 6 to pH 8. This shift is observed at VB voltages swept from 0 V to 3 V, demonstrating the sensor operation at low voltage. A high sensitivity of up to 250 mV/pH unit (more than 4-fold larger than Nernstian response) is observed on FDSOI MOS transistors of 0.06 μm gate length and 0.08 μm gate width.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power by the MPFD with few initial experimental samples was demonstrated in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
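The core idea of such a force-distribution model can be sketched as integrating an assumed tangential force density over the contact zone and multiplying by the sawing speed. The triangular distribution and all constants below are invented for illustration, not taken from the paper:

```python
import numpy as np

def sawing_power(force_density, contact_length, sawing_speed, n=2001):
    """Sawing power estimated as (total tangential force) x (sawing speed).
    force_density gives tangential force per unit contact length (N/m)."""
    x = np.linspace(0.0, contact_length, n)
    fx = force_density(x)
    # Trapezoidal rule over the contact zone
    total_force = np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x))
    return total_force * sawing_speed  # watts

# Hypothetical triangular force distribution over a 50 mm contact arc
peak = 400.0  # N/m, invented
tri = lambda x: peak * (1.0 - np.abs(2.0 * x / 0.05 - 1.0))
P = sawing_power(tri, contact_length=0.05, sawing_speed=40.0)
```

For this triangle the total force is 0.5 x 0.05 m x 400 N/m = 10 N, so the predicted power is 400 W at 40 m/s.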
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, taking the effect of modulation into account when the nodes are not able to communicate using a baseband communication technique. Cluster size optimisation is an important technique for improving the performance of WSNs: it provides improvements in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation scheme (BPSK, QPSK, 16-QAM, 64-QAM, etc.), so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placing the base station at the centre of the field allows only a few modulation schemes to operate in an energy-efficient manner, whereas placing it at the corner of the sensing field allows a large number of modulation schemes to do so.
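The trade-off driving cluster size optimisation can be sketched with a toy energy model: more clusters raise the total head-to-base-station cost, but shrink each member's transmit distance. The constants below are invented and the model omits the modulation-dependent terms the abstract discusses:

```python
# Toy per-round energy: a*k grows with cluster count k (head-to-base cost),
# b/k shrinks with k (member-to-head cost). Constants a, b are invented.
def round_energy(k, a=2.0, b=32.0):
    return a * k + b / k

# Grid search over candidate cluster counts for the minimum-energy choice
best_k = min(range(1, 21), key=round_energy)
```

For this model the continuous optimum is k = sqrt(b/a) = 4, and the integer grid search lands on the same value; in the paper's setting, the analytical expression plays the role of this closed-form optimum with modulation-specific constants.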
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition in industry has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were used to study the applicability of the proposed model in practice. It was observed that the model generates information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for the Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function, and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile is similar to measured rail profiles on the Stockholm underground network and, although initial grinding is required, maintenance of the profile will probably not require further grinding.
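Roulette-wheel selection, the fitness-proportionate step mentioned above, can be sketched as follows; the population and fitness values are illustrative, not the paper's penalty-index data:

```python
import random

def roulette_select(population, fitness, rng):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)   # spin the wheel
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f                  # walk the cumulative fitness sectors
        if r <= acc:
            return individual
    return population[-1]         # guard against float rounding at the edge

pop = ["A", "B", "C"]
fit = [1.0, 1.0, 8.0]             # "C" occupies 80% of the wheel
picks = [roulette_select(pop, fit, random.Random(i)) for i in range(1000)]
```

Over many spins, each candidate is chosen roughly in proportion to its share of total fitness, which is why an inverted penalty index (higher = better) is the natural fitness function here.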
On the optimisation of the use of 3He in radiation portal monitors
NASA Astrophysics Data System (ADS)
Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet
2013-02-01
Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has made it difficult for manufacturers to provide enough detectors to satisfy market demand. In this paper we analyse the design of typical commercial RPMs and try to optimise the detector parameters, either to maximise the efficiency with the same amount of 3He, or to minimise the amount of gas needed to reach the same detection performance by reducing the volume or gas pressure in an optimised design.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
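Latin hypercube sampling, one of the initial-population generators used in the hybrids above, can be sketched in a few lines. This is a generic LHS, not the translational propagation (TPLHD) variant:

```python
import random

def latin_hypercube(n, dims, rng):
    """n samples in [0,1)^dims: each dimension is split into n equal strata
    and each stratum is used exactly once, in shuffled order."""
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                      # random stratum assignment
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n  # jitter within the stratum
    return samples

pts = latin_hypercube(10, 2, random.Random(0))
```

Unlike plain Monte Carlo, every one-dimensional stratum is hit exactly once, which spreads the initial ant population more evenly over the design space.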
NASA Astrophysics Data System (ADS)
Dal Bianco, N.; Lot, R.; Matthys, K.
2018-01-01
This work concerns the design of an electric motorcycle for the annual Isle of Man TT Zero Challenge. Optimal control theory was used to perform lap time simulation and design optimisation. A bespoke model was developed, featuring 3D road topology, vehicle dynamics and an electric power train composed of a lithium battery pack, brushed DC motors and a motor controller. The model runs simulations over the entire length of the Snaefell Mountain Course. The work is validated using experimental data from the BX chassis of the Brunel Racing team, which ran during the 2009 to 2015 TT Zero races. Optimal control is used to improve drive train and power train configurations. Findings demonstrate computational efficiency, good lap time prediction and design optimisation potential, achieving a two-minute reduction of the reference lap time through changes in final drive gear ratio, battery pack size and motor configuration.
A target recognition method for maritime surveillance radars based on hybrid ensemble selection
NASA Astrophysics Data System (ADS)
Fan, Xueman; Hu, Shengliang; He, Jingbo
2017-11-01
In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
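The Pareto-front idea underlying the optimisation phase can be sketched with a naive non-dominated filter (weak domination, both objectives minimised). The candidate (error, negated diversity) pairs are invented:

```python
def pareto_front(points):
    """Return the non-dominated points when minimising both objectives,
    e.g. classification error vs negated diversity. Assumes distinct points."""
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (error, -diversity) pairs for candidate ensembles
cands = [(0.10, -0.5), (0.08, -0.3), (0.12, -0.6), (0.09, -0.55)]
front = pareto_front(cands)
```

The point (0.10, -0.5) is dominated by (0.09, -0.55), which is both more accurate and more diverse; the remaining three candidates represent genuine trade-offs, which is the set NSGA-II hands to the dynamic selection phase. Production NSGA-II uses fast non-dominated sorting, which is O(MN^2) rather than this O(MN^2)-per-front naive filter.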
De Greef, J; Villani, K; Goethals, J; Van Belle, H; Van Caneghem, J; Vandecasteele, C
2013-11-01
Due to ongoing developments in the EU waste policy, Waste-to-Energy (WtE) plants are to be optimized beyond current acceptance levels. In this paper, a non-exhaustive overview of advanced technical improvements is presented and illustrated with facts and figures from state-of-the-art combustion plants for municipal solid waste (MSW). Some of the data included originate from regular WtE plant operation - before and after optimisation - as well as from defined plant-scale research. Aspects of energy efficiency and (re-)use of chemicals, resources and materials are discussed and support, in light of best available techniques (BAT), the idea that WtE plant performance can still be improved significantly, without direct need for expensive techniques, tools or re-design. In the first instance, diagnostic skills and a thorough understanding of processes and operations allow for reclaiming the silent optimisation potential. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Benadja, Mounir
This work presents a power generation system for an offshore wind farm and a transmission system using VSC-HVDC stations connected to the main onshore AC grid. Three configurations were studied, modelled and validated by simulation. In each configuration, contributions improving the technical and economic aspects are described below. The first contribution concerns a new MPPT (Maximum Power Point Tracking) algorithm used to extract the maximum power available from the turbines of offshore wind farms. This MPPT extraction technique improves the energy efficiency of the renewable-energy conversion chain, in particular wind energy at small and large scale (offshore wind farm), which is a challenge for manufacturers who must develop MPPT devices that are simple, inexpensive, robust, reliable and capable of achieving maximum energy yield. The second contribution concerns reducing the size, the cost and the impact of electrical faults (AC and DC) in the system built to transmit the power of an offshore wind farm (OWF) to the main onshore AC grid via two 3L-NPC VSC-HVDC stations. The developed solution uses nonlinear observers based on the extended Kalman filter (EKF). This filter estimates the rotational speed and rotor position of each generator of the offshore wind farm, as well as the DC bus voltage of the offshore DC-AC inverter and of the two 3L-NPC VSC-HVDC stations (offshore and onshore). In addition, this development of the extended Kalman filter reduced the impact of AC and DC faults.
Two controls were used: one (an indirect control in the abc frame) with integrated EKF to control the offshore DC-AC converter, and the other (a d-q control) with integrated EKF to control the converters of the two AC-DC and DC-AC stations, taking the inputs of each station into account. Integrating the nonlinear observers (EKF) into the converter control solves the problems of measurement uncertainty, modelling uncertainty, and sensor malfunction or failure, as well as the impact of AC and DC faults on power quality in transmission systems. These estimates help make the overall system cheaper and more compact, and reduce the impact of AC and DC faults on the system. The third contribution concerns reducing the size, the cost and the impact of electrical faults (AC and DC) in the system built to transmit the power of an offshore wind farm (OWF) to the main onshore AC grid via two VSC-HVDC stations. The developed solution uses nonlinear observers based on the extended Kalman filter (EKF), which estimates the rotational speed and rotor position of each wind farm generator and the DC bus voltage of the offshore DC-AC inverter. The contribution bears mainly on the development of the two station controls: first, a modified nonlinear control for the first converter of the offshore VSC-HVDC station, ensuring transfer of the power generated by the wind farm to the onshore VSC-HVDC station; second, a modified nonlinear control integrating DC bus voltage regulation and model reference adaptive control (MRAC) to compensate overcurrents and overvoltages during AC and DC faults.
During an AC fault at the PCC (Point of Common Coupling) on the onshore grid side, the depth of the fault's impact on the amplitude of the main onshore AC grid currents, which earlier work (Erlich, Feltes and Shewarega, 2014) reduced to 60%, is reduced to 35% by the proposed MRAC control. When AC and DC faults occur, a reduction of the fault impact on the onshore AC grid current amplitude and on the response time was observed, and system stability was reinforced by the adaptive control based on the MRAC reference model. The fourth contribution concerns a new sliding-mode (SM) control applied to the VSC-HVDC station linking the offshore wind farm (OWF) to the main AC grid. This farm comprises ten wind turbines based on permanent-magnet synchronous generators (VSWT/PMSG) connected in parallel, each controlled by its own DC-DC converter. A performance comparison between SM control and nonlinear control with PI controllers under both conditions (with and without a DC fault) was analysed and shows the superiority of SM control. A reduced-scale prototype of the studied system was built and tested in the GREPCI laboratory using a dSPACE DS1104 board for experimental validation. The analysis and simulation of the studied systems were developed in the Matlab/Simulink/SimPowerSystems environment. The results obtained from the developed configurations are validated by simulation and experiment. The performance is very satisfactory in terms of dynamic response, steady-state response, system stability and power quality.
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of the ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on a dataset of 100 records considering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of the Pareto fronts generates a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. A single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
Ghamouss, Fouad; Ledru, Sophie; Ruillé, Nadine; Lantier, Françoise; Boujtita, Mohammed
2006-06-16
A screen-printed carbon electrode modified with both HRP and LOD (SPCE-HRP/LOD) has been developed for the determination of L-lactate concentration in real samples. The resulting SPCE-HRP/LOD was prepared in a one-step procedure, and was then optimised as an amperometric biosensor operating at 0 to -100 mV versus Ag/AgCl for L-lactate determination in flow injection mode. A significant improvement in the reproducibility (coefficient of variation of about 10%) of the preparation of the biosensors was obtained when graphite powder was modified with LOD in the presence of HRP previously oxidised by periodate ion (IO4-). Optimisation studies were performed by examining the effects of LOD loading, the periodation step and the binder content on the analytical performance of the SPCE-HRP/LOD. The sensitivity of the optimised SPCE-HRP/LOD to L-lactate was 0.84 nA L μmol⁻¹ over a detection range between 10 and 180 μM. The possibility of using the developed biosensor to determine L-lactate concentrations in various dairy products was also evaluated.
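The reported sensitivity is the slope of the amperometric calibration line (current vs concentration). A sketch with invented, noise-free calibration data constructed to have that slope:

```python
import numpy as np

# Invented calibration data: L-lactate concentration (umol/L) spanning the
# reported 10-180 uM detection range, with an ideal linear response
conc = np.array([10.0, 50.0, 90.0, 130.0, 180.0])
current = 0.84 * conc + 2.0   # nA; slope 0.84 nA L/umol, invented 2 nA offset

# Least-squares line: the fitted slope is the biosensor sensitivity
slope, intercept = np.polyfit(conc, current, 1)
```

With real flow-injection data the residual scatter around this line would set the detection limit and the reproducibility figure quoted above.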
Prediction of road traffic death rate using neural networks optimised by genetic algorithm.
Jafari, Seyed Ali; Jahandideh, Sepideh; Jahandideh, Mina; Asadabadi, Ebrahim Barzegari
2015-01-01
Road traffic injuries (RTIs) are recognised as a main cause of public health problems at global, regional and national levels. Prediction of the road traffic death rate will therefore be helpful in its management. On this basis, we used an artificial neural network model optimised through a genetic algorithm to predict mortality. In this study, a five-fold cross-validation procedure on a data set containing a total of 178 countries was used to verify the performance of the models. The best-fit model was selected according to the root mean square error (RMSE). The genetic algorithm, which had not previously been applied to mortality prediction to this extent, showed high performance. The lowest RMSE obtained was 0.0808. Such satisfactory results can be attributed to the use of the genetic algorithm as a powerful optimiser that selects the best input feature set to be fed into the neural networks. Seven factors were identified with high accuracy as having the strongest influence on the road traffic mortality rate. The results show that our model is very promising and may play a useful role in developing a better method for assessing the influence of road traffic mortality risk factors.
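The evaluation loop described, five-fold cross-validation scored by RMSE, can be sketched as follows. The fold split is a simple contiguous partition; the genetic algorithm and the network itself are omitted:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error, the model-selection criterion in the study."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

folds = kfold_indices(178, 5)        # 178 countries, five folds as in the study
err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # toy prediction error
```

In the full procedure, each fold serves once as the validation set while a GA-selected feature subset trains the network on the remaining four, and the mean RMSE across folds ranks candidate feature sets.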
NASA Astrophysics Data System (ADS)
Shaw-Stewart, James; Mattle, Thomas; Lippert, Thomas; Nagel, Matthias; Nüesch, Frank; Wokaun, Alexander
2013-08-01
Laser-induced forward transfer (LIFT) has already been used to fabricate various types of organic light-emitting diodes (OLEDs), and the process itself has been optimised and refined considerably since OLED pixels were first demonstrated. In particular, a dynamic release layer (DRL) of triazene polymer has been used, the environmental pressure has been reduced down to a medium vacuum, and the donor-receiver gap has been controlled with the use of spacers. Insight into the LIFT process's effect upon OLED pixel performance is presented here, obtained through optimisation of three-colour polyfluorene-based OLEDs. A marked dependence of the pixel morphology quality on the cathode metal is observed, and the laser transfer fluence dependence is also analysed. The pixel device performances are compared to conventionally fabricated devices, and cathode effects have been examined in detail. The silver cathode pixels show more heterogeneous pixel morphologies and correspondingly poorer efficiency characteristics. The aluminium cathode pixels have greater green electroluminescent emission than both the silver cathode pixels and the conventionally fabricated aluminium devices, and the green emission has a fluence dependence for silver cathode pixels.
Drug-eluting stents. Insights from invasive imaging technologies.
Honda, Yasuhiro
2009-08-01
Drug-eluting stents (DES) represent a revolutionary technology in their unique ability to provide both mechanical and biological solutions simultaneously to the target lesion. As a result of the biological effects of the pharmacological agents and the interaction of DES components with the arterial wall, considerable differences exist between DES and conventional bare metal stents (BMS), yet some of the old lessons learned in the BMS era remain clinically significant. In this context, contrast angiography provides very little information about in vivo device properties and their biomechanical effects on the arterial wall. In contrast, current catheter-based imaging tools, such as intravascular ultrasound, optical coherence tomography and intracoronary angioscopy, can offer unique insights into DES through direct assessment of the device and the treated vessel in the clinical setting. This article reviews these insights from current DES, with particular focus on performance and safety characteristics, and discusses an optimal deployment technique based upon findings obtained with these invasive imaging technologies.
Integrated health management and control of complex dynamical systems
NASA Astrophysics Data System (ADS)
Tolani, Devendra K.
2005-11-01
A comprehensive control and health management strategy for human-engineered complex dynamical systems is formulated for achieving high performance and reliability over a wide range of operation. Results from diverse research areas such as Probabilistic Robust Control (PRC), Damage Mitigating/Life Extending Control (DMC), Discrete Event Supervisory (DES) Control, Symbolic Time Series Analysis (STSA) and Health and Usage Monitoring System (HUMS) have been employed to achieve this goal. Continuous-domain control modules at the lower level are synthesized by PRC and DMC theories, whereas the upper-level supervision is based on DES control theory. In the PRC approach, by allowing different levels of risk under different flight conditions, the control system can achieve the desired trade-off between stability robustness and nominal performance. In the DMC approach, component damage is incorporated in the control law to reduce the damage rate for enhanced structural durability. The DES controller monitors the system performance and, based on the mission requirements (e.g., performance metrics and level of damage mitigation), switches among various lower-level controllers. The core idea is to design a framework where the DES controller at the upper level mimics human intelligence and makes appropriate decisions to satisfy mission requirements and enhance system performance and structural durability. Recently developed tools in STSA have been used for anomaly detection and failure prognosis. The DMC deals with the usage monitoring, or operational control, part of health management, whereas the issue of health monitoring is addressed by the anomaly detection tools. The proposed decision and control architecture has been validated on two test-beds, simulating the operations of rotorcraft dynamics and aircraft propulsion.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of this work is to optimise the image processing of a motion analyser in order to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique is based on matching the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was performed through tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. Comparing the optimised kernels with the current ELITE version showed a great improvement in marker recognition accuracy, while the noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of 22% was achieved, corresponding to a mean accuracy of 0.11 pixel compared with 0.14 pixel, measured over all grids. An improvement of 37%, from 0.22 pixel to 0.14 pixel, was observed on the grid with the largest markers.
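The cross-correlation technique at the core of the marker recognition can be sketched in a few lines: slide a zero-mean kernel shaped like the expected marker over the image and take the correlation peak as the marker position. The frame, marker size and noise level below are invented for illustration; ELITE's actual kernels and the genetic optimisation of their parameters are not reproduced here.

```python
import numpy as np

def disc_kernel(radius, size):
    """Expected marker shape: a filled disc, made zero-mean so a flat background scores 0."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    k = (xx**2 + yy**2 <= radius**2).astype(float)
    return k - k.mean()

def cross_correlate(image, kernel):
    """Dense 'valid' cross-correlation (no FFT; fine for small kernels)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Synthetic frame: noisy background plus one bright circular marker centred at (40, 25).
rng = np.random.default_rng(1)
frame = 0.05 * rng.normal(size=(64, 64))
yy, xx = np.mgrid[:64, :64]
frame += ((yy - 40)**2 + (xx - 25)**2 <= 3**2).astype(float)

score = cross_correlate(frame, disc_kernel(3, 9))
i, j = np.unravel_index(np.argmax(score), score.shape)
centre = (i + 4, j + 4)   # add the kernel half-size to recover image coordinates
```

The genetic algorithm of the abstract would then tune kernel parameters such as the radius to balance localisation accuracy against noise rejection.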
Wilson, Gregory J; McGregor, Jennifer; Conditt, Gerard; Shibuya, Masahiko; Sushkova, Natalia; Eppihimer, Michael J; Hawley, Steven P; Rouselle, Serge D; Huibregtse, Barbara A; Dawkins, Keith D; Granada, Juan F
2018-02-20
Drug-eluting stents (DES) have evolved to use bioresorbable polymers as a method of drug delivery. The impact of bioresorbable polymer on long-term neointimal formation, inflammation and healing has not been fully characterised. This study aimed to evaluate the biological effect of polymer resorption on vascular healing and inflammation. A comparative DES study was performed in the familial hypercholesterolaemic swine model of coronary stenosis. Permanent polymer DES (zotarolimus-eluting [ZES] or everolimus-eluting [EES]) were compared to bioresorbable polymer everolimus-eluting stents (BP-EES) and BMS. After implantation in 29 swine, stents were explanted and analysed up to 180 days. Area stenosis was reduced in all DES compared to BMS at 30 days. At 180 days, BP-EES had significantly lower area stenosis than EES or ZES. Severe inflammatory activity persisted in permanent polymer DES at 180 days compared to BP-EES or BMS. Qualitative para-strut inflammation areas (graded as none to severe) were elevated but similar in all groups at 30 days, peaked at 90 days in DES compared to BMS (p<0.05) and, at 180 days, were similar between BMS and BP-EES but significantly greater in permanent polymer DES. BP-EES thus resulted in a net long-term reduction in neointimal formation and inflammation compared to permanent polymer DES in an animal model. Long-term neointimal formation deserves further study in human clinical trials.
2013-12-01
...the performance of their troops; b) significant scientific and capability gaps exist concerning susceptibility, the... The aim of HFM/RTG-187 was to: a) compare and evaluate the technical resources currently used for a "management system"... of an integrated doctrine and training programmes for thermal-risk management in order to guarantee the maintenance of...
2009-08-01
...directional characteristics depend strongly on frequency. Measurements of the projector's transmit voltage response curves and of the ... diagrams. Development Knowledge and Information Management; FFT: fast Fourier transform; HF MMPP: high-frequency multi-mode pipe projector; kHz: kilohertz; MMPP...
NASA Astrophysics Data System (ADS)
Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2016-07-01
This paper proposes a novel hybrid optimisation algorithm that combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits: a Complementary Metal Oxide Semiconductor (CMOS) differential amplifier with current mirror load and a CMOS two-stage operational amplifier (op-amp). BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach capable of solving global optimisation problems. In this paper, the transistor sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve their performance. The simulation results demonstrate the superiority of BSA-DE in global convergence and fine-tuning ability, and show it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits. The simulation results obtained for both amplifier circuits prove the effectiveness of the proposed BSA-DE-based approach over DE, harmony search (HS), artificial bee colony (ABC) and particle swarm optimisation (PSO) in terms of convergence speed, design specifications and design parameters. The BSA-DE-based design technique yields the smallest MOS transistor area for each amplifier circuit, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, compared with those of other recently reported work.
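BSA-DE itself is not reproduced here, but its DE component is standard and easy to sketch. The following is a minimal DE/rand/1/bin optimiser applied to a toy objective standing in for a circuit-sizing cost; the bounds and objective are assumptions for illustration, not the paper's.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin: for each target vector, build a mutant from three
    distinct peers, apply binomial crossover, and keep the trial if it is better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct peers, none equal to the target index i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            fc = f(trial)
            if fc < cost[i]:                       # greedy one-to-one replacement
                pop[i], cost[i] = trial, fc
    best = np.argmin(cost)
    return pop[best], float(cost[best])

# Toy stand-in for a transistor-sizing objective: minimise a shifted sphere.
x, fx = differential_evolution(lambda v: np.sum((v - 1.2)**2), [(-5, 5)] * 4)
```

In the hybrid of the paper, BSA's historical-population mutation would be interleaved with this DE step to improve global exploration.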
Energy and wear optimisation of train longitudinal dynamics and of traction and braking systems
NASA Astrophysics Data System (ADS)
Conti, R.; Galardi, E.; Meli, E.; Nocciolini, D.; Pugi, L.; Rindi, A.
2015-05-01
Traction and braking systems deeply affect longitudinal train dynamics, especially when an extensive blending phase among different pneumatic, electric and magnetic devices is required. The energy and wear optimisation of longitudinal vehicle dynamics has a crucial economic impact and involves several engineering problems such as wear of braking friction components, energy efficiency, thermal load on components, and the level of safety under degraded adhesion conditions (often constrained by the regulations in force on signalling and other safety-related subsystems). In fact, the application of energy storage systems can lead to an efficiency improvement of at least 10%, while the wear reduction due to distributed traction systems and optimised traction devices can be quantified at about 50%. In this work, an innovative integrated procedure is proposed by the authors to optimise longitudinal train dynamics and traction and braking manoeuvres in terms of both energy and wear. The new approach has been applied to existing test cases and validated with experimental data provided by Breda; for some components and their homologation process, the experimental results derive from cooperation with industrial partners such as Trenitalia and Italcertifer. In particular, the simulation results refer to tests performed on a high-speed train (Ansaldo Breda EMU V250) and on a tram (Ansaldo Breda Sirio Tram). The proposed approach is based on a modular simulation platform in which the sub-models corresponding to different subsystems can easily be customised, depending on the considered application, the availability of technical data and the homologation process of the different components.
Optimisation robuste des aeronefs et des groupes turboreacteurs
NASA Astrophysics Data System (ADS)
Couturier, Philippe
Future aircraft and powerplant designs will need to meet, and perhaps anticipate, increasingly demanding operational constraints. This progressive evolution in design requirements is already at work and arises from the combined impacts of increasingly stringent environmental norms with regard to noise and atmospheric emissions, a depletion of fossil fuel reserves which is expected to drive fuel costs upwards, and a steady increase in air traffic. In order to adapt to these market shifts, aircraft and powerplant companies will need to explore the potential range of benefits and risks associated with a wide spectrum of new designs and technologies. At the same time, it will be necessary to ensure that the resulting end products provide cost-effective solutions when operated in the economic environment foreseen for the next generation of aircraft. The objective of this study is to develop a methodology which enables the selection of optimal robust designs at the preliminary design stage and quantifies the compromise between a robust design and a potential gain in performance. The developed methodology is used in the design of a seventy-passenger aircraft in order to determine the effects of uncertainty. The methodology seeks to optimize the design while attenuating its sensitivity to uncertainties, the goal being to reduce the likelihood of costly concept reformulations in the later stages of the product development process. A design platform was developed to enable the study of aircraft and engine performance at a conceptual level. It comprises four modules: the aircraft design and performance software Pacelab APD, a metamodel constructed with the software GasTurb to calculate engine performance, a module to predict the noise level, and a module to determine the operating costs. The last two modules were constructed using data from the literature.
The effects of two types of uncertainty present at the preliminary design stage were analyzed: uncertainties related to the market forecast for when the next generation of aircraft will be in service, and uncertainties in the level of fidelity of the models used. Based on predictions of future oil costs, the research found that an aircraft designed for a cruising speed similar to today's jet aircraft will minimize the mean of the predicted operating cost by having a configuration that minimizes fuel consumption. Conversely, it was determined that fuel cost does not affect the design optimized to minimize the mean of the predicted operating costs when the cruise Mach number is variable. Furthermore, the use of Pareto fronts to quantify the compromise between a robust design and a potential gain in performance showed that the design variables have little influence on the sensitivity of the operating cost to model uncertainties. It was also determined that neglecting uncertainties during the design process can lead to the selection of a configuration with a high risk of not satisfying the constraints.
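The Pareto-front idea used in this robust-design study (trading off expected operating cost against its sensitivity to uncertainty) amounts to filtering candidate designs down to the non-dominated set. A minimal sketch follows, with invented (mean cost, cost variance) pairs standing in for evaluated designs; the numbers are not from the thesis.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset, assuming minimisation in every objective.
    A point p is dominated if some other point q is no worse in all objectives
    and strictly better in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical designs evaluated as (mean operating cost, cost variance).
designs = [(10.0, 4.0), (9.0, 6.0), (11.0, 3.0), (9.5, 5.0), (12.0, 5.5)]
front = pareto_front(designs)
```

Here (12.0, 5.5) is dominated by (10.0, 4.0) and is filtered out; the remaining points form the trade-off curve a designer would choose from.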
Minimisation des inductances propres des condensateurs à film métallisé
NASA Astrophysics Data System (ADS)
Joubert, Ch.; Rojat, G.; Béroual, A.
1995-07-01
In this article, we examine the different factors responsible for the equivalent series inductance in metallized capacitors and we propose capacitor structures that reduce this inductance. After recalling the structure of metallized capacitors, we compare, through experimental measurements, the inductance due to the winding with that added by the connections; the latter can become dominant. In order to explain the experimentally observed evolution of the winding impedance versus frequency, we describe an analytical model which gives the current density in the winding and its impedance. This model enables us to determine the self-resonant frequency for different types of capacitors, from which we can infer the influence of the capacitor height and of the internal and external radii on performance. It appears that, to reduce the equivalent series inductance, it is better to use flat windings and annular windings.
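The self-resonant frequency the model determines follows from the usual series R-L-C equivalent circuit, in which the winding and connection inductances resonate with the capacitance. A minimal numeric sketch, with illustrative component values rather than measured ones from the article:

```python
import math

def self_resonant_frequency(L, C):
    """Series-resonance frequency of the equivalent R-L-C model: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def series_rlc_impedance(f, R, L, C):
    """Complex impedance Z = R + j*(2*pi*f*L - 1/(2*pi*f*C)); |Z| is minimal at f0."""
    w = 2.0 * math.pi * f
    return complex(R, w * L - 1.0 / (w * C))

# Illustrative (assumed, not measured) values: a 10 uF metallized-film
# capacitor with 20 nH equivalent series inductance.
f0 = self_resonant_frequency(20e-9, 10e-6)   # about 356 kHz
```

Above f0 the reactance is inductive, which is why reducing the equivalent series inductance (flat or annular windings, short connections) pushes the usable frequency range upwards.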
Analysis and optimisation of the convergence behaviour of the single channel digital tanlock loop
NASA Astrophysics Data System (ADS)
Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Araji, Saleh; Al-Qutayri, Mahmoud
2013-09-01
The mathematical analysis of the convergence behaviour of the first-order single channel digital tanlock loop (SC-DTL) is presented. This article also describes a novel technique that allows controlling the convergence speed of the loop, i.e. the time taken by the phase-error to reach its steady-state value, by using a specialised controller unit. The controller is used to adjust the convergence speed so as to selectively optimise a given performance parameter of the loop. For instance, the controller may be used to speed up the convergence in order to increase the lock range and improve the acquisition speed. However, since increasing the lock range can degrade the noise immunity of the system, in a noisy environment the controller can slow down the convergence speed until locking is achieved. Once the system is in lock, the convergence speed can be increased to improve the acquisition speed. The performance of the SC-DTL system was assessed against similar arctan-based loops and the results demonstrate the success of the controller in optimising the performance of the SC-DTL loop. The results of the system testing using MATLAB/Simulink simulation are presented. A prototype of the proposed system was implemented using a field programmable gate array module and the practical results are in good agreement with those obtained by simulation.
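The convergence behaviour being analysed can be illustrated with a stripped-down first-order loop: an arctan phase detector measures the wrapped phase error, and the loop gain K sets how fast that error decays, mirroring the speed versus noise-immunity trade-off discussed above. This is a simplified sketch under assumed parameters, not the SC-DTL architecture itself.

```python
import math

def tanlock_converge(phase_offset, K, steps=200):
    """Simplified first-order loop: the arctan phase detector measures the
    residual phase error and the loop integrates a fraction K of it each step."""
    est = 0.0
    history = []
    for _ in range(steps):
        # atan2 of the quadrature components wraps the error into (-pi, pi].
        err = math.atan2(math.sin(phase_offset - est),
                         math.cos(phase_offset - est))
        est += K * err
        history.append(abs(err))
    return est, history

# A larger loop gain K converges faster (at the cost of noise immunity),
# which is the knob the proposed controller unit adjusts.
est_fast, h_fast = tanlock_converge(1.0, K=0.5)
est_slow, h_slow = tanlock_converge(1.0, K=0.1)
```

For a constant offset the error decays geometrically by (1 - K) per step, so the controller's strategy of slowing convergence in noise and speeding it up once locked maps directly onto changing K.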
End-to-end System Performance Simulation: A Data-Centric Approach
NASA Astrophysics Data System (ADS)
Guillaume, Arnaud; Laffitte de Petit, Jean-Luc; Auberger, Xavier
2013-08-01
In the early days of the space industry, the feasibility of Earth observation missions was directly driven by what could be achieved by the satellite; it was clear to everyone that the ground segment would be able to deal with the small amount of data sent by the payload. Over the years, the amounts of data processed by spacecraft have increased drastically, placing more and more constraints on ground segment performance, in particular on timeliness. Nowadays, many space systems require high data throughputs and short response times, with information coming from multiple sources and involving complex algorithms. It has become necessary to perform thorough end-to-end analyses of the full system in order to optimise its cost and efficiency, and sometimes even to assess the feasibility of the mission. This paper presents a novel framework developed by Astrium Satellites to meet these needs of timeliness evaluation and optimisation. This framework, named ETOS (for “End-to-end Timeliness Optimisation of Space systems”), provides a modelling process with associated tools, models and GUIs. These are integrated thanks to a common data model and suitable adapters, with the aim of building suitable simulators of the full end-to-end chain of a space system. A major challenge for such an environment is to integrate heterogeneous tools (each one well adapted to part of the chain) into a relevant timeliness simulation.
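The kind of end-to-end timeliness estimate such a simulator produces can be sketched with a toy model: data items flowing through a chain of FIFO single-server stages, each stage's queueing delay propagating downstream. This is a minimal illustration, not the ETOS framework; the stage count and service times are invented.

```python
def end_to_end_latency(arrivals, stage_service_times):
    """Each stage is a FIFO single-server queue; items flow through all stages
    in order. Returns each item's end-to-end latency (completion - arrival)."""
    free = [0.0] * len(stage_service_times)   # time at which each stage becomes idle
    latencies = []
    for arr in arrivals:
        t = arr
        for s, svc in enumerate(stage_service_times):
            start = max(t, free[s])           # wait if the stage is still busy
            free[s] = start + svc
            t = free[s]
        latencies.append(t - arr)
    return latencies

# Three data takes arriving 1 s apart, two processing stages
# (e.g. downlink ingestion, then image processing) of 2 s and 3 s.
lat = end_to_end_latency([0.0, 1.0, 2.0], [2.0, 3.0])
```

Even in this tiny example the 3 s stage is the bottleneck: latency grows from 5 s to 9 s across the three items, exactly the kind of timeliness behaviour an end-to-end simulation is meant to expose.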
Kerjean, A; Poirot, C; Epelboin, S; Jouannet, P
1999-06-01
Genital tract abnormalities and adverse pregnancy outcome are well known in women exposed in utero to diethylstilboestrol (DES). Data about adverse reproductive performance in women exposed to DES have been published, including controversial reports of menstrual dysfunction, poor responses after ovarian stimulation, oocyte maturation and fertilization abnormalities. We compared oocyte quality, in-vitro fertilization results and embryo quality for women exposed in utero to DES with a control group. Between 1989 and 1996, 56 DES-exposed women who had 125 in-vitro fertilization (IVF) attempts were retrospectively compared to a control group of 45 women with tubal disease, who underwent 73 IVF attempts. Couples suffering from male infertility were excluded. The parameters compared were oocyte quality (maturation abnormalities, immature oocyte, mature oocyte), fertilization and cleavage rate (per treated and metaphase II oocytes), and embryo quality (number and grade). We found no significant difference in oocyte maturational status, fertilization rates, cleavage rates, embryo quality and development between DES-exposed subjects and control subjects. These results suggest that in-utero exposure to DES has no significant influence on oocyte quality and fertilization ability as judged during IVF attempts.
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude
Composite materials are increasingly used in aeronautics. Their excellent mechanical properties and low weight give them a clear advantage over metallic materials. Because they are subjected to various loading and environmental conditions, they are susceptible to several types of damage that can compromise their integrity, so reliable inspection methods are needed to assess it. Nevertheless, few non-destructive, embedded and efficient approaches are currently in use. This research work studies the effect of the composition of composite materials on detection and characterisation by guided waves. The objective of the project is to develop an embedded mechanical characterisation approach to improve the performance of a piezoelectric-array imaging approach on composite and metallic structures. The contribution of this project is to propose an embedded, non-destructive ultrasonic mechanical characterisation approach that does not require measurements on a multitude of samples. This thesis by articles is divided into four parts, with parts two to four presenting the published and submitted articles. The first part reviews the state of knowledge required for this master's project; the main topics covered are composite materials, wave propagation, guided-wave modelling, characterisation by guided waves, and embedded structural health monitoring. The second part presents a study of the effect of mechanical properties on the performance of the Excitelet imaging algorithm, carried out on an isotropic structure; the results showed that the algorithm is sensitive to the accuracy of the mechanical properties used in the model.
This sensitivity was then exploited to develop an embedded method for estimating the mechanical properties of a structure. The third part is a more rigorous study of the performance of the embedded mechanical characterisation method; its precision, repeatability and robustness are validated using an FEM simulator. The properties estimated with the characterisation approach are within 1% of the properties used in the model, which rivals the uncertainty of ASTM methods. The experimental analysis proved precise and repeatable at frequencies below 200 kHz, allowing the mechanical properties to be estimated within 1% of the supplier's values. The fourth part demonstrated the ability of the characterisation approach to identify the mechanical properties of an orthotropic composite plate. The experimentally estimated values fall within the uncertainty bars of the properties estimated with ASTM tests. Finally, an FEM simulation demonstrated the accuracy of the approach, with mechanical properties within 4% of those of the simulated model.
NASA Astrophysics Data System (ADS)
Hamieh, Tayssir
2005-05-01
Born simultaneously in Mulhouse and Beirut in 1996, within the framework of a Franco-Lebanese collaboration and on the personal initiative of Mr Tayssir Hamieh, the Franco-Lebanese Colloquium on Materials Science (CSM), part of the close relations between France and Lebanon, very quickly became an important meeting point for high-level scientists, not only from the Mediterranean rim but also from European, American and Arab countries. The fourth edition, CSM4, was a real success thanks to the participation of established researchers in all fields of materials science, coming from many countries including France, Algeria, Lebanon, Syria, Morocco, Tunisia, Italy, Spain, Portugal, the United Kingdom, the United States, Russia, Germany, Japan and India, who presented more than 350 oral and poster communications covering almost all disciplines of materials systems. The choice of the colloquium's themes was dictated by the capital importance of this discipline in our modern civilisation. Indeed, the materials used for the artisanal or industrial manufacture of objects, products and systems, as well as for constructions and equipment, have always defined the level of our technical civilisation. Achieving the common objectives of our rapidly developing, indeed rapidly changing, world depends largely on the development of new materials and of new transformation and assembly processes offering improved performance and quality. The colloquium remarkably illustrated the excellent collaboration between Lebanese and French researchers; the partnership is exemplary in the quality of the laboratories involved and the scientific level of the results.
We hope that this Franco-Lebanese colloquium on materials science will continue its success and work towards fruitful collaboration between the researchers of the two countries and other researchers from Arab and French-speaking countries. Colloquium Coordinator and Editor, Tayssir Hamieh
NASA Astrophysics Data System (ADS)
Chaud, X.; Gautier-Picard, P.; Beaugnon, E.; Porcar, L.; Bourgault, D.; Tournier, R.; Erraud, A.; Tixador, P.
1998-03-01
Industrial applications of the bulk superconducting YBa_2Cu_3O_7 material require controlling the growth of large oriented monodomains in samples several centimetres in size. The laboratory EPM-Matformag has undertaken to produce such materials by three different methods (zone melting, solidification controlled by a magnetic field, crystal growth from a seed). The results obtained show that these methods make it possible to elaborate a high-performance material at the centimetre scale and to produce it in series. The availability of such materials allows physical properties to be measured at a large scale and prototypes for cryo-electrotechnical applications to be tested (magnetic bearing, flywheel, coupling device, current lead, current limiter...).
Rheingold, P D
1976-01-01
Focus is on the diethylstilbestrol (DES) litigation which has resulted from the 1971 discovery that this synthetic estrogen can cause cancer in the daughters of women who used the drug during pregnancy in an effort to prevent threatened abortion. Possibly 100 suits are pending at this time in which DES daughters claim injuries. In most of these, vaginal or cervical cancer has appeared, with or without a hysterectomy having been performed; several women have died from cancer. The fact that the use of DES occurred many years ago is the legal hurdle most troublesome to lawyers. The average woman coming to a lawyer's office today has a mother who used some form of DES, perhaps in 1955. Few drugstores have records today of the prescriptions which they filled 20 years ago. It has been estimated that over the 1950-1970 period more than 200 different companies manufactured or "tabletized" under their own name DES plus a variety of similar synthetic estrogens promoted for the prevention of threatened abortion. A further hurdle caused by the passage of time is that even the records of the physicians are frequently lost. A final problem created by the age of the cases is the statute of limitations. If the actual manufacturer of the DES cannot be identified, this is generally the end of the lawyer's interest in the case. The chance of the plaintiff winning may be increased if the action against all the manufacturers is a class action. Most of the pending DES suits are against the manufacturer and not against the doctor. Thus far no DES case has been tried to completion; several have been settled by the manufacturers on the eve of trial, generally for less than the full sum that a cancer victim would expect to receive.
Optimisation and characterisation of tungsten thick coatings on copper based alloy substrates
NASA Astrophysics Data System (ADS)
Riccardi, B.; Montanari, R.; Casadei, M.; Costanza, G.; Filacchioni, G.; Moriani, A.
2006-06-01
Tungsten is a promising armour material for plasma-facing components of nuclear fusion reactors because of its low sputter rate and favourable thermo-mechanical properties. Among the techniques able to produce W armours, plasma spraying looks particularly attractive owing to its simplicity and low cost. The present work concerns the optimisation of spraying parameters aimed at producing 4-5 mm thick W coatings on copper-chromium-zirconium (Cu,Cr,Zr) alloy substrates. Characterisation of the coatings was performed in order to assess microstructure, impurity content, density, tensile strength, adhesion strength, thermal conductivity and thermal expansion coefficient. The work has demonstrated the feasibility of thick W coatings on flat and curved geometries; these coatings appear to be a reliable armour for medium heat flux plasma-facing components.
Marquis-Gravel, Guillaume; Matteau, Alexis; Potter, Brian J; Gobeil, François; Noiseux, Nicolas; Stevens, Louis-Mathieu; Mansour, Samer
2017-01-01
Background: The place of drug-eluting balloons (DEB) in the treatment of in-stent restenosis (ISR) is not well defined, particularly in a population of all-comers with acute coronary syndromes (ACS). Objective: To compare the clinical outcomes of DEB with second-generation drug-eluting stents (DES) for the treatment of ISR in a real-world population with a high proportion of ACS. Methods: A retrospective analysis of consecutive patients with ISR treated with a DEB compared to patients treated with a second-generation DES was performed. The primary endpoint was a composite of major adverse cardiovascular events (MACE: all-cause death, non-fatal myocardial infarction, and target lesion revascularization). Comparisons were performed using Cox proportional hazards multivariate adjustment and Kaplan-Meier analysis with the log-rank test. Results: The cohort included 91 patients treated with a DEB and 89 patients treated with a DES (74% ACS). Median follow-up was 26 months. MACE occurred in 33 patients (36%) in the DEB group, compared to 17 patients (19%) in the DES group (p log-rank = 0.02). After multivariate adjustment, there was no significant difference between the groups (HR for DEB = 1.45 [95%CI: 0.75-2.83]; p = 0.27). Mortality rates at 1 year were 11% with DEB and 3% with DES (p = 0.04; adjusted HR = 2.85 [95%CI: 0.98-8.32]; p = 0.06). Conclusion: In a population with a high proportion of ACS, a non-significant numerical signal towards increased rates of MACE with DEB compared to second-generation DES for the treatment of ISR was observed, mainly driven by a higher mortality rate. An adequately powered randomized controlled trial is necessary to confirm these findings. PMID: 28977052
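The Kaplan-Meier analysis used for the MACE comparison can be sketched from first principles: at each distinct event time, the survival estimate is multiplied by the fraction of at-risk subjects who do not have the event, while censored subjects simply leave the risk set. The follow-up data below are invented for illustration, not the study's.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each distinct event time t,
    S(t) *= (1 - d_t / n_t), where d_t is the number of events at t and
    n_t the number of subjects still at risk just before t."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, curve = 1.0, []
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n = 0
        while i < len(order) and times[order[i]] == t:   # gather ties at time t
            n += 1
            d += events[order[i]]
            i += 1
        if d:                                            # censoring alone does not drop S
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        at_risk -= n
    return curve

# Toy follow-up data: time in months, event = 1 (MACE) or 0 (censored).
curve = kaplan_meier([6, 12, 12, 18, 24, 30], [1, 1, 0, 0, 1, 0])
```

The log-rank test of the abstract then compares two such curves by pooling the expected event counts at each event time.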
NASA Astrophysics Data System (ADS)
Donnadieu, P.; Dénoyer, F.
1996-11-01
A comparative X-ray and electron diffraction study has been performed on an Al-Li-Cu icosahedral quasicrystal in order to investigate the diffuse scattering rings revealed by a previous work. Electron diffraction confirms the existence of the rings but shows that they have a fine structure. The diffuse aspect on the X-ray diffraction patterns is thus due to an averaging effect. Recent simulations based on the model of canonical cells related to the icosahedral packing give diffraction patterns in agreement with this fine-structure effect.
1992-05-01
…on the basis of gas generator speed implies both a reduction in centrifugal stress and turbine inlet temperature. Calculations yield the values of all… Steady and Transient Performance Calculation Method for Prediction, Analysis and Identification, by J.-P. Duponchel, J. Loisy and R. Carrillo. Component… thrust changes without over-temperature or flame-out. Comprehensive mathematical models of the complete power plant (intake, gas generator, exhaust) plus…
1990-06-01
…data reduction software, prior to converting all remaining test pressures to engineering units… which requires internal compensation; the residual effect is… Data Reduction: Conversion of Millivolts to Engineering Units. Carrying out numerical integrations to obtain area- and mass-weighted averages for various… Performance Assessment of Aircraft Turbine Engines and Components (Les Méthodes Recommandées pour la Mesure de la Pression et de la Température de la…
NASA Astrophysics Data System (ADS)
Ferretti, S.; Amadori, K.; Boccalatte, A.; Alessandrini, M.; Freddi, A.; Persiani, F.; Poli, G.
2002-01-01
The UNIBO team composed of students and professors of the University of Bologna along with technicians and engineers from Alenia Space Division and Siad Italargon Division, took part in the 3rd Student Parabolic Flight Campaign of the European Space Agency in 2000. It won the student competition and went on to take part in the Professional Parabolic Flight Campaign of May 2001. The experiment focused on "dendritic growth in aluminium alloy weldings", and investigated topics related to the welding process of aluminium in microgravity. The purpose of the research is to optimise the process and to define the areas of interest that could be improved by new conceptual designs. The team performed accurate tests in microgravity to determine which phenomena have the greatest impact on the quality of the weldings with respect to penetration, surface roughness and the microstructures that are formed during the solidification. Various parameters were considered in the economic-technical optimisation, such as the type of electrode and its tip angle. Ground and space tests have determined the optimum chemical composition of the electrodes to offer longest life while maintaining the shape of the point. Additionally, the power consumption has been optimised; this offers opportunities for promoting the product to the customer as well as being environmentally friendly. Tests performed on the Al-Li alloys showed a significant influence of some physical phenomena such as the Marangoni effect and thermal diffusion; predictions have been made on the basis of observations of the thermal flux seen in the stereophotos. Space transportation today is a key element in the construction of space stations and future planetary bases, because the volumes available for launch to space are directly related to the payload capacity of rockets or the Space Shuttle. 
The research performed gives engineers the opportunity to consider completely new concepts for designing structures for space applications. In fact, once the optimised parameters are defined for welding in space, it could be possible to weld different parts directly in orbit to obtain much larger sizes and volumes, for example for space tourism habitation modules. The second relevant aspect is technology transfer obtained by the optimisation of the TIG process on aluminium which is often used in the automotive industry as well as in mass production markets.
An Introduction to the IP/PCT Model Implementation in IPME
2011-03-01
Describes the information processing (IP) and perceptual control theory (PCT) models implemented in the Integrated Performance Modelling Environment (IPME) software by Micro Analysis and Design.
2012-11-01
RTO-TR-HFM-138 ES-1: Adaptability in… Janet Sutton, Air Force Research Laboratory (AFRL), 711 Human Performance Wing/Human Effectiveness, Cognitive Systems Branch, Bldg. 33, Wright-Patterson AFB, OH 45433, USA. Email: janet.sutton@wpafb.af.mil
Deman, P R; Kaiser, T M; Dirckx, J J; Offeciers, F E; Peeters, S A
2003-09-30
A 48 contact cochlear implant electrode has been constructed for electrical stimulation of the auditory nerve. The stimulating contacts of this electrode are organised in two layers: 31 contacts on the upper surface directed towards the habenula perforata and 17 contacts connected together as one longitudinal contact on the underside. The design of the electrode carrier aims to make radial current flow possible in the cochlea. The mechanical structure of the newly designed electrode was optimised to obtain maximal insertion depth. Electrode insertion tests were performed in a transparent acrylic model of the human cochlea.
Mikaeli, S; Thorsén, G; Karlberg, B
2001-01-12
A novel approach to multivariate evaluation of separation electrolytes for micellar electrokinetic chromatography is presented. An initial screening of the experimental parameters is performed using a Plackett-Burman design. Significant parameters are further evaluated using full factorial designs. The total resolution of the separation is calculated and used as response. The proposed scheme has been applied to the optimisation of the separation of phenols and the chiral separation of (+)-1-(9-anthryl)-2-propyl chloroformate-derivatized amino acids. A total of eight experimental parameters were evaluated and optimal conditions found in less than 48 experiments.
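The screening scheme above moves from a Plackett-Burman design to full factorial designs on the significant parameters. A full factorial can be enumerated directly with the standard library; the factor names and levels below are hypothetical stand-ins for the electrolyte parameters, not the paper's actual settings:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every level combination of a full factorial design.
    levels_per_factor: dict mapping factor name -> list of levels."""
    names = list(levels_per_factor)
    return [dict(zip(names, combo))
            for combo in product(*(levels_per_factor[n] for n in names))]

# e.g. two parameters retained after the Plackett-Burman screening
design = full_factorial({"SDS_mM": [20, 40, 60], "pH": [8.0, 9.0]})
# 3 x 2 = 6 runs; the total resolution would be measured for each run
# and used as the response, as described above
```

Each dict in `design` is one experimental run, which keeps the run list easy to randomise or export.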
Staples, Emily; Ingram, Richard James Michael; Atherton, John Christopher; Robinson, Karen
2013-01-01
Sensitive measurement of multiple cytokine profiles from small mucosal tissue biopsies, for example human gastric biopsies obtained through an endoscope, is technically challenging. Multiplex methods such as Luminex assays offer an attractive solution but standard protocols are not available for tissue samples. We assessed the utility of three commercial Luminex kits (VersaMAP, Bio-Plex and MILLIPLEX) to measure interleukin-17A (IL-17) and interferon-gamma (IFNγ) concentrations in human gastric biopsies and we optimised preparation of mucosal samples for this application. First, we assessed the technical performance, limits of sensitivity and linear dynamic ranges for each kit. Next we spiked human gastric biopsies with recombinant IL-17 and IFNγ at a range of concentrations (1.5 to 1000 pg/mL) and assessed kit accuracy for spiked cytokine recovery and intra-assay precision. We also evaluated the impact of different tissue processing methods and extraction buffers on our results. Finally we assessed recovery of endogenous cytokines in unspiked samples. In terms of sensitivity, all of the kits performed well within the manufacturers' recommended standard curve ranges but the MILLIPLEX kit provided most consistent sensitivity for low cytokine concentrations. In the spiking experiments, the MILLIPLEX kit performed most consistently over the widest range of concentrations. For tissue processing, manual disruption provided significantly improved cytokine recovery over automated methods. Our selected kit and optimised protocol were further validated by measurement of relative cytokine levels in inflamed and uninflamed gastric mucosa using Luminex and real-time polymerase chain reaction. In summary, with proper optimisation Luminex kits (and for IL-17 and IFNγ the MILLIPLEX kit in particular) can be used for the sensitive detection of cytokines in mucosal biopsies. 
Our results should help other researchers seeking to quantify multiple low concentration cytokines in small tissue samples. PMID:23644159
Gas adsorption on microporous materials: modelling, thermodynamics and applications
NASA Astrophysics Data System (ADS)
Richard, Marc-Andre
2009-12-01
Our work on gas adsorption in microporous materials is part of the research effort aimed at increasing the efficiency of on-board hydrogen storage for vehicles. Our objective was to study the possibility of using adsorption to improve the efficiency of small-scale hydrogen liquefaction systems. We also evaluated the performance of a cryogenic hydrogen storage system based on physisorption. Since we are dealing with particularly wide temperature ranges and high pressures in the supercritical region of the gas, we first had to work on the modelling and thermodynamics of adsorption. Representing the adsorbed gas quantity as a function of temperature and pressure with a semi-empirical model is a useful tool for determining the mass of gas adsorbed in a system, and also for calculating the thermal effects associated with adsorption. We adapted the Dubinin-Astakhov (D-A) model to fit adsorption isotherms of hydrogen, nitrogen and methane on activated carbon at high pressure and over a wide range of supercritical temperatures, assuming an invariant adsorption volume. With five regression parameters (including the adsorption volume Va), the model we developed reproduces very well the experimental adsorption isotherms of hydrogen (30 to 293 K, up to 6 MPa), nitrogen (93 to 298 K, up to 6 MPa) and methane (243 to 333 K, up to 9 MPa) on activated carbon. We computed the internal energy of the adsorbed phase from the model using solution thermodynamics, without neglecting the adsorption volume. We then presented the mass and energy conservation equations for an adsorption system and validated our approach by comparing simulations against adsorption and desorption tests.
In addition to the internal energy, we evaluated the entropy, the differential energy of adsorption and the isosteric heat of adsorption. We studied the performance of an adsorption-based hydrogen storage system for vehicles. The hydrogen storage capacity and thermal performance of a 150 L tank filled with Maxsorb MSC-30(TM) activated carbon (specific surface area ~3000 m2/g) were studied over a temperature range of 60 to 298 K and at pressures up to 35 MPa. The system was considered globally, without focusing on a particular design. It is possible to store 5 kg of hydrogen at pressures of 7.8, 15.2 and 29 MPa for temperatures of 80, 114 and 172 K respectively, when the residual hydrogen is recovered at 2.5 bar by heating. Simulating the thermal phenomena allowed us to analyse the cooling required during filling, the heating required during discharge, and the dormancy time. We developed a hydrogen liquefaction cycle based on adsorption with mechanical compression (ACM) and evaluated its feasibility. The objective was to substantially increase the efficiency of small-scale hydrogen liquefaction systems (less than one tonne/day) without increasing their capital cost. We adapted the ACM refrigeration cycle so that it could later be added to a hydrogen liquefaction cycle. We then simulated idealised ACM refrigeration cycles. Even under these ideal conditions, the specific refrigeration is low. Moreover, the maximum theoretical efficiency of these refrigeration cycles is about 20 to 30% of the ideal. We experimentally realised an ACM refrigeration cycle with the nitrogen/activated carbon pair. (Abstract shortened by UMI.)
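The modified Dubinin-Astakhov isotherm described above can be sketched as follows. The functional form matches the description (adsorption potential A = RT ln(p0/p) with a temperature-dependent characteristic energy), but the numerical parameter values below are illustrative assumptions, not the fitted values from this work:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def dubinin_astakhov(p, T, n_max, p0, alpha, beta, m=2):
    """D-A-type adsorption model: n = n_max * exp(-(A/E)^m), with
    adsorption potential A = R*T*ln(p0/p) and a temperature-dependent
    characteristic energy E = alpha + beta*T (enthalpic and entropic
    contributions). Returns the adsorbed amount in the units of n_max."""
    A = R * T * math.log(p0 / p)
    E = alpha + beta * T
    return n_max * math.exp(-((A / E) ** m))

# hypothetical parameters for hydrogen on activated carbon
# (n_max in mol/kg, pressures in Pa, alpha in J/mol, beta in J/(mol K))
n = dubinin_astakhov(p=1.0e6, T=77.0, n_max=71.6, p0=1.47e9,
                     alpha=3080.0, beta=18.9)
```

As p approaches the pseudo-saturation pressure p0 the potential A vanishes and the model tends to n_max; lowering T at fixed p increases the adsorbed amount, consistent with cryogenic storage.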
NASA Astrophysics Data System (ADS)
Luque, E.; Santiago, B.; Pieres, A.; Marshall, J. L.; Pace, A. B.; Kron, R.; Drlica-Wagner, A.; Queiroz, A.; Balbinot, E.; Ponte, M. dal; Neto, A. Fausti; da Costa, L. N.; Maia, M. A. G.; Walker, A. R.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Davis, C.; Doel, P.; Eifler, T. F.; Flaugher, B.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Miquel, R.; Nichol, R. C.; Plazas, A. A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.
2018-04-01
We report the discovery of a new star cluster, DES 3, in the constellation of Indus, and deeper observations of the previously identified satellite DES J0222.7-5217 (Eridanus III). DES 3 was detected as a stellar overdensity in first-year Dark Energy Survey data, and confirmed with deeper photometry from the 4.1 metre Southern Astrophysical Research (SOAR) telescope. The new system was detected with a relatively high significance and appears in the DES images as a compact concentration of faint blue point sources. We determine that DES 3 is located at a heliocentric distance of ≃ 76.2 kpc and is dominated by an old (≃ 9.8 Gyr) and metal-poor ([Fe/H] ≃ -1.84) population. While the age and metallicity values of DES 3 are comparable to those of typical globular clusters (objects with a high stellar density, stellar mass of ~10^5 M⊙ and luminosity MV ~ -7.3), its half-light radius (rh ~ 6.87 pc) and luminosity (MV ~ -1.7) are more indicative of a faint star cluster. Based on its angular size, DES 3, with rh ~ 0.31 arcmin, is among the smallest faint star clusters known to date. Furthermore, using deeper imaging of DES J0222.7-5217 taken with the SOAR telescope, we update its structural parameters and perform the first isochrone modeling. Our analysis yields the first age (≃ 12.6 Gyr) and metallicity ([Fe/H] ≃ -2.01) estimates for this object. The half-light radius (rh ≃ 11.24 pc) and luminosity (MV ≃ -2.4) of DES J0222.7-5217 suggest that it is likely a faint star cluster. The discovery of DES 3 indicates that the census of stellar systems in the Milky Way is still far from complete, and demonstrates the power of modern wide-field imaging surveys to improve our knowledge of the Galaxy's satellite population.
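The quoted physical half-light radius follows from the angular size and heliocentric distance by a small-angle conversion; a quick check with the values given for DES 3:

```python
import math

def angular_to_physical(r_arcmin, d_kpc):
    """Physical size (pc) subtended by an angular radius r_arcmin
    at heliocentric distance d_kpc: r_pc = d * tan(theta)."""
    theta = math.radians(r_arcmin / 60.0)  # arcmin -> degrees -> radians
    return d_kpc * 1000.0 * math.tan(theta)

# DES 3: rh ~ 0.31 arcmin at 76.2 kpc gives ~6.9 pc, matching the quoted rh
r_des3 = angular_to_physical(0.31, 76.2)
```

At these tiny angles tan(theta) ≈ theta, so the linear small-angle formula would give the same answer to well within the quoted precision.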
von Guggenberg, E; Dietrich, H; Skvortsova, I; Gabriel, M; Virgolini, I J; Decristoforo, C
2007-08-01
Different attempts have been made to develop a suitable radioligand for targeting CCK-2 receptors in vivo, for staging of medullary thyroid carcinoma (MTC) and other receptor-expressing tumours. After initial successful clinical studies with [DTPA(0),D: Glu(1)]minigastrin (DTPA-MG0) radiolabelled with (111)In and (90)Y, our group developed a (99m)Tc-labelled radioligand, based on HYNIC-MG0. A major drawback observed with these derivatives is their high uptake by the kidneys. In this study we describe the preclinical evaluation of the optimised shortened peptide analogue, [HYNIC(0),D: Glu(1),desGlu(2-6)]minigastrin (HYNIC-MG11). (99m)Tc labelling of HYNIC-MG11 was performed using tricine and EDDA as coligands. Stability experiments were carried out by reversed phase HPLC analysis in PBS, PBS/cysteine and plasma as well as rat liver and kidney homogenates. Receptor binding and cell uptake experiments were performed using AR4-2J rat pancreatic tumour cells. Animal biodistribution was studied in AR4-2J tumour-bearing nude mice. Radiolabelling was performed at high specific activities and radiochemical purity was >90%. (99m)Tc-EDDA-HYNIC-MG11 showed high affinity for the CCK-2 receptor and cell internalisation comparable to that of (99m)Tc-EDDA-HYNIC-MG0. Despite high stability in solution, a low metabolic stability in rat tissue homogenates was found. In a nude mouse tumour model, very low unspecific retention in most organs, rapid renal excretion with reduced renal retention and high tumour uptake were observed. (99m)Tc-EDDA-HYNIC-MG11 shows advantages over (99m)Tc-EDDA-HYNIC-MG0 in terms of lower kidney retention with unchanged uptake in tumours and CCK-2 receptor-positive tissue. However, the lower metabolic stability and impurities formed in the labelling process still leave room for further improvement.
NASA Astrophysics Data System (ADS)
Freuchet, Florian
In the marine environment, recruitment abundance depends on processes affecting both the adults and the larval stock. Under the influence of reliable cues about habitat quality, the mother can increase (anticipatory maternal effects, AME) or reduce (selfish maternal effects, SME) the physiological condition of her offspring. In tropical zones, which are generally more oligotrophic, food supply and temperature are two important components that can limit recruitment. This study tested the effects of nutritional supply and thermal stress on larval production and on the maternal strategy adopted. We chose the barnacle Chthamalus bisinuatus (Pilsbry) as a biological model because it dominates the upper intertidal zones along the rocky shores of south-eastern Brazil (a tropical region). The initial hypotheses were that a high nutritional supply allows adults to produce high-quality larvae, and that thermal stress triggers early spawning, producing low-quality larvae. To test these hypotheses, populations of C. bisinuatus were reared in four experimental groups, combining levels of food supply (high and low) and thermal stress (stressed and unstressed). Measurements of survival and of the physiological condition of adults and larvae were used to identify parental responses that could be advantageous in a harsh tropical environment. Fatty acid profile analysis was used to assess the physiological quality of adults and larvae. The feeding treatment (high or low food supply) produced no difference in neutral lipid accumulation, nauplius size, reproductive effort or nauplius survival time under starvation.
It appears that a low food supply is compensated by mothers adopting an AME strategy, whereby mothers anticipate the environment in order to produce larvae with an appropriate phenotype. With the addition of thermal stress, larval production decreased by 47% and the larvae were 18 μm smaller. Mothers then appear to adopt an SME strategy characterised by reduced larval performance. Based on these results, we hypothesise that in subtropical zones, such as the coasts of the state of São Paulo, the temperature increase experienced by barnacles is not, a priori, harmful to their organism provided it is combined with a sufficient food supply.
NASA Astrophysics Data System (ADS)
LeBlanc, Luc R.
Composite materials are increasingly used in fields such as aerospace, high-performance cars and sporting goods, to name a few. Studies have shown that exposure to moisture degrades the strength of composites by promoting delamination initiation and propagation. Of these studies, very few address the effect of moisture on mixed-mode I/II delamination onset, and none addresses the effect of moisture on the mixed-mode I/II delamination growth rate in a composite. The first part of this thesis determines the effects of moisture on delamination growth under mixed-mode I/II loading. Specimens of a unidirectional carbon/epoxy composite (G40-800/5276-1) were immersed in a distilled water bath at 70°C until saturation. Quasi-static experimental tests over a range of mode I/II mixities (0%, 25%, 50%, 75% and 100%) were performed to determine the effects of moisture on the delamination resistance of the composite. Fatigue tests were carried out over the same range of mode I/II mixities to determine the effect of moisture on delamination onset and growth rate. The quasi-static results showed that moisture reduces the delamination resistance of a carbon/epoxy composite over the whole range of mode I/II mixities, except in pure mode I, where the delamination resistance increases after moisture exposure. Under fatigue loading, moisture accelerates delamination onset and increases the growth rate for all mode I/II mixities.
The experimental data collected were used to determine which of the static delamination criteria and mixed-mode I/II fatigue delamination growth models proposed in the literature best represent delamination in the studied composite. A regression curve was used to determine the best fit between the experimental data and the static delamination criteria studied, and a regression surface was used to determine the best fit between the experimental data and the fatigue growth models studied. According to these fits, the best static delamination criterion is the B-K criterion and the best fatigue growth model is the Kenane-Benzeggagh model. Numerical models can be used to predict delamination during the design of complex parts. Predicting the delamination length under fatigue loading is very important to ensure that an interlaminar crack will not grow excessively and cause failure before the end of the part's design life. Following the recent trend, such models are often based on the cohesive zone approach within a finite element formulation. In the work presented in this thesis, the fatigue delamination growth model of Landry & LaPlante (2012) was improved by adding the treatment of mixed-mode I/II loading and by modifying the algorithm that computes the maximum delamination driving force. The cohesive zone parameters were calibrated from the quasi-static mode I and mode II experiments. Numerical simulations of the quasi-static mixed-mode I/II tests, with dry and wet specimens, were compared with the experiments.
Fatigue simulations were also performed and compared with the experimental delamination growth rates. The numerical results for both the quasi-static and fatigue tests showed good correlation with the experiments over the whole range of mode I/II mixities studied.
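The B-K criterion retained above is a standard closed-form interpolation between the pure-mode toughnesses. A minimal sketch, with illustrative toughness values and exponent (not the fitted values for G40-800/5276-1):

```python
def bk_criterion(g_ic, g_iic, mixity, eta):
    """Benzeggagh-Kenane (B-K) mixed-mode delamination criterion:
    Gc = G_Ic + (G_IIc - G_Ic) * (G_II / G_T)^eta,
    where mixity = G_II / G_T runs from 0 (pure mode I) to 1 (pure mode II)."""
    return g_ic + (g_iic - g_ic) * mixity ** eta

# hypothetical toughnesses (kJ/m^2) and exponent, for illustration only:
# critical energy release rate at 50% mode II
gc_50 = bk_criterion(0.2, 1.0, 0.5, 2.0)
```

The single exponent eta is what the regression over the measured mixities fits; the criterion reduces exactly to G_Ic and G_IIc at the two endpoints.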
Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.
Kramar, A; Turk, S; Vrecer, F
2003-04-30
The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.
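The three-factor, three-level Box-Behnken design used above can be enumerated directly: for each pair of factors a 2x2 factorial at the +/-1 levels with the remaining factor at the centre, plus replicated centre points. A minimal sketch in coded units (the mapping back to plasticizer concentration, polymer ratio and coating quantity is left out):

```python
from itertools import combinations

def box_behnken(n_factors, n_center=3):
    """Generate a Box-Behnken design in coded levels (-1, 0, +1).
    Returns a list of runs, each a list of factor levels."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    # replicated centre points for pure-error estimation
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

design = box_behnken(3)  # 3 pairs x 4 corners + 3 centre points = 15 runs
```

Because no run sets all factors to their extremes simultaneously, the design avoids corner conditions that may be infeasible in a fluid-bed coater while still supporting a quadratic response-surface fit.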
Ye, Haoyu; Ignatova, Svetlana; Peng, Aihua; Chen, Lijuan; Sutherland, Ian
2009-06-26
This paper builds on previous modelling research with short single layer columns to develop rapid methods for optimising high-performance counter-current chromatography at constant stationary phase retention. Benzyl alcohol and p-cresol are used as model compounds to rapidly optimise first flow and then rotational speed operating conditions at a preparative scale with long columns for a given phase system using a Dynamic Extractions Midi-DE centrifuge. The transfer to a high value extract such as the crude ethanol extract of Chinese herbal medicine Millettia pachycarpa Benth. is then demonstrated and validated using the same phase system. The results show that constant stationary phase modelling of flow and speed with long multilayer columns works well as a cheap, quick and effective method of optimising operating conditions for the chosen phase system-hexane-ethyl acetate-methanol-water (1:0.8:1:0.6, v/v). Optimum conditions for resolution were a flow of 20 ml/min and speed of 1200 rpm, but for throughput were 80 ml/min at the same speed. The results show that 80 ml/min gave the best throughputs for tephrosin (518 mg/h), pyranoisoflavone (47.2 mg/h) and dehydrodeguelin (10.4 mg/h), whereas for deguelin (100.5 mg/h), the best flow rate was 40 ml/min.
NASA Astrophysics Data System (ADS)
Fung, Kenneth K. H.; Lewis, Geraint F.; Wu, Xiaofeng
2017-04-01
A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2 c . Such transit times are clearly impractical, and thus, any kind of colonisation using low acceleration rockets would be difficult. High accelerations, on the order of 1 g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
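A genetic algorithm of the general kind employed here can be sketched compactly; the cost function below is a toy stand-in for a transit-time objective, not the paper's relativistic trajectory model:

```python
import random

def genetic_minimise(fitness, bounds, pop_size=40, generations=60,
                     mutation_rate=0.2, seed=0):
    """Minimal real-valued genetic algorithm: truncation selection of
    the best half, uniform crossover, and clipped Gaussian mutation.
    fitness: cost to minimise; bounds: (lo, hi) per decision variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            # uniform crossover: each gene from either parent
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            if rng.random() < mutation_rate:
                k = rng.randrange(dim)
                lo, hi = bounds[k]
                child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

def cost(x):
    """Toy stand-in for a transit-time objective over two burn parameters."""
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2

best = genetic_minimise(cost, bounds=[(-1, 1), (-1, 1)])
```

Elitism guarantees the best candidate is never lost between generations, which keeps this simple scheme monotonically improving.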
Cahyaningrum, Fitrianna; Permadhi, Inge; Ansari, Muhammad Ridwan; Prafiantini, Erfi; Rachman, Purnawati Hustina; Agustina, Rina
2016-12-01
Diets with a specific omega-6/omega-3 fatty acid ratio have been reported to have favourable effects in controlling obesity in adults. However, the development of a local-food-based diet that considers the ratio of these fatty acids for improving the nutritional status of overweight and obese children is lacking. Therefore, using linear programming, we developed an affordable optimised diet focusing on the ratio of omega-6/omega-3 fatty acid intake for obese children aged 12-23 months. A cross-sectional study was conducted in two subdistricts of East Jakarta involving 42 normal-weight and 29 overweight and obese children, grouped on the basis of their body mass index-for-age Z scores and selected through multistage random sampling. A 24-h recall was performed for 3 non-consecutive days to assess the children's dietary intake levels and food patterns. We conducted group and structured interviews as well as market surveys to identify food availability, accessibility and affordability. Three types of affordable optimised 7-day diet meal plans were developed on the basis of breastfeeding status. The optimised diet plan fulfilled energy and macronutrient intake requirements within the acceptable macronutrient distribution range. The omega-6/omega-3 fatty acid ratio in the children was between 4 and 10. Moreover, the micronutrient intake level was within the range of the recommended daily allowance or estimated average recommendation and the tolerable upper intake level. The optimisation model used in this study provides a mathematical solution for economical diet meal plans that approximate the nutrient requirements for overweight and obese children.
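The linear-programming formulation can be sketched with SciPy; the three foods, costs and nutrient values below are made-up illustrations, not the study's Jakarta food list, and the omega-6/omega-3 ratio cap is encoded as the linear constraint omega6 - r*omega3 <= 0:

```python
from scipy.optimize import linprog

# variables: daily servings of [rice, fish, vegetable_oil] (hypothetical)
cost = [0.10, 0.50, 0.05]                 # cost per serving
energy = [130, 100, 120]                  # kcal per serving
# net omega-6 contribution per serving after subtracting r * omega-3
# (with an assumed target ratio r folded into the fish coefficient)
omega6_minus_r_omega3 = [0.0, -2.0, 1.0]

res = linprog(
    c=cost,                               # minimise total cost
    A_ub=[[-e for e in energy],           # -energy <= -900  (energy >= 900 kcal)
          omega6_minus_r_omega3],         # omega-6 <= r * omega-3
    b_ub=[-900.0, 0.0],
    bounds=[(0, 10)] * 3,                 # at most 10 servings of each food
    method="highs",
)
servings = res.x
```

Ratio constraints like omega-6/omega-3 <= r are nonlinear as written, but multiplying through by the (positive) omega-3 total makes them linear, which is what keeps the whole diet problem solvable by LP.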
Díaz-Dinamarca, Diego A; Jerias, José I; Soto, Daniel A; Soto, Jorge A; Díaz, Natalia V; Leyton, Yessica Y; Villegas, Rodrigo A; Kalergis, Alexis M; Vásquez, Abel E
2018-03-01
Group B Streptococcus (GBS) is the leading cause of neonatal meningitis and a common pathogen in livestock and aquaculture industries around the world. Conjugate polysaccharide and protein-based vaccines are under development. The surface immunogenic protein (SIP) is conserved across all GBS serotypes and has been shown to be a good target for vaccine development. The expression of recombinant proteins in Escherichia coli cells has been shown to be useful in the development of vaccines, and protein purification is a factor affecting their immunogenicity. The response surface methodology (RSM) with a Box-Behnken design can optimise the yield of recombinant protein expression. However, the biological effect in mice immunised with an immunogenic protein optimised by RSM and purified by low-affinity chromatography is unknown. In this study, we used RSM to optimise the expression of the rSIP, and we evaluated the SIP-specific humoral response and the ability to decrease GBS colonisation of the vaginal tract in female mice. It was observed by Ni-NTA chromatography that RSM increases the rSIP expression yield, leading to a better purification process. This improvement in rSIP purification suggests a better induction of the IgG anti-SIP immune response and a positive effect in decreasing GBS intravaginal colonisation. RSM applied to optimising the expression of recombinant proteins with immunogenic capacity is an interesting alternative for the evaluation of vaccines in the preclinical phase, and could improve their immune response.
NASA Astrophysics Data System (ADS)
Sidibe, Souleymane
The implementation and monitoring of operational flight plans is a major occupation for the crew of a commercial flight. The purpose of this operation is to set the vertical and lateral trajectories followed by the airplane during the phases of flight: climb, cruise, descent, etc. These trajectories are subject to conflicting economic constraints (minimization of flight time and of fuel consumed) and environmental constraints. In its mission-planning task, the crew is assisted by the Flight Management System (FMS), which is used to construct the path to follow and to predict the behaviour of the aircraft along the flight plan. The FMS considered in our research includes a flight optimization model that computes only the optimal speed profile minimizing the overall cost of flight, synthesized by a cost-index criterion, at a fixed cruise altitude. However, a model based solely on optimization of the speed profile is not sufficient. It is necessary to extend the current optimization to simultaneous optimization of speed and altitude, in order to determine an optimum cruise altitude that minimizes the overall cost when the path is flown with the optimal speed profile. A new program was therefore developed, based on Bellman's dynamic programming method for solving optimal path problems. In addition, the improvement involves investigating new trajectory patterns that integrate step climbs during cruise and use the lateral plane with weather effects: wind and temperature. Finally, for better optimization, the program takes into account the flight-envelope constraints of the aircraft that use the FMS.
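The Bellman dynamic-programming idea described above can be sketched by discretising the cruise into segments and candidate altitude levels, keeping at each stage the cheapest cost-to-reach for every level. The fuel/time model, the climb penalty and all numbers below are invented placeholders, not the FMS's actual performance data:

```python
# Toy stage-wise dynamic programme over cruise altitude, minimising
# fuel + cost-index-weighted time. Illustrative model only.
SEGMENT_KM = 100.0
LEVELS = [30000, 34000, 38000]    # candidate cruise altitudes (ft), assumed
SPEEDS = [750, 800, 850]          # candidate ground speeds (km/h), assumed
COST_INDEX = 30.0                 # relative cost of time vs fuel, assumed
CLIMB_PENALTY = 150.0             # extra fuel for a step climb/descent, assumed

def segment_cost(alt, speed):
    # toy model: fuel burn falls with altitude, rises with speed
    hours = SEGMENT_KM / speed
    fuel = hours * (2400 - 0.04 * alt + 1.5 * (speed - 750))
    return fuel + COST_INDEX * 60.0 * hours   # fuel units + time cost

def optimise_profile(n_segments=5):
    # cost[level] = cheapest cost to finish the current segment at that level,
    # flying each segment at its own best speed (Bellman recursion)
    cost = {lvl: min(segment_cost(lvl, s) for s in SPEEDS) for lvl in LEVELS}
    for _ in range(n_segments - 1):
        cost = {
            lvl: min(cost[prev] + (CLIMB_PENALTY if prev != lvl else 0.0)
                     for prev in LEVELS)
                 + min(segment_cost(lvl, s) for s in SPEEDS)
            for lvl in LEVELS
        }
    return min(cost.values())
```

A real FMS recursion would also carry aircraft weight, wind and temperature along the route as part of the state, which this one-dimensional sketch omits.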
Assessing communication as one of the drivers of urban resilience to weather extremes.
NASA Astrophysics Data System (ADS)
Vicari, Rosa; Schertzer, Daniel
2016-04-01
The quality of science and technology communication has become more challenging due to the fact that access to information has hugely increased in terms of variety and quantity. This is a consequence of different factors, among others the development of public relations by research institutes and the pervasive role of digital media (Bucchi 2013; Trench 2008). A key question is how can we objectively assess science and technology communication? Relatively few studies have been dedicated to the definition of pertinent indicators and standards (Neresini and Bucchi 2011). This research aims to understand how communication strategies, addressed to the general public, can optimise the impact of research findings in hydrology for resilient cities and how this can be assessed. Indeed, urban resilience to extreme weather events relies both on engineering solutions and on increased awareness of urban communities, as was highlighted by the FP7 SMARTesT project and the experiences carried out in the framework of TOMACS (Tokyo Metropolitan Area Convective Studies for Resilient Cities) and CASA (Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere, supported by the U.S. National Science Foundation). Communication assessment should be included in an alternative approach to evaluating urban resilience to weather extremes. Various qualitative and quantitative methods to monitor communication exist. According to this study, resilience indicators shouldn't only consider communication infrastructures but should also assess communication processes and their interactions with other resilience drivers; furthermore, quantitative variables are considered particularly relevant. Last, but not least, interesting inputs are provided by those case studies that exploit resilience assessment campaigns as an opportunity to practice participatory communication.
This research is being led in the framework of the Chair Hydrology for Resilient Cities, co-funded by Veolia, Fondation des Ponts, and École des Ponts ParisTech.
DC and analog/RF performance optimisation of source pocket dual work function TFET
NASA Astrophysics Data System (ADS)
Raad, Bhagwan Ram; Sharma, Dheeraj; Kondekar, Pravin; Nigam, Kaushal; Baronia, Sagar
2017-12-01
We present a systematic study of a source pocket tunnel field-effect transistor (SP TFET) with a dual work function of a single gate material, using uniform and Gaussian doping profiles in the drain region, for ultra-low-power, high-frequency, high-speed applications. For this, an n+ doped region is created near the source/channel junction to decrease the depletion width, which results in an improvement of the ON-state current. The dual work function of the double gate is used to enhance the device performance in terms of DC and analog/RF parameters. Further, to improve the high-frequency performance of the device, a Gaussian doping profile is considered in the drain region with different characteristic lengths, which decreases the gate-to-drain capacitance and leads to a drastic improvement in analog/RF figures of merit. Furthermore, the optimisation is performed for different concentrations of the uniform and Gaussian drain doping profiles and for various sectional lengths of the lower work function of the gate electrode. Finally, the effect of temperature variation on the device performance is demonstrated.
Sato, Koji; Fukata, Hideki; Kogo, Yasushi; Ohgane, Jun; Shiota, Kunio; Mori, Chisato
2009-01-01
Perinatal exposure to diethylstilbestrol (DES) can have numerous adverse effects on the reproductive organs later in life, such as vaginal clear-cell adenocarcinoma. Epigenetic processes including DNA methylation may be involved in the mechanisms. We subcutaneously injected DES to neonatal C57BL/6 mice. At days 5, 14, and 30, expressions of DNA methyltransferases (Dnmts) Dnmt1, Dnmt3a, and Dnmt3b, and transcription factors Sp1 and Sp3 were examined. We also performed restriction landmark genomic scanning (RLGS) to detect aberrant DNA methylation. Real-time RT-PCR revealed that expressions of Dnmt1, Dnmt3b, and Sp3 were decreased at day 5 in DES-treated mice, and that those of Dnmt1, Dnmt3a, and Sp1 were also decreased at day 14. RLGS analysis revealed that 5 genomic loci were demethylated, and 5 other loci were methylated by DES treatment. Two loci were cloned, and differential DNA methylation was quantified. Our results indicated that DES altered the expression levels of Dnmts and DNA methylation.
Griffanti, Ludovica; Zamboni, Giovanna; Khan, Aamira; Li, Linxin; Bonifacio, Guendalina; Sundaresan, Vaanathi; Schulz, Ursula G; Kuker, Wilhelm; Battaglini, Marco; Rothwell, Peter M; Jenkinson, Mark
2016-11-01
Reliable quantification of white matter hyperintensities of presumed vascular origin (WMHs) is increasingly needed, given the presence of these MRI findings in patients with several neurological and vascular disorders, as well as in elderly healthy subjects. We present BIANCA (Brain Intensity AbNormality Classification Algorithm), a fully automated, supervised method for WMH detection, based on the k-nearest neighbour (k-NN) algorithm. Relative to previous k-NN based segmentation methods, BIANCA offers different options for weighting the spatial information, local spatial intensity averaging, and different options for the choice of the number and location of the training points. BIANCA is multimodal and highly flexible so that the user can adapt the tool to their protocol and specific needs. We optimised and validated BIANCA on two datasets with different MRI protocols and patient populations (a "predominantly neurodegenerative" and a "predominantly vascular" cohort). BIANCA was first optimised on a subset of images for each dataset in terms of overlap and volumetric agreement with a manually segmented WMH mask. The correlation between the volumes extracted with BIANCA (using the optimised set of options), the volumes extracted from the manual masks and visual ratings showed that BIANCA is a valid alternative to manual segmentation. The optimised set of options was then applied to the whole cohorts and the resulting WMH volume estimates showed good correlations with visual ratings and with age. Finally, we performed a reproducibility test, to evaluate the robustness of BIANCA, and compared BIANCA performance against existing methods. Our findings suggest that BIANCA, which will be freely available as part of the FSL package, is a reliable method for automated WMH segmentation in large cross-sectional cohort studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
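The k-NN classification at the core of BIANCA can be illustrated with a minimal self-contained sketch. The two features, training points and choice of k below are hypothetical; BIANCA itself operates on multimodal MRI intensities with optional weighted spatial features, which this toy does not reproduce:

```python
import math
from collections import Counter

# Illustrative only: label a "voxel" from two assumed features
# (e.g. FLAIR and T1 intensity) by majority vote of its k nearest
# training points in Euclidean distance.
def knn_label(train, query, k=3):
    """train: list of ((feat1, feat2), label); returns the majority label
    of the k training points nearest to query."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# toy training set: WMH voxels are bright on the first (FLAIR-like) feature
TRAIN = [((0.90, 0.40), "WMH"), ((0.85, 0.35), "WMH"), ((0.80, 0.50), "WMH"),
         ((0.30, 0.60), "normal"), ((0.25, 0.55), "normal"), ((0.20, 0.50), "normal")]
```

In the real tool the training points come from manually segmented masks, and options such as local intensity averaging and spatial weighting modify the feature space before the neighbour search.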
A support vector machine for predicting defibrillation outcomes from waveform metrics.
Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J
2014-03-01
Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. To investigate robust methods of predicting defibrillation success in VF cardiac arrest patients, by using a support vector machine (SVM) optimisation approach. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was conducted, using AUC>0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel, and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics passed test validation with AUC>0.6 for predicting termination of VF and return to organised rhythm. Predictive accuracy of the optimised SVM design for termination of VF was 81.9% (±1.24 SD); positive and negative predictivity were respectively 84.3% (±1.98 SD) and 77.4% (±1.24 SD); sensitivity and specificity were 87.6% (±2.69 SD) and 71.6% (±9.38 SD) respectively. AMSA, slope and RMS were the best frequency- and time-domain VF waveform predictors of termination of VF according to the test validity assessment. This a priori information can be used for a simplified optimised SVM design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance, without requiring the definition of any threshold value on the waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
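The AUC>0.6 screening step used above to admit a metric into the SVM can be sketched directly: for a scalar metric, the AUC equals the probability that a randomly chosen successful shock scores higher than a randomly chosen failed one (the Mann-Whitney statistic). The metric values in the test are invented illustration, not patient data:

```python
# Rank-based AUC of a single waveform metric, with ties counted as 0.5.
def auc(positives, negatives):
    """positives/negatives: metric values for successful/failed shocks."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives for n in negatives
    )
    return wins / (len(positives) * len(negatives))

def passes_screen(positives, negatives, threshold=0.6):
    """Admit the metric as an SVM attribute only if its standalone AUC
    exceeds the threshold, mirroring the screening criterion above."""
    return auc(positives, negatives) > threshold
```

This pairwise formulation is exact but quadratic in sample size; for large datasets the same value is usually computed from rank sums.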
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
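The mutual-information cost function that performed best in the comparison above can be sketched from first principles: bin the intensities of the measured and reprojected projections, build their joint histogram, and sum the pointwise log-ratio terms. The tiny integer "images" used here are purely illustrative:

```python
import math
from collections import Counter

# Mutual information between two equal-length sequences of binned
# intensities, computed from the joint histogram (natural log, in nats).
def mutual_information(img_a, img_b):
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))   # joint intensity histogram
    pa = Counter(img_a)                  # marginal histograms
    pb = Counter(img_b)
    return sum(
        (c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in joint.items()
    )
```

In a registration loop the motion parameters would be adjusted to maximise this value; identical images give the entropy of the image, while independent ones give zero, which is why the measure rewards good alignment.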
Le Pape, G; Lassalle, J M
1979-10-01
Continuous recordings of locomotor activity were made on isolated male mice of the Balb/c and C57bl/6 strains, living either in standard breeding cages or in a semi-natural environment. The results show that the differences between these two situations are not perceived in the same way by animals of the two strains: whereas in breeding cages the mice of both strains display the same total amount of activity, in the semi-natural environment the Balb/c mice are more active than the C57bl/6. Moreover, the differences observed between the strains in the distribution of activity over the day-night cycle are reversed when moving from one situation to the other. The study of variability shows a greater dispersion of performances in the C57bl/6 strain in breeding cages, whereas in the semi-natural environment the dispersion is greater in Balb/c. Copyright © 1979. Published by Elsevier B.V.
Yoho, Robert M; Antonopoulos, Kosta; Vardaxis, Vassilios
2012-01-01
This study was performed to determine the relationship between undergraduate academic performance and total Medical College Admission Test score and academic performance in the podiatric medical program at Des Moines University. The allopathic and osteopathic medical professions have published educational research examining this relationship. To our knowledge, no such educational research has been published for podiatric medical education. The undergraduate cumulative and science grade point averages and total Medical College Admission Test scores of four podiatric medical classes (2007-2010, N = 169) were compared with their academic performance in the first 2 years of podiatric medical school using pairwise Pearson product moment correlations and multiple regression analysis. Significant low to moderate positive correlations were identified between undergraduate cumulative and science grade point averages and student academic performance in years 1 and 2 of podiatric medical school for each of the four classes (except one) and the pooled data. There was no significant correlation between Medical College Admission Test score and academic performance in years 1 and 2 (except one) and the pooled data. These results identify undergraduate cumulative grade point average as the strongest cognitive admissions variable in predicting academic performance in the podiatric medicine program at Des Moines University, followed by undergraduate science grade point average. These results also suggest limitations of the total Medical College Admission Test score in predicting academic performance. Information from this study can be used in the admissions process and to monitor student progress.
Optimising the Use of Note-Taking as an External Cognitive Aid for Increasing Learning
ERIC Educational Resources Information Center
Makany, Tamas; Kemp, Jonathan; Dror, Itiel E.
2009-01-01
Taking notes is of utmost importance for academic and commercial use and success. Different techniques for note-taking utilise different cognitive processes and strategies. This experimental study examined ways to enhance cognitive performance via different note-taking techniques. By comparing performances of traditional, linear style…
NASA Astrophysics Data System (ADS)
Gorecki, A.; Brambilla, A.; Moulin, V.; Gaborieau, E.; Radisson, P.; Verger, L.
2013-11-01
Multi-energy (ME) detectors are becoming a serious alternative to classical dual-energy sandwich (DE-S) detectors for X-ray applications such as medical imaging or explosive detection. They can use the full X-ray spectrum of irradiated materials, rather than having only low- and high-energy measurements, which may be mixed. In this article, we compare both simulated and real industrial detection systems operating at a high count rate, independently of the dimensions of the measurements and independently of any signal-processing method. Simulations or prototypes of similar detectors have already been compared (see [1] for instance), but never independently of estimation methods and never with real detectors. We simulated both an ME detector made of CdTe, based on the characteristics of the MultiX ME100, and a DE-S detector, based on the characteristics of Detection Technology's X-Card 1.5-64DE model. These detectors were compared to a perfect spectroscopic detector and an optimal DE-S detector. For comparison purposes, two approaches were investigated: the first addresses how to distinguish signals, while the second relates to identifying materials. Performance criteria were defined and comparisons were made over a range of material thicknesses and with different photon statistics. Experimental measurements in a specific configuration were acquired to check the simulations. Results showed good agreement between the ME simulation and the ME100 detector. Both criteria appear to be equivalent, and the ME detector performs 3.5 times better than the DE-S detector with the same photon statistics, based on simulations and experimental measurements. Regardless of the photon statistics, ME detectors appeared more efficient than DE-S detectors for all material thicknesses between 1 and 9 cm when measuring plastics with an attenuation signature close to that of explosive materials.
This translates into an improved false detection rate (FDR): DE-S detectors have an FDR 2.87±0.03-fold higher than ME detectors for 4 cm of POM with 20 000 incident photons, when identifications are screened against a two-material base.
Use of cone beam CT in children and young people in three United Kingdom dental hospitals.
Hidalgo-Rivas, Jose Alejandro; Theodorakou, Chrysoula; Carmichael, Fiona; Murray, Brenda; Payne, Martin; Horner, Keith
2014-09-01
There is limited evidence about the use of cone-beam computed tomography (CBCT) in paediatric dentistry. Appropriate use of CBCT is particularly important because of greater radiation risks in this age group. To survey the use of CBCT in children and young people in three Dental Hospitals in the United Kingdom (UK), with special attention paid to aspects of justification and optimisation. Retrospective analysis of patient records over a 24-month period, looking at CBCT examinations performed on subjects under 18 years of age. Clinical indications, region of interest, scan field of view (FoV), incidental findings and exposure factors used were recorded. There were 294 CBCT examinations performed in this age group, representing 13.7% of all scanned patients. CBCT was used more frequently in the >13 year age group. The most common use was for localisation of unerupted teeth in the anterior maxilla and the detection of root resorption. Optimisation of X-ray exposures did not appear to be consistent. When planning a CBCT service for children and young people, a limited FoV machine would be the appropriate choice for the majority of clinical requirements. It would facilitate clinical evaluation of scans, would limit the number of incidental findings and contribute to optimisation of radiation doses.
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method. The main contribution of this paper was to strengthen the search for the optimal solution in the fruit fly optimization algorithm (FOA), in order to avoid converging on local extremum solutions. Evolutionary computation has grown to include the concepts of animal foraging behaviour and group behaviour. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, as well as the algorithm execution speed and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization in algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
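The fruit-fly metaphor can be sketched in a simplified positional form: the swarm's flies scatter randomly around the current location ("smell" phase), the best-scoring fly is found ("vision" phase), and the swarm relocates there. The objective and all parameters below are illustrative, and the paper's MFOA modifications for escaping local extrema are not reproduced here:

```python
import random

# Simplified positional FOA sketch for 1-D minimisation (illustrative only).
def foa_minimise(objective, start=0.0, flies=20, iters=200, step=0.5, seed=1):
    rng = random.Random(seed)
    position = start
    best_value = objective(position)
    for _ in range(iters):
        # smell phase: flies scatter randomly around the swarm position
        candidates = [position + rng.uniform(-step, step) for _ in range(flies)]
        # vision phase: the swarm moves to the best candidate if it improves
        best = min(candidates, key=objective)
        if objective(best) < best_value:
            position, best_value = best, objective(best)
    return position, best_value

# example: minimise a 1-D quadratic with optimum at x = 3
# x, fx = foa_minimise(lambda x: (x - 3.0) ** 2)
```

The greedy relocation is what makes the basic FOA prone to local extrema on multimodal functions, which is the weakness the MFOA targets.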
Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System
Bhudolia, Somen K.; Perrotey, Pavel; Joshi, Sunil C.
2017-01-01
For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameters optimisation is required for thin NCFs due to intrinsic resistance it offers to the polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three stage vacuum technique developed to manufacture the TP and TS composites with high fibre volume and lower void content is established and presented.
Othman, M A R; Cutajar, D L; Hardcastle, N; Guatelli, S; Rosenfeld, A B
2010-09-01
Monte Carlo simulations of the energy response of a conventionally packaged single metal-oxide-semiconductor field-effect transistor (MOSFET) detector were performed with the goal of improving MOSFET energy dependence for personal accident or military dosimetry. The MOSFET detector packaging was optimised. Two different 'drop-in' design packages for a single MOSFET detector were modelled and optimised using the GEANT4 Monte Carlo toolkit. Simulations of the absorbed photon dose for the MOSFET dosemeter in free air, corresponding to the absorbed doses at depths of 0.07 mm (Dw(0.07)) and 10 mm (Dw(10)) in a water-equivalent phantom of size 30 × 30 × 30 cm³, were performed for photon energies of 0.015-2 MeV. Energy dependence was reduced to within ±60% for photon energies of 0.06-2 MeV for both Dw(0.07) and Dw(10). Variations in the response for photon energies of 15-60 keV were 200% and 330% for Dw(0.07) and Dw(10), respectively. The obtained energy dependence was reduced compared with that of conventionally packaged MOSFET detectors, which usually exhibit a 500-700% over-response when used in free-air geometry.
Tanlock loop noise reduction using an optimised phase detector
NASA Astrophysics Data System (ADS)
Al-kharji Al-Ali, Omar; Anani, Nader; Al-Qutayri, Mahmoud; Al-Araji, Saleh
2013-06-01
This article proposes a time-delay digital tanlock loop (TDTL) that uses a new phase detector (PD) design optimised for noise reduction, making it suitable for applications that require a wide lock range without sacrificing noise immunity. The proposed system uses an improved design with two phase detectors: one PD optimises the noise immunity, whilst the other controls the acquisition time of the TDTL system. Using the modified phase detector, it is possible to reduce the second- and higher-order harmonics by at least 50% compared with the conventional TDTL system. The proposed system was simulated and tested in MATLAB/Simulink using frequency-step inputs and inputs corrupted with varying levels of harmonic distortion. A hardware prototype of the system was implemented using a field-programmable gate array (FPGA). The practical and simulation results indicate a considerable improvement in the noise performance of the proposed system over the conventional TDTL architecture.
Vehicle trajectory linearisation to enable efficient optimisation of the constant speed racing line
NASA Astrophysics Data System (ADS)
Timings, Julian P.; Cole, David J.
2012-06-01
A driver model is presented that is capable of optimising the trajectory of a simple nonlinear dynamic vehicle, at constant forward speed, so that progression along a predefined track is maximised as a function of time. In doing so, the model is able to continually operate the vehicle at its lateral-handling limit, maximising vehicle performance. The technique forms part of the solution to the motor-racing objective of minimising lap time. A new approach to formulating the minimum-lap-time problem is motivated by the need for a more computationally efficient and robust tool-set for understanding on-the-limit driving behaviour. This has been achieved through set-point-dependent linearisation of the vehicle model and coupling of the vehicle-track system using an intrinsic coordinate description. Through this, the geometric vehicle trajectory has been linearised relative to the track reference, leading to a new path optimisation algorithm that can be posed as a computationally efficient convex quadratic programming problem.
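The payoff of the linearisation above is that each path update becomes a convex quadratic programme (QP). As a minimal illustration of why that matters (this is not the paper's formulation, which also carries track-boundary and handling constraints), an unconstrained convex QP min ½xᵀHx + gᵀx with positive-definite H has the closed-form optimum Hx* = -g, a plain linear solve:

```python
# Closed-form optimum of an unconstrained 2-variable convex QP:
# minimise 0.5*x'Hx + g'x  =>  solve H x = -g (H assumed positive definite).
def solve_qp_2x2(H, g):
    (a, b), (c, d) = H
    det = a * d - b * c                  # > 0 for positive-definite H
    x0 = (-g[0] * d + g[1] * b) / det    # Cramer's rule on H x = -g
    x1 = (-g[1] * a + g[0] * c) / det
    return (x0, x1)
```

With inequality constraints added, the problem stays convex and is handed to a QP solver; the guarantee of a unique global optimum at each set point is what makes the approach fast and robust compared with general nonlinear trajectory optimisation.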
Automation of route identification and optimisation based on data-mining and chemical intuition.
Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G
2017-09-21
Data-mining of Reaxys and network analysis of the combined literature and in-house reaction set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of the data provides a rich knowledge base for generating the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of continuous-flow and batch steps. The linear model-generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and for generating environmental and other performance indicators, such as cost indicators. However, a further challenge identified is to automate model generation so as to evolve optimal multi-step chemical routes and optimal process configurations.
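The network view of route generation can be sketched simply: compounds are nodes, reported reactions are directed edges, and candidate routes are paths from feedstock to target. The miniature graph below is entirely made up for illustration, not the actual Reaxys-mined limonene-to-paracetamol network; breadth-first search then yields the route with the fewest reaction steps:

```python
from collections import deque

# Hypothetical miniature reaction network: compound -> reachable compounds.
REACTIONS = {
    "limonene": ["intermediate_a", "intermediate_b"],
    "intermediate_a": ["intermediate_c"],
    "intermediate_b": ["target"],
    "intermediate_c": ["target"],
}

def shortest_route(graph, start, goal):
    """Breadth-first search: returns the route with the fewest reaction steps,
    or None if the target is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

In practice edges would be weighted by yield, cost or environmental indicators and a shortest-weighted-path or enumeration algorithm used instead, since the fewest steps is rarely the only criterion.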
Path optimisation of a mobile robot using an artificial neural network controller
NASA Astrophysics Data System (ADS)
Singh, M. K.; Parhi, D. R.
2011-01-01
This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feed-forward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right and front obstacle distances relative to the robot's position, together with the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results; they are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis were done in a real experimental setup.
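The controller architecture described above, a feed-forward network mapping (left, right, front obstacle distances, target angle) to a steering angle, can be sketched as a forward pass. The layer sizes and weights here are random placeholders; in the paper the weights are learned by back-propagation, which this sketch does not reproduce:

```python
import math
import random

# Build placeholder weights for a 4-input, two-hidden-layer, 1-output
# network (sizes are assumptions, not the paper's exact architecture).
def make_network(sizes=(4, 6, 6, 1), seed=7):
    rng = random.Random(seed)
    return [
        [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
        for n_in, n_out in zip(sizes, sizes[1:])
    ]

def steering_angle(weights, inputs):
    """Forward pass with tanh units; output scaled to a steering angle
    in (-pi/2, pi/2) radians."""
    activations = list(inputs)
    for layer in weights:
        activations = [
            math.tanh(sum(w * a for w, a in zip(neuron, activations)))
            for neuron in layer
        ]
    return activations[0] * math.pi / 2

# net = make_network()
# angle = steering_angle(net, (1.0, 0.4, 2.0, 0.1))  # distances + target angle
```

The bounded tanh output is a convenient way to keep the commanded steering angle within the actuator's physical range regardless of the raw network activations.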
Optimisation of steel structures in bending using plastic analysis
NASA Astrophysics Data System (ADS)
Geara, F.; Raphael, W.; Kaddah, F.
2005-05-01
Steel construction is highly developed in civil engineering. In the study phase and then in the execution and erection of a steel structure, the design stage is often the site of discontinuities that prevent global optimisation of the steel material. In our study, we used the traditional optimisation approach, essentially based on minimising the weight of the structure while taking advantage of the plastic properties of steel in the case of a structure in bending. This was made possible by the relation found between the cross-sectional areas of the steel elements and the plastic moment of these sections. These relations were derived for different types of steel. To take advantage of linear programming, a simplification was introduced by transforming these relations into linear ones, which allows simple methods such as the simplex algorithm to be used. This procedure proves very useful in the first phases of the study and gives very interesting results.
Metamaterials, from microwaves to optics: theory and applications
NASA Astrophysics Data System (ADS)
Kante, B.
2010-04-01
This article constitutes an original and important contribution to both the theoretical and experimental understanding of metamaterials at microwave and infrared frequencies. We fabricated and characterised, on silicon, metallo-dielectric nanostructures that are the building blocks of infrared and optical metamaterials. Exhaustive optical characterisations, in both amplitude and phase, were performed for the first time on these structures by interferometry. Metamaterial topologies that are simpler from the standpoint of both fabrication technology and optical performance were introduced, and their potential demonstrated for functions as complex as negative refraction, plasmonic mode coupling, nanosensors for biology, and electromagnetic invisibility in the infrared. Space transformations, and the new paradigm they offer to optics by making it possible to engineer space for photons, together with their implementation using metamaterials, were presented through the first experimental demonstration of a non-magnetic invisibility cloak.
Operational modes, health, and status monitoring
NASA Astrophysics Data System (ADS)
Taljaard, Corrie
2016-08-01
System Engineers must fully understand the system, its support system and operational environment to optimise the design. Operations and Support Managers must also identify the correct metrics to measure the performance and to manage the operations and support organisation. Reliability Engineering and Support Analysis provide methods to design a Support System and to optimise the Availability of a complex system. Availability modelling and Failure Analysis during the design is intended to influence the design and to develop an optimum maintenance plan for a system. The remote site locations of the SKA Telescopes place emphasis on availability, failure identification and fault isolation. This paper discusses the use of Failure Analysis and a Support Database to design a Support and Maintenance plan for the SKA Telescopes. It also describes the use of modelling to develop an availability dashboard and performance metrics.
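The availability modelling mentioned above rests on simple reliability arithmetic, sketched here with illustrative numbers that are not SKA figures: the steady-state availability of a unit is A = MTBF / (MTBF + MTTR), and independent units required in series multiply.

```python
# Steady-state availability of a repairable unit.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

# Hypothetical (MTBF, MTTR) pairs in hours for two units needed in series,
# e.g. a receiver chain and its power supply at a remote site.
units = [(8760.0, 24.0), (4380.0, 72.0)]
a_series = 1.0
for mtbf, mttr in units:
    a_series *= availability(mtbf, mttr)   # series system: product of availabilities
```

A long MTTR at a remote site drags system availability down even when MTBF is high, which is why the paper emphasises failure identification and fault isolation.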
NASA Astrophysics Data System (ADS)
Hadade, Ioan; di Mare, Luca
2016-08-01
Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
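The data-layout idea behind those SIMD transformations can be illustrated in numpy (the paper itself targets C/C++ intrinsics on CPUs and Xeon Phi; this is only a sketch of the principle): a structure-of-arrays layout lets the flow-variable update run as contiguous data-parallel operations, the pattern SIMD lanes exploit, instead of a scalar loop over cells.

```python
import numpy as np

n = 10_000
# Structure-of-arrays: one contiguous array per conserved flow variable.
rho   = np.ones(n)
rho_u = np.full(n, 0.3)
rho_E = np.full(n, 2.5)
# Hypothetical residuals from a flux evaluation, and a time step.
res_rho = np.full(n, 0.01)
res_u   = np.full(n, 0.02)
res_E   = np.full(n, 0.03)
dt = 1e-2

# Vectorised update of all cells at once (maps naturally onto SIMD lanes):
rho   = rho   - dt * res_rho
rho_u = rho_u - dt * res_u
rho_E = rho_E - dt * res_E
```

An array-of-structures layout (one record per cell) would force strided or gathered loads for the same update, which is the kind of pattern the paper's shuffles and transpositions are designed to avoid.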
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The aim of this study is to develop a tool for estimating the risk level of vigour loss in the forest stands of the Gounamitz region in northwestern New Brunswick, using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delimited. Within them, the risk level of vigour loss was determined for trees with a DBH of 9 cm or more. To characterise the risk of vigour loss, the spatial positions of the trees were recorded with a GPS, taking stem defects into account. To carry out this work, the vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables, while the risk level of vigour loss obtained per tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands in the study region, a supervised classification of the images was performed using the maximum likelihood algorithm. The risk level of vigour loss per tree type was then estimated with neural networks, using a multilayer perceptron composed of 11 neurons in the input layer (corresponding to the independent variables), 35 neurons in the hidden layer, and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix from which quantitative estimation measures are obtained, notably an overall classification rate of 91.7% for predicting the risk of vigour loss in the softwood stand and 89.7% for the hardwood stand.
The evaluation of the neural networks' performance yields a global MSE (Mean Square Error) of 0.04 and a global RMSE (Root Mean Square Error) of 0.20 for the hardwood stand. For the softwood stand, a global MSE of 0.05 and a global RMSE of 0.22 were obtained. To validate the results, the predicted risk level of vigour loss was compared with the reference risk level. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
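The stated network shape (11 inputs, 35 hidden neurons, 4 risk classes) can be sketched as follows; weights are random and the inputs are synthetic stand-ins for the image-derived predictors described above, so this illustrates the architecture only, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.1, (11, 35)), np.zeros(35)   # input -> hidden
W2, b2 = rng.normal(0, 0.1, (35, 4)), np.zeros(4)     # hidden -> 4 risk levels

def predict_risk(x):
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))  # softmax over risk classes
    return p / p.sum(axis=1, keepdims=True)

probs = predict_risk(rng.normal(size=(20, 11)))   # 20 plots, 11 predictors each
```

Each row of `probs` is a probability distribution over the four vigour-loss risk classes; the argmax per row would feed the confusion matrix used in the study's accuracy figures.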
Molecular materials for high performance OPV devices (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jones, David J.
2016-09-01
We recently reported a high-performing molecular donor for OPV devices based on a benzodithiophene core, a terthiophene bridge and a rhodanine acceptor (BTR) [1]. In this work we optimised the side-chain placement of a known chromophore by ensuring the thiophene hexyl side-chains are regioregular, which should allow the chromophore to lie flat. The unexpected outcome was a nematic liquid crystalline material with significantly improved performance (now 9.6% PCE), excellent charge transport properties, reduced geminate recombination rates and excellent performance with active layers up to 400 nm. Three phase changes were indicated by DSC analysis: a melt to a crystalline domain at 175 °C, a transition to a nematic liquid crystalline domain at 186 °C and an isotropic melt at 196 °C. In our desire to better understand the structure-property relationships of this class of p-type organic semiconductor, we have synthesised a series of analogues in which the length of the chromophore is altered through modification of the oligothiophene bridge to generate the monothiophene (BMR), the bisthiophene (BBR), the known terthiophene (BTR), the quaterthiophene (BQR) and the pentathiophene (BPR). BMR, BBR and BPR have clean melting points, while BQR, like BTR, shows a complicated series of phase transitions. Unoptimised device efficiencies after solvent vapour annealing are BMR (3.5%), BBR (6.0%), BTR (9.3%), BQR (9.4%), and BPR (8.7%). OPV devices with BTR in the active layer are not stable under thermal annealing; however, the bridge-extended BQR and BPR form thermally stable devices. We are currently optimising these devices, but initial results indicate PCEs >9% for thermally annealed devices containing BQR, while BPR devices have not yet been optimised and have PCEs >8%. In order to develop the device performance we have included BQR in ternary devices with the commercially available PTB7-Th and we report device efficiencies of over 10.5%. 
We are currently optimising device assembly and annealing conditions and relating these back to key materials properties. I will discuss the development of these new materials, their materials properties, structural data, and optimised device performance. I will also examine the effect of chromophore length on the nematic liquid crystalline properties and on materials development and performance, resulting in materials with >9% PCE in OPV. [1] Sun, K.; Xiao, Z.; Lu, S.; Zajaczkowski, W.; Pisula, W.; Hanssen, E.; White, J. M.; Williamson, R. M.; Subbiah, J.; Ouyang, J.; Holmes, A. B.; Wong, W. W.; Jones, D. J., Nat. Commun. 2015, 6, 6013. DOI: 10.1038/ncomms7013
1992-09-01
…the purpose of which is to evaluate the relevance of the symposium and the extent to which it met the expectations of the aerospace community… testing has progressed steadily in the last 30 years… in the context of military and civil engine/airframe integration, this paper will focus attention on… with the events of the early 90s close to our collective consciousness, it is clear that military designs require increased attention to be paid to…
NASA Astrophysics Data System (ADS)
Taslim, Indra, Leonardo; Manurung, Renita; Winarta, Agus; Ramadhani, Debbie Aditia
2017-03-01
Biodiesel is usually produced by transesterification using methanol or ethanol as the alcohol. However, biodiesel produced using methanol has several disadvantages because methanol is toxic and not entirely bio-based, as it is generally produced from petroleum, natural gas and coal. On the other hand, ethanol also has several disadvantages, such as lower reactivity in the transesterification process and the formation of a stable emulsion between ester and glycerol. To improve the ethanolysis process, a deep eutectic solvent (DES) was prepared from choline chloride and ethylene glycol to be used as co-solvent in ethanolysis. The DES was prepared by mixing choline chloride and ethylene glycol at a molar ratio of 1:2, a temperature of 80 °C, and a stirring speed of 300 rpm for 1 hour, and was characterised by its density and viscosity. The ethanolysis of degummed palm oil (DPO) was performed at 70 °C, an ethanol to oil molar ratio of 9:1, a catalyst (potassium hydroxide) concentration of 0.75 wt.%, co-solvent (DES) concentrations of 1, 2, 3, 4, 5 and 6 wt.%, a stirring speed of 600 rpm, and a reaction time of 1 hour. The obtained biodiesel was then characterised by its density, viscosity and ester content. The oil-ethanol phase condition was observed in a reaction tube: with DES, the oil-ethanol interface tends to form a meniscus compared to that without DES, implying that oil and ethanol become slightly more miscible, which favours the reaction. Using DES as co-solvent in ethanolysis resulted in an increase in yield and easier purification. The ester properties met the international standard ASTM D6751, with the highest yield of 81.72% with 99.35% ethyl ester content achieved at 4 wt.% DES concentration.
Hoerger, Michael; Chapman, Benjamin P; Mohile, Supriya G; Duberstein, Paul R
2016-09-01
In light of recent health care reforms, we have provided an illustrative example of new opportunities available for psychologists to develop patient-reported measures related to health care quality. Patient engagement in health care decision making has been increasingly acknowledged as a vital component of quality cancer care. We developed the 10-item Decisional Engagement Scale (DES-10), a patient-reported measure of engagement in decision making in cancer care that assesses patients' awareness of their diagnosis, sense of empowerment and involvement, and level of information seeking and planning. The National Institutes of Health's ResearchMatch recruitment tool was used to facilitate Internet-mediated data collection from 376 patients with cancer. DES-10 scores demonstrated good internal consistency reliability (α = .80), and the hypothesized unidimensional factor structure fit the data well. The reliability and factor structure were supported across subgroups based on demographic, socioeconomic, and health characteristics. Higher DES-10 scores were associated with better health-related quality of life (r = .31). In concurrent validity analyses controlling for age, socioeconomic status, and health-related quality of life, higher DES-10 scores were associated with higher scores on quality-of-care indices, including greater awareness of one's treatments, greater preferences for shared decision making, and clearer preferences about end-of-life care. A mini-measure, the DES-3, also performed well psychometrically. In conclusion, DES-10 and DES-3 scores showed evidence of reliability and validity, and these brief patient-reported measures can be used by researchers, clinicians, nonprofits, hospitals, insurers, and policymakers interested in evaluating and improving the quality of cancer care. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
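The internal-consistency statistic reported above (α = .80) is Cronbach's alpha, sketched here on synthetic data rather than DES-10 responses: α = k/(k-1) · (1 - Σ item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items) array of item scores.
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic 10-item scale: one latent trait plus item noise (hypothetical data).
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 1))
data = latent + 0.8 * rng.normal(size=(300, 10))
alpha = float(cronbach_alpha(data))
```

Because every simulated item loads on the same latent trait, alpha comes out high; uncorrelated items would drive it toward zero, which is why the statistic supports the scale's claimed unidimensionality.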
Boulogne, Sébastien; Andre-Obadia, Nathalie; Kimiskidis, Vasilios K; Ryvlin, Philippe; Rheims, Sylvain
2016-11-01
Paired-pulse (PP) paradigms are commonly employed to assess in vivo cortical excitability using transcranial magnetic stimulation (TMS) to stimulate the primary motor cortex and modulate the induced motor evoked potential (MEP). Single-pulse cortical direct electrical stimulation (DES) during intracerebral EEG monitoring allows the investigation of brain connectivity by eliciting cortico-cortical evoked potentials (CCEPs). However, PP paradigm using intracerebral DES has rarely been reported and has never been previously compared with TMS. The work was intended (i) to verify that the well-established modulations of MEPs following PP TMS remain similar using DES in the motor cortex, and (ii) to evaluate if a similar pattern could be observed in distant cortico-cortical connections through modulations of CCEP. Three patients undergoing intracerebral EEG monitoring with electrodes implanted in the central region were studied. Single-pulse DES (1-3 mA, 1 ms, 0.2 Hz) and PP DES using six interstimulus intervals (5, 15, 30, 50, 100, and 200 ms) in the motor cortex with concomitant recording of CCEPs and MEPs in contralateral muscles were performed. Finally, a navigated PP TMS session targeted the intracranial stimulation site to record TMS-induced MEPs in two patients. MEP modulations elicited by PP intracerebral DES proved similar among the three patients and to those obtained by PP TMS. CCEP modulations elicited by PP intracerebral DES usually showed a pattern comparable to that of MEP, although a different pattern could be observed occasionally. PP intracerebral DES seems to involve excitatory and inhibitory mechanisms similar to PP TMS and allows the recording of intracortical inhibition and facilitation modulation on cortico-cortical connections. Hum Brain Mapp 37:3767-3778, 2016. © 2016 Wiley Periodicals, Inc.
An Investigation of Transonic Resonance in a Mach 2.2 Round Convergent-Divergent Nozzle
NASA Technical Reports Server (NTRS)
Dippold, Vance F., III; Zaman, Khairul B. M. Q.
2015-01-01
Hot-wire and acoustic measurements were taken for a round convergent nozzle and a round convergent-divergent (C-D) nozzle at a jet Mach number of 0.61. The C-D nozzle had a design Mach number of 2.2. Compared to the convergent nozzle jet flow, the Mach 2.2 nozzle jet flow produced excess broadband noise (EBBN). It also produced a transonic resonance tone at 1200 Hz. Computational simulations were performed for both nozzle flows. A steady Reynolds-Averaged Navier-Stokes (RANS) simulation was performed for the convergent nozzle jet flow. For the Mach 2.2 nozzle flow, a steady RANS simulation, an unsteady RANS (URANS) simulation, and an unsteady Detached Eddy Simulation (DES) were performed. The RANS simulation of the convergent nozzle showed good agreement with the hot-wire velocity and turbulence measurements, though the decay of the potential core was over-predicted. The RANS simulation of the Mach 2.2 nozzle showed poor agreement with the experimental data, and more closely resembled an ideally-expanded jet. The URANS simulation also showed qualitative agreement with the hot-wire data, but predicted a transonic resonance at 1145 Hz. The DES showed good agreement with the hot-wire velocity and turbulence data. The DES also produced a transonic tone at 1135 Hz. The DES solution showed that the destabilization of the shock-induced separation region inside the nozzle produced increased levels of turbulence intensity. This is likely the source of the EBBN.
First on-sky results of a neural network based tomographic reconstructor: Carmen on Canary
NASA Astrophysics Data System (ADS)
Osborn, J.; Guzman, D.; de Cos Juez, F. J.; Basden, A. G.; Morris, T. J.; Gendron, É.; Butterley, T.; Myers, R. M.; Guesalaga, A.; Sanchez Lasheras, F.; Gomez Victoria, M.; Sánchez Rodríguez, M. L.; Gratadour, D.; Rousset, G.
2014-07-01
We present on-sky results obtained with Carmen, an artificial neural network tomographic reconstructor. It was tested during two nights in July 2013 on Canary, an AO demonstrator on the William Herschel Telescope. Carmen is trained during the day on the Canary calibration bench. This training regime ensures that Carmen is entirely flexible in terms of atmospheric turbulence profile, negating any need to re-optimise the reconstructor in changing atmospheric conditions. Carmen was run in short bursts, interlaced with an optimised Learn and Apply reconstructor. We found the performance of Carmen to be approximately 5% lower than that of Learn and Apply.
Static and Dynamic Disorder in Bacterial Light-Harvesting Complex LH2: A 2DES Simulation Study.
Rancova, Olga; Abramavicius, Darius
2014-07-10
Two-dimensional coherent electronic spectroscopy (2DES) is a powerful technique in distinguishing homogeneous and inhomogeneous broadening contributions to the spectral line shapes of molecular transitions induced by environment fluctuations. Using an excitonic model of a double-ring LH2 aggregate, we perform simulations of its 2DES spectra and find that the model of a harmonic environment cannot provide a consistent set of parameters for two temperatures: 77 K and room temperature. This indicates the highly anharmonic nature of protein fluctuations for the pigments of the B850 ring. However, the fluctuations of B800 ring pigments can be assumed as harmonic in this temperature range.
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology to address the scarcity of the spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the spectrum-sensing energy detection technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
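The energy detection principle analysed above can be sketched with a Monte Carlo experiment. This simplified version uses plain AWGN (no inverse Gaussian shadowing or diversity combining) and illustrative parameters: the energy of N samples is compared against a threshold, and the detection and false-alarm probabilities are estimated empirically.

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials, snr_lin, thresh = 32, 20_000, 1.0, 48.0   # illustrative values

noise = rng.normal(size=(trials, N))                  # unit-variance noise
signal = np.sqrt(snr_lin) * rng.normal(size=(trials, N))

e_h0 = (noise ** 2).sum(axis=1)                       # energy under H0 (noise only)
e_h1 = ((noise + signal) ** 2).sum(axis=1)            # energy under H1 (signal present)
pfa = float((e_h0 > thresh).mean())                   # false-alarm probability
pd = float((e_h1 > thresh).mean())                    # detection probability
```

Sweeping `thresh` traces out the receiver operating characteristic; the paper's threshold optimisation amounts to picking the point on that curve minimising the overall error probability.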
Biodiesel production from ethanolysis of palm oil using deep eutectic solvent (DES) as co-solvent
NASA Astrophysics Data System (ADS)
Manurung, R.; Winarta, A.; Taslim; Indra, L.
2017-06-01
Biodiesel produced by ethanolysis is more renewable and has better properties (higher oxidation stability, lower cloud and pour points) than biodiesel from methanolysis, but it has the disadvantage of a more complicated purification. To improve the ethanolysis process, a deep eutectic solvent (DES) can be prepared from choline chloride and glycerol and used as co-solvent in ethanolysis. The DES is formed from a quaternary ammonium salt (choline chloride) and a hydrogen bond donor (glycerol); it is a non-toxic, biodegradable solvent compared to a conventional volatile organic solvent such as hexane. The DES is prepared by mixing choline chloride and glycerol at a molar ratio of 1:2, a temperature of 80 °C, and a stirring speed of 300 rpm for 1 hour, and is characterised by its density and viscosity. The ethanolysis is performed at a reaction temperature of 70 °C, an ethanol to oil molar ratio of 9:1, a potassium hydroxide catalyst concentration of 1.2 wt.%, a DES co-solvent concentration of 0.5 to 3 wt.%, a stirring speed of 400 rpm, and a reaction time of 1 hour. The obtained biodiesel is then characterised by its density, viscosity, and ester content. The oil-ethanol phase condition is observed in the reaction tube: with DES, the oil-ethanol interface tends to form a meniscus compared to that without DES, showing that oil and ethanol become slightly more miscible, which favours the reaction. Using DES as co-solvent in ethanolysis increased the yield and eased purification. The ester properties meet the international standard ASTM D6751, with the highest yield of 83.67% and a conversion of 99.77% achieved at a DES concentration of 2 wt.%. Increasing the DES concentration above 2 wt.% decreases the conversion and yield, because the excess glycerol in the system shifts the reaction equilibrium towards the reactant side.
MIND performance and prototyping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cervera-Villanueva, A.
2008-02-21
The performance of MIND (Magnetised Iron Neutrino Detector) at a neutrino factory has been revisited in a new analysis. In particular, the low neutrino energy region is studied, obtaining an efficiency plateau around 5 GeV for a background level below 10^-3. A first look has been given into the detector optimisation and prototyping.
Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain
2014-01-01
To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of 75th percentiles used as reference diagnostic levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.
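The diagnostic reference level (DRL) used throughout the audit above is, by definition, the 75th percentile of the dose distribution across examinations. A minimal sketch with synthetic DLP values (not the Luxembourg survey data):

```python
import numpy as np

# Hypothetical DLP per examination (mGy.cm) for one CT protocol at one site.
dlp_per_exam = np.array([420, 510, 380, 640, 455, 700, 390, 530, 610, 470.0])

drl = float(np.percentile(dlp_per_exam, 75))   # diagnostic reference level
mean_dlp = float(dlp_per_exam.mean())          # mean DLP/examination
```

Re-running this on each survey's data and comparing the 75th percentiles before and after the audit is exactly how the reported 20-78% DRL reductions are quantified.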
Omar, J; Boix, A; Kerckhove, G; von Holst, C
2016-12-01
Titanium dioxide (TiO2) has various applications in consumer products and is also used as an additive in food and feeding stuffs. For the characterisation of this product, including the determination of nanoparticles, there is a strong need for the availability of corresponding methods of analysis. This paper presents an optimisation process for the characterisation of polydisperse-coated TiO2 nanoparticles. As a first step, probe ultrasonication was optimised using a central composite design in which the amplitude and time were the selected variables to disperse, i.e., to break up agglomerates and/or aggregates of the material. The results showed that high amplitudes (60%) favoured a better dispersion and time was fixed in mid-values (5 min). In a next step, key factors of asymmetric flow field-flow fractionation (AF4), namely cross-flow (CF), detector flow (DF), exponential decay of the cross-flow (CFexp) and focus time (Ft), were studied through experimental design. Firstly, a full-factorial design was employed to establish the statistically significant factors (p < 0.05). Then, the information obtained from the full-factorial design was utilised by applying a central composite design to obtain the following optimum conditions of the system: CF, 1.6 ml min-1; DF, 0.4 ml min-1; Ft, 5 min; and CFexp, 0.6. Once the optimum conditions were obtained, the stability of the dispersed sample was measured for 24 h by analysing 10 replicates with AF4 in order to assess the performance of the optimised dispersion protocol. Finally, the recovery of the optimised method, particle shape and particle size distribution were estimated.
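The central composite design used for the ultrasonication step can be generated mechanically. This sketch shows the coded-level layout only for the two factors (amplitude, time); the actual factor ranges, axial distance and number of centre replicates in the paper are not reproduced here.

```python
import itertools

alpha = 2 ** 0.5                                            # assumed rotatable axial distance
factorial = list(itertools.product([-1.0, 1.0], repeat=2))  # 2^2 factorial corners
axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
center = [(0.0, 0.0)] * 3                                   # centre-point replicates
ccd = factorial + axial + center                            # full run list in coded units
```

Each coded point is then mapped to physical settings (e.g. -1/+1 spanning the amplitude range), and the quadratic response surface fitted to the measured dispersion quality yields the optimum the paper reports.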
NASA Astrophysics Data System (ADS)
Hoell, Simon; Omenzetter, Piotr
2018-02-01
To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage sensitive features (DSFs) extracted from acceleration responses enable to detect changes in a structure via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses the shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and simultaneous computational burden reduction. The technique is based on sequential projection pursuit where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of original DSFs as well as principal component analysis scores from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting autocorrelation coefficients onto just a single optimised projection vector.
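The core idea above, optimising a single projection vector so that projected healthy and damaged features separate well, can be sketched with a toy (1+1) evolution strategy. This is only an illustration of the principle: the paper uses an advanced ES, sequential projection pursuit over several vectors, and autocorrelation-coefficient DSFs, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic 12-dimensional damage-sensitive features for two states.
healthy = rng.normal(0, 1, (200, 12))
damaged = rng.normal(0, 1, (200, 12)) + 0.5 * np.arange(12) / 11  # small mean shift

def fitness(v):
    # Separation of the projected classes (a deflection-style criterion).
    v = v / np.linalg.norm(v)
    h, d = healthy @ v, damaged @ v
    return abs(h.mean() - d.mean()) / np.sqrt(h.var() + d.var())

# (1+1) evolution strategy: keep a candidate only if it improves the fitness.
v = rng.normal(size=12)
best = fitness(v)
for _ in range(500):
    cand = v + 0.2 * rng.normal(size=12)
    f = fitness(cand)
    if f > best:
        v, best = cand, f
```

The optimised direction concentrates the damage signature into one scalar, which is what lets a simple statistical threshold on the projected feature detect the damaged state.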
NASA Astrophysics Data System (ADS)
Rimbault, Benjamin
This master's thesis, presented as a collection of articles, studied the hydraulic and thermal behaviour of nanofluid flow in a heated micro-channel. We first studied distilled water, then mixtures of copper oxide particles (29 nm in size) with distilled water at particle volume concentrations of 4.5%, 1.03%, and 0.24% (CuO-H2O). Forced flow of the different fluids was driven by gear pumps in a closed loop comprising a rectangular-section micro-channel (e = 1.116 mm, l = 25.229 mm) heated on two parallel faces by electric cartridge heaters, two heat exchangers in series, and a magnetic flowmeter. To our knowledge, few studies of copper oxide-water nanofluid flow in a heated rectangular micro-channel are available in the literature; this research serves as a contribution. First, a validation against the literature was carried out for the case of water flow between heated parallel plates. Hydraulic tests were performed at constant temperature over a range of Reynolds numbers up to Re = 5000. Thermal tests up to Re = 2500 then imposed a fixed temperature rise (20.5°C to 30.5°C) along the length of the micro-channel under steady-state conditions. The results showed an increase in pressure drop and friction coefficient for the nanofluids relative to water at the same flow rate: +70%, +25%, and +0 to 30% for the 4.50%, 1.03%, and 0.24% concentrations respectively. Concerning the laminar-to-turbulent transition, the behaviour indicated a similar critical value for water and the various concentrations, with and without heating, at a critical Reynolds number Re ≈ 1000. 
We observed a slight increase of the convective heat transfer coefficient with mass flow rate for the low concentrations (1.03% and 0.24%), whereas the 4.5% concentration showed a clear decrease. In general, the global energy performance, defined as the heat transferred over the pumping power, remains lower than that of water at the same Reynolds number and likewise at the same mass flow rate. Water appears to be the best option in terms of global energy performance.
NASA Astrophysics Data System (ADS)
Nor, Nur Atikah Md; Mustapha, Wan Aida Wan; Hassan, Osman
2015-09-01
Oil Palm Empty Fruit Bunch (OPEFB) was pretreated using a Deep Eutectic Solvent (DES) at different parameters to maximise the sugar yield. A DES is a combination of two or more cheap and safe components that form a eutectic mixture through hydrogen-bond interaction, with a melting point lower than that of each component. DES can be used to replace ionic liquids (ILs), which are more expensive and toxic. In this study, OPEFB was pretreated with a DES mixture of choline chloride:urea in a 1:2 molar ratio. The pretreatment was performed at 110°C and 80°C for 4 hours and 1 hour: pretreatment A (110°C, 4 hours), B (110°C, 1 hour), C (80°C, 4 hours) and D (80°C, 1 hour). Enzymatic hydrolysis was carried out using a combination of two enzymes, namely Cellic CTec2 and Cellic HTec2. The treated fiber was tested for crystallinity using XRD and for functional groups using FTIR, to check the effect of the pretreatment on the fiber compared with the untreated fiber. From the XRD analysis, the DES affected the degree of crystallinity of cellulose: pretreatments A (110°C, 4 hours) and B (110°C, 1 hour) reduced the percentage of crystallinity, while pretreatments C (80°C, 4 hours) and D (80°C, 1 hour) increased it. From the FTIR analysis, the DES could not remove the functional groups of lignin and hemicellulose, but it is believed that the DES can expose the structure of cellulose. Upon enzymatic hydrolysis, DES-treated fiber produced sugar, though not significantly more than raw fiber. Pretreatments A, B, C and D produced glucose at 60.47 mg/ml, 66.33 mg/ml, 61.96 mg/ml and 59.12 mg/ml respectively. However, pretreatment C gave the highest xylose production (70.01 mg/ml) among the DES pretreatments.
Ferrone, Vincenzo; Genovese, Salvatore; Carlucci, Maura; Tiecco, Matteo; Germani, Raimondo; Preziuso, Francesca; Epifano, Francesco; Carlucci, Giuseppe; Taddeo, Vito Alessandro
2018-04-15
A green dispersive liquid-liquid microextraction (DLLME) using a deep eutectic solvent (DES) as the extracting solvent has been developed and applied for the simultaneous quantification of ferulic acid, umbelliferone, boropinic acid, 7-isopentenyloxycoumarin, 4'-geranyloxyferulic acid (GOFA), and auraptene in some vegetable oils using ultra high performance liquid chromatography (UHPLC) with photodiode array detection (PDA). All parameters in the extraction step, including the selection of the extracting and dispersing solvents and the amounts of each, were investigated and optimized. The PhAA/TMG DES achieved a higher recovery and enrichment factor compared to the other DESs. The validated method showed good linearity, with correlation coefficients r² > 0.9990 for all the analytes. Furthermore, this is the first time that eco-friendly solvents have been used for the extraction of oxyprenylated phenylpropanoids and the corresponding extract analyzed with ultra high performance liquid chromatography with photodiode array detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hammons, Joshua A; Zhang, Fan; Ilavsky, Jan
2018-06-15
Many applications of deep eutectic solvents (DES) rely on exploitation of their unique yet complex liquid structures. Due to the ionic nature of the DES components, their diffuse structures are perturbed in the presence of a charged surface. We hypothesize that it is possible to perturb the bulk DES structure far (>100 nm) from a curved, charged surface with mesoscopic dimensions. We performed in situ, synchrotron-based ultra-small angle X-ray scattering (USAXS) experiments to study the solvent distribution near the surface of charged mesoporous silica particles (MPS) (≈0.5 µm in diameter) suspended in both water and a common type of DES (1:2 choline Cl-:ethylene glycol). A careful USAXS analysis reveals that the perturbation of electron density distribution within the DES extends ≈1 μm beyond the particle surface, and that this perturbation can be manipulated by the addition of salt ions (AgCl). The concentration of the pore-filling fluid is greatly reduced in the DES. Notably, we extracted the real-space structures of these fluctuations from the USAXS data using a simulated annealing approach that does not require a priori knowledge about the scattering form factor, and can be generalized to a wide range of complex small-angle scattering problems. Copyright © 2018 Elsevier Inc. All rights reserved.
Kolandaivelu, Kumaran; Bailey, Lynn; Buzzi, Stefano; Zucker, Arik; Milleret, Vincent; Ziogas, Algirdas; Ehrbar, Martin; Khattab, Ahmed A; Stanley, James R L; Wong, Gee K; Zani, Brett; Markham, Peter M; Tzafriri, Abraham R; Bhatt, Deepak L; Edelman, Elazer R
2017-04-20
Simple surface modifications can enhance coronary stent performance. Ultra-hydrophilic surface (UHS) treatment of contemporary bare metal stents (BMS) was assessed in vivo to verify whether such stents can provide long-term efficacy comparable to second-generation drug-eluting stents (DES) while promoting healing comparably to BMS. UHS-treated BMS, untreated BMS and corresponding DES were tested for three commercial platforms. A thirty-day and a 90-day porcine coronary model were used to characterise late tissue response. Three-day porcine coronary and seven-day rabbit iliac models were used for early healing assessment. In porcine coronary arteries, hydrophilic treatment reduced intimal hyperplasia relative to the BMS and corresponding DES platforms (1.5-fold to threefold reduction in 30-day angiographic and histological stenosis; p<0.04). Endothelialisation was similar on UHS-treated BMS and untreated BMS, both in swine and rabbit models, and lower on DES. Elevation in thrombotic indices was infrequent (never observed with UHS, rare with BMS, most often with DES), but, when present, correlated with reduced endothelialisation (p<0.01). Ultra-hydrophilic surface treatment of contemporary stents conferred good healing while moderating neointimal and thrombotic responses. Such surfaces may offer safe alternatives to DES, particularly when rapid healing and short dual antiplatelet therapy (DAPT) are crucial.
Li, Feng; Engelmann, Roger; Pesce, Lorenzo L; Doi, Kunio; Metz, Charles E; Macmahon, Heber
2011-12-01
To determine whether use of bone suppression (BS) imaging, used together with a standard radiograph, could improve radiologists' performance for detection of small lung cancers compared with use of standard chest radiographs alone and whether BS imaging would provide accuracy equivalent to that of dual-energy subtraction (DES) radiography. Institutional review board approval was obtained. The requirement for informed consent was waived. The study was HIPAA compliant. Standard and DES chest radiographs of 50 patients with 55 confirmed primary nodular cancers (mean diameter, 20 mm) as well as 30 patients without cancers were included in the observer study. A new BS imaging processing system that can suppress the conspicuity of bones was applied to the standard radiographs to create corresponding BS images. Ten observers, including six experienced radiologists and four radiology residents, indicated their confidence levels regarding the presence or absence of a lung cancer for each lung, first by using a standard image, then a BS image, and finally DES soft-tissue and bone images. Receiver operating characteristic (ROC) analysis was used to evaluate observer performance. The average area under the ROC curve (AUC) for all observers was significantly improved from 0.807 to 0.867 with BS imaging and to 0.916 with DES (both P < .001). The average AUC for the six experienced radiologists was significantly improved from 0.846 with standard images to 0.894 with BS images (P < .001) and from 0.894 to 0.945 with DES images (P = .001). Use of BS imaging together with a standard radiograph can improve radiologists' accuracy for detection of small lung cancers on chest radiographs. Further improvements can be achieved by use of DES radiography but with the requirement for special equipment and a potential small increase in radiation dose. © RSNA, 2011.
Evaluation of the DDS communication middleware for use in the avionics domain
NASA Astrophysics Data System (ADS)
Levesque-Landry, Kevin
Modern aircraft must provide ever more functionality to satisfy customer needs, so the communication needs of avionics systems are growing. Moreover, the portability and reusability of applications are current challenges in the avionics domain. This research project therefore evaluates Data Distribution Service (DDS) middleware technology for use in avionics. This technology would reduce the complexity of communications and ease the portability and reusability of applications thanks to its standardised interface. In this research project, the DDS standard is first studied to identify the features that are useful to the avionics domain. The various quality-of-service policies are examined and illustrate the flexibility of DDS technology. A DDS middleware is also evaluated in a laboratory environment to measure the impact of using this technology on latency performance and on bandwidth usage. The results show a small increase in average latency when the DDS middleware is used. The DDS middleware is also used in a case study with an AFCS (automatic flight control system) to quantify the effects of its use on an avionics application. The results show that using the DDS middleware does not prevent the AFCS from reaching stability, but slows how quickly stability is reached. Finally, a case study validates that DDS technology can be used to build redundant systems. The results show that the DDS middleware supports standby redundancy without a visible impact on the performance of the redundant system.
Ribichini, Flavio; Tomai, Fabrizio; Pesarini, Gabriele; Zivelonghi, Carlo; Rognoni, Andrea; De Luca, Giuseppe; Boccuzzi, Giacomo; Presbitero, Patrizia; Ferrero, Valeria; Ghini, Anna S; Marino, Paolo; Vassanelli, Corrado
2013-06-01
To analyse the clinical outcome at 4 years in patients with coronary artery disease treated with bare metal stents (BMS) vs. BMS and oral prednisone, or drug-eluting stents (DES), all assuming similar adjunctive medical treatment. Five Italian hospitals enrolled 375 non-diabetic, ischaemic patients without contraindications to dual anti-platelet treatment or corticosteroid therapy in a randomized controlled study. The primary endpoint was the event-free survival of cardiovascular death, myocardial infarction, and recurrence of ischaemia needing repeated target vessel revascularization at 1 year, and this was significantly lower in the BMS group (80.8%) compared with the prednisone (88.0%) and DES group (88.8%, P = 0.04 and 0.006, respectively). The long-term analysis of the primary endpoint was a pre-specified aim of the trial, and was performed at 1447 days (median, IQ range = 1210-1641). Patients receiving BMS alone had significantly lower event-free survival (75.3%) compared with 84.1% in the prednisone group (HR: 0.447; 95% CI: 0.25-0.80, P = 0.007) and 80.6% in DES patients (HR: 0.519; 95% CI: 0.29-0.93, P = 0.03). Prednisone-treated patients did not develop new treatment-related clinical problems. Drug-eluting stents patients suffered more very late stent thrombosis as a cause of spontaneous myocardial infarction. The need for target vessel revascularization remained lower in the prednisone and DES groups (13.6 and 15.2%, respectively), compared with BMS (23.2%). The clinical benefits of prednisone compared with BMS only persisted almost unchanged at 4 years. Drug-eluting stents performed better than BMS at long-term, although the advantages observed at 1 year were in part attenuated because of the occurrence of very late stent thrombosis and late revascularizations. Clinical Trial NCT 00369356.
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators connected to a distribution system. The integration of DGs in a distribution system has resulted in a network known as active distribution network due to the existence of bidirectional power flow in the system. Voltage rise issue is one of the predominantly important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), which is relatively new optimisation technique to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with that of particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system and the results show that the BSA performs better than PSO by providing better fitness value and convergence rate. PMID:28991919
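The backtracking search algorithm itself is compact; the sketch below runs it on a toy objective standing in for the losses-plus-voltage-deviation multi-objective (the MATPOWER load flow and the distribution-system model are not reproduced). All settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy stand-in for the real objective (total losses + voltage deviation).
    return np.sum(x ** 2, axis=-1)

def bsa(obj, dim=5, pop=30, iters=200, low=-5.0, high=5.0, mixrate=1.0):
    P = rng.uniform(low, high, (pop, dim))       # current population
    old = rng.uniform(low, high, (pop, dim))     # historical population (memory)
    fit = obj(P)
    for _ in range(iters):
        if rng.random() < rng.random():          # Selection-I: maybe refresh memory
            old = P.copy()
        old = old[rng.permutation(pop)]
        F = 3.0 * rng.standard_normal()          # step-size amplitude
        mutant = P + F * (old - P)
        # Crossover: each trial changes a random subset of its dimensions
        mask = rng.random((pop, dim)) < mixrate * rng.random()
        mask[np.arange(pop), rng.integers(0, dim, pop)] = True  # at least one dim
        trial = np.clip(np.where(mask, mutant, P), low, high)   # boundary control
        tfit = obj(trial)
        better = tfit < fit                      # Selection-II: greedy replacement
        P[better], fit[better] = trial[better], tfit[better]
    return P[fit.argmin()], fit.min()

x_best, f_best = bsa(sphere)
print(round(float(f_best), 4))
```

In the paper's setting the decision vector would hold the power factor, tap position and curtailment percentage, with the load flow evaluated inside the objective.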
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
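A toy version of the optimisation loop can illustrate the approach: a small genetic algorithm maximises an assumed, purely illustrative "distance" function that peaks at the hop-dominated ratio reported above. It is a sketch of the optimisation machinery only, not the torque-driven simulation model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the simulation model: distance peaks near the reported
# hop-dominated ratio (0.357, 0.308, 0.336); the shape is illustrative only.
TARGET = np.array([0.357, 0.308, 0.336])

def distance(ratio):
    ratio = ratio / ratio.sum()          # phases must sum to the total jump
    return 14.05 - 20.0 * np.sum((ratio - TARGET) ** 2)

# Minimal genetic algorithm: tournament selection, blend crossover, mutation.
pop = rng.uniform(0.2, 0.5, (40, 3))
for _ in range(100):
    fit = np.array([distance(p) for p in pop])
    idx = rng.integers(0, 40, (40, 2))           # pairs for binary tournaments
    parents = np.where((fit[idx[:, 0]] > fit[idx[:, 1]])[:, None],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    blend = rng.random((40, 1))
    children = blend * parents + (1 - blend) * parents[rng.permutation(40)]
    children += 0.01 * rng.standard_normal((40, 3))   # mutation
    children[0] = pop[fit.argmax()]              # elitism: keep the best
    pop = np.clip(children, 0.05, 0.9)
fit = np.array([distance(p) for p in pop])
best = pop[fit.argmax()] / pop[fit.argmax()].sum()
print(np.round(best, 2))
```

Penalty terms on the hop and jump phases, as used in the study, would simply be subtracted inside `distance`.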
Estimating ICU bed capacity using discrete event simulation.
Zhu, Zhecheng; Hen, Bee Hoon; Teow, Kiok Liang
2012-01-01
The intensive care unit (ICU) in a hospital caters for critically ill patients. The number of the ICU beds has a direct impact on many aspects of hospital performance. Lack of the ICU beds may cause ambulance diversion and surgery cancellation, while an excess of ICU beds may cause a waste of resources. This paper aims to develop a discrete event simulation (DES) model to help the healthcare service providers determine the proper ICU bed capacity which strikes the balance between service level and cost effectiveness. The DES model is developed to reflect the complex patient flow of the ICU system. Actual operational data, including emergency arrivals, elective arrivals and length of stay, are directly fed into the DES model to capture the variations in the system. The DES model is validated by open box test and black box test. The validated model is used to test two what-if scenarios which the healthcare service providers are interested in: the proper number of the ICU beds in service to meet the target rejection rate and the extra ICU beds in service needed to meet the demand growth. A 12-month period of actual operational data was collected from an ICU department with 13 ICU beds in service. Comparison between the simulation results and the actual situation shows that the DES model accurately captures the variations in the system, and the DES model is flexible to simulate various what-if scenarios. DES helps the healthcare service providers describe the current situation, and simulate the what-if scenarios for future planning.
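A minimal DES sketch of the bed-capacity question is shown below, with purely hypothetical arrival and length-of-stay figures; the paper's actual operational data and patient-flow logic are not reproduced.

```python
import heapq
import random

random.seed(42)

def simulate_icu(beds, arrival_rate=3.0, mean_los=2.5, horizon=365.0):
    """Minimal DES: Poisson arrivals, exponential length of stay (days);
    an arrival finding every bed occupied is rejected (diverted)."""
    t, busy = 0.0, 0
    discharges = []                # min-heap of scheduled discharge times
    arrivals = rejected = 0
    while t < horizon:
        t += random.expovariate(arrival_rate)       # next arrival event
        while discharges and discharges[0] <= t:    # release finished beds
            heapq.heappop(discharges)
            busy -= 1
        arrivals += 1
        if busy < beds:
            busy += 1
            heapq.heappush(discharges, t + random.expovariate(1.0 / mean_los))
        else:
            rejected += 1
    return rejected / arrivals

# Smallest capacity meeting a 5% target rejection rate (toy figures).
for beds in range(8, 16):
    rate = simulate_icu(beds)
    if rate <= 0.05:
        print(beds, round(rate, 3))
        break
```

The what-if scenarios in the abstract amount to re-running such a loop with a scaled `arrival_rate` or a different rejection target.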
Tandjung, Kenneth; Basalus, Mounir W Z; Sen, Hanim; Jessurun, Gillian A J; Danse, Peter W; Stoel, Martin; Linssen, Gerard C M; Derks, Anita; van Loenhout, Ton T; Nienhuis, Mark B; Hautvast, Raymond W M; von Birgelen, Clemens
2012-04-01
Drug-eluting stents (DES) are increasingly used for the treatment of coronary artery disease. An optimized DES performance is desirable to successfully treat various challenging coronary lesions in a broad population of patients. In response to this demand, third-generation DES with an improved deliverability were developed. Promus Element (Boston Scientific, Natick, MA) and Resolute Integrity (Medtronic Vascular, Santa Rosa, CA) are 2 novel third-generation DES for which limited clinical data are available. Accordingly, we designed the current multicenter study to investigate in an all-comers population whether the clinical outcome is similar after stenting with Promus Element versus Resolute Integrity. DUTCH PEERS is a multicenter, prospective, single-blinded, randomized trial in a Dutch all-comers population. Patients with all clinical syndromes who require percutaneous coronary interventions with DES implantation are eligible. In these patients, the type of DES implanted will be randomized in a 1:1 ratio between Resolute Integrity versus Promus Element. The trial is powered based on a noninferiority hypothesis. For each stent arm, 894 patients will be enrolled, resulting in a total study population of 1,788 patients. The primary end point is the incidence of target vessel failure at 1-year follow-up. DUTCH PEERS is the first randomized multicenter trial with a head-to-head comparison of Promus Element and Resolute Integrity to investigate the safety and efficacy of these third-generation DES. Copyright © 2012 Mosby, Inc. All rights reserved.
Raja, Shahzad G.; Ilsley, Charles; De Robertis, Fabio; Lane, Rebecca; Kabir, Tito; Bahrami, Toufan; Simon, Andre; Popov, Aron; Dalby, Miles C.; Mason, Mark; Grocott-Mason, Richard; Smith, Robert D.
2018-01-01
Background: Studies comparing coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) have largely been performed in the bare-metal stent (BMS) and first-generation drug eluting stent (F-DES) era. Second-generation DES (S-DES) have shown improved outcomes when compared to F-DES, but data comparing CABG with PCI using S-DES is limited. We compared mortality following CABG versus PCI for patients with multivessel disease and analyzed different stent types. Methods: A total of 6,682 patients underwent multivessel revascularization at Harefield Hospital, UK. We stratified CABG patients into single arterial graft (SAG) or multiple arterial grafts (MAG); and PCI patients into BMS, F-DES or S-DES groups. We analyzed all-cause mortality at 5 years. Results: 4,388 patients had CABG (n[SAG] = 3,358; n[MAG] = 1,030) and 2,294 patients had PCI (n[BMS] = 416; n[F-DES] = 752; n[S-DES] = 1,126). PCI had higher 5-year mortality with BMS (HR = 2.27, 95% CI:1.70–3.05, p<0.001); F-DES (HR = 1.52, 95% CI:1.14–2.01, p = 0.003); and S-DES (HR = 1.84, 95% CI:1.42–2.38, p<0.001). This was confirmed in inverse probability treatment weighted analyses. When adjusting for both measured and unmeasured factors using instrumental variable analyses, PCI had higher 5-year mortality with BMS (Δ = 15.5, 95% CI:3.6,27.5, p = 0.011) and F-DES (Δ = 16.5, 95% CI:6.6,26.4, p<0.001), but had comparable mortality with CABG for PCI with S-DES (Δ = 0.9, 95% CI: -9.6,7.9, p = 0.844), and when exclusively compared to CABG patients with SAG (Δ = 0.4, 95% CI: -8.0,8.7, p = 0.931) or MAG (Δ = 4.6, 95% CI: -0.4,9.6, p = 0.931). Conclusions: In this real-world analysis, when adjusting for measured and unmeasured confounding, PCI with S-DES had comparable 5-year mortality when compared to CABG. This warrants evaluation in adequately-powered randomized controlled trials. PMID:29408926
Design and analysis of magneto rheological fluid brake for an all terrain vehicle
NASA Astrophysics Data System (ADS)
George, Luckachan K.; Tamilarasan, N.; Thirumalini, S.
2018-02-01
This work presents an optimised design for a magneto-rheological fluid brake for all-terrain vehicles. The actuator consists of a disc immersed in magneto-rheological fluid and surrounded by an electromagnet. The braking torque is controlled by varying the DC current applied to the electromagnet. In the presence of a magnetic field, the magneto-rheological fluid particles align in chain-like structures, increasing the viscosity; the shear stress generated causes friction on the surfaces of the rotating disc. Electromagnetic analysis of the proposed system is carried out using the finite-element-based COMSOL Multiphysics software, which is also used to calculate the magnetic field generated. The geometry is optimised and the performance of the system in terms of braking torque is evaluated. The proposed design shows better braking-torque performance than existing designs in the literature.
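The torque contributions can be sketched with a standard Bingham-plastic disc-brake formula: a field-dependent yield-stress term plus a viscous term. The geometry and fluid numbers below are hypothetical placeholders, not the paper's optimised values.

```python
import math

# Hypothetical single-disc MR brake parameters (not taken from the paper).
r_i, r_o = 0.02, 0.06   # inner / outer radius of the wetted annulus [m]
h = 0.001               # fluid gap on each face [m]
eta = 0.3               # plastic viscosity of the MR fluid [Pa.s]
omega = 50.0            # disc angular speed [rad/s]
n_faces = 2             # both faces of the disc are wetted

def braking_torque(tau_y):
    """Bingham-plastic torque on an annular disc:
    T = n * [ (2/3)*pi*tau_y*(ro^3 - ri^3) + pi*eta*omega/(2h)*(ro^4 - ri^4) ]."""
    field = (2.0 / 3.0) * math.pi * tau_y * (r_o ** 3 - r_i ** 3)
    viscous = math.pi * eta * omega / (2.0 * h) * (r_o ** 4 - r_i ** 4)
    return n_faces * (field + viscous)

# The yield stress tau_y rises with coil current, so torque is current-controlled.
for tau_y in (0.0, 10e3, 30e3):   # [Pa]
    print(round(braking_torque(tau_y), 2))
```

The electromagnetic FEA step in the paper essentially supplies the mapping from coil current to the yield stress `tau_y` used here.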
Hardware Design of the Energy Efficient Fall Detection Device
NASA Astrophysics Data System (ADS)
Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.
2016-04-01
Health issues in elderly people may lead to injuries sustained during simple activities of daily living. Potentially the most dangerous are unintentional falls, which may be critical or even lethal to some patients due to the heavy injury risk. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in the field, we designed an accelerometer-based wireless fall detector. Our main considerations were: a) functionality, so that the algorithm can be applied to the chosen hardware; and b) power efficiency, so that it can run for a very long time. We picked and tested the parts, built a prototype, optimised the firmware for lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are essentially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time needed for multidisciplinary design optimisation is a critical aspect in product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
Molecular Biology and Prevention of Endometrial Cancer
2006-07-01
adenocarcinoma cases from the International DES Registry (IDESR) was analyzed for MSI; 3) A case-control study of the CASH database was performed to...that have arisen in women exposed to DES in utero, for methylation and mutation of PTEN and MLH1, in order to determine if estrogen induces genetic...and analyzed, which would most likely take an additional 3-6 months after enrollment. Aim 2: To analyze vaginal and cervical adenocarcinomas
Analytical study of the operation of reluctance motors fed at variable frequency
NASA Astrophysics Data System (ADS)
Sargos, F. M.; Gudefin, E. J.; Zaskalicky, P.
1995-03-01
In switched reluctance motors fed by a constant voltage source (such as a battery) at high frequencies, the current becomes unpredictable and often cannot reach a given reference value because of the variation of the inductances with rotor position; the "motional" e.m.f. generates commutation problems that worsen with frequency. Optimal control, as well as an approximate design of the motor, requires a quick and simple calculation of currents, powers and losses; yet, in principle, the non-linear electrical equation needs a numerical solution whose results cannot be extrapolated. By linearising this equation over intervals, the method proposed here expresses analytically, in every case, the phase currents, the torque and the copper losses whenever the feeding voltage is itself piecewise constant. The model neglects saturation, but a simple adjustment of the inductance curve (of arbitrary shape) allows saturation to be taken into account. The calculation is immediate and perfectly accurate as long as the machine parameters themselves are well defined. Some results are given as examples for two usual feeding modes.
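The linearisation-by-intervals idea can be sketched for a single interval in which the inductance is linear in time: the first-order phase equation then has a closed-form solution, which the snippet cross-checks against a direct numerical integration. All parameter values are illustrative, not taken from the paper.

```python
import math

# Hypothetical interval: rising-inductance stroke at constant supply voltage.
V, R = 100.0, 0.5              # supply voltage [V], phase resistance [ohm]
L0, k, w = 0.01, 0.08, 200.0   # L(theta) = L0 + k*theta [H], rotor speed [rad/s]
i0 = 5.0                       # phase current at the start of the interval [A]

a = (R + k * w) / (k * w)

def i_analytic(t):
    """Closed-form current when L is linear in time: L(t) = L0 + k*w*t."""
    L = L0 + k * w * t
    i_inf = V / (R + k * w)    # asymptotic current for this interval
    return i_inf + (i0 - i_inf) * (L0 / L) ** a

# Cross-check against explicit-Euler integration of L di/dt + (R + k*w) i = V.
i, t, dt = i0, 0.0, 1e-6
while t < 2e-3:
    L = L0 + k * w * t
    i += dt * (V - (R + k * w) * i) / L
    t += dt
print(abs(i - i_analytic(t)))
```

Chaining such closed-form pieces across successive intervals, one per segment of the piecewise-linear inductance and piecewise-constant voltage, gives the full-cycle current without numerical integration.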
2005-03-01
Significant numbers of in-service inspections are occurring but at present, there is no organized process whereby these data are collected and..."Reliability under Field Conditions" was held in Brussels in May 1998. The processes under which this data could be collected must be defined and...
New Advanced Mass Casualty Breathing System for Oxygen Therapy: Phase 1
2006-10-01
© Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2006. DRDC Toronto TM 2006-201. DRDC Toronto was tasked with examining the performance of the Pulmanex Hi-Ox (Hi-Ox) mask at oxygen (O2) flow rates of 4 litres per minute (L·min-1). The Hi-Ox mask is a...
The application of cat swarm optimisation algorithm in classifying small loan performance
NASA Astrophysics Data System (ADS)
Kencana, Eka N.; Kiswanti, Nyoman; Sari, Kartika
2017-10-01
It is common for banking systems to analyse the feasibility of a credit application before approval. Although this process is done carefully, there is no guarantee that every loan will be repaid smoothly. This study aimed to assess the accuracy of the Cat Swarm Optimisation (CSO) algorithm in classifying the performance of small loans approved by Bank Rakyat Indonesia (BRI), one of several public banks in Indonesia. Data collected from 200 lenders were used in this work. The data matrix consists of 9 independent variables representing the profile of the credit and one categorical dependent variable reflecting the credit's performance. Prior to the analyses, the data were divided into two subsets of equal size. An ordinal logistic regression (OLR) procedure applied to the first subset showed that 3 of the 9 independent variables, namely the amount of credit, the credit period, and the lender's monthly income, significantly affect credit performance. Using the significant parameter estimates from the OLR procedure as initial values for the observations in the second subset, the CSO procedure was run. It achieved 76 percent classification accuracy for credit performance, slightly better than the 64 percent obtained with the OLR procedure.
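The two CSO modes (seeking and tracing) can be sketched on a toy objective standing in for the study's classification-accuracy objective; the settings and the objective are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)

def sphere(x):
    # Toy stand-in for the study's real objective (classification error).
    return float(np.sum(np.asarray(x) ** 2))

dim, n_cats, iters, mr = 4, 20, 150, 0.3   # mr: fraction of cats in tracing mode
cats = rng.uniform(-5, 5, (n_cats, dim))
vel = np.zeros((n_cats, dim))
best = min(cats, key=sphere).copy()

for _ in range(iters):
    for i in range(n_cats):
        if rng.random() < mr:              # tracing mode: chase the best cat
            vel[i] = 0.9 * vel[i] + 2.0 * rng.random() * (best - cats[i])
            cats[i] = cats[i] + vel[i]
        else:                              # seeking mode: pick the best local copy
            copies = cats[i] * (1 + 0.2 * rng.uniform(-1, 1, (5, dim)))
            cats[i] = min(copies, key=sphere)
        if sphere(cats[i]) < sphere(best):
            best = cats[i].copy()
print(round(sphere(best), 4))
```

In the study, the position vector would encode the classifier parameters seeded from the OLR estimates, and the objective would be misclassification on the second data subset.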
Green extraction of grape skin phenolics by using deep eutectic solvents.
Cvjetko Bubalo, Marina; Ćurko, Natka; Tomašević, Marina; Kovačević Ganić, Karin; Radojčić Redovniković, Ivana
2016-06-01
Conventional extraction techniques for plant phenolics are usually associated with high organic solvent consumption and long extraction times. In order to establish an environmentally friendly extraction method for grape skin phenolics, deep eutectic solvents (DES) as a green alternative to conventional solvents, coupled with highly efficient microwave-assisted and ultrasound-assisted extraction methods (MAE and UAE, respectively), have been considered. Initially, a screening of five different DES for the proposed extraction was performed, and a choline chloride-based DES containing oxalic acid as the hydrogen bond donor with 25% water was selected as the most promising one, resulting in more effective extraction of grape skin phenolic compounds compared to conventional solvents. Additionally, in our study, UAE proved to be the best extraction method, with extraction efficiency superior to both MAE and the conventional extraction method. The knowledge acquired in this study will contribute to further DES implementation in the extraction of biologically active compounds from various plant sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhu, Z T; Zhang, X X; Liu, J; Jin, G Z
1996-01-01
To study the spontaneous firing of CA1 neurons in rat hippocampus after transient cerebral ischemia and the effect of desipramine (Des) on the post-ischemic electric activity of CA1 neurons. Single-unit extracellular recordings were performed in rats on d 3 after 10 min of cerebral ischemia induced by occlusion of 4 arteries. Des and saline were injected into a tail vein. The histological changes of CA1 neurons were assessed by the neuronal density of the CA1 sector. The spontaneous firing rate of CA1 neurons on d 3 after ischemia was enhanced in comparison with the control value. Des (0.2 and 0.4 mg.kg-1, i.v., n = 5 and 6, respectively) dose-dependently reduced the increase in firing rate, with maximal inhibition from 6 min (58% and 85%) to 9 min (69% and 94%) (vs vehicle group, P < 0.01). About 50% of cells in the CA1 region showed necrotic changes. Des antagonized the hyperexcitability of CA1 neurons after cerebral ischemia.
AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.
Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R
2016-02-01
The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large-scale data on mobility patterns, operators can move away from the costly, mostly survey-based transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities in visually exploring urban mobility and optimizing public transport. This is performed within a self-contained tool, as opposed to the current solutions which rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore the travel demand in both space and time, correlate it with the transit network, and evaluate the quality of service that a transit network provides to the citizens at very fine grain. Operators can visually test scenarios for transit network improvements, and compare the expected impact on the travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from a data mining, optimisation and user perspective.
2002-11-01
slots in the side of the tail boom, which, by coupling with the predominantly downward flow induced by the main rotor, produces the “Coanda Effect” and thus...conduct and promote cooperative research and information exchange. The objective is to support the development and effective use of national defence...decision makers. The RTO performs its mission with the support of an extensive network of national experts. It also ensures effective coordination with
Bağda, Esra; Altundağ, Huseyin; Tüzen, Mustafa; Soylak, Mustafa
2017-08-01
In the present study, a simple, one-step deep eutectic solvent (DES) extraction was developed for the selective extraction of copper from sediment samples. The optimization of all experimental parameters, e.g. DES type, sample/DES ratio, contact time and temperature, was performed using BCR-280R (lake sediment certified reference material). The limit of detection (LOD) and the limit of quantification (LOQ) were found to be 1.2 and 3.97 µg L-1, respectively. The RSD of the procedure was 7.5%. The proposed extraction method was applied to river and lake sediments sampled from Serpincik, Çeltek, and Kızılırmak (Fadl and Tecer regions of the river), Sivas-Turkey.
NASA Astrophysics Data System (ADS)
Minakov, A.; Platonov, D.; Sentyabov, A.; Gavrilov, A.
2017-01-01
We performed numerical simulation of flow in a laboratory model of a Francis hydroturbine at three regimes, using two eddy-viscosity (EVM) RANS models (realizable k-ɛ and k-ω SST), a Reynolds stress model (RSM, LRR), detached-eddy simulations (DES), and large-eddy simulations (LES). Calculation results were compared with the experimental data. Unlike the linear EVMs, the RSM, DES, and LES reproduced well the mean velocity components and the pressure pulsations in the diffusor draft tube. Despite relatively coarse meshes and insufficient resolution of the near-wall region, LES and DES also reproduced well the intrinsic flow unsteadiness, the dominant flow structures, and the associated pressure pulsations in the draft tube.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John B O
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
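The probability-driven subset sampling at the heart of this kind of algorithm can be sketched in a much-simplified form. The sketch below assumes synthetic regression data and a plain least-squares model rather than the melting-point descriptors and PLS/SVM models of the paper; "pheromone" probabilities are nudged toward the best subset found, mimicking the moving probabilities and winnowing described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only features 0 and 1 carry signal; the rest are noise.
X = rng.normal(size=(200, 8))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)
Xtr, Xva, ytr, yva = X[:100], X[100:], y[:100], y[100:]

def rmse(mask):
    """Validation RMSE of a least-squares fit on the selected columns."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    coef, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
    pred = Xva[:, cols] @ coef
    return float(np.sqrt(np.mean((pred - yva) ** 2)))

p = np.full(8, 0.5)          # per-feature selection probabilities ("pheromone")
best_mask, best_err = None, np.inf
for _ in range(40):          # generations
    for _ in range(15):      # ants: each samples a feature subset
        mask = rng.random(8) < p
        err = rmse(mask)
        if err < best_err:
            best_mask, best_err = mask.copy(), err
    # move probabilities toward the best subset so far (winnowing step)
    p = np.clip(0.8 * p + 0.2 * best_mask, 0.05, 0.95)
```

After a few generations the informative features dominate the best subset, while irrelevant ones are winnowed out of the probability vector.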
2012-01-01
Background The purpose of this study was to investigate participation and performance changes in the multistage ultramarathon ‘Marathon des Sables’ from 2003 to 2012. Methods Participation and performance trends in the four- or six-stage running event covering approximately 250 km were analyzed with special emphasis on the nationality and age of the athletes. The relations between gender, age, and nationality of finishers and performance were investigated using regression analyses and analysis of variance. Results Between 2003 and 2012, a total of 7,275 athletes, with 938 women (12.9%) and 6,337 men (87.1%), finished the Marathon des Sables. The finisher rate in both women (r2 = 0.62) and men (r2 = 0.60) increased across years (p < 0.01). Men were significantly (p < 0.01) faster than women for overall finishers (5.9 ± 1.6 km·h−1 versus 5.1 ± 1.3 km·h−1) and for the top three finishers (12.2 ± 0.4 km·h−1 versus 8.3 ± 0.6 km·h−1). The gender difference in running speed of the top three athletes decreased (r2 = 0.72; p < 0.01) from 39.5% in 2003 to 24.1% in 2012, with a mean gender difference of 31.7 ± 2.0%. In men, Moroccans won nine of ten competitions, and one edition was won by a Jordanian athlete. In women, eight races were won by Europeans (France five, Luxembourg two, and Spain one, respectively), and two events were won by Moroccan runners. Conclusions The finisher rate in the Marathon des Sables increased over the last decade. Men were significantly faster than women, with a higher gender difference in performance compared to previous reports. Social or cultural inhibitions may determine the outcome in this event. Future studies need to investigate participation trends regarding nationalities and socioeconomic background, as well as the motivation to compete in ultramarathons. PMID:23849138
Long-term modelling and optimisation of the energy storage of autonomous solar installations
NASA Astrophysics Data System (ADS)
Maafi, A.; Delorme, C.
1996-04-01
A sizing approach for the storage capacity of autonomous solar systems has been developed by analyzing the difference between solar radiation input and energy demand over long-term periods. This approach has been implemented using series of daily global irradiation data recorded or generated on a horizontal surface at the meteorological stations of Abidjan (Ivory Coast), Algiers and Tamanrasset (Algeria), and Carpentras (France). The results show that, for an energy demand equal to the pluriannual mean of solar irradiation, a storage of several months is needed to obtain the autonomy of solar systems located in Algiers, Carpentras and Tamanrasset. Setting up such solar systems in these locations would be too expensive an investment. Therefore, the size of the storage is minimized by using a non-zero initial energy storage and by reducing the energy demand. This procedure shows that it is possible to supply a programmed amount of energy every day using autonomous solar systems that are economically competitive.
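The long-term deficit analysis behind this sizing approach can be sketched in a few lines. The toy irradiation series and demand values below are illustrative assumptions, not the station data used in the paper:

```python
def required_initial_stock(irradiation, demand):
    """Smallest initial energy stock that keeps the daily energy balance
    non-negative for a constant daily demand: a long-term deficit
    analysis over a daily irradiation series (arbitrary energy units)."""
    balance, worst = 0.0, 0.0
    for e in irradiation:
        balance += e - demand   # daily surplus or deficit
        worst = min(worst, balance)
    return -worst               # deepest cumulative deficit to cover

# Toy year: a sunny season then a cloudy season (illustrative values).
year = [6.0] * 180 + [2.0] * 185

full_autonomy = required_initial_stock(year, demand=4.0)  # demand ~ mean input
reduced_demand = required_initial_stock(year, demand=3.0)
```

With demand near the mean irradiation, the cloudy season forces a sizeable initial stock; lowering the demand slightly removes the deficit entirely, which is the lever the abstract describes.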
Optimal control of LQR for discrete time-varying systems with input delays
NASA Astrophysics Data System (ADS)
Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng
2018-04-01
In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with a single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, using the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input minimising the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that both approaches are feasible and very effective.
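The reduction described above ends in a delay-free LQR problem, whose finite-horizon solution is the standard backward Riccati recursion. A sketch of that delay-free core follows; the time-varying system matrices are illustrative assumptions, not the paper's example:

```python
import numpy as np

def lqr_time_varying(A, B, Q, R, QN):
    """Backward Riccati recursion for finite-horizon discrete time-varying
    LQR; returns gains K[k] for the feedback law u_k = -K[k] @ x_k."""
    N = len(A)
    P = QN
    gains = [None] * N
    for k in reversed(range(N)):
        S = R[k] + B[k].T @ P @ B[k]
        Kk = np.linalg.solve(S, B[k].T @ P @ A[k])
        gains[k] = Kk
        P = Q[k] + A[k].T @ P @ (A[k] - B[k] @ Kk)
    return gains

# Illustrative single-input, mildly unstable time-varying system.
N = 30
A = [np.array([[1.0, 0.1], [0.0, 1.0 + 0.01 * k]]) for k in range(N)]
B = [np.array([[0.0], [0.1]]) for _ in range(N)]
Q = [np.eye(2) for _ in range(N)]
R = [np.eye(1) for _ in range(N)]
K = lqr_time_varying(A, B, Q, R, np.eye(2))

def rollout(controlled):
    """Accumulated quadratic stage cost with or without the LQR feedback."""
    x, cost = np.array([[1.0], [1.0]]), 0.0
    for k in range(N):
        u = -K[k] @ x if controlled else np.zeros((1, 1))
        cost += float(x.T @ Q[k] @ x + u.T @ R[k] @ u)
        x = A[k] @ x + B[k] @ u
    return cost

cost_lqr, cost_open = rollout(True), rollout(False)
```

The closed-loop cost is far below the uncontrolled cost, as expected for a stabilising LQR feedback on an unstable plant.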
Adaptive optimisation-offline cyber attack on remote state estimator
NASA Astrophysics Data System (ADS)
Huang, Xin; Dong, Jiuxiang
2017-10-01
Security issues of cyber-physical systems have received increasing attention in recent years. In this paper, deception attacks on a remote state estimator equipped with a chi-squared failure detector are considered, and it is assumed that the attacker can monitor and modify all the sensor data. A novel adaptive optimisation-offline cyber attack strategy is proposed in which, using the current and previous sensor data, the attack yields the largest estimation error covariance while remaining undetected by the chi-squared monitor. From the attacker's perspective, the attack degrades the system performance more than the existing linear deception attacks. Finally, some numerical examples are provided to demonstrate the theoretical results.
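One simple member of the linear deception attack family mentioned above is the innovation sign flip: the attacker runs a shadow Kalman filter on the true measurements and feeds the operator the negated innovation, which leaves the chi-squared statistic unchanged while degrading the estimate. A scalar sketch, with illustrative system constants rather than the paper's examples:

```python
import numpy as np

# Scalar system x_{k+1} = a x_k + w,  y_k = x_k + v (illustrative constants).
a, q, r = 0.9, 0.1, 0.2

# Steady-state prediction variance from the scalar Riccati iteration.
P = 1.0
for _ in range(200):
    P = a * a * P * r / (P + r) + q
K = P / (P + r)  # steady-state Kalman gain

def run(attack, seed=3, steps=500):
    rng = np.random.default_rng(seed)
    x = xh_op = xh_sh = 0.0   # true state, operator's and shadow estimates
    stats, errs = [], []
    for _ in range(steps):
        x = a * x + rng.normal(0.0, np.sqrt(q))
        y = x + rng.normal(0.0, np.sqrt(r))
        nu = y - a * xh_sh                      # nominal innovation
        nu_op = -nu if attack else y - a * xh_op
        stats.append(nu_op ** 2 / (P + r))      # chi-squared detector statistic
        xh_sh = a * xh_sh + K * nu              # attacker's shadow filter
        xh_op = a * xh_op + K * nu_op           # operator's filter
        errs.append((x - xh_op) ** 2)
    return float(np.mean(stats)), float(np.mean(errs))

s_clean, e_clean = run(False)
s_attack, e_attack = run(True)   # same detector statistic, larger error
```

The adaptive attack in the paper optimises over this family rather than fixing the sign flip, but the sketch shows why a chi-squared monitor alone cannot see such attacks.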
2015-04-13
cope with dynamic, online optimisation problems with uncertainty, we developed some powerful and sophisticated techniques for learning heuristics...Optimization solvers should learn to improve their performance over time. By learning both during the course of solving an optimization
Relevance and limitations of Mott's law in disordered insulators
NASA Astrophysics Data System (ADS)
Ladieu, François; Sanquer, Marc
Twenty-five years ago, Mott's law was established in order to describe electrical transport in disordered insulators at low temperature. In this review, we briefly summarize the theoretical steps involved in the rigorous derivation of Mott's law. We stress that Mott's law gives the mean conductance of an ensemble of macroscopic samples as long as electron-electron interactions remain negligible. We then study what happens when at least one of the key assumptions of Mott's law no longer holds. We first focus on systems whose size, at least in one dimension, is not macroscopic: the optimization involved in Mott's law is no longer relevant for the measured conductance. Finally, we try to gather different works dealing with electron-electron interactions. It is now established that interactions generally produce a stronger divergence of the electrical resistance at the lowest temperatures than the one predicted by Mott's law, but the exact shape of this divergence, as well as its interpretation, remains debated. We try to make a link between Efros and Shklovskii's work, with their famous "Coulomb gap", and a more recent work about granular media, in which the size of the grains is the key parameter for the shape of the divergence of the resistance at low temperature. We suggest this could indicate a way toward a model accounting for the different shapes of divergence of the electrical resistance at the lowest temperatures. Furthermore, the framework of granular media allows us to deal with the nonlinear regime: we explain the main differences between the predictions of the hot-electron model and those recently derived for a d-dimensional network of grains.
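The optimisation at the heart of Mott's law can be stated compactly. Balancing the tunnelling term against the activation term in the hopping rate, with a roughly constant density of states g(E_F) near the Fermi level and localisation length ξ, gives:

```latex
\sigma(T)\;\propto\;\max_{r}\,
\exp\!\left[-\frac{2r}{\xi}-\frac{\Delta E(r)}{k_{B}T}\right],
\qquad
\Delta E(r)\sim\frac{1}{g(E_{F})\,r^{d}} .
```

Setting the derivative with respect to the hop length r to zero yields the optimal hop and the characteristic stretched-exponential temperature dependence:

```latex
\frac{d}{dr}\!\left[\frac{2r}{\xi}
+\frac{1}{g(E_{F})\,r^{d}\,k_{B}T}\right]=0
\;\Longrightarrow\;
\sigma(T)\;\propto\;
\exp\!\left[-\left(\frac{T_{0}}{T}\right)^{\!1/(d+1)}\right],
\qquad
k_{B}T_{0}\sim\frac{1}{g(E_{F})\,\xi^{d}} .
```

It is precisely this optimisation over r that, as the review notes, ceases to describe the measured conductance once one sample dimension is no longer macroscopic.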
Baugreet, Sephora; Kerry, Joseph P; Brodkorb, André; Gomez, Carolina; Auty, Mark; Allen, Paul; Hamill, Ruth M
2018-08-01
With the goal of optimising a protein-enriched restructured beef steak targeted at the nutritional and chemosensory requirements of older adults, the technological performance of thirty formulations, containing the plant-based ingredients pea protein isolate (PPI), rice protein (RP) and lentil flour (LF), with transglutaminase (TG) to enhance binding of meat pieces, was analysed. A maximal protein content of 28% in the cooked product was achieved with PPI, RP and LF. Binding strength was primarily affected by TG, while textural parameters were improved with LF inclusion. The optimal formulation (F) to obtain a protein-enriched steak with the lowest hardness values was achieved with TG (2%), PPI (8%), RP (9.35%) and LF (4%). F, F1S (optimal formulation 1 with added seasoning) and control restructured products (not containing plant proteins or seasonings) were scored by 120 consumers aged over 65 years. Controls were most preferred (P < .05), while F1S was least liked by the older consumers. Consumer testing suggests that further refinement and optimisation of restructured products with plant proteins should be undertaken. Copyright © 2018 Elsevier Ltd. All rights reserved.
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is the problem of assigning tasks to the workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. SALB-E is the variant of SALB that considers the number of workstations and the cycle time simultaneously, with the aim of maximising the line efficiency. This paper reviews previous work on optimising the SALB-E problem, as well as the Genetic Algorithm approach that has been used to optimise it. From the review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improving productivity in real industrial applications.
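The SALB-E objective, jointly choosing the number of workstations m and the cycle time c to maximise line efficiency sum(t)/(m·c), can be illustrated with a toy exhaustive search. The sketch assumes the tasks are already listed in a precedence-feasible order and that stations take consecutive blocks of that list, a simplification of the real precedence-graph problem (the task times and cycle-time limit are illustrative):

```python
from itertools import combinations

# Toy SALB-E instance: task times in a precedence-feasible order.
times = [4, 3, 5, 2, 4, 2]
C_MAX = 8                    # maximum admissible cycle time (demand constraint)
total, n = sum(times), len(times)

best_eff, best_cfg = 0.0, None
for m in range(1, n + 1):                      # candidate workstation counts
    for cuts in combinations(range(1, n), m - 1):
        bounds = [0, *cuts, n]                 # consecutive station blocks
        loads = [sum(times[a:b]) for a, b in zip(bounds, bounds[1:])]
        c = max(loads)                         # resulting cycle time
        if c > C_MAX:
            continue
        eff = total / (m * c)                  # SALB-E line efficiency
        if eff > best_eff:
            best_eff, best_cfg = eff, (m, c)
```

For this instance the search settles on 3 stations with a cycle time of 7, i.e. station loads of 7, 7 and 6; real SALB-E instances need metaheuristics such as the Genetic Algorithms the review covers.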
Multi-objective optimisation of aircraft flight trajectories in the ATM and avionics context
NASA Astrophysics Data System (ADS)
Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian
2016-05-01
The continuous increase of air transport demand worldwide and the push for a more economically viable and environmentally sustainable aviation are driving significant evolutions of aircraft, airspace and airport systems design and operations. Although extensive research has been performed on the optimisation of aircraft trajectories and very efficient algorithms were widely adopted for the optimisation of vertical flight profiles, it is only in the last few years that higher levels of automation were proposed for integrated flight planning and re-routing functionalities of innovative Communication Navigation and Surveillance/Air Traffic Management (CNS/ATM) and Avionics (CNS+A) systems. In this context, the implementation of additional environmental targets and of multiple operational constraints introduces the need to efficiently deal with multiple objectives as part of the trajectory optimisation algorithm. This article provides a comprehensive review of Multi-Objective Trajectory Optimisation (MOTO) techniques for transport aircraft flight operations, with a special focus on the recent advances introduced in the CNS+A research context. In the first section, a brief introduction is given, together with an overview of the main international research initiatives where this topic has been studied, and the problem statement is provided. The second section introduces the mathematical formulation and the third section reviews the numerical solution techniques, including discretisation and optimisation methods for the specific problem formulated. The fourth section summarises the strategies to articulate the preferences and to select optimal trajectories when multiple conflicting objectives are introduced. The fifth section introduces a number of models defining the optimality criteria and constraints typically adopted in MOTO studies, including fuel consumption, air pollutant and noise emissions, operational costs, condensation trails, airspace and airport operations. 
A brief overview of atmospheric and weather modelling is also included. Key equations describing the optimality criteria are presented, with a focus on the latest advancements in the respective application areas. In the sixth section, a number of MOTO implementations in the CNS+A systems context are mentioned with relevant simulation case studies addressing different operational tasks. The final section draws some conclusions and outlines guidelines for future research on MOTO and associated CNS+A system implementations.
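The core of articulating preferences over conflicting objectives, as discussed above, is extracting the non-dominated (Pareto-optimal) trajectories and then selecting a compromise. A minimal sketch with hypothetical fuel-burn and flight-time values (both to be minimised):

```python
def pareto_front(points):
    """Non-dominated subset under minimisation of every objective:
    keep p unless some other point q is no worse in all objectives."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Hypothetical (fuel burn, flight time) pairs for candidate trajectories.
cands = [(5.0, 3.0), (4.0, 4.0), (6.0, 2.5), (5.5, 3.5), (4.0, 5.0)]
front = pareto_front(cands)

# Articulating preferences: a weighted sum picks one compromise solution.
chosen = min(front, key=lambda p: 0.7 * p[0] + 0.3 * p[1])
```

MOTO algorithms produce the candidate set by trajectory optimisation rather than enumeration, but the dominance filter and preference articulation steps have exactly this structure.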
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used in this study to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and using the equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, the optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
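The two-stage RSM-then-PSO workflow described above can be sketched on a toy problem: fit a quadratic response surface to DOE samples, then run PSO on the fitted surface. The two-parameter "warpage" function, DOE grid, and PSO settings are illustrative assumptions, not the study's Moldflow data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-parameter warpage response, both inputs scaled to [0, 1].
def warpage(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.6) ** 2 + 0.5

# --- Stage 1: DOE + response surface (full quadratic, least squares) ---
grid = np.array([[a, b] for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)])
y = np.array([warpage(p) for p in grid])
feats = lambda X: np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                   X[:, 0] ** 2, X[:, 1] ** 2,
                                   X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(feats(grid), y, rcond=None)
rsm = lambda x: float(feats(np.array([x])) @ beta)   # fitted RSM equation

# --- Stage 2: PSO on the fitted surface ---
n, iters = 15, 60
pos = rng.random((n, 2))
vel = np.zeros((n, 2))
pbest, pval = pos.copy(), np.array([rsm(p) for p in pos])
g = pbest[np.argmin(pval)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = np.array([rsm(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[np.argmin(pval)].copy()

best_doe = float(y.min())     # best point actually sampled in the DOE
best_pso = warpage(g)         # PSO optimum evaluated on the true response
```

Because the true response here is itself quadratic, PSO on the fitted surface recovers nearly the exact optimum, mirroring the study's finding that PSO adds little beyond a well-fitted RSM.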
ERIC Educational Resources Information Center
Lambert, Jean-Francois
1997-01-01
Discusses the importance of genetic and epigenetic factors in the development of the nervous system and the performances it conditions. From the perspective of rules, play, and relaxation of rules, learning and education are considered not as a kind of conditioning but as providing a context in which the cumulative expression of potential can take…
1993-11-01
Eliezer N. Solomon Steve Sedrel Westinghouse Electronic Systems Group P.O. Box 746, MS 432, Baltimore, Maryland 21203-0746, USA SUMMARY The United States...subset of the Joint Integrated Avionics NewAgentCollection which has four Working Group (JIAWG), Performance parameters: Acceptor, of type Task._D...Published November 1993 Distribution and Availability on Back Cover AGARD-CP54 ADVISORY GROUP FOR AEROSPACE RESEARCH & DEVELOPMENT 7 RUE ANCELLE 92200
Design, Performance, and Operation of Efficient Ramjet/Scramjet Combined Cycle Hypersonic Propulsion
2009-10-16
simulations, the blending of the RANS and LES portions is handled by the standard DES equations, now referred to as DES97. The one-equation Spalart...think that RANS can capture these dynamics. • Much remains to be learned about how to model chemistry-turbulence interactions in scramjet flows...BILLIG, F. S., R. BAURLE, AND C. TAM 1999 Design and Analysis of Streamline Traced Hypersonic Inlets. AIAA Paper 1999-4974. BILLIG, F.S., AND
Computer-aided classification of breast masses using contrast-enhanced digital mammograms
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Aghaei, Faranak; Heidari, Morteza; Wu, Teresa; Patel, Bhavika; Zheng, Bin
2018-02-01
By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a promising new imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme for CEDM images to classify between malignant and benign breast masses. A CEDM dataset consisting of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was initially computed. Last, four multilayer perceptron-based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions depicted on LE and DES images, respectively. Initially, when the CAD scheme was applied to the original segmentations of DES and LE images, the areas under the ROC curves were 0.7585±0.0526 and 0.7534±0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC value of the CAD scheme significantly increased to 0.8477±0.0376 (p<0.01). Since DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy was significantly improved compared to regular mammograms, and the study demonstrated that computer-aided classification of breast masses using CEDM images yielded higher performance.
NASA Astrophysics Data System (ADS)
Manurung, Renita; Ramadhani, Debbie Aditia; Maisarah, Siti
2017-06-01
Biodiesel production using sludge palm oil (SPO) as raw material is generally performed in two reaction steps, namely esterification and transesterification, because the free fatty acid (FFA) content of SPO is relatively high. However, the presence of a choline chloride (ChCl):glycerol deep eutectic solvent (DES) in transesterification may produce biodiesel from SPO in just one step. In this study, the DES was produced by mixing ChCl and glycerol at a molar ratio of 1:2 at a temperature of 80°C and a stirring speed of 400 rpm for 1 hour, and was characterized by its density and viscosity. The transesterification process was performed at a reaction temperature of 70°C, an ethanol-to-oil molar ratio of 9:1, a sodium hydroxide catalyst concentration of 1 wt%, a DES cosolvent concentration of 0 to 5 wt%, a stirring speed of 400 rpm, and a one-hour reaction time. The obtained biodiesel was then assessed using density, viscosity, and ester content as parameters. The FFA content of the SPO raw material was 7.5290%. In this case, DES as cosolvent in the one-step transesterification of a low-grade feedstock could reduce the side reaction (saponification), shorten the reaction time, decrease the surface tension between ethanol and oil, and increase the mass transfer, which simultaneously simplified the purification process and gave the highest yield. The ester properties met the international standard ASTM D 6751; the highest yield obtained was 83.19%, with 99.55% ester content, at an ethanol:oil ratio of 9:1, a DES concentration of 4%, a catalyst amount of 1%, a reaction temperature of 70°C, and a stirring speed of 400 rpm.
Chen, J R; Takahashi, M; Kushida, K; Suzuki, M; Suzuki, K; Horiuchi, K; Nagano, A
2000-02-15
Collagen and elastin are recognized as two major connective tissue proteins of the human yellow ligament. In both collagen and elastin there are many kinds of intra- or intermolecular crosslinks. Pyridinoline (Pyr) and deoxypyridinoline (Dpyr) are mature crosslinks which maintain the structure of the collagen fibril. Desmosine (Des) and isodesmosine (Isodes) represent the major crosslinking components of elastin. Pentosidine (Pen), which is a senescent crosslink and one of the advanced glycation end products, accumulates with age in tissue proteins including collagen. We developed a direct, one-injection HPLC method to measure Pyr, Dpyr, Des, Isodes, and Pen in the hydrolysate of human yellow ligament. This method used one column and two detectors. Recovery rates of Pyr, Dpyr, Pen, Des, and Isodes were 86.4-98.3, 83.6-96.8, 78.7-95.6, 83.6-97.9, and 85.6-99.3%, respectively (n = 8). The intraassay coefficients of variation for Pyr, Dpyr, Pen, Des, and Isodes were 3.7, 4.1, 5.4, 4.5, and 4.7%, respectively (n = 8), and the interassay coefficients of variation were 4.4, 5.1, 4.9, 4.6 and 4.1%, respectively. Linear regression analysis showed the linearity (r = 0.99, P = 0.0001) of the calibration line for each of Pyr, Dpyr, Pen, Des, and Isodes. Using this method, we investigated age-related changes in the crosslinks of collagen and elastin in human yellow ligament. There was a significant correlation between Pen and age, but no correlations with Pyr, Dpyr, Des, and Isodes. We believe that this method is useful for investigating the content of these crosslinks in both collagen and elastin under various conditions. Copyright 2000 Academic Press.
Iserson, Kenneth V
2017-09-01
Emergency medicine personnel frequently respond to major disasters. They expect to have an effective and efficient management system to elegantly allocate available resources. Despite claims to the contrary, experience demonstrates this rarely occurs. This article describes privatizing disaster assessment using a single-purpose, accountable, and well-trained organization. The goal is to achieve elegant disaster assessment, rather than repeatedly exhorting existing groups to do it. The Rapid Disaster Evaluation System (RaDES) would quickly and efficiently assess a post-disaster population's needs. It would use an accountable nongovernmental agency's teams with maximal training, mobility, and flexibility. Designed to augment the Inter-Agency Standing Committee's 2015 Emergency Response Preparedness Plan, RaDES would provide the initial information needed to avoid haphazard and overlapping disaster responses. Rapidly deployed teams would gather information from multiple sources and continually communicate those findings to their base, which would then disseminate them to disaster coordinators in a concise, coherent, and transparent way. The RaDES concept represents an elegant, minimally bureaucratic, and effective rapid response to major disasters. However, its implementation faces logistical, funding, and political obstacles. Developing and maintaining RaDES would require significant funding and political commitment to coordinate the numerous agencies that claim to be performing the same tasks. Although simulations can demonstrate efficacy and deficiencies, only field tests will demonstrate RaDES' power to improve interagency coordination and decrease the cost of major disaster response. At the least, the RaDES concept should serve as a model for discussing how to practicably improve our current chaotic disaster responses. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Adam, J. F.; Moy, J. P.
2005-06-01
Biology studies sub-cellular structures and phenomena, for which microscopy is the observation technique of choice. The spatial resolution of optical microscopy often proves insufficient for such observations. Higher-resolution techniques, such as transmission electron microscopy, are often destructive and of a complexity ill-suited to biologists' needs. X-ray microscopy in the water window allows rapid imaging of cells in their natural medium, requires little sample preparation, and offers resolutions of a few tens of nanometres. Moreover, there is good natural contrast between carbon-based structures (proteins, lipids) and water. At present this technique is limited to synchrotron radiation facilities, which imposes scheduling and travel incompatible with the needs of biology. Such a microscope operating with a laboratory source would therefore be of great use. This document presents a state of the art of X-ray microscopy in the water window. A detailed set of specifications for a laboratory instrument with the optical performance required by biologists is presented and compared against existing laboratory X-ray microscopes. Solutions for the source and the optics are also discussed.
Study of carrier dynamics in silicon nanowires by terahertz spectroscopy
NASA Astrophysics Data System (ADS)
Beaudoin, Alexandre
This thesis presents a study of the electrical conduction properties and the temporal dynamics of charge carriers in silicon nanowires probed by terahertz radiation. Unintentionally doped and n-type doped silicon nanowires are compared for different configurations of the experimental setup. Transmission terahertz spectroscopy measurements show that the presence of dopants in the nanowires can be detected via their absorption of terahertz radiation (~1-12 meV). The difficulties of modelling the transmission of an electromagnetic pulse through a nanowire system are also discussed. Differential detection, a modification of the terahertz spectroscopy system, is tested and its performance compared with the standard characterization setup. Instructions and recommendations for implementing this type of measurement are included. The results of an optical pump-terahertz probe experiment are also presented. In this experiment, the charge carriers temporarily created by absorption of the optical pump (λ ~ 800 nm) in the nanowires (the photocarriers) add to the carriers initially present and therefore increase the absorption of the terahertz radiation. First, the anisotropy of the absorption of the terahertz radiation and of the optical pump by the nanowires is demonstrated. Second, the photocarrier recombination time is studied as a function of the number of injected photocarriers. A hypothesis explaining the behaviours observed for the undoped and n-doped nanowires is presented. Third, the photoconductivity is extracted for the undoped and n-doped nanowires over a range of 0.5 to 2 THz. A fit to the photoconductivity allows the number of dopants in the n-doped nanowires to be estimated. Keywords: nanowire, silicon, terahertz, conductivity, spectroscopy, photoconductivity.
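Fits of terahertz photoconductivity of the kind mentioned above are commonly based on a Drude-type model of the complex conductivity; the following sketch shows that model's functional form (the model choice and the sample parameters are assumptions for illustration, not details taken from the thesis):

```python
def drude_sigma(omega, n, tau, m_eff):
    """Complex Drude conductivity: sigma(omega) = n e^2 tau / m_eff / (1 - i omega tau).

    omega: angular frequency (rad/s), n: carrier density (m^-3),
    tau: carrier scattering time (s), m_eff: effective mass (kg).
    """
    e = 1.602176634e-19  # elementary charge (C)
    return n * e**2 * tau / m_eff / (1 - 1j * omega * tau)

# Hypothetical parameters for a doped silicon nanowire ensemble
m_e = 9.1093837015e-31                              # electron mass (kg)
sigma_dc = drude_sigma(0.0, 1e24, 1e-13, 0.26 * m_e)  # purely real at omega = 0
sigma_1thz = drude_sigma(2 * 3.141592653589793e12, 1e24, 1e-13, 0.26 * m_e)
```

Fitting such a curve to the measured photoconductivity over 0.5-2 THz yields the carrier density n, which is how a dopant count can be estimated from spectroscopic data.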
Risk factors in attention deficit hyperactivity disorder: a family study
Poissant, Hélène; Rapin, Lucile
2012-01-01
Abstract Objective: Our study aims to evaluate the risk factors associated with attention deficit hyperactivity disorder (ADHD) in terms of comorbidities and adversity factors within families with ADHD. Methods: 137 parents of 104 children with ADHD and 40 parents of 34 control children answered the items of a questionnaire. Chi-square tests and Student's t-tests measured the association of each item with the groups and the differences between groups. Results: Children with ADHD had weaker school performance and a higher prevalence of learning, oppositional, conduct, and anxiety disorders than control children. Learning difficulties were more often reported in fathers of children with ADHD. In addition, social isolation and road accidents were more frequent among mothers of children with ADHD; these mothers suffered more from depression and anxiety disorder and took more medication than control mothers. Conclusion: The study of risk factors reveals a link between parents and children, specifically the presence of depression among mothers of children with ADHD and of learning difficulties among the fathers, suggesting a familial component in the disorder. The under-representation of ADHD among fathers of children with ADHD is discussed. PMID:23133459
Reduction of In-Stent Restenosis by Cholesteryl Ester Transfer Protein Inhibition.
Wu, Ben J; Li, Yue; Ong, Kwok L; Sun, Yidan; Shrestha, Sudichhya; Hou, Liming; Johns, Douglas; Barter, Philip J; Rye, Kerry-Anne
2017-12-01
Angioplasty and stent implantation, the most common treatment for atherosclerotic lesions, have a significant failure rate because of restenosis. This study asks whether increasing plasma high-density lipoprotein (HDL) levels by inhibiting cholesteryl ester transfer protein activity with the anacetrapib analog, des-fluoro-anacetrapib, prevents stent-induced neointimal hyperplasia. New Zealand White rabbits received normal chow or chow supplemented with 0.14% (wt/wt) des-fluoro-anacetrapib for 6 weeks. Iliac artery endothelial denudation and bare-metal stent deployment were performed after 2 weeks of des-fluoro-anacetrapib treatment. The animals were euthanized 4 weeks poststent deployment. Relative to control, dietary supplementation with des-fluoro-anacetrapib reduced plasma cholesteryl ester transfer protein activity and increased plasma apolipoprotein A-I and HDL cholesterol levels by 53±6.3% and 120±19%, respectively. Non-HDL cholesterol levels were unaffected. Des-fluoro-anacetrapib treatment reduced the intimal area of the stented arteries by 43±5.6% (P<0.001), the media area was unchanged, and the arterial lumen area increased by 12±2.4% (P<0.05). Des-fluoro-anacetrapib treatment inhibited vascular smooth muscle cell proliferation by 41±4.5% (P<0.001). Incubation of isolated HDLs from des-fluoro-anacetrapib-treated animals with human aortic smooth muscle cells at apolipoprotein A-I concentrations comparable to their plasma levels inhibited cell proliferation and migration. These effects were dependent on scavenger receptor-B1, the adaptor protein PDZ domain-containing protein 1, and phosphatidylinositol-3-kinase/Akt activation. HDLs from des-fluoro-anacetrapib-treated animals also inhibited proinflammatory cytokine-induced human aortic smooth muscle cell proliferation and stent-induced vascular inflammation.
Inhibiting cholesteryl ester transfer protein activity in New Zealand White rabbits with iliac artery balloon injury and stent deployment increases HDL levels, inhibits vascular smooth muscle cell proliferation, and reduces neointimal hyperplasia in a scavenger receptor-B1-, PDZ domain-containing protein 1-, and phosphatidylinositol-3-kinase/Akt-dependent manner. © 2017 American Heart Association, Inc.
New methodologies for conducting tests in the Price-Paidoussis wind tunnel
NASA Astrophysics Data System (ADS)
Flores Salinas, Manuel
This master's thesis in automated production engineering describes the work carried out in the Price-Paidoussis wind tunnel of the LARCASE laboratory to establish the experimental methodologies and test procedures to be used with the wing models currently at the laboratory. The methodologies and procedures presented here will serve to prepare the wind tunnel tests of project MDO-505, Morphing Architectures and Related Technologies for Wing Efficiency Improvement, which will take place during 2015. First, a brief history of subsonic wind tunnels is given. The different sections of the Price-Paidoussis wind tunnel are described, with emphasis on their influence on the quality of the flow in the test chamber. Next comes an introduction to pressure, its measurement during wind tunnel tests, and the instruments used for wind tunnel testing at the LARCASE laboratory, in particular the XCQ-062 piezoelectric sensor. Particular attention is paid to its operating mode, its installation, the measurement and detection of frequencies, and the sources of error when using high-precision sensors such as the XCQ-062 series from the supplier Kulite. Finally, the procedures and methodologies developed for tests in the Price-Paidoussis wind tunnel are applied to four different types of wings. The article "New methodology for wind tunnel calibration using neural networks - EGD approach", on a new way of predicting the flow characteristics inside the Price-Paidoussis wind tunnel, is found in Appendix 2 of this document. That article covers the creation of a multilayer neural network and the training of its neurons; the network's results are then compared with values simulated with the Fluent software.
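The pressure measurements described above feed directly into the flow speed in the test chamber through the incompressible Bernoulli relation; a minimal sketch, assuming sea-level air density (the dynamic pressure value is illustrative, not a measurement from the thesis):

```python
import math

def airspeed(q_dynamic_pa, rho=1.225):
    """Flow speed from dynamic pressure: q = 0.5 * rho * V^2  =>  V = sqrt(2 q / rho).

    q_dynamic_pa: dynamic pressure in Pa (e.g. from a pitot-static pair or a
    piezoelectric pressure sensor), rho: air density in kg/m^3 (1.225 at sea level, ISA).
    """
    return math.sqrt(2.0 * q_dynamic_pa / rho)

# Hypothetical reading: 612.5 Pa of dynamic pressure gives roughly 31.6 m/s
v = airspeed(612.5)
```

This relation is why sensor accuracy matters so much in wind tunnel calibration: any bias in the measured pressure propagates (as a square root) into the inferred velocity field.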
Kube, Tobias; D'Astolfo, Lisa; Glombiewski, Julia A; Doering, Bettina K; Rief, Winfried
2017-09-01
Dysfunctional expectations are considered to be core features of various mental disorders. The aim of the study was to develop the Depressive Expectations Scale (DES) as a depression-specific measure for the assessment of dysfunctional expectations. Whereas previous research primarily focused on general cognitions and attitudes, the DES assesses 25 future-directed expectations (originally 75 items) which are situation-specific and falsifiable. To evaluate the psychometric properties of the DES, the scale was completed by 175 participants with and without severe depressive symptoms in an online survey. Participants additionally completed the Patient Health Questionnaire modules for depression (PHQ-9) and anxiety (GAD-7). People experiencing depressive symptoms were informed about the study with the help of self-help organizations. Reliability analyses indicated excellent internal consistency of the scale. An exploratory factor analysis revealed four factors: social rejection, social support, mood regulation, and ability to perform. The DES sum score strongly correlated with the severity of depressive symptoms. The DES sum score also significantly correlated with symptoms of generalized anxiety. The DES was shown to have excellent reliability; validity analyses were promising. As the DES items are situation-specific and falsifiable, they can be tested by the individual using behavioural experiments and may therefore facilitate cognitive restructuring. Thus, a structured assessment of patients' expectations with the help of the DES can provide a basis for interventions within cognitive-behavioural treatment of depression. Assessing situation-specific expectations in patients experiencing depressive symptoms can provide a basis for the conduction of behavioural experiments to test patients' expectations. For the use of behavioural experiments, therapists should choose those dysfunctional expectations with which a patient strongly agrees.
To modify patients' expectations, they should be exposed to situations where the discrepancy between patients' expectations and actual situational outcomes can be maximized. The Depressive Expectations Scale can be completed repeatedly to monitor a patient's progress within cognitive-behavioural treatment. © 2016 The British Psychological Society.
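The internal-consistency analysis reported above is conventionally Cronbach's alpha; a minimal sketch of the computation, using toy data rather than the study's responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list per scale item, each holding that item's score for every
    respondent (all lists the same length).
    """
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum score
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Perfectly consistent toy data: every item gives identical scores, so alpha = 1
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]])
```

Values of alpha near 1 indicate that the items measure one coherent construct; "excellent internal consistency" typically means alpha above roughly 0.9.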
From SED HI concept to Pleiades FM detection unit measurements
NASA Astrophysics Data System (ADS)
Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex
2017-11-01
The first PLEIADES high resolution instrument flight model, under development by Thales Alenia Space on behalf of CNES, is currently in its integration and test phases. Based on the SED HI detection unit concept, the PLEIADES detection unit was fully qualified before integration at telescope level. The main radiometric performances have been measured on the engineering and first flight models, and this paper presents the results obtained on both. After a review of the SED HI concept and of the design and performance of the main elements (charge-coupled detectors, focal plane, and video processing unit), the detection unit radiometric performances are presented and compared with the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: video signal characteristics; dark signal level and dark signal non-uniformity; photo-response non-uniformity; non-linearity and differential non-linearity; and temporal and spatial noises with respect to the system definitions. The PLEIADES detection unit allows tuning of several functions: reference and sampling time positioning, anti-blooming level, gain value, and TDI line number. These parameters are presented with their associated optimisation criteria for achieving the system radiometric performances, together with their sensitivities on radiometric performance. All the results of the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for an Earth high resolution observation system, allowing optimised performances at instrument and satellite levels.
Measuring Diversity and Inclusion in Academic Medicine: The Diversity Engagement Survey (DES)
Person, Sharina D.; Jordan, C. Greer; Allison, Jeroan J.; Fink Ogawa, Lisa M.; Castillo-Page, Laura; Conrad, Sarah; Nivet, Marc A.; Plummer, Deborah L.
2018-01-01
Purpose To produce a physician and scientific workforce capable of delivering high quality, culturally competent health care and research, academic medical centers must assess their capacity for diversity and inclusion and respond to identified opportunities. Thus, the Diversity Engagement Survey (DES) is presented as a diagnostic and benchmarking tool. Method The 22-item DES connects workforce engagement theory with inclusion and diversity constructs. Face and content validity were established based on decades of previous work to promote institutional diversity. The survey was pilot tested at a single academic medical center and subsequently administered at 13 additional academic medical centers. Cronbach alphas assessed internal consistency, and confirmatory factor analysis (CFA) established construct validity. Criterion validity was assessed by the observed separation in scores for groups traditionally recognized to have less workforce engagement. Results The sample consisted of 13,694 individuals at 14 medical schools from across the U.S. who responded to the survey administered between 2011 and 2012. The Cronbach alphas for the inclusion and engagement factors (range: 0.68 to 0.85), the CFA fit indices, and the item correlations with latent constructs indicated an acceptable model fit and that the questions measured the intended concepts. DES scores clearly distinguished higher and lower performing institutions. The DES detected important disparities for black respondents, women, and those who did not identify as heterosexual. Conclusions This study demonstrated that the DES is a reliable and valid instrument for internal assessment and evaluation or external benchmarking of institutional progress in building inclusion and engagement. PMID:26466376
Optimisation and establishment of diagnostic reference levels in paediatric plain radiography
NASA Astrophysics Data System (ADS)
Paulo, Graciano do Nascimento Nobre
Purpose: This study aimed to propose Diagnostic Reference Levels (DRLs) in paediatric plain radiography and to optimise the most frequent paediatric plain radiography examinations in Portugal following an analysis and evaluation of current practice. Methods and materials: Anthropometric data (weight, patient height and thickness of the irradiated anatomy) was collected from 9,935 patients referred for a radiography procedure to one of the three dedicated paediatric hospitals in Portugal. National DRLs were calculated for the three most frequent X-ray procedures at the three hospitals: chest AP/PA projection; abdomen AP projection; pelvis AP projection. Exposure factors and patient dose were collected prospectively at the clinical sites. In order to analyse the relationship between exposure factors, the use of technical features and dose, experimental tests were made using two anthropomorphic phantoms: a) CIRS™ ATOM model 705 (height: 110 cm, weight: 19 kg) and b) Kyoto Kagaku™ model PBU-60 (height: 165 cm, weight: 50 kg). After phantom data collection, an objective image analysis was performed by analysing the variation of the mean value of the standard deviation, measured with OsiriX software (Pixmeo, Switzerland). After proposing new exposure criteria, a Visual Grading Characteristic image quality evaluation was performed blindly by four paediatric radiologists, each with a minimum of 10 years of professional experience, using anatomical criteria scoring. Results: DRLs by patient weight groups have been established for the first time. ESAK P75 DRLs for both patient age and weight groups were also obtained and are described in the thesis.
Significant dose reduction was achieved through the implementation of an optimisation programme: an average reduction of 41% and 18% on KAP P75 and ESAK P75, respectively, for chest plain radiography; an average reduction of 58% and 53% on KAP P75 and ESAK P75, respectively, for abdomen plain radiography; and an average reduction of 47% and 48% on KAP P75 and ESAK P75, respectively, for pelvis plain radiography. Conclusion: Portuguese DRLs were obtained for paediatric plain radiography (chest AP/PA, abdomen and pelvis). Experimental phantom tests identified adequate plain radiography exposure criteria, validated by objective and subjective image quality analysis. The new exposure criteria were put into practice in one of the paediatric hospitals by introducing an optimisation programme. The implementation of the optimisation programme allowed a significant dose reduction to paediatric patients without compromising image quality. (Abstract shortened by ProQuest.)
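A DRL is conventionally set at the 75th percentile (P75) of the observed dose distribution for a patient group; a minimal sketch of that statistic, using hypothetical entrance surface air kerma (ESAK) values rather than the study's data:

```python
def percentile_75(doses):
    """75th percentile with linear interpolation -- the conventional DRL statistic."""
    xs = sorted(doses)
    pos = 0.75 * (len(xs) - 1)
    lo, frac = int(pos), pos - int(pos)
    return xs[lo] if frac == 0 else xs[lo] + frac * (xs[lo + 1] - xs[lo])

# Hypothetical ESAK values (uGy) for one weight group of chest examinations
drl = percentile_75([30, 42, 55, 61, 70, 88, 95, 120])  # 89.75 uGy
```

Doses routinely exceeding this P75 value for a given examination and weight group would flag the practice for review, which is how the optimisation programme above translates DRLs into dose reductions.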
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelising at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach.
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
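The vector-of-particles scheduling idea described above can be sketched in miniature: group particles into fixed-size baskets and dispatch the baskets to a pool of workers. This is a toy illustration of the scheduling pattern only (the basket size, the placeholder "transport" step, and all names are assumptions, not the Geant-V implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def transport_basket(basket):
    """Placeholder for the vectorised transport step applied to one basket."""
    return [e * e for e in basket]  # stand-in physics: square each 'energy'

def schedule(particles, basket_size=4, workers=2):
    """Group particles into fixed-size baskets and dispatch them to a worker
    pool, mimicking fine-grain scheduling of particle vectors onto resources."""
    baskets = [particles[i:i + basket_size]
               for i in range(0, len(particles), basket_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        processed = pool.map(transport_basket, baskets)  # order-preserving
    return [x for basket in processed for x in basket]

out = schedule(list(range(8)))
```

Processing whole baskets at a time is what lets the real framework exploit instruction-level vectorisation inside each transport step while keeping many computing resources busy.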
Kitabata, Hironori; Loh, Joshua P; Pendyala, Lakshmana K; Omar, Alfazir; Ota, Hideaki; Minha, Sa'ar; Magalhaes, Marco A; Torguson, Rebecca; Chen, Fang; Satler, Lowell F; Pichard, Augusto D; Waksman, Ron
2014-04-01
We aimed to compare neointimal tissue characteristics between bare-metal stents (BMS) and drug-eluting stents (DES) at long-term follow-up using optical coherence tomography (OCT) and virtual histology intravascular ultrasound (VH-IVUS). Neoatherosclerosis in neointima has been reported in BMS and in DES. Thirty patients with 36 stented lesions [BMS (n=17) or DES (n=19)] >3 years after implantation were prospectively enrolled. OCT and VH-IVUS were performed and analyzed independently. Stents with ≥70% diameter stenosis were excluded. The median duration from implantation was 126.0 months in the BMS group and 60.0 months in the DES group (p<0.001). Lipid-laden intima (58.8% vs. 42.1%, p=0.317), thrombus (17.6% vs. 5.3%, p=0.326), and calcification (35.3% vs. 26.3%, p=0.559) did not show significant differences between BMS and DES. When divided into 3 time periods, the cumulative incidence of lipid-laden neointima from >3 years to <9 years was similar between BMS and DES (42.9% vs. 42.1%, p=1.000). Furthermore, it continued to gradually increase over time in both groups. OCT-derived thin-cap fibroatheroma (TCFA) was observed in 17.6% of BMS- and 5.3% of DES-treated lesions (p=0.326). No stents had evidence of intimal disruption. The percentage volume of necrotic core (16.1% [9.7, 20.3] vs. 9.7% [7.0, 16.5], p=0.062) and dense calcium (9.5% [3.8, 13.6] vs. 2.7% [0.4, 4.9], p=0.080) in neointima tended to be greater in BMS-treated lesions. Intra-stent VH-TCFA (BMS vs. DES 45.5% vs. 18.2%, p=0.361) did not differ significantly. At long-term follow-up beyond 3 years after implantation, the intra-stent neointimal tissue characteristics appeared similar for both BMS and DES. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Houehanou, Ernesto C.
The incorporation of supplementary cementitious materials in concrete is known for its technological and environmental advantages. To ensure greater use of these materials, knowledge must be broadened in this field, especially regarding the factors governing the durability of structures built with concretes containing mineral admixtures. Until now, most studies on concretes containing fly ash and slag seem to agree on their poorer scaling durability, especially when compared with ordinary concrete. The reasons for this poorer performance are not all known, and this often limits the incorporation of fly ash and slag in concrete for structures heavily exposed to freeze-thaw cycles in the presence of de-icing salts. This thesis aims at understanding the issues in the scaling durability of concretes containing supplementary cementitious materials such as fly ash and slag. The objectives are to better understand the representativeness and relative severity of the standardized ASTM C672 and NQ 2621-900 tests for evaluating the scaling durability of these concretes, to study the influence of the curing method on scaling durability, and to study the relationship between scaling durability and the sorptivity of concrete surfaces, as well as the particularities of the microstructure of concretes containing fly ash. Five types of air-entrained concretes containing 25% and 35% fly ash and slag, as well as 1% and 2% silica fume, were produced, cured according to different methods, and subjected to accelerated tests following the two standardized procedures as well as a sorptivity test. The different curing methods were chosen so as to highlight the influence of the test parameters as well as that of the curing method itself.
The laboratory durability of the tested concretes was compared with that of similar concretes after 4 and 6 years of service. The microstructure of the in-service concretes was analysed using a scanning electron microscope (SEM). The results show that the quality of curing greatly influences the scaling durability of concretes containing fly ash and slag, especially when they are subjected to accelerated laboratory tests. The duration of the wet pre-treatment is a key parameter of the scaling durability of concretes tested in the laboratory; the wet pre-treatment corresponds to the total duration of wet curing (100% RH) plus the pre-saturation period. For both test methods, extending the wet pre-treatment to 28 days improves the scaling resistance of all concrete types, particularly that of concretes with fly ash. The 7-day pre-saturation period of the NQ 2621-900 procedure has an effect similar to that of wet curing of the same length. Wet curing of 28 days appears optimal and leads to a more realistic estimate of the actual scaling resistance of the concretes. For the same duration of wet pre-treatment, the NQ 2621-900 and ASTM C672 procedures give equivalent results. The use of a mould with a draining bottom has no effect on the scaling resistance of the concretes in this study. Although curing in lime-saturated water supplies all the water required to promote the development of the concrete's properties and the improvement of its scaling durability, it leaches the alkali ions, which unfavourably lowers the alkalinity and pH of the pore solution of the cement paste near the exposed surface.
The use of a curing agent better protects concretes containing fly ash and significantly improves their scaling resistance, but it tends to reduce the scaling durability of concretes containing slag. To develop good scaling resistance, it is essential to produce an impermeable concrete surface that resists the penetration of external water. The permeability and porosity of the concrete skin are closely linked to sorptivity. Extending the wet curing period of concretes with supplementary cementitious materials systematically decreases sorptivity and improves their scaling durability, particularly in the case of concretes with fly ash. The results show a good correlation between the results of the scaling tests and the sorptivity measurements. The correlation established between sorptivity and the scaling durability of the concretes, the decisive effect of carbonation on the scaling durability of concretes with fly ash, and the explanation of the origin of the difference in severity between the ASTM C672 and NQ 2621-900 tests are the scientific contributions of this thesis. On the technical and industrial level, it identifies the curing method that promotes better scaling durability of concretes and suggests a laboratory characterization method that would better predict in-service behaviour. Keywords: concrete, fly ash, slag, durability, scaling, curing, sorptivity
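Sorptivity, the quantity correlated with scaling durability above, is conventionally the slope S of cumulative water absorption i plotted against the square root of time, i = S·sqrt(t). A minimal sketch of extracting S from absorption measurements (the data points are hypothetical, not from the thesis):

```python
import math

def sorptivity(times_s, cumulative_absorption_mm):
    """Least-squares slope S (zero intercept) of cumulative absorption i
    versus sqrt(t), from the standard sorption relation i = S * sqrt(t)."""
    xs = [math.sqrt(t) for t in times_s]
    return (sum(x * y for x, y in zip(xs, cumulative_absorption_mm))
            / sum(x * x for x in xs))

# Hypothetical measurements lying exactly on i = 0.05 * sqrt(t)
times = [60, 300, 600, 1800]
i_mm = [0.05 * math.sqrt(t) for t in times]
s = sorptivity(times, i_mm)  # recovers 0.05 mm per sqrt(second)
```

A lower S means a less absorptive concrete skin, which is the mechanism invoked above to explain why extended wet curing improves scaling resistance.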
Hastings, Gareth D.; Marsack, Jason D.; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A.
2017-01-01
Purpose To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Methods Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. Results For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ±SD was −0.06 ±0.04 with both refractions; dilated was −0.05 ±0.04 with the objective, and −0.05 ±0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. Conclusions A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. 
In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred over subjective refraction. Subjective refraction was preferred by habitually undercorrected hyperopic eyes. PMID:28370389
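Simultaneously optimising sphere, cylinder, and axis against an image quality metric amounts to a search over the prescription space; the following is a toy brute-force sketch of that idea (the grid ranges, step sizes, and the stand-in metric are assumptions for illustration, not the study's VSX computation, which is derived from measured wavefront error):

```python
import itertools

def best_prescription(iq_metric, spheres, cylinders, axes):
    """Brute-force search for the sphere/cylinder/axis combination that
    maximises a visual image quality metric (a stand-in for VSX)."""
    return max(itertools.product(spheres, cylinders, axes),
               key=lambda rx: iq_metric(*rx))

# Toy metric peaked at sphere = -2.00 D, cylinder = -0.50 D, axis = 90 deg
def toy_iq(s, c, a):
    return -((s + 2.0) ** 2 + (c + 0.5) ** 2 + ((a - 90) / 90.0) ** 2)

sph = [x * 0.25 for x in range(-16, 1)]  # -4.00 D to 0.00 D in 0.25 D steps
cyl = [x * 0.25 for x in range(-8, 1)]   # -2.00 D to 0.00 D
ax = range(0, 180, 5)                    # axis in degrees
rx = best_prescription(toy_iq, sph, cyl, ax)  # (-2.0, -0.5, 90)
```

Searching in clinically meaningful 0.25 D steps keeps the candidate set small enough to enumerate, which is why an objective refraction of this kind can be computed directly from a single wavefront measurement.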
Advances in drug eluting stents – focus on the Endeavor® zotarolimus stent
Bridges, Jonathan; Cutlip, Donald
2009-01-01
Coronary artery disease remains one of the leading causes of death in the United States. Over the last 30 years, the development of coronary artery angioplasty and stenting has drastically reduced mortality during acute coronary syndromes while also reducing symptoms of chronic coronary artery disease. Unfortunately, the placement of stents in a coronary artery can be complicated by in-stent thrombosis or restenosis. In 2003–2004, a new generation of stents was introduced to the market with the goal of reducing the rate of restenosis. These stents, called drug eluting stents (DES), are coated with a pharmacological agent designed to reduce the neointimal hyperplasia associated with restenosis. Within a year, approximately 80% of all percutaneous coronary interventions performed within the US involved placement of a DES. In 2006, a controversy arose about the possibility of a statistically significant increased risk of acute stent thrombosis associated with DES especially when used for an “off label” indication. This risk was attributed to delayed endothelization. This controversy has led to a reduction in the use of DES along with longer use of dual platelet inhibition with aspirin and clopidogrel. Recently Medtronic introduced a new DES to the market called the Endeavor® stent – a zotarolimus eluting stent. PMID:22915908
NASA Astrophysics Data System (ADS)
Goyette, Stephane
1995-11-01
The subject of this thesis is the numerical modelling of regional climate. The main objective is to develop a regional climate model capable of simulating phenomena at the spatial mesoscale. Our study area is the North American West Coast, chosen because of the complexity of its relief and the control it exerts on climate. The motivations for this study are multiple: on the one hand, the coarse spatial resolution of atmospheric general circulation models (GCMs) cannot, in practice, be increased without prohibitive integration costs; on the other hand, environmental management increasingly demands regional climate data at higher spatial resolution. Until now, GCMs have been the models most valued for their ability to simulate climate and global climate change. However, fine-scale climate phenomena still escape GCMs because of their coarse spatial resolution, and the socio-economic repercussions of possible climate change are closely tied to phenomena imperceptible to current GCMs. To circumvent some of these resolution-related problems, a practical approach is to take a limited spatial domain of a GCM and nest within it another numerical model with a high-resolution computational grid. This nesting then implies a new numerical simulation. This "retro-simulation" is guided within the restricted domain by pieces of information supplied by the GCM and forced by mechanisms handled solely by the nested model. Thus, to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR that provides regional climate information valid at fine spatial scale.
This new class of nested "intelligent" interpolating models belongs to the family of so-called "driven" models. The guiding hypothesis of our study is that fine-scale climate is often governed by forcings originating at the surface rather than by large-scale atmospheric transport. The proposed technique therefore guides FIZR with the sampled dynamics of a GCM and forces it with the GCM physics as well as a mesoscale orographic forcing, at each node of the fine computational grid. To validate the robustness and accuracy of our regional climate model, we chose the West Coast region of the North American continent, which is notably characterised by a geographic distribution of precipitation and temperature strongly influenced by the underlying relief. The results of a one-month (January) simulation with FIZR show that we can simulate precipitation and screen-level temperature fields much closer to climate observations than those simulated by a GCM. This performance is clearly attributable to the mesoscale orographic forcing as well as to the surface characteristics determined at fine scale. A model similar to FIZR can, in principle, be implemented on any GCM; hence any research organisation involved in global large-scale numerical modelling could equip itself with such a regionalisation tool.
Navigation of an autonomous vehicle around an asteroid
NASA Astrophysics Data System (ADS)
Dionne, Karine
Planetary exploration missions use spacecraft to acquire the scientific data that advance our knowledge of the solar system. Since the 1990s, these missions have targeted not only planets but also smaller celestial bodies such as asteroids. These bodies pose a particular challenge for navigation systems because their dynamical environment is complex. A space probe must react quickly to the gravitational perturbations present, otherwise its safety could be compromised. Since communication delays with Earth can often reach several tens of minutes, software allowing greater operational autonomy must be developed for this type of mission. This thesis presents an autonomous navigation system that determines the position and velocity of a satellite in orbit around an asteroid. It is a three-degree-of-freedom adaptive extended Kalman filter. The proposed system relies on optical imagery to detect "landmarks" that have been previously mapped: craters, boulders, or any physical feature discernible by the camera. The research focuses on the state-estimation techniques specific to autonomous navigation; suitable image-processing software is therefore assumed to exist. The main research contribution is the inclusion, at each estimation cycle, of a range measurement to improve navigation performance. An adaptive state estimator is required to process these measurements because their accuracy varies in time due to pointing error. The secondary research contributions concern the observability analysis of the system and a sensitivity analysis of six main design parameters.
Simulation results show that adding one range measurement per update cycle yields a significant improvement in navigation performance. This procedure reduces the estimation error and the periods of non-observability, in addition to countering the dilution of precision of the measurements. The sensitivity analyses confirm the contribution of the range measurements to the overall reduction of the estimation error, and this over a wide range of design parameters. They also indicate that the mapping error is a critical parameter for the performance of the developed navigation system. Keywords: state estimation, adaptive Kalman filter, optical navigation, lidar, asteroid, numerical simulations
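The core idea of the thesis — a Kalman filter whose measurement-noise variance is adapted online because measurement accuracy drifts in time — can be illustrated with a minimal one-dimensional sketch. Everything below (dynamics, noise levels, the innovation-based adaptation rule) is an invented toy, not the thesis's three-degree-of-freedom filter.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # range-like position measurement
Q = 1e-4 * np.eye(2)                    # process noise covariance

x_true = np.array([0.0, 0.1])           # true position and velocity
x = np.array([5.0, 0.0])                # deliberately poor initial estimate
P = np.diag([25.0, 1.0])
R_hat = 1.0                             # adaptive measurement-variance estimate
alpha = 0.3                             # smoothing factor of the adaptation

for k in range(200):
    # truth, and a measurement whose accuracy drifts in time (pointing error)
    x_true = F @ x_true
    sigma = 0.5 + 0.4 * np.sin(0.05 * k)
    z = (H @ x_true).item() + rng.normal(0.0, sigma)

    # prediction step
    x = F @ x
    P = F @ P @ F.T + Q

    # adapt R from the innovation statistics, then update
    nu = z - (H @ x).item()             # innovation
    hph = (H @ P @ H.T).item()
    R_hat = max(1e-3, (1 - alpha) * R_hat + alpha * (nu * nu - hph))
    K = (P @ H.T).ravel() / (hph + R_hat)
    x = x + K * nu
    P = (np.eye(2) - np.outer(K, H)) @ P
```

The adaptation step estimates the measurement variance from the squared innovation minus the predicted measurement uncertainty, smoothed over cycles, so the gain automatically de-weights the measurement when its accuracy degrades.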
CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox
NASA Astrophysics Data System (ADS)
Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano
2018-03-01
Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.
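The multi-fidelity pattern described above — a cheap analytical surrogate screens the combinatorial space, and only the most promising candidates are re-evaluated with the accurate model — can be sketched as follows. The target set and both cost functions are invented stand-ins, not CAMELOT's models.

```python
import itertools
import math

# Hypothetical target set: (semi-major-axis change, inclination change) proxies
targets = {"A": (0.8, 2.0), "B": (0.3, 5.0), "C": (1.5, 1.0), "D": (0.6, 3.5)}

def cheap_cost(a, b):
    """Analytical surrogate: crude delta-v proxy for the transfer a -> b."""
    da = abs(targets[a][0] - targets[b][0])
    di = abs(targets[a][1] - targets[b][1])
    return da + 0.5 * di

def expensive_cost(a, b):
    """Stand-in for the accurate (slow) model: surrogate plus a correction."""
    return cheap_cost(a, b) * (1.0 + 0.1 * math.sin(3 * ord(a) + ord(b)))

def tour_cost(order, cost):
    return sum(cost(order[i], order[i + 1]) for i in range(len(order) - 1))

# 1) Rank every visiting order with the cheap surrogate...
orders = list(itertools.permutations(targets))
ranked = sorted(orders, key=lambda o: tour_cost(o, cheap_cost))

# 2) ...then re-evaluate only the most promising few with the accurate model.
shortlist = ranked[:5]
best = min(shortlist, key=lambda o: tour_cost(o, expensive_cost))
```

The payoff is that the expensive model is called on a handful of sequences instead of all permutations, which is what makes long multi-target spiral missions tractable.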
Boundary element based multiresolution shape optimisation in electrostatics
NASA Astrophysics Data System (ADS)
Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan
2015-09-01
We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.
NASA Astrophysics Data System (ADS)
Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari
2017-09-01
This paper introduces a new type of piezoelectric actuator, the Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel caps of Mikbal and Cymbal actuators were optimised by using genetic algorithms in combination with the Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end-cap profile, so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was carried out for five Mikbal- and two Cymbal-type actuators with diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced, and the measured and modelled unclamped performances were found to agree within 2.8%. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement, while its blocking force was 57% lower, compared with a Cymbal with the same size of disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.
The use of surrogates for an optimal management of coupled groundwater-agriculture hydrosystems
NASA Astrophysics Data System (ADS)
Grundmann, J.; Schütze, N.; Brettschneider, M.; Schmitz, G. H.; Lennartz, F.
2012-04-01
To ensure optimal and sustainable water resources management in arid coastal environments, we develop a new simulation-based integrated water management system. It aims at achieving the best possible solutions for groundwater withdrawals for agricultural and municipal water use, including saline water management, together with a substantial increase of the water use efficiency in irrigated agriculture. To achieve a robust and fast operation of the management system regarding water quality and water quantity, we develop appropriate surrogate models by combining physically based process modelling with methods of artificial intelligence. We use an artificial neural network for modelling the aquifer response, including the seawater interface, trained on a scenario database generated by a numerical density-dependent groundwater flow model. For simulating the behaviour of highly productive agricultural farms, crop water production functions are generated by means of soil-vegetation-atmosphere-transport (SVAT) models, adapted to the regional climate conditions, and a novel evolutionary optimisation algorithm for optimal irrigation scheduling and control. We apply both surrogates exemplarily within a simulation-based optimisation environment using the characteristics of the south Batinah region in the Sultanate of Oman, which is affected by saltwater intrusion into the coastal aquifer due to excessive groundwater withdrawal for irrigated agriculture. We demonstrate the effectiveness of our methodology for the evaluation and optimisation of different irrigation practices, cropping patterns and resulting abstraction scenarios. Because of contradicting objectives, such as profit-oriented agriculture versus aquifer sustainability, a multi-criteria optimisation is performed.
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using either double and/or single precision arithmetic) are capable of scaling to systems as large as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules: the "Component Description", the "Expert System" for the synthesis of several process chains, and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as a representative value. Finally, the process chain that is capable of manufacturing a functionally graded component optimally with regard to the property distributions of the component description is presented by means of a dedicated specification technique.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
The carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
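The Gaussian plume model referred to above has a closed form, which is what makes it so much cheaper than CFD; the dispersion parameters σy and σz are the quantities the paper tunes against CFD reference profiles. A sketch with power-law dispersion parameters follows; the coefficient values are placeholders for illustration, not the paper's optimised values.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a_y, b_y, a_z, b_z):
    """Ground-reflected Gaussian plume concentration.

    x: downwind, y: crosswind, z: height [m]; Q: release rate [kg/s];
    u: wind speed [m/s]; H: effective source height [m].
    sigma_y = a_y * x**b_y and sigma_z = a_z * x**b_z are the dispersion
    parameters -- the quantities optimised in the paper (placeholder values here).
    """
    sy = a_y * x ** b_y
    sz = a_z * x ** b_z
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))

# Centreline concentration 500 m downwind, with illustrative coefficients
c = gaussian_plume(x=500.0, y=0.0, z=2.0, Q=10.0, u=5.0, H=10.0,
                   a_y=0.08, b_y=0.9, a_z=0.06, b_z=0.85)
```

An optimisation such as the paper's GA would adjust (a_y, b_y, a_z, b_z) per atmospheric stability class until concentrations from this formula match the CFD (or Kit Fox) reference profiles.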
Collaborative development for setup, execution, sharing and analytics of complex NMR experiments.
Irvine, Alistair G; Slynko, Vadim; Nikolaev, Yaroslav; Senthamarai, Russell R P; Pervushin, Konstantin
2014-02-01
Factory settings of NMR pulse sequences are rarely ideal for every scenario in which they are utilised. The optimisation of NMR experiments has for many years been performed locally, with implementations often specific to an individual spectrometer. Furthermore, these optimised experiments are normally retained solely for the use of an individual laboratory, spectrometer or even single user. Here we introduce a web-based service that provides a database for the deposition, annotation and optimisation of NMR experiments. The application uses a Wiki environment to enable the collaborative development of pulse sequences. It also provides a flexible mechanism to automatically generate NMR experiments from deposited sequences. Multidimensional NMR experiments on proteins and other macromolecules consume significant resources, in terms of both spectrometer time and the effort required to analyse the results. Systematic analysis of simulated experiments can enable optimal allocation of NMR resources for structural analysis of proteins. Our web-based application (http://nmrplus.org) provides all the necessary information, including the auxiliaries (waveforms, decoupling sequences, etc.), for the analysis of experiments by accurate numerical simulation of multidimensional NMR experiments. The online database of NMR experiments, together with a systematic evaluation of their sensitivity, provides a framework for the selection of the most efficient pulse sequences. The development of such a framework provides a basis for the collaborative optimisation of pulse sequences by the NMR community, with the benefits of this collective effort being available to the whole community. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Homier, Ram
In the current environmental context, photovoltaics benefits from the increased research effort in renewable energies. To reduce the cost of electricity production by direct conversion of light energy into electricity, concentrated photovoltaics is attractive. The principle is to concentrate a large amount of light energy onto small areas of high-efficiency multi-junction solar cells. When fabricating a solar cell, it is essential to include a method to reduce light reflection at the surface of the device. Designing an antireflective coating (ARC) for multi-junction solar cells is challenging because of the wide absorption band and the need to match the current produced by each subcell. Silicon nitride deposited by PECVD under standard conditions is widely used in the silicon-based solar cell industry. However, this dielectric absorbs in the short-wavelength range. We propose the use of silicon nitride deposited by low-frequency PECVD (LFSiN), optimised for a high refractive index and low optical absorption, as the ARC for III-V/Ge triple-junction solar cells. This material can also serve as a passivation/encapsulation layer. Simulations show that a SiO2/LFSiN double-layer ARC can be very effective in reducing reflection losses over the wavelength range of the limiting subcell, both for triple-junction solar cells limited by the top subcell and for those limited by the middle subcell. We also demonstrate that the performance of the structure is robust to fluctuations in the parameters of the PECVD layers (thicknesses, refractive index).
Keywords: concentrated photovoltaics (CPV), multi-junction solar cells (MJSC), antireflective coating (ARC), III-V semiconductor passivation, silicon nitride (SixNy), PECVD.
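The effect of a double-layer ARC like the SiO2/LFSiN stack above can be illustrated with the standard normal-incidence transfer-matrix method for thin films. The refractive indices below (SiO2 ≈ 1.45, a high-index nitride ≈ 2.2, Ge ≈ 4.0) and the quarter-wave design are illustrative assumptions, not the thesis's optimised design.

```python
import numpy as np

def reflectance(layers, n_sub, lam, n0=1.0):
    """Normal-incidence reflectance of a thin-film stack (transfer matrix).

    layers: list of (refractive index, thickness) from the air side inward;
    all indices are assumed real (non-absorbing).
    """
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / lam            # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n0 * B - C) / (n0 * B + C)
    return float(abs(r) ** 2)

lam0 = 550e-9                                      # design wavelength [m]
n_sio2, n_lfsin, n_ge = 1.45, 2.2, 4.0             # illustrative indices
arc = [(n_sio2, lam0 / (4 * n_sio2)),              # quarter-wave SiO2
       (n_lfsin, lam0 / (4 * n_lfsin))]            # quarter-wave high-index nitride

bare = reflectance([], n_ge, lam0)                 # bare substrate, ~36 %
coated = reflectance(arc, n_ge, lam0)              # well under 10 % at lam0
```

Even this non-optimised quarter-wave pair cuts the design-wavelength reflectance several-fold relative to the bare substrate; a full design would additionally weigh the reflectance over the limiting subcell's wavelength band.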
Dyer, Bryce; Woolley, Howard
2017-10-01
It has been reported that cycling-specific research relating to participants with an amputation is extremely limited in both volume and frequency. However, practitioners might participate in the development of cycling-specific prosthetic limbs. This technical note presents the development of a successful design of a prosthetic limb developed specifically for competitive cycling. This project resulted in a hollow composite construction which was low in weight and shaped to reduce a rider's aerodynamic drag. The new prosthesis reduces the overall mass of more traditional designs by a significant amount yet provides a more aerodynamic shape over traditional approaches. These decisions have yielded a measurable increase in cycling performance. While further refinement is needed to reduce the aerodynamic drag as much as possible, this project highlights the benefits that can exist by optimising the design of sports-specific prosthetic limbs. Clinical relevance This project resulted in the creation of a cycling-specific prosthesis which was tailored to the needs of a high-performance environment. Whilst further optimisation is possible, this project provides insight into the development of sports-specific prostheses.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEA and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems needing less function evaluations while preserving good approximation quality at the same time.
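The online convergence-detection idea above (stop when the performance indicator's variance falls below a threshold) can be sketched as follows. The window length, tolerance and the synthetic indicator are illustrative assumptions; the paper's method also monitors trend stagnation, which is omitted here.

```python
from collections import deque
import statistics

def run_with_online_stop(indicator_per_gen, window=10, var_tol=1e-6, max_gen=1000):
    """Stop the (simulated) MOEA run once the performance indicator's sample
    variance over a sliding window of generations falls below var_tol."""
    recent = deque(maxlen=window)
    for gen in range(max_gen):
        recent.append(indicator_per_gen(gen))
        if len(recent) == window and statistics.variance(recent) < var_tol:
            return gen
    return max_gen

# Synthetic indicator (e.g. hypervolume): improves quickly, then stagnates.
hv = lambda g: 1.0 - 0.5 ** g
stop_gen = run_with_online_stop(hv)
```

On this converging series the rule triggers after a couple of dozen generations instead of exhausting the full budget, which is exactly the function-evaluation saving reported in the paper.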
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to the changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters.
Correlation coefficients among the optimised model parameters and the total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to the changes in P, T or Q. As for the model performance, the model reproduces the observed runoff satisfactorily, though the runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
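The components of the composite objective function described above (Nash-Sutcliffe efficiency on flows and log-flows, plus volumetric error) are simple to compute. The sketch below is illustrative: the equal weights and the exact way the terms are aggregated are assumptions, not necessarily HBV-light's internal formulation, and the flow series is invented.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volumetric_error(obs, sim):
    """Relative volume (bias) error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return abs(np.sum(sim) - np.sum(obs)) / np.sum(obs)

def composite_objective(obs, sim, w=(1 / 3, 1 / 3, 1 / 3)):
    """Approximately equal-weight combination of NSE, NSE of log-flows and
    volumetric error (weights and aggregation are illustrative assumptions)."""
    return (w[0] * nse(obs, sim)
            + w[1] * nse(np.log(obs), np.log(sim))
            - w[2] * volumetric_error(obs, sim))

obs = np.array([2.0, 3.5, 10.0, 6.0, 4.0, 2.5])   # invented observed flows
sim = obs * 1.1                                    # a 10 % overestimating model
score = composite_objective(obs, sim)
```

Including log-flows gives low flows weight comparable to flood peaks, and the volume term penalises the systematic bias that raw NSE tolerates.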
Modelling space charges in solid insulators by spectral analysis
NASA Astrophysics Data System (ADS)
Haas, V.; Scouarnec, Ch.; Franceschi, J. L.
1998-01-01
A mathematical method based on spectral algebra is developed for the thermal modulation method, a technique for measuring the space charge distribution in solid insulators. The modelling presented makes it possible to evaluate the performance and the limitations, both numerical and physical, of the measurement method.
Diethylstilbestrol activates CatSper and disturbs progesterone actions in human spermatozoa.
Zou, Qian-Xing; Peng, Zhen; Zhao, Qing; Chen, Hou-Yang; Cheng, Yi-Min; Liu, Qing; He, Yuan-Qiao; Weng, Shi-Qi; Wang, Hua-Feng; Wang, Tao; Zheng, Li-Ping; Luo, Tao
2017-02-01
Is diethylstilbestrol (DES), a prototypical endocrine-disrupting chemical (EDC), able to induce physiological changes in human spermatozoa and affect progesterone actions? DES promoted Ca 2+ flux into human spermatozoa by activating the cation channel of sperm (CatSper) and suppressed progesterone-induced Ca 2+ signaling, tyrosine phosphorylation and sperm functions. DES significantly impairs the male reproductive system both in fetal and postnatal exposure. Although various EDCs affect human spermatozoa in a non-genomic manner, the effect of DES on human spermatozoa remains unknown. Sperm samples from normozoospermic donors were exposed in vitro to a range of DES concentrations with or without progesterone at 37°C in a 5% CO 2 incubator to mimic the putative exposure to this toxicant in seminal plasma and the female reproductive tract fluids. The incubation time varied according to the experimental protocols. All experiments were repeated at least five times using different individual sperm samples. Human sperm intracellular calcium concentrations ([Ca 2+ ] i ) were monitored with a multimode plate reader following sperm loading with Ca 2+ indicator Fluo-4 AM, and the whole-cell patch-clamp technique was performed to record CatSper and alkalinization-activated sperm K + channel (KSper) currents. Sperm viability and motility parameters were assessed by an eosin-nigrosin staining kit and a computer-assisted semen analysis system, respectively. The ability of sperm to penetrate into viscous media was examined by penetration into 1% methylcellulose. The sperm acrosome reaction was measured using chlortetracycline staining. The level of tyrosine phosphorylation was determined by western blot assay. DES exposure rapidly increased human sperm [Ca 2+ ] i dose dependently and even at an environmentally relevant concentration (100 pM). The elevation of [Ca 2+ ] i was derived from extracellular Ca 2+ influx and mainly mediated by CatSper. 
Although DES did not affect sperm viability, motility, penetration into viscous media, tyrosine phosphorylation or the acrosome reaction, it suppressed progesterone-stimulated Ca 2+ signaling and tyrosine phosphorylation. Consequently, DES (1-100 μM) significantly inhibited progesterone-induced human sperm penetration into viscous media and acrosome reaction. N/A. Although DES has been shown to disturb progesterone actions on human spermatozoa, this study was performed in vitro, and caution must be taken when extrapolating the results in practical applications. The present study revealed that DES interfered with progesterone-stimulated Ca 2+ signaling and tyrosine phosphorylation, ultimately inhibited progesterone-induced human sperm functions and, thereby, might impair sperm fertility. The non-genomic manner in which DES disturbs progesterone actions may be a potential mechanism for some estrogenic endocrine disruptors to affect human sperm function. National Natural Science Foundation of China (No. 31400996); Natural Science Foundation of Jiangxi, China (No. 20161BAB204167 and No. 20142BAB215050); open project of National Population and Family Planning Key Laboratory of Contraceptives and Devices Research (No. 2016KF07) to T. Luo; National Natural Science Foundation of China (No. 81300539) to L.P. Zheng. The authors have no conflicts of interest to declare. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the prediction method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
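The predictive step can be sketched as follows (a minimal illustration, not the paper's method: the operator history, horizon and renormalisation scheme below are invented for the example). A linear trend is fitted to each operator's observed success rates and the extrapolated values are turned into selection probabilities:

```python
import numpy as np

def forecast_success_rates(history, horizon=1):
    """Fit a linear trend to each operator's observed success rates and
    extrapolate one step ahead, then renormalise the (clipped) forecasts
    into selection probabilities. Hypothetical sketch of predictive
    parameter control."""
    history = np.asarray(history, dtype=float)   # shape: (iterations, operators)
    t = np.arange(len(history))
    preds = []
    for series in history.T:
        slope, intercept = np.polyfit(t, series, 1)   # least-squares line
        preds.append(intercept + slope * (len(history) - 1 + horizon))
    preds = np.clip(preds, 1e-6, 1.0)   # keep every operator selectable
    return preds / preds.sum()

# Three mutation operators: improving, flat, and degrading success rates.
history = [[0.2, 0.5, 0.6],
           [0.3, 0.5, 0.5],
           [0.4, 0.5, 0.4],
           [0.5, 0.5, 0.3]]
probs = forecast_success_rates(history)
```

The improving operator receives the largest selection probability because its extrapolated success rate for the next iteration is highest.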
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
ERIC Educational Resources Information Center
von Treuer, Kathryn; McHardy, Katherine; Earl, Celisha
2013-01-01
Workplace training is a key strategy often used by organisations to optimise performance. Further, trainee motivation is a key determinant of the degree to which the material learned in a training programme will be transferred to the workplace, enhancing the performance of the trainee. This study investigates the relationship between several…
Prozornaia, L P; Brzhevskiĭ, V V
2013-01-01
110 patients aged from 3 to 42 years were examined to estimate the efficacy of chronic blepharitis treatment: 50 patients with chronic blepharitis and dry eye syndrome (DES), 28 with DES due to computer vision syndrome and 32 with isolated chronic blepharitis. All patients received eyelid massage. If the secretion was too thick and difficult to evacuate from the meibomian glands, duct probing was performed. In addition, a complex of hygienic procedures was performed using phytoproducts ("Geltec-Medika", Russia): blepharoshampoo, blepharolotion, blepharogel 1 and 2. Moist warm pads (with blepharolotion and calendula extract) were applied to the eyelids in 25 patients. Massage and probing of the meibomian gland ducts and hygienic procedures were shown to be effective in the management of clinical signs of chronic blepharitis, including coexisting DES. Moist warm pads improve the efficacy of background therapy in patients with meibomian gland hypofunction and have no effect in blepharitis with excessive meibomian gland secretion. Eyelid hygiene was shown to be effective in adults as well as in children, including infants.
1999-03-01
aerodynamics to affect load motions. The effects include a load trail angle in proportion to the drag specific force, and modification of the load pendulum... equations algorithm for flight data filtering and data consistency checking; and SCIDNT, an output-error identification architecture... accelerations at the seven sensor locations; the identified system is proportional to the number of flexible modes when system identification is performed
Development of Measures of Effectiveness and Performance from Cognitive Work Analysis Products
2012-02-01
Her Majesty the Queen (in right of Canada), as represented by the Minister of National Defence, 2012 DRDC Atlantic CR 2011... the new system concept. This report aims to determine whether the results of the design work can help develop measures of ... information (IIDS). The current contract had two objectives: to develop measures of effectiveness (MOE) and measures of performance (MOP) for
Bistatic Synthetic Aperture Radar, TIF - Report (Phase 1)
2004-11-01
This research provides an in-depth understanding of the capabilities and difficulties associated with bistatic SAR concepts and... Synthetic Aperture Radar (SAR) Bistatic SAR Performance Analysis. Defence R&D Canada.
NASA Astrophysics Data System (ADS)
van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François
2016-07-01
Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with the performance generally based on the goodness of fit of the calibration. In this research the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total discharged volumes to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
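The static reservoir idea behind an M2-style model can be sketched as follows (illustrative Python; the catchment size, runoff coefficient, storage, pump capacity and rainfall series are invented, and consecutive spilling steps are not merged into single events as an operational model would):

```python
def static_reservoir_cso(rain_mm, area_ha, runoff_coeff, storage_m3, pump_m3):
    """Per time step: rainfall runoff fills a single static reservoir,
    a pump (interceptor) removes up to pump_m3, and any excess above the
    storage volume spills to surface water as CSO."""
    level, cso_steps, cso_volume = 0.0, 0, 0.0
    for r in rain_mm:
        inflow = r * 1e-3 * area_ha * 1e4 * runoff_coeff  # mm over ha -> m3
        level = max(level + inflow - pump_m3, 0.0)
        if level > storage_m3:
            spill = level - storage_m3
            cso_volume += spill
            cso_steps += 1
            level = storage_m3
    return cso_steps, cso_volume

steps, volume = static_reservoir_cso(
    rain_mm=[0, 2, 10, 8, 0, 0], area_ha=50, runoff_coeff=0.4,
    storage_m3=2000, pump_m3=500)
```

With these invented numbers, only the fourth time step overflows, discharging 600 m3 to the surface water; the comparison in the abstract is about how well such a lumped model reproduces exactly these occurrences and volumes.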
NASA Astrophysics Data System (ADS)
Amrani, Salah
Aluminium is produced in an electrolysis cell, an operation that uses carbon anodes. Assessing the quality of these anodes remains essential before they are used. Cracks in the anodes disturb the electrolysis process and reduce its performance. This project was undertaken to determine the impact of the various anode manufacturing process parameters on the cracking of dense anodes. These parameters include those of green anode forming, the properties of the raw materials, and baking. A literature review was carried out on all aspects of carbon anode cracking to compile previous work. A detailed methodology was developed to guide the work and reach the stated objectives. Most of this document is devoted to discussing the results obtained in the UQAC laboratory and at the industrial level. Regarding the studies carried out at UQAC, part of the experimental work addresses the different cracking mechanisms in the dense anodes used in the aluminium industry. The approach was first based on qualitative characterisation of the cracking mechanism at the surface and in depth. A quantitative characterisation was then performed to determine the distribution of crack width along the crack length, as well as the percentage of crack area relative to the total sample area. This study was carried out using image analysis to characterise the cracking of a baked anode sample. Surface and depth analysis of this sample clearly revealed crack formation over a large part of the analysed surface.
The other part of the work is based on characterising defects in industrially manufactured green anode samples. This technique consisted of determining the profiles of various physical properties. The method based on measuring the distribution of electrical resistivity over the whole sample was used to locate cracks and macro-pores. Optical microscopy and image analysis, in turn, made it possible to characterise the cracked zones while determining the structure of the analysed samples at the microscopic scale. Further tests studied cylindrical anode cores 50 mm in diameter and 130 mm long. These were baked in a furnace at UQAC at different heating rates in order to determine the influence of baking parameters on crack formation in such cores. The baked anode samples were characterised using scanning electron microscopy and ultrasound. The last part of the work carried out at UQAC contains a study on the characterisation of anodes manufactured in the laboratory under different operating conditions. The evolution of the quality of these anodes was followed using several techniques. The cooling temperature evolution of the green laboratory anodes was measured, and a mathematical model was developed and validated against the experimental data, with the aim of estimating the cooling rate as well as the thermal stress. All the manufactured anodes were characterised before baking by determining certain physical properties (electrical resistivity, apparent density, optical density and defect percentage).
Tomography and electrical resistivity distribution, both non-destructive techniques, were used to evaluate internal anode defects. During the baking of the laboratory anodes, the evolution of the electrical resistivity was monitored and the devolatilisation stage was identified. Some anodes were baked at different heating rates (low, medium, high, and a combined one) with the aim of finding the best baking conditions to minimise cracking. Other anodes were baked to different baking levels in order to identify at which stage of the baking operation cracking begins to develop. After baking, the anodes were recovered and characterised again using the same techniques as before. The main objective of this part was to reveal the impact on cracking of the different parameters distributed along the entire anode production chain. The butt percentage, the pitch content and the particle size distribution are important factors to consider when studying the effect of the raw materials on cracking. Regarding the effect of the forming process parameters on the same problem, the vibration time, the compaction pressure and the cooling process formed the basis of this study. Finally, the influence of the baking phase on the appearance of cracking was taken into account through the heating rate and the baking level. The work carried out at the industrial level was done during a measurement campaign aimed at evaluating the quality of carbon anodes in general and investigating the cracking problem in particular, and then at revealing the effects of the various parameters on cracking. Twenty-four baked anodes were used.
They were manufactured with different raw materials (pitch, coke, butts) and under various conditions (pressure, vibration time). The crack density parameter was calculated based on visual inspection of core cracking. This makes it possible to classify the various cracks into several categories based on criteria such as crack type (horizontal, vertical and inclined) and their longitudinal (bottom, middle and top of the anode) and transverse (left, centre and right) locations. The effects on cracking of the raw materials, the green anode forming parameters and the baking conditions were studied. The cracking of dense carbon anodes is a serious problem for the primary aluminium industry. This project revealed different cracking mechanisms, classified cracking by several criteria (position, type, location) and evaluated the impact of the various parameters on cracking. The studies carried out on baking offer the possibility of improving the operation and reducing anode cracking. The work also identified techniques capable of evaluating anode quality (ultrasound, tomography and electrical resistivity distribution). Carbon anode cracking is considered a complex problem, since its occurrence depends on many parameters spread over the entire production chain. In this project, several new studies were carried out, lending originality to the research done on the cracking of carbon anodes for the primary aluminium industry.
On the one hand, the studies carried out in this project add scientific value towards a better understanding of the anode cracking problem; on the other hand, they propose methods that may reduce this problem at the industrial scale.
NASA Astrophysics Data System (ADS)
Meyer-Aurich, Andreas
1999-11-01
This thesis uses examples to show the opportunities and limits of integrating environmental and nature conservation into arable land use practices. Translating environmental and nature conservation goals into land use practices involves several difficulties. These lie partly in making the goals concrete enough to be implemented, and partly in the often inadequate knowledge of the relationship between different forms of land use and, in particular, biotic nature conservation goals. First, the problem of defining and concretising goals is discussed. The environmental quality target concept of Fürst et al. (1992) is an attempt to concretise environmental and nature conservation goals. Heidt et al. (1997) applied this concept to a landscape section of about 6000 ha in the Schorfheide-Chorin biosphere reserve in north-eastern Brandenburg. A selection of the environmental quality targets formulated by Heidt et al. (1997) forms the basis of this thesis. For the selected environmental quality targets, the main land use influencing factors were identified and an evaluation system was developed with which the effects of agricultural cropping practices on these targets can be represented. The land use practised by 20 farms in the Schorfheide-Chorin biosphere reserve was analysed from 1994 to 1997 with respect to its effects on the environmental quality targets. The analysis produced a very differentiated picture, partly showing differences in the effects on the targets for individual crops or particular farm types. It also showed, however, that there were large differences in how individual crops were managed, differences that matter for the environmental quality targets.
In addition to the analysis of land use in the Schorfheide-Chorin biosphere reserve, a system was developed that allows land use practices to be modelled. The model practices were embedded in an extensive database and evaluated with a fuzzy rule system with respect to their effects on the environmental quality targets. The systematically evaluated practices were integrated into a farm model, enabling further analysis of the relationships between targets and the calculation of scenarios under different framework conditions. The analysis of the relationships between targets (target divergence, target convergence) showed that pursuing many environmental quality targets also produced positive effects for other targets. In some cases, however, target divergence was found, pointing to possible conflicts between targets. The scenario results showed that the proposed changes to framework conditions often worsen outcomes for several environmental quality targets. One reason is that the importance of set-aside land was underestimated when the scenarios were defined. The objective of this research was to show the opportunities and limitations associated with integrating environmental goals into agricultural land use management. For this purpose, the impact of agricultural land uses on six environmental quality goals was analysed for an approximately 16,000 ha study region within the biosphere reserve Schorfheide-Chorin in north-eastern Brandenburg (Germany). The environmental quality goals considered were protection of ground water, preservation of groundwater recharge, protection of the soil against wind and water erosion and preservation of animal species typical of the agricultural landscape, in particular partridge, amphibians and cranes.
For each environmental quality goal, an evaluation framework is presented which enables an assessment of the impact of agricultural land uses on the environmental quality goals. The evaluation framework was applied to assess the impact of land uses of twenty farms in the study area from 1994 to 1997. It was demonstrated that the impact of agricultural land uses on the environmental quality of the region is very complex, and, although some crops did significantly impact certain aspects, no single factor overwhelmingly contributed to the overall environmental quality. The evaluation framework was integrated further into a system of modelled cropping practices for optimisation calculations. For this purpose, a model framework was established based on an MS Access database. In the database, cropping practices for 17 crops are stored. Inputs and yields are adapted to the site-specific yield potential. In addition to the cropping practices, the model framework comprises an evaluation module and an optimisation module. Hence, the impact of the cropping practices can be assessed for each site. This model framework provides the basis for optimisation calculations to forecast various land uses under different conditions. A step-wise integration of the environmental quality goals into the optimisation algorithm of a model farm allows for showing trade-offs between economic and ecological goals. The results of the trade-offs show that an improvement of ecological goal achievement is possible with little impact on the gross margin, as long as the improvement is less than 30% of the starting situation. If improvements greater than 30% are desired, very high losses of gross margin must be taken into consideration. Another result of the calculations shows that high achievements of environmental quality goals often correlate with the percentage of set-aside land within the total area.
Walsh, Simon J; Hanratty, Colm G; Watkins, Stuart; Oldroyd, Keith G; Mulvihill, Niall T; Hensey, Mark; Chase, Alex; Smith, Dave; Cruden, Nick; Spratt, James C; Mylotte, Darren; Johnson, Tom; Hill, Jonathan; Hussein, Hafiz M; Bogaerts, Kris; Morice, Marie-Claude; Foley, David P
2018-05-24
The aim of this study was to provide contemporary outcome data for patients with de novo coronary disease and Medina 1,1,1 lesions who were treated with a culotte two-stent technique, and to compare the performance of two modern-generation drug-eluting stent (DES) platforms, the 3-connector XIENCE and the 2-connector SYNERGY. Patients with Medina 1,1,1 bifurcation lesions who had disease that was amenable to culotte stenting were randomised 1:1 to treatment with XIENCE or SYNERGY DES. A total of 170 patients were included. Technical success and final kissing balloon inflation occurred in >96% of cases. Major adverse cardiovascular or cerebrovascular events (MACCE: a composite of death, myocardial infarction [MI], cerebrovascular accident [CVA] and target vessel revascularisation [TVR]) occurred in 5.9% of patients by nine months. The primary endpoint was a composite of death, MI, CVA, target vessel failure (TVF), stent thrombosis and binary angiographic restenosis. At nine months, the primary endpoint occurred in 19% of XIENCE patients and 16% of SYNERGY patients (p=0.003 for non-inferiority for platform performance). MACCE rates for culotte stenting using contemporary everolimus-eluting DES are low at nine months. The XIENCE and SYNERGY stents demonstrated comparable performance for the primary endpoint.
Goehring, Jenny L; Neff, Donna L; Baudhuin, Jacquelyn L; Hughes, Michelle L
2014-08-01
This study compared pitch ranking, electrode discrimination, and electrically evoked compound action potential (ECAP) spatial excitation patterns for adjacent physical electrodes (PEs) and the corresponding dual electrodes (DEs) for newer-generation Cochlear devices (Cochlear Ltd., Macquarie, New South Wales, Australia). The first goal was to determine whether pitch ranking and electrode discrimination yield similar outcomes for PEs and DEs. The second goal was to determine if the amount of spatial separation among ECAP excitation patterns (separation index, Σ) between adjacent PEs and the PE-DE pairs can predict performance on the psychophysical tasks. Using non-adaptive procedures, 13 subjects completed pitch ranking and electrode discrimination for adjacent PEs and the corresponding PE-DE pairs (DE versus each flanking PE) from the basal, middle, and apical electrode regions. Analysis of d' scores indicated that pitch-ranking and electrode-discrimination scores were not significantly different, but rather produced similar levels of performance. As expected, accuracy was significantly better for the PE-PE comparison than either PE-DE comparison. Correlations of the psychophysical versus ECAP Σ measures were positive; however, not all test/region correlations were significant across the array. Thus, the ECAP separation index is not sensitive enough to predict performance on behavioral tasks of pitch ranking or electrode discrimination for adjacent PEs or corresponding DEs.
Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine
NASA Astrophysics Data System (ADS)
Erdogan, Gamze; Yavuz, Mahmut
2017-12-01
The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate pit limits in an open pit mine, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The proposed approaches for this purpose aim at maximizing the economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations considering the underground mine planning challenges. Finally, the results are evaluated and compared.
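The core idea of the Floating Stope Algorithm, sliding a fixed-size stope envelope over the block model and keeping every position with positive economic value, can be sketched as follows (illustrative Python on an invented toy 2-D block model; the union of overlapping envelopes also reproduces the algorithm's well-known tendency to over-select):

```python
import numpy as np

def floating_stope(block_values, stope_shape):
    """Slide a fixed-size stope envelope over a 2-D block model and keep
    every position whose total economic value is positive; the union of
    kept envelopes approximates the mineable boundary (simplified sketch
    of the Floating Stope idea)."""
    rows, cols = block_values.shape
    h, w = stope_shape
    keep = np.zeros_like(block_values, dtype=bool)
    for i in range(rows - h + 1):
        for j in range(cols - w + 1):
            if block_values[i:i+h, j:j+w].sum() > 0:   # economic position
                keep[i:i+h, j:j+w] = True
    return keep

# Toy block model: positive values are ore, negative values waste (cost).
values = np.array([[-3, -3, -3, -3],
                   [-3,  5,  6, -3],
                   [-3,  4,  5, -3],
                   [-3, -3, -3, -3]])
envelope = floating_stope(values, (2, 2))
```

The ore core is always included, the four corner waste blocks are excluded, but some waste adjacent to ore is swept into the envelope, which is the dilution effect that motivates the comparison with MSO.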
Design Optimisation of a Magnetic Field Based Soft Tactile Sensor
Raske, Nicholas; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Culmer, Peter; Hewson, Robert
2017-01-01
This paper investigates the design optimisation of a magnetic field based soft tactile sensor, comprised of a magnet and Hall effect module separated by an elastomer. The aim was to minimise sensitivity of the output force with respect to the input magnetic field; this was achieved by varying the geometry and material properties. Finite element simulations determined the magnetic field and structural behaviour under load. Genetic programming produced phenomenological expressions describing these responses. Optimisation studies constrained by a measurable force and stable loading conditions were conducted; these produced Pareto sets of designs from which the optimal sensor characteristics were selected. The optimisation demonstrated a compromise between sensitivity and the measurable force; a fabricated version of the optimised sensor validated the improvements made using this methodology. The approach presented can be applied in general for optimising soft tactile sensor designs over a range of applications and sensing modes. PMID:29099787
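The Pareto-set selection step can be sketched as follows (illustrative Python; the candidate objective values are invented, with both objectives expressed so that smaller is better, e.g. sensitivity error and negated measurable force):

```python
import numpy as np

def pareto_set(points):
    """Return the indices of the non-dominated candidates when every
    objective is minimised: a point is dropped if some other point is at
    least as good in all objectives and strictly better in one."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# (sensitivity error, -measurable force): minimise both objectives.
designs = [(0.2, -5.0), (0.1, -3.0), (0.3, -6.0), (0.25, -4.0)]
front = pareto_set(designs)
```

The fourth design is dominated by the first (worse in both objectives) and drops out; the remaining three form the trade-off front from which a final sensor design would be chosen.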
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.
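The parameter-learning stage can be sketched with a plain particle swarm on a toy Wiener-type model (illustrative Python; the first-order system, the tanh non-linearity and all PSO settings are invented for the example, and this is not the paper's hybrid cultural method with structure recombination):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, u):
    """Toy Wiener-type model: first-order linear filter followed by a
    static tanh non-linearity (a stand-in for a full W-H structure)."""
    a, b = theta
    x, y = 0.0, []
    for ut in u:
        x = a * x + ut            # linear dynamics
        y.append(np.tanh(b * x))  # static non-linearity
    return np.array(y)

def pso(cost, bounds, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser: velocities pulled towards each
    particle's best and the swarm's best position."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        gbest = pbest[pcost.argmin()]
    return gbest

u = rng.uniform(-1, 1, 200)
y_true = simulate((0.6, 2.0), u)                      # "measured" data
theta_hat = pso(lambda th: np.mean((simulate(th, u) - y_true) ** 2),
                bounds=[(0.0, 0.95), (0.1, 5.0)])
```

The swarm recovers parameters that reproduce the measured output closely; the hybrid method in the abstract additionally evolves the model structure and maintains population diversity to avoid the premature convergence a plain swarm can suffer.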
Designing synthetic networks in silico: a generalised evolutionary algorithm approach.
Smith, Robert W; van Sluijs, Bob; Fleck, Christian
2017-12-02
Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and perform parameter optimisation on oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but we also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.
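The genotype-to-ODE translation step can be sketched as follows (illustrative Python; the Hill-type rate law, all parameter values and the repressilator-like ring wiring are assumptions for the example, and the system is integrated with forward Euler rather than a stiff ODE solver):

```python
import numpy as np

def simulate_network(genotype, rates, y0, t_end=50.0, dt=0.01):
    """Translate a signed interaction matrix (the 'genotype') into ODEs
    dy_i/dt = basal + sum_j J_ij * Hill(y_j) - decay * y_i and integrate
    with forward Euler. Positive entries act as activation, negative
    entries as repression; concentrations are clipped at zero."""
    J = np.asarray(genotype, dtype=float)
    basal, decay, K, n = rates
    y = np.array(y0, dtype=float)
    traj = [y.copy()]
    for _ in range(int(t_end / dt)):
        act = y**n / (K**n + y**n)        # Hill activation, in [0, 1)
        dy = basal + J @ act - decay * y
        y = np.clip(y + dt * dy, 0.0, None)
        traj.append(y.copy())
    return np.array(traj)

# Repressilator-style ring: gene 2 represses 0, 0 represses 1, 1 represses 2.
genotype = [[ 0,  0, -2],
            [-2,  0,  0],
            [ 0, -2,  0]]
traj = simulate_network(genotype, rates=(1.0, 0.5, 0.5, 4), y0=[1.0, 0.2, 0.4])
```

In the evolutionary loop, a fitness function would score `traj` against the defined objectives (e.g. oscillation amplitude or fit to time-series data), and genetic operators would then mutate or recombine the interaction matrix.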
Metric optimisation for analogue forecasting by simulated annealing
NASA Astrophysics Data System (ADS)
Bliefernicht, J.; Bárdossy, A.
2009-04-01
It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is exploited by analogue forecasting techniques. They have a long history in weather forecasting, and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. However, the forecast performance of the analogue method depends on user-defined criteria such as the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology for optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. In this presentation we present the concept of the optimisation algorithm and the outcome of the comparison. We also demonstrate how a decision maker should apply a probability forecast to maximise the economic benefit from it.
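The core idea, a distance function whose parameters are tuned by simulated annealing to minimise forecast error, can be sketched as follows. This is a deliberately small illustration with a weighted Euclidean distance, k-nearest analogues and leave-one-out error; the study's actual predictors, error measure (Brier skill score) and domain-size optimisation are not reproduced here.

```python
import math
import random

def analogue_forecast(weights, archive, predictor, k=3):
    """Forecast = mean local value of the k most similar past situations."""
    def dist(p):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, predictor)))
    nearest = sorted(archive, key=lambda rec: dist(rec[0]))[:k]
    return sum(rec[1] for rec in nearest) / k

def forecast_error(weights, archive):
    """Leave-one-out mean squared forecast error over the archive."""
    err = 0.0
    for i, (pred, obs) in enumerate(archive):
        rest = archive[:i] + archive[i + 1:]
        err += (analogue_forecast(weights, rest, pred) - obs) ** 2
    return err / len(archive)

def optimise_weights(archive, n_dims, steps=300, t0=1.0, seed=0):
    """Simulated annealing over the distance-function weights."""
    random.seed(seed)
    w = [1.0] * n_dims
    e = forecast_error(w, archive)
    best_w, best_e = list(w), e
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        cand = [max(0.0, wi + random.gauss(0, 0.3)) for wi in w]
        ce = forecast_error(cand, archive)
        # accept improvements always; accept worse states with Boltzmann probability
        if ce < e or random.random() < math.exp((e - ce) / temp):
            w, e = cand, ce
            if e < best_e:
                best_w, best_e = list(w), e
    return best_w, best_e
```

On a synthetic archive where only the first predictor dimension is informative, the annealer should find weights whose leave-one-out error is no worse than the uniform starting weights.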
Prosperi, Mattia C. F.; Rosen-Zvi, Michal; Altmann, André; Zazzi, Maurizio; Di Giambenedetto, Simona; Kaiser, Rolf; Schülter, Eugen; Struck, Daniel; Sloot, Peter; van de Vijver, David A.; Vandamme, Anne-Mieke; Sönnerborg, Anders
2010-01-01
Background Although genotypic resistance testing (GRT) is recommended to guide combination antiretroviral therapy (cART), funding and/or facilities to perform GRT may not be available in low to middle income countries. Since treatment history (TH) impacts response to subsequent therapy, we investigated a set of statistical learning models to optimise cART in the absence of GRT information. Methods and Findings The EuResist database was used to extract 8-week and 24-week treatment change episodes (TCE) with GRT and additional clinical, demographic and TH information. Random Forest (RF) classification was used to predict 8- and 24-week success, defined as undetectable HIV-1 RNA, comparing nested models including (i) GRT+TH and (ii) TH without GRT, using multiple cross-validation and area under the receiver operating characteristic curve (AUC). Virological success was achieved in 68.2% and 68.0% of TCE at 8- and 24-weeks (n = 2,831 and 2,579), respectively. RF (i) and (ii) showed comparable performances, with an average (st.dev.) AUC 0.77 (0.031) vs. 0.757 (0.035) at 8-weeks, 0.834 (0.027) vs. 0.821 (0.025) at 24-weeks. Sensitivity analyses, carried out on a data subset that included antiretroviral regimens commonly used in low to middle income countries, confirmed our findings. Training on subtype B and validation on non-B isolates resulted in a decline of performance for models (i) and (ii). Conclusions Treatment history-based RF prediction models are comparable to GRT-based for classification of virological outcome. These results may be relevant for therapy optimisation in areas where availability of GRT is limited. Further investigations are required in order to account for different demographics, subtypes and different therapy switching strategies. PMID:21060792
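The AUC used above to compare the GRT-based and treatment-history-based Random Forest models can be computed directly from outcome labels and predicted scores via the Mann-Whitney interpretation: the probability that a randomly chosen success is scored above a randomly chosen failure (ties counting half). A minimal sketch, not the EuResist pipeline:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive scores higher,
    with ties counted as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation; the reported values around 0.76-0.83 sit between the two.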
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
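The idea of moving kernel learning to the first level of inference, jointly optimising the expansion coefficients and the kernel parameter under a single training criterion with regularisers on both, can be sketched on a toy 1-D regression problem. This is an illustrative gradient-descent sketch, not the paper's LS-SVM classifier formulation; the regularisation constants and the log-width penalty are assumptions for the example.

```python
import math

def rbf(a, b, width):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2.0 * width ** 2))

def predict(alpha, width, xs, x):
    """Kernel expansion f(x) = sum_j alpha_j K(x, x_j)."""
    return sum(aj * rbf(x, xj, width) for aj, xj in zip(alpha, xs))

def criterion(alpha, width, xs, ys, lam_alpha=1e-3, lam_width=1e-3):
    """Training criterion with regularisers on BOTH the coefficients and the
    kernel parameter, so only two regularisation constants remain for model
    selection."""
    fit = sum((y - predict(alpha, width, xs, x)) ** 2 for x, y in zip(xs, ys))
    reg = lam_alpha * sum(a * a for a in alpha) + lam_width * math.log(width) ** 2
    return fit + reg

def joint_optimise(xs, ys, steps=200, lr=0.01, eps=1e-5):
    """Numerical gradient descent over (alpha, width) jointly."""
    alpha = [0.0] * len(xs)
    width = 1.0
    for _ in range(steps):
        base = criterion(alpha, width, xs, ys)
        grads = []
        for j in range(len(alpha)):
            alpha[j] += eps
            grads.append((criterion(alpha, width, xs, ys) - base) / eps)
            alpha[j] -= eps
        gw = (criterion(alpha, width + eps, xs, ys) - base) / eps
        alpha = [a - lr * g for a, g in zip(alpha, grads)]
        width = max(0.05, width - lr * gw)
    return alpha, width
```

The point of the construction is that the kernel width is fitted like any other first-level parameter instead of being tuned against a separate model selection criterion.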
Optimisation de fonctionnements de pompe à chaleur chimique : synchronisation et commande du procédé
NASA Astrophysics Data System (ADS)
Cassou, T.; Amouroux, M.; Labat, P.
1995-04-01
We present the mathematical model of a chemical heat pump pilot plant and the corresponding numerical simulator. The simulator is able to determine the influence of various parameters (whether they relate to the heat exchanges or to the chemical kinetics), but also to simulate the main operating modes. Optimal management of the process is the objective to be reached: an optimised control of the system allows, through management of the different operating phases, a continuous and stable production of the power delivered by the machine.
Caby, Isabelle; Olivier, N; Mendelek, F; Kheir, R Bou; Vanvelcenaher, J; Pelayo, P
2014-01-01
BACKGROUND: Chronic low back pain is persistent lumbar pain of multifactorial origin. The initial pain level remains little used to analyse and compare the responses of low back pain patients to reconditioning programmes. OBJECTIVES: To assess and evaluate the responses of chronic low back pain patients with severe pain to a dynamic, intensive management programme. METHODS: 144 subjects with chronic low back pain were enrolled in a 5-week functional spinal restoration programme. Subjects were classified into two pain-level groups: a severe-pain group (n = 28) and a mild-to-moderate-pain group (n = 106). All subjects received identical management, consisting mainly of physiotherapy, occupational therapy, muscular and cardiovascular reconditioning, and psychological follow-up. Physical parameters (flexibility, muscle strength) and psychological parameters (quality of life) were measured before (T0) and after the programme (T5wk). RESULTS: At T0, the overall physical and functional performance of the severe-pain subjects was poorer, and the impact of low back pain on their quality of life was greater. All significant differences observed between the two groups at T0 had disappeared at T5wk. CONCLUSIONS: Chronic low back pain patients with severe pain respond favourably to the dynamic, intensive programme. The pain intensity of the low back pain appears to have no effect on the response to the programme. Functional spinal restoration may give subjects the ability to better manage their pain, whatever its level. PMID:25299476
A New Hemodynamic Ex Vivo Model for Medical Devices Assessment.
Maurel, Blandine; Sarraf, Christophe; Bakir, Farid; Chai, Feng; Maton, Mickael; Sobocinski, Jonathan; Hertault, Adrien; Blanchemain, Nicolas; Haulon, Stephan; Lermusiaux, Patrick
2015-11-01
In-stent restenosis (ISR) remains a major public health concern associated with an increased morbidity, mortality, and health-related costs. Drug-eluting stents (DES) have reduced ISR, but generate healing-related issues or hypersensitivity reactions, leading to an increased risk of late acute stent thrombosis. Assessments of new DES are based on animal models or in vitro release systems, which have several limitations. The role of flow and shear stress on endothelial cell and ISR has also been emphasized. The aim of this work was to design and first evaluate an original bioreactor, replicating ex vivo hemodynamic and biological conditions similar to human conditions, to further evaluate new DES. This bioreactor was designed to study up to 6 stented arteries connected in bypass, immersed in a culture box, in which circulated a physiological systolo-diastolic resistive flow. Two centrifugal pumps drove the flow. The main pump generated pulsating flows by modulation of rotation velocity, and the second pump worked at constant rotation velocity, ensuring the counter pressure levels and backflows. The flow rate, the velocity profile, the arterial pressure, and the resistance of the flow were adjustable. The bioreactor was placed in an incubator to reproduce a biological environment. A first feasibility experience was performed over a 24-day period. Three rat aortic thoracic arteries were placed into the bioreactor, immersed in cell culture medium changed every 3 days, and with a circulating systolic and diastolic flux during the entire experimentation. There was no infection and no leak. At the end of the experimentation, a morphometric analysis was performed confirming the viability of the arteries. We designed and patented an original hemodynamic ex vivo model to further study new DES, as well as a wide range of vascular diseases and medical devices. 
This bioreactor will allow characterization of the velocity field and drug transfers within a stented artery with new functionalized DES, with experimental means not available in vivo. Another major benefit will be the reduction of animal experimentation and the opportunity to test new DES or other vascular therapeutics in human tissues (human infrapopliteal or coronary arteries collected during human donation). Copyright © 2015 Elsevier Inc. All rights reserved.
Joint measurement of lensing-galaxy correlations using SPT and DES SV data
Baxter, E. J.
2016-07-04
We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimeter-wave data from the 2500 square degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. The two lensing-galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favored ΛCDM cosmological model. It also demonstrates that the joint lensing-galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.
The Dark Energy Survey and Operations: Years 1 to 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diehl, H. T.
2016-01-01
The Dark Energy Survey (DES) is an operating optical survey aimed at understanding the accelerating expansion of the universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the 5000 sq-degree wide field and 30 sq-degree supernova surveys, the DES Collaboration built the Dark Energy Camera (DECam), a 3 square-degree, 570-Megapixel CCD camera that was installed at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory (CTIO). DES has completed its third observing season out of a nominal five. This paper describes DES “Year 1” (Y1) to “Year 3” (Y3): the strategy, an outline of the survey operations procedures, the efficiency of operations, and the causes of lost observing time. It provides details about the quality of the first three seasons' data and describes how we are adjusting the survey strategy in the face of the El Niño Southern Oscillation.
Joint measurement of lensing–galaxy correlations using SPT and DES SV data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, E.; Clampitt, J.; Giannantonio, T.
We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimetre-wave data from the 2500 sq. deg. South Pole Telescope Sunyaev–Zel'dovich (SPT-SZ) survey. The two lensing–galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy-lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favoured Λ cold dark matter cosmological model. It also demonstrates that the joint lensing–galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.
The Thistle Field - Analysis of its past performance and optimisation of its future development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayat, M.G.; Tehrani, D.H.
1985-01-01
The Thistle Field geology and its reservoir performance over the past six years have been reviewed. The latest reservoir simulation study of the field, covering the performance history-matching, and the conclusions of various prediction cases are reported. The special features of PORES, Britoil's in-house 3D, three-phase, fully implicit numerical simulator, and its modelling aids as applied to the Thistle Field are presented.
Quantifying Safety Performance of Driveways on State Highways
DOT National Transportation Integrated Search
2012-08-01
This report documents a research effort to quantify the safety performance of driveways in the State of Oregon. In particular, this research effort focuses on driveways located adjacent to principal arterial state highways with urban or rural des...
Martens, Leon; Goode, Grahame; Wold, Johan F. H.; Beck, Lionel; Martin, Georgina; Perings, Christian; Stolt, Pelle; Baggerman, Lucas
2014-01-01
Aims To conduct a pilot study on the potential to optimise care pathways in syncope/Transient Loss of Consciousness management by using Lean Six Sigma methodology while maintaining compliance with ESC and/or NICE guidelines. Methods Five hospitals in four European countries took part. The Lean Six Sigma methodology consisted of 3 phases: 1) Assessment phase, in which baseline performance was mapped in each centre, processes were evaluated and a new operational model was developed with an improvement plan that included best practices and change management; 2) Improvement phase, in which optimisation pathways and standardised best practice tools and forms were developed and implemented. Staff were trained on new processes and change-management support provided; 3) Sustaining phase, which included support, refinement of tools and metrics. The impact of the implementation of new pathways was evaluated on number of tests performed, diagnostic yield, time to diagnosis and compliance with guidelines. One hospital with focus on geriatric populations was analysed separately from the other four. Results With the new pathways, there was a 59% reduction in the average time to diagnosis (p = 0.048) and a 75% increase in diagnostic yield (p = 0.007). There was a marked reduction in repetitions of diagnostic tests and improved prioritisation of indicated tests. Conclusions Applying a structured Lean Six Sigma based methodology to pathways for syncope management has the potential to improve time to diagnosis and diagnostic yield. PMID:24927475
Jolley, Rachel J; Jetté, Nathalie; Sawka, Keri Jo; Diep, Lucy; Goliath, Jade; Roberts, Derek J; Yipp, Bryan G; Doig, Christopher J
2015-01-01
Objective Administrative health data are important for health services and outcomes research. We optimised and validated in intensive care unit (ICU) patients an International Classification of Disease (ICD)-coded case definition for sepsis, and compared this with an existing definition. We also assessed the definition's performance in non-ICU (ward) patients. Setting and participants All adults (aged ≥18 years) admitted to a multisystem ICU with general medicosurgical ICU care from one of three tertiary care centres in the Calgary region in Alberta, Canada, between 1 January 2009 and 31 December 2012 were included. Research design Patient medical records were randomly selected and linked to the discharge abstract database. In ICU patients, we validated the Canadian Institute for Health Information (CIHI) ICD-10-CA (Canadian Revision)-coded definition for sepsis and severe sepsis against a reference standard medical chart review, and optimised this algorithm through examination of other conditions apparent in sepsis. Measures Sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV) were calculated. Results Sepsis was present in 604 of 1001 ICU patients (60.4%). The CIHI ICD-10-CA-coded definition for sepsis had Sn (46.4%), Sp (98.7%), PPV (98.2%) and NPV (54.7%); and for severe sepsis had Sn (47.2%), Sp (97.5%), PPV (95.3%) and NPV (63.2%). The optimised ICD-coded algorithm for sepsis increased Sn by 25.5% and NPV by 11.9% with slightly lowered Sp (85.4%) and PPV (88.2%). For severe sepsis both Sn (65.1%) and NPV (70.1%) increased, while Sp (88.2%) and PPV (85.6%) decreased slightly. Conclusions This study demonstrates that sepsis is highly undercoded in administrative data, thus under-ascertaining the true incidence of sepsis. The optimised ICD-coded definition has a higher validity with higher Sn and should be preferentially considered if used for surveillance purposes. PMID:26700284
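The validity measures reported above (Sn, Sp, PPV, NPV) all derive from the same 2x2 confusion table of the coded definition against the chart-review reference standard. A minimal sketch; the counts used in the example are back-calculated approximations from the reported percentages, not figures taken from the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Validity of a coded case definition against a reference standard,
    from the four confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),   # Sn: coded positive among true cases
        "specificity": tn / (tn + fp),   # Sp: coded negative among non-cases
        "ppv": tp / (tp + fp),           # coded positives that are true cases
        "npv": tn / (tn + fn),           # coded negatives that are non-cases
    }

# Approximate reconstruction of the CIHI sepsis definition result
# (604 cases among 1001 ICU patients): tp=280, fp=5, fn=324, tn=392.
metrics = diagnostic_metrics(280, 5, 324, 392)
```

With these counts the low sensitivity against high specificity is visible directly: most coded positives are real (high PPV), but many true sepsis cases are never coded (low Sn), which is the undercoding the authors describe.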
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS.
The use of parallelism, caching and code optimisation significantly (by several times) reduced software build time and environment setup time, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
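Package-level build parallelism of the kind described above amounts to scheduling packages in dependency order so that independent packages build concurrently. A generic sketch of that scheduling idea (not CMT's actual implementation): group packages into "waves" where every package in a wave depends only on packages from earlier waves.

```python
def parallel_build_schedule(deps):
    """Group packages into waves; each wave can be built in parallel because
    every package in it depends only on packages in earlier waves.
    `deps` maps package name -> list of packages it depends on."""
    pending = {pkg: set(ds) for pkg, ds in deps.items()}
    waves = []
    while pending:
        # packages whose remaining dependencies are all satisfied
        ready = sorted(p for p, ds in pending.items() if not ds)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for p in ready:
            del pending[p]
        for ds in pending.values():
            ds.difference_update(ready)
    return waves
```

For a hypothetical package graph the schedule comes out in dependency order, with independent packages sharing a wave; a real build tool would then launch up to NUMBER-OF-PROCESSORS build commands per wave.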
Hermans, Michel P; Brotons, Carlos; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank
2013-12-01
Micro- and macrovascular complications of type 2 diabetes have an adverse impact on survival, quality of life and healthcare costs. The OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) trial, which compares physicians' individual performances with those of a peer group, evaluates the hypothesis that benchmarking, using assessments of change in three critical quality indicators of vascular risk (glycated haemoglobin (HbA1c), low-density lipoprotein-cholesterol (LDL-C) and systolic blood pressure (SBP)), may improve quality of care in type 2 diabetes in the primary care setting. This was a randomised, controlled study of 3980 patients with type 2 diabetes. Six European countries participated in the OPTIMISE study (NCT00681850). Quality of care was assessed by the percentage of patients achieving pre-set targets for the three critical quality indicators over 12 months. Physicians were randomly assigned to receive either benchmarked or non-benchmarked feedback. All physicians received feedback on six of their patients' modifiable outcome indicators (HbA1c, fasting glycaemia, total cholesterol, high-density lipoprotein-cholesterol (HDL-C), LDL-C and triglycerides). Physicians in the benchmarking group additionally received information on the levels of control achieved for the three critical quality indicators compared with colleagues. At baseline, the percentage of evaluable patients (N = 3980) achieving pre-set targets was 51.2% (HbA1c; n = 2028/3964), 34.9% (LDL-C; n = 1350/3865) and 27.3% (systolic blood pressure; n = 911/3337). OPTIMISE confirms that target achievement in the primary care setting is suboptimal for all three critical quality indicators. This represents an unmet but modifiable need to revisit the mechanisms and management of improving care in type 2 diabetes. OPTIMISE will help to assess whether benchmarking is a useful clinical tool for improving outcomes in type 2 diabetes.
El-Nashar, Y I; Asrar, A A
2016-05-06
Chemical mutagenesis is an efficient tool used in mutation-breeding programs to improve the vital characters of floricultural crops. This study aimed to estimate the effects of different concentrations of two chemical mutagens, sodium azide (SA) and diethyl sulfate (DES). The vegetative growth and flowering characteristics in two generations (M1 and M2) of calendula plants were investigated. Seeds were treated with five different concentrations of SA and DES (at the same rates): 1000, 2000, 3000, 4000, and 5000 ppm, in addition to a control treatment of 0 ppm. Results showed that lower concentrations of the SA mutagen had significant effects on seed germination percentage, plant height, leaf area, plant fresh weight, flowering date, inflorescence diameter, and gas-exchange measurements in plants of both generations. Calendula plants tended to flower earlier under low mutagen concentrations (1000 ppm), whereas higher concentrations delayed flowering significantly. Positive effects on seed germination, plant height, number of branches, plant fresh weight, and leaf area were observed in the M2 generation at lower concentrations of SA (1000 ppm), as well as at 4000 ppm DES on the number of leaves and inflorescences. The highest total soluble protein was detected at concentrations of 1000 ppm SA and 2000 ppm DES. DES showed a higher average acid phosphatase activity than SA. Results indicated that lower concentrations of the SA and DES mutagens had positive effects on seed germination percentage, plant height, leaf area, plant fresh weight, flowering date, inflorescence diameter, and gas-exchange measurements. Thus, lower mutagen concentrations can be recommended for better floral and physio-chemical performance.
Small RNAs as important regulators for the hybrid vigour of super-hybrid rice.
Zhang, Lei; Peng, Yonggang; Wei, Xiaoli; Dai, Yan; Yuan, Dawei; Lu, Yufei; Pan, Yangyang; Zhu, Zhen
2014-11-01
Heterosis is an important biological phenomenon; however, the role of small RNA (sRNA) in heterosis of hybrid rice remains poorly described. Here, we performed sRNA profiling of F1 super-hybrid rice LYP9 and its parents using high-throughput sequencing technology, and identified 355 distinct mature microRNAs and trans-acting small interfering RNAs, 69 of which were differentially expressed sRNAs (DES) between the hybrid and the mid-parental value. Among these, 34 DES were predicted to target 176 transcripts, of which 112 encoded 94 transcription factors. Further analysis showed that 67.6% of DES expression levels were negatively correlated with their target mRNAs either in flag leaves or panicles. The target genes of DES were significantly enriched in some important biological processes, including the auxin signalling pathway, in which existed a regulatory network mediated by DES and their targets, closely associated with plant growth and development. Overall, 20.8% of DES and their target genes were significantly enriched in quantitative trait loci of small intervals related to important rice agronomic traits including growth vigour, grain yield, and plant architecture, suggesting that the interaction between sRNAs and their targets contributes to the heterotic phenotypes of hybrid rice. Our findings revealed that sRNAs might play important roles in hybrid vigour of super-hybrid rice by regulating their target genes, especially in controlling the auxin signalling pathway. The above finding provides a novel insight into the molecular mechanism of heterosis. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Huang, Yanhua; Wang, Yuzhi; Pan, Qi; Wang, Ying; Ding, Xueqin; Xu, Kaijia; Li, Na; Wen, Qian
2015-06-02
Four kinds of green deep eutectic solvents (DESs) based on choline chloride (ChCl) have been synthesized and coated on the surface of magnetic graphene oxide (Fe3O4@GO) to form Fe3O4@GO-DES for the magnetic solid-phase extraction of protein. X-ray diffraction (XRD), vibrating sample magnetometry (VSM), Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FESEM) and thermal gravimetric analysis (TGA) were employed to characterize Fe3O4@GO-DES, and the results indicated the successful preparation of Fe3O4@GO-DES. A UV-vis spectrophotometer was used to measure the concentration of protein after extraction. Single-factor experiments proved that the extraction amount was influenced by the type of DES, solution temperature, solution ionic strength, extraction time, protein concentration and the amount of Fe3O4@GO-DES. A comparison of Fe3O4@GO and Fe3O4@GO-DES was carried out by extracting bovine serum albumin, ovalbumin, bovine hemoglobin and lysozyme. The experimental results showed that the proposed Fe3O4@GO-DES performs better than Fe3O4@GO in the extraction of acidic proteins. Desorption of protein was carried out by eluting the solid extractant with 0.005 mol L(-1) Na2HPO4 containing 1 mol L(-1) NaCl. The obtained elution efficiency was about 90.9%. Owing to convenient magnetic separation, the solid extractant could be easily recycled. Copyright © 2015 Elsevier B.V. All rights reserved.
Lam, Ming Kai; Sen, Hanim; Tandjung, Kenneth; van Houwelingen, K Gert; de Vries, Arie G; Danse, Peter W; Schotborgh, Carl E; Scholte, Martijn; Löwik, Marije M; Linssen, Gerard C M; Ijzerman, Maarten J; van der Palen, Job; Doggen, Carine J M; von Birgelen, Clemens
2014-04-01
To evaluate the safety and efficacy of 2 novel drug-eluting stents (DES) with biodegradable polymer-based coatings versus a durable coating DES. BIO-RESORT is an investigator-initiated, prospective, patient-blinded, randomized multicenter trial in 3540 Dutch all-comers with various clinical syndromes, requiring percutaneous coronary interventions (PCI) with DES implantation. Randomization (stratified for diabetes mellitus) is being performed in a 1:1:1 ratio between ORSIRO sirolimus-eluting stent with circumferential biodegradable coating, SYNERGY everolimus-eluting stent with abluminal biodegradable coating, and RESOLUTE INTEGRITY zotarolimus-eluting stent with durable coating. The primary endpoint is the incidence of the composite endpoint target vessel failure at 1 year, consisting of cardiac death, target vessel-related myocardial infarction, or clinically driven target vessel revascularization. Power calculation assumes a target vessel failure rate of 8.5% with a 3.5% non-inferiority margin, giving the study a power of 85% (α level .025 adjusted for multiple testing). The impact of diabetes mellitus on post-PCI outcome will be evaluated. The first patient was enrolled on December 21, 2012. BIO-RESORT is a large, prospective, randomized, multicenter trial with three arms, comparing two DES with biodegradable coatings versus a reference DES with a durable coating in 3540 all-comers. The trial will provide novel insights into the clinical outcome of modern DES and will address the impact of known and so far undetected diabetes mellitus on post-PCI outcome. Copyright © 2014 The Authors. Published by Mosby, Inc. All rights reserved.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
Al-Mamun, Mohammad; Zhu, Zhengju; Yin, Huajie; Su, Xintai; Zhang, Haimin; Liu, Porun; Yang, Huagui; Wang, Dan; Tang, Zhiyong; Wang, Yun; Zhao, Huijun
2016-08-04
A novel surface sulfur (S) doped cobalt (Co) catalyst for the oxygen evolution reaction (OER) is theoretically designed through the optimisation of the electronic structure of highly reactive surface atoms which is also validated by electrocatalytic OER experiments.
Load optimised piezoelectric generator for powering battery-less TPMS
NASA Astrophysics Data System (ADS)
Blažević, D.; Kamenar, E.; Zelenika, S.
2013-05-01
The design of a piezoelectric device aimed at harvesting the kinetic energy of random vibrations on a vehicle's wheel is presented. The harvester is optimised for powering a Tire Pressure Monitoring System (TPMS). On-road experiments are performed in order to measure the frequencies and amplitudes of wheels' vibrations. It is hence determined that the highest amplitudes occur in an unperiodic manner. Initial tests of the battery-less TPMS are performed in laboratory conditions where tuning and system set-up optimization is achieved. The energy obtained from the piezoelectric bimorph is managed by employing the control electronics which converts AC voltage to DC and conditions the output voltage to make it compatible with the load (i.e. sensor electronics and transmitter). The control electronics also manages the sleep/measure/transmit cycles so that the harvested energy is efficiently used. The system is finally tested in real on-road conditions successfully powering the pressure sensor and transmitting the data to a receiver in the car cockpit.
Detection of bremsstrahlung radiation of 90Sr-90Y for emergency lung counting.
Ho, A; Hakmana Witharana, S S; Jonkmans, G; Li, L; Surette, R A; Dubeau, J; Dai, X
2012-09-01
This study explores the possibility of developing a field-deployable (90)Sr detector for rapid lung counting in emergency situations. The detection of beta-emitters (90)Sr and its daughter (90)Y inside the human lung via bremsstrahlung radiation was performed using a 3″ × 3″ NaI(Tl) crystal detector and a polyethylene-encapsulated source to emulate human lung tissue. The simulation results show that this method is a viable technique for detecting (90)Sr with a minimum detectable activity (MDA) of 1.07 × 10(4) Bq, using a realistic dual-shielded detector system in a 0.25-µGy h(-1) background field for a 100-s scan. The MDA is sufficiently sensitive to meet the requirement for emergency lung counting of Type S (90)Sr intake. The experimental data were verified using Monte Carlo calculations, including an estimate for internal bremsstrahlung, and an optimisation of the detector geometry was performed. Optimisations in background reduction techniques and in the electronic acquisition systems are suggested.
Low power test architecture for dynamic read destructive fault detection in SRAM
NASA Astrophysics Data System (ADS)
Takher, Vikram Singh; Choudhary, Rahul Raj
2018-06-01
Dynamic Read Destructive Fault (dRDF) is the outcome of resistive open defects in the core cells of static random-access memories (SRAMs). The sensitisation of dRDF involves either performing multiple read operations or creation of number of read equivalent stress (RES), on the core cell under test. Though the creation of RES is preferred over the performing multiple read operation on the core cell, cell dissipates more power during RES than during the read or write operation. This paper focuses on the reduction in power dissipation by optimisation of number of RESs, which are required to sensitise the dRDF during test mode of operation of SRAM. The novel pre-charge architecture has been proposed in order to reduce the power dissipation by limiting the number of RESs to an optimised number of two. The proposed low power architecture is simulated and analysed which shows reduction in power dissipation by reducing the number of RESs up to 18.18%.
Very high frame rate volumetric integration of depth images on mobile devices.
Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David
2015-11-01
Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. To run such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, remains challenging however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates up 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan XGPU, or even beyond 1.1 kHz without visualisation.
A stable solution-processed polymer semiconductor with record high-mobility for printed transistors
Li, Jun; Zhao, Yan; Tan, Huei Shuan; Guo, Yunlong; Di, Chong-An; Yu, Gui; Liu, Yunqi; Lin, Ming; Lim, Suo Hon; Zhou, Yuhua; Su, Haibin; Ong, Beng S.
2012-01-01
Microelectronic circuits/arrays produced via high-speed printing instead of traditional photolithographic processes offer an appealing approach to creating the long-sought after, low-cost, large-area flexible electronics. Foremost among critical enablers to propel this paradigm shift in manufacturing is a stable, solution-processable, high-performance semiconductor for printing functionally capable thin-film transistors — fundamental building blocks of microelectronics. We report herein the processing and optimisation of solution-processable polymer semiconductors for thin-film transistors, demonstrating very high field-effect mobility, high on/off ratio, and excellent shelf-life and operating stabilities under ambient conditions. Exceptionally high-gain inverters and functional ring oscillator devices on flexible substrates have been demonstrated. This optimised polymer semiconductor represents a significant progress in semiconductor development, dispelling prevalent skepticism surrounding practical usability of organic semiconductors for high-performance microelectronic devices, opening up application opportunities hitherto functionally or economically inaccessible with silicon technologies, and providing an excellent structural framework for fundamental studies of charge transport in organic systems. PMID:23082244
Dry eyes and AIs: If you don't ask you won't find out.
Inglis, Holly; Boyle, Frances M; Friedlander, Michael L; Watson, Stephanie L
2015-12-01
Our objective was to investigate the hypothesis that women on adjuvant aromatase inhibitors (AIs) for treatment of breast cancer have a higher prevalence of dry eye syndrome (DES) compared with controls. Exposure and control groups were recruited. A cross sectional questionnaire-based study was performed. Demographic data and medical histories were collected. The presence of dry eye syndrome was determined by the ocular surface disease index (OSDI). The Functional Assessment of Cancer Treatment - Endocrine Subscale (FACT-ES) was performed to investigate correlations with other side effects of AIs. 93 exposure group and 100 control group questionnaires were included. The groups were similar in all demographic variables. The prevalence of dry eye syndrome was 35% (exposure) and 18% (control) (p < 0.01, OR 2.5). AIs were the only factor associated with dry eyes. The OSDI score was negatively correlated with the total FACT-ES score and positively correlated with duration of treatment. Our study is the first to use a validated questionnaire to assess for DES in this population. DES is significantly more prevalent in women on AIs compared with controls. This is a newly emerging, and easily treated side effect of AIs. Self-reporting of dry eye symptoms underestimates the prevalence of DES with AIs. We recommend routine screening of patients on AIs with the OSDI with the aim of improving patient quality of life and possibly adherence. Copyright © 2015 Elsevier Ltd. All rights reserved.
The redMaPPer Galaxy Cluster Catalog From DES Science Verification Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rykoff, E. S.
We describe updates to the redMaPPer algorithm, a photometric red-sequence cluster finder specifically designed for large photometric surveys. The updated algorithm is applied tomore » $$150\\,\\mathrm{deg}^2$$ of Science Verification (SV) data from the Dark Energy Survey (DES), and to the Sloan Digital Sky Survey (SDSS) DR8 photometric data set. The DES SV catalog is locally volume limited, and contains 786 clusters with richness $$\\lambda>20$$ (roughly equivalent to $$M_{\\mathrm{500c}}\\gtrsim10^{14}\\,h_{70}^{-1}\\,M_{\\odot}$$) and 0.2 < $z$ <0.9. The DR8 catalog consists of 26311 clusters with 0.08 < $z$ < 0.6, with a sharply increasing richness threshold as a function of redshift for $$z\\gtrsim 0.35$$. The photometric redshift performance of both catalogs is shown to be excellent, with photometric redshift uncertainties controlled at the $$\\sigma_z/(1+z)\\sim 0.01$$ level for $$z\\lesssim0.7$$, rising to $$\\sim0.02$$ at $$z\\sim0.9$$ in DES SV. We make use of $Chandra$ and $XMM$ X-ray and South Pole Telescope Sunyaev-Zeldovich data to show that the centering performance and mass--richness scatter are consistent with expectations based on prior runs of redMaPPer on SDSS data. We also show how the redMaPPer photo-$z$ and richness estimates are relatively insensitive to imperfect star/galaxy separation and small-scale star masks.« less
DOT National Transportation Integrated Search
1970-01-01
The performance of in-service typical Virginia flexible and rigid pavements in all areas of the state is under evaluation. The objectives are to provide a ready reference for designers and field engineers and to provide background information for des...
Ribera, Esteban; Martínez-Sesmero, José Manuel; Sánchez-Rubio, Javier; Rubio, Rafael; Pasquau, Juan; Poveda, José Luis; Pérez-Mitru, Alejandro; Roldán, Celia; Hernández-Novoa, Beatriz
2018-03-01
The objective of this study is to estimate the economic impact associated with the optimisation of triple antiretroviral treatment (ART) in patients with undetectable viral load according to the recommendations from the GeSIDA/PNS (2015) Consensus and their applicability in the Spanish clinical practice. A pharmacoeconomic model was developed based on data from a National Hospital Prescription Survey on ART (2014) and the A-I evidence recommendations for the optimisation of ART from the GeSIDA/PNS (2015) consensus. The optimisation model took into account the willingness to optimise a particular regimen and other assumptions, and the results were validated by an expert panel in HIV infection (Infectious Disease Specialists and Hospital Pharmacists). The analysis was conducted from the NHS perspective, considering the annual wholesale price and accounting for deductions stated in the RD-Law 8/2010 and the VAT. The expert panel selected six optimisation strategies, and estimated that 10,863 (13.4%) of the 80,859 patients in Spain currently on triple ART, would be candidates to optimise their ART, leading to savings of €15.9M/year (2.4% of total triple ART drug cost). The most feasible strategies (>40% of patients candidates for optimisation, n=4,556) would be optimisations to ATV/r+3TC therapy. These would produce savings between €653 and €4,797 per patient per year depending on baseline triple ART. Implementation of the main optimisation strategies recommended in the GeSIDA/PNS (2015) Consensus into Spanish clinical practice would lead to considerable savings, especially those based in dual therapy with ATV/r+3TC, thus contributing to the control of pharmaceutical expenditure and NHS sustainability. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care-delivery. Computer process modeling methods are introduced for lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined and the necessary data and procedures to develop a DES model for lung cancer diagnosis, leading up to surgical treatment process are summarized. The analytical models include both Markov chain model and closed formulas. The Markov chain models with its application in healthcare are introduced and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181