NASA Astrophysics Data System (ADS)
Mejdi, Abderrazak
Aircraft fuselages are generally made of aluminium or of composite, reinforced by longitudinal stiffeners (stringers) and transverse stiffeners (frames). The stiffeners may be metallic or composite. During the various phases of flight, aircraft structures are subjected to airborne excitations on the outer skin (turbulent boundary layer: TBL; diffuse acoustic field: DAF), whose acoustic energy is transmitted into the cabin. The engines, mounted on the structure, produce a significant structure-borne excitation. The objective of this project is to develop and implement strategies for modelling aircraft fuselages subjected to airborne and structure-borne excitations. First, the second chapter reviews and updates existing TBL models in order to classify them better. The properties of the vibro-acoustic response of finite and infinite flat structures are analysed. In the third chapter, the assumptions underlying existing models of orthogonally stiffened metallic structures under mechanical, DAF and TBL excitations are first re-examined. A refined and reliable model of these structures is then developed. The model is validated numerically using the finite element method (FEM) and the boundary element method (BEM). Experimental validation tests are carried out on aircraft panels supplied by aeronautical companies. In the fourth chapter, the approach is extended to composite structures reinforced by stiffeners that are themselves composite and of complex shape. A simple analytical model is also implemented and validated numerically. In the fifth chapter, the modelling of periodic stiffened composite structures is further refined by taking into account the coupling between in-plane and transverse displacements.
The size effect of finite periodic structures is also taken into account. The models developed have made it possible to conduct several parametric studies on the vibro-acoustic properties of aircraft structures, thereby easing the designers' task. Within the framework of this thesis, one article was published in the Journal of Sound and Vibration and three others were submitted, respectively to the Journal of the Acoustical Society of America, the International Journal of Solid Mechanics and the Journal of Sound and Vibration. Keywords: stiffened structures, composites, vibro-acoustics, transmission loss.
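The airborne excitations discussed above (TBL, DAF) are usually specified through a wall-pressure cross-spectrum. A classic choice among TBL models in the literature is the Corcos model; the sketch below is a minimal illustration, with decay coefficients set to commonly quoted literature values rather than anything taken from this thesis.

```python
import numpy as np

def corcos_csd(phi_pp, omega, xi_x, xi_y, Uc, alpha_x=0.116, alpha_y=0.7):
    """Corcos cross-spectral density of TBL wall pressure.

    phi_pp  : single-point wall-pressure PSD at frequency omega
    xi_x/y  : streamwise / spanwise separations (m)
    Uc      : convection velocity (m/s)
    alpha_* : empirical decay coefficients (literature values, assumed)
    """
    k_c = omega / Uc  # convective wavenumber
    return (phi_pp
            * np.exp(-alpha_x * k_c * abs(xi_x))   # streamwise coherence decay
            * np.exp(-alpha_y * k_c * abs(xi_y))   # spanwise coherence decay
            * np.exp(1j * k_c * xi_x))             # convective phase

# Zero separation returns the single-point PSD; coherence decays with distance.
print(abs(corcos_csd(1.0, 2 * np.pi * 1000, 0.01, 0.0, 50.0)) < 1.0)  # → True
```

The DAF case is handled analogously, with a sinc-type spatial correlation in place of the convective exponential decay.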
Thermal-hydraulic study of the moderator flow in the CANDU-6 reactor
NASA Astrophysics Data System (ADS)
Mehdi Zadeh, Foad
Given the size (6.0 m x 7.6 m) and the multiply connected domain that characterize the calandria vessel of CANDU-6 reactors (380 channels in the vessel), the physics governing the behaviour of the moderator fluid is still poorly understood today. Sampling data in an operating reactor would require modifying the configuration of the reactor vessel in order to insert probes, and the intense radiation zone precludes the use of ordinary sampling sensors. The moderator flow must therefore be studied by means of either an experimental model or a numerical model. As for the experimental route, building and operating such facilities is very expensive, and the scaling parameters needed to build a reduced-scale experimental model are mutually contradictory. Numerical modelling therefore remains an important alternative. The nuclear industry currently uses a numerical approach, known as the porous-medium approach, which approximates the domain by a continuous medium in which the tube network is replaced by distributed hydraulic resistances. This model can describe the macroscopic features of the flow, but it does not account for local effects that have an impact on the global flow, such as the temperature and velocity distributions near the tubes and hydrodynamic instabilities. In the context of nuclear safety, the local effects around the calandria tubes are of particular interest. Indeed, simulations performed with this approach predict that the flow can adopt several hydrodynamic configurations, for some of which the flow behaves asymmetrically within the vessel. This can cause boiling of the moderator at the channel walls.
Under such conditions, the reactivity coefficient can vary significantly, leading to an increase in reactor power, with potentially major consequences for nuclear safety. A detailed CFD (Computational Fluid Dynamics) model that accounts for local effects is therefore needed. The goal of this research is to model the complex behaviour of the moderator flow within the vessel of a CANDU-6 nuclear reactor, particularly near the calandria tubes. These simulations serve to identify the possible flow configurations in the calandria. The study thus aims to formulate the theoretical basis of the macroscopic instabilities of the moderator, i.e. the asymmetric motions that can cause the moderator to boil. The challenge of the project is to determine the impact of these flow configurations on the reactivity of the CANDU-6 reactor.
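The porous-medium approach described above replaces the tube bank with distributed hydraulic resistances, typically entered as a quadratic momentum sink in each cell of the coarse grid. A minimal sketch follows; the loss coefficient and pitch are illustrative values, not CANDU design data.

```python
import numpy as np

def tube_bank_sink(u, rho, K_loss, pitch):
    """Distributed momentum sink standing in for the tube bank
    (porous-medium approach).

    u      : local superficial velocity (m/s)
    rho    : moderator density (kg/m^3)
    K_loss : loss coefficient per tube row (assumed value)
    pitch  : tube pitch (m), so rows per metre = 1/pitch
    Returns the body-force density (N/m^3) opposing the flow.
    """
    return -0.5 * rho * K_loss / pitch * np.abs(u) * u

# Heavy-water moderator crossing rows of calandria tubes (illustrative numbers):
f = tube_bank_sink(u=0.5, rho=1085.0, K_loss=0.2, pitch=0.2857)
print(round(f, 1))  # → -94.9
```

The sink always opposes the local velocity (hence the |u|u form), which is what lets the coarse model reproduce the macroscopic pressure drop while ignoring the near-tube detail the thesis targets.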
Finite element modelling of striated muscle
NASA Astrophysics Data System (ADS)
Leonard, Mathieu
This research project created a finite element model of human striated muscle in order to study the mechanisms that produce traumatic muscle injuries. The model constitutes a numerical platform capable of discerning the influence of the mechanical properties of the fasciae and of the muscle cell on the dynamic behaviour of the muscle during an eccentric contraction, in particular the Young's modulus and shear modulus of the connective tissue layer, the orientation of the collagen fibres of this membrane, and the Poisson's ratio of the muscle. In vitro experimental characterization of these parameters at high strain rates on active human striated muscle is essential for the study of traumatic muscle injuries. The numerical model developed can represent muscle contraction as a phase transition of the muscle cell, through a change of stiffness and volume, using the material constitutive laws predefined in the LS-DYNA software (v971, Livermore Software Technology Corporation, Livermore, CA, USA). The project thus introduces a physiological phenomenon that could explain common muscle injuries (cramps, soreness, strains, etc.) as well as diseases or disorders affecting connective tissue, such as collagenoses and muscular dystrophy. The predominance of muscle injuries during eccentric contractions is also discussed. The model developed in this project thereby brings the concept of phase transition to the fore, opening the door to new technologies for muscle activation in people with paraplegia, or to compact artificial muscles for prostheses and exoskeletons. Keywords: striated muscle, muscle injury, fascia, eccentric contraction, finite element model, phase transition
NASA Astrophysics Data System (ADS)
Goyette, Stephane
1995-11-01
This thesis concerns regional numerical climate modelling. Its main objective is to develop a regional climate model capable of simulating mesoscale phenomena. The study area is the North American West Coast, chosen for the complexity of its relief and the control the relief exerts on the climate. The motivations for this study are twofold: on the one hand, the coarse spatial resolution of general circulation models (GCMs) cannot, in practice, be increased without inordinately increasing integration costs; on the other hand, environmental management increasingly demands regional climate data at higher spatial resolution. Until now, GCMs have been the models most valued for their ability to simulate the climate and global climate change. However, fine-scale climatic phenomena still escape GCMs because of their coarse resolution, and the socio-economic repercussions of possible climate change are closely tied to phenomena that current GCMs cannot perceive. To circumvent some of these resolution problems, a practical approach is to take a limited spatial domain of a GCM and nest within it another numerical model with a high-resolution grid. This nesting implies a new numerical simulation: the "retro-simulation" is driven over the limited domain by information supplied by the GCM and forced by mechanisms handled only by the nested model. Thus, to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR, which provides regional climate information valid at fine spatial scale.
This new class of nested "intelligent" interpolator models belongs to the family of so-called "driven" models. The guiding hypothesis of our study is that fine-scale climate is often governed by surface forcings rather than by large-scale atmospheric transport. The proposed technique therefore drives FIZR with the sampled Dynamics of a GCM and forces it with the Physics of the GCM, together with a mesoscale orographic forcing, at every node of the fine computational grid. To assess the robustness and accuracy of our regional climate model, we chose the West Coast of the North American continent, a region whose geographical distribution of precipitation and temperature is strongly influenced by the underlying relief. The results of a one-month (January) simulation with FIZR show that we can simulate precipitation and screen-level temperature fields much closer to climate observations than those simulated by a GCM. These performances are clearly attributable to the mesoscale orographic forcing and to the fine-scale surface characteristics. A model similar to FIZR can, in principle, be implemented on any GCM, so any research organization involved in global large-scale numerical modelling could equip itself with such a regionalization tool.
Aerodynamic study of a turbulent jet impinging on a concave wall
NASA Astrophysics Data System (ADS)
LeBlanc, Benoit
Given the growing demand for higher temperatures in the combustion chambers of aerospace propulsion systems (turboshaft engines, jet engines, etc.), interest in impinging-jet cooling has grown. Cooling the turbine blades allows an increase in combustion temperature, which translates into higher combustion efficiency and hence better fuel economy. Heat transfer in the blades is influenced by the aerodynamics of jet cooling, particularly for turbulent flows. A lack of understanding of the aerodynamics inside these confined spaces can lead to unexpected changes in heat transfer, which increases the risk of creep. It is therefore of interest to both the aerospace industry and academia to pursue research on turbulent jets impinging on curved walls. Jets impinging on curved surfaces have already been the subject of numerous studies. However, oscillatory conditions observed in the laboratory have proved difficult to reproduce numerically, since the flow structures of jets impinging on concave walls depend strongly on turbulence and on unsteady effects. An experimental study was carried out at the PPRIME Institute of the Université de Poitiers to observe the oscillation phenomenon in the jet. A series of tests examined laminar and turbulent flow conditions, but the cost of the experiments allowed only a glimpse of the global phenomenon. A second series of tests was performed numerically at the Université de Moncton with OpenFOAM, for laminar, two-dimensional flow conditions.
The aim of this study is therefore to pursue the investigation of the oscillatory aerodynamics of jets impinging on curved walls, but for a transitional, turbulent, three-dimensional flow regime. The Reynolds numbers used in the numerical study, based on the diameter of the observed linear jet, are Re_d = 3333 and 6667, considered to be in transition toward turbulence. In this study a numerical set-up is built: the mesh, the numerical scheme, the boundary conditions and the discretization are discussed and selected. The results are then validated against experimental turbulence data. In turbulence modelling, Reynolds-Averaged Navier-Stokes (RANS) models have difficulty with unsteady flows in the transitional regime. Large Eddy Simulation (LES) offers a more accurate solution, but at a cost still out of reach for this study. The method employed here is Detached Eddy Simulation (DES), a hybrid of the two approaches (RANS and LES). To analyse the flow topology, proper orthogonal decomposition (POD) was also performed on the numerical results. The study first showed the relatively high computational cost of DES runs needed to keep the Courant number low. The numerical results nevertheless succeeded in correctly reproducing the asynchronous flapping observed in the experiments. The observed flapping appears to be caused by transitional effects, which would explain the difficulty RANS models have in correctly reproducing the aerodynamics of this flow. The jet flow, in turn, is three-dimensional and turbulent most of the time, except for short periods during which it is stable and independent of the third dimension.
The topological study of the flow also made it possible to identify the main underlying structures that were blurred by turbulence. Keywords: impinging jet, concave wall, turbulence, transitional, detached eddy simulation (DES), OpenFOAM.
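The proper orthogonal decomposition mentioned above is commonly computed as a singular value decomposition of mean-subtracted snapshots. A minimal sketch on synthetic data (not the study's DES fields):

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD of a flow field.

    snapshots : (n_points, n_snapshots) array, one snapshot per column.
    Returns the spatial modes, singular values, and relative modal energies.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                            # remove the mean flow
    modes, sigma, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sigma**2 / np.sum(sigma**2)                # relative modal energy
    return modes, sigma, energy

# Toy data: one coherent structure plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
x = np.linspace(0, 1, 200)
field = np.outer(np.sin(2 * np.pi * x), np.sin(t)) \
        + 0.01 * rng.standard_normal((200, 50))
_, _, energy = snapshot_pod(field)
print(energy[0] > 0.9)  # → True: the first mode captures most of the energy
```

On DES output the leading modes isolate exactly the kind of underlying structures that raw turbulent snapshots blur.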
NASA Astrophysics Data System (ADS)
Bel Hadj Kacem, Mohamed Salah
All hydrological processes are affected by the spatial variability of the physical parameters of the watershed, and also by human intervention on the landscape. The water outflow from a watershed depends strictly on the spatial and temporal variability of the physical parameters of the watershed. It is now apparent that the integration of mathematical models into GISs can benefit both GIS and three-dimensional environmental models: a true modeling capability can help the modeling community bridge the gap between planners, scientists, decision-makers and end-users. The main goal of this research is to design a practical tool to simulate surface run-off using Geographic Information Systems and to simulate the hydrological behavior by the Finite Element Method.
NASA Astrophysics Data System (ADS)
Xing, Jacques
The dielectric barrier discharge (DBD) plasma actuator is a device proposed for active flow control to improve the performance of aircraft and turbomachines. Essentially, these actuators consist of two electrodes separated by a layer of dielectric material, and they convert electrical energy directly into a flow. Because of the high cost of experiments under realistic operating conditions, there is a need for a robust numerical model that can predict the plasma body force and the effects of various parameters on it. Indeed, this plasma body force can be affected by atmospheric conditions (temperature, pressure, and humidity), the velocity of the neutral flow, the applied voltage (amplitude, frequency, and waveform), and the actuator geometry. The purpose of this thesis is therefore to implement a plasma model for DBD actuators with the potential to account for the effects of these various parameters. In DBD actuator modelling, two types of approach are commonly proposed: low-order (phenomenological) modelling and high-order (scientific) modelling. However, a critical analysis presented in this thesis showed that phenomenological models are not robust enough to predict the plasma body force without artificial calibration for each specific case; moreover, they are based on erroneous assumptions. Hence, the approach selected to model the plasma body force is a scientific drift-diffusion model with four chemical species (electrons, positive ions, negative ions, and neutrals). This model was chosen because it gives numerical results consistent with experimental data. Moreover, it has great potential to include the effects of temperature, pressure, and humidity on the plasma body force, and it requires only a reasonable computational time. The model was independently implemented in the C++ programming language and validated on several test cases.
It was then used to simulate the effect of the plasma body force on laminar-turbulent transition over an airfoil, in order to assess its performance in a practical CFD simulation. Numerical results show that, for a practical aerospace case, this model predicts the effect of the plasma on the fluid flow better than a phenomenological model.
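The drift-diffusion approach named above evolves one continuity equation per charged species, with drift in the electric field and diffusion. A heavily simplified 1D single-species sketch follows; a real DBD solver couples this to Poisson's equation and ionization source terms, and the mobility, field, and grid values here are assumed purely for illustration.

```python
import numpy as np

def drift_diffusion_step(n, E, mu, D, dx, dt, source=0.0):
    """One explicit step of the 1D drift-diffusion continuity equation
    dn/dt = -d(mu*E*n)/dx + D*d2n/dx2 + S  for one charged species.

    n : species density on a uniform grid; E : electric field on that grid.
    """
    flux = mu * E * n                              # drift flux
    dflux = np.gradient(flux, dx)                  # d(flux)/dx
    lap = np.gradient(np.gradient(n, dx), dx)      # d2n/dx2
    return n + dt * (-dflux + D * lap + source)

# Gaussian electron pulse drifting in a uniform field (illustrative values):
x = np.linspace(0, 1e-3, 201)
n0 = np.exp(-((x - 2e-4) / 5e-5) ** 2)
E = np.full_like(x, 1e5)                           # V/m, assumed uniform
n1 = drift_diffusion_step(n0, E, mu=-0.03, D=0.01, dx=x[1] - x[0], dt=1e-10)
print(n1.sum() > 0)  # → True
```

The time step is deliberately small: explicit schemes of this kind are only stable when both the drift CFL number and D*dt/dx^2 stay well below one.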
Modelisation of the SECM in a molten salts environment
NASA Astrophysics Data System (ADS)
Lucas, M.; Slim, C.; Delpech, S.; di Caprio, D.; Stafiej, J.
2014-06-01
We develop a cellular automata model of SECM experiments to study corrosion in molten salt media for Generation IV nuclear reactors. The electrodes used in these experiments are cylindrical glass tips with a coaxial metal wire inside. From the simulations we obtain the current approach curves of electrodes whose geometries are characterized by several values of the glass-to-metal area ratio at the tip. We compare these results with the predictions of known analytic expressions, solutions of the partial differential equations for a flat, uniform substrate geometry. We also present results for other, more complicated substrate surface geometries, e.g. a regular sawtooth-modulated surface or a surface obtained by an Eden growth process.
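One of the substrate geometries mentioned above is produced by an Eden growth process. A minimal 2D Eden model (repeated random occupation of a cluster perimeter site) can be sketched as follows; it illustrates the process generically and is not the authors' automaton.

```python
import random

def eden_cluster(n_cells, seed=1):
    """Minimal 2D Eden growth: start from a single seed cell and repeatedly
    occupy a randomly chosen empty neighbour of the cluster."""
    random.seed(seed)
    cluster = {(0, 0)}
    perimeter = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    while len(cluster) < n_cells:
        site = random.choice(sorted(perimeter))   # sorted for reproducibility
        perimeter.discard(site)
        cluster.add(site)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (site[0] + dx, site[1] + dy)
            if nb not in cluster:
                perimeter.add(nb)
    return cluster

c = eden_cluster(500)
print(len(c))  # → 500
```

The resulting aggregate has the compact bulk and rough boundary typical of Eden growth, which is what makes it a convenient model of a corroded substrate surface.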
NASA Astrophysics Data System (ADS)
Lavergne, Catherine
The geological formations of the Montreal area consist mostly of limestone. The usual design approach is based on rock mass classification systems that treat the rock mass as an equivalent continuous and isotropic material. However, for shallow excavations, stability is generally controlled by geological structures, which in Montreal are bedding planes that give the rock mass a strong stress-strain anisotropy. The objectives of this research are to perform numerical modelling that accounts for the anisotropy of sedimentary rocks, and to determine the influence of the design parameters on displacements, stresses and failure around unsupported underground metro excavations. The geotechnical data used in this study come from a metro extension project and were made available to the author. The excavation geometries analysed are a tunnel, a station and a garage consisting of three (3) parallel tunnels, for rock cover between 4 and 16 m. The numerical modelling was done with the FLAC software, which represents a continuous medium, using the ubiquitous-joint constitutive model to simulate the strength anisotropy of sedimentary rock masses. The model accounts for gravity stresses in an anisotropic material and for pore pressures. In total, eleven (11) design parameters were analysed. The results show that the unconfined compressive strength of the intact rock, fault zones and pore pressures in soils have an important influence on the stability of the numerical model. The excavation geometry, the thickness of rock cover, the RQD, Poisson's ratio and the horizontal tectonic stresses have a moderate influence. Finally, the ubiquitous-joint parameters, pore pressures in the rock mass, the width of the garage pillars and the damage linked to the excavation method have a low impact. The FLAC results were compared with those of UDEC, a software package based on the distinct element method, and similar conclusions were obtained for displacements, stress state and failure modes.
However, the UDEC model gives slightly less conservative results than FLAC. This study is distinguished by its local character and by the large amount of geotechnical data available to determine the parameters of the numerical model. The results led to recommendations for laboratory tests that could be applied to characterize more specifically the anisotropy of sedimentary rocks.
Bellman Continuum (3rd) International Workshop (13-14 June 1988)
1988-06-01
Modelling Uncertain Problems ... 53 (David Bensoussan)
Asymptotic Linearization of Uncertain Multivariable Systems by Sliding Modes ... (K. Ghosh)
Robust Model Tracking for a Class of Singularly Perturbed Nonlinear Systems via Composite Control ... 93 (F. Garofalo and L. Glielmo)
MODELISATION ET COMMANDE EN ECONOMIE / MODELS AND CONTROL POLICIES IN ECONOMICS
Qualitative Differential Games: A Viability Approach ... 117
NASA Astrophysics Data System (ADS)
Aurousseau, Emmanuelle
Models are tools widely used in science and technology (S&T) to represent and explain phenomena that are difficult to access, or even abstract. The modelling process is presented explicitly in the Quebec education program (PFEQ), notably for the second cycle of secondary school (Quebec. Ministere de l'Education, du Loisir et du Sport, 2007a). It is thus one of the seven processes that students and teachers are expected to use. However, much research highlights the difficulty teachers have in structuring their teaching practices around models and the modelling process, even though these are recognized as indispensable. Models indeed help reconcile the concrete and abstract domains between which the scientist, even a budding one, moves back and forth in order to relate the experimental reference field that is manipulated and observed to the related theoretical field that is constructed. The objective of this research is therefore to understand how models and the modelling process help articulate the concrete and the abstract in the teaching of science and technology (S&T) in the second cycle of secondary school. To answer this question, we worked with teachers in a collaborative perspective through focus groups and classroom observation. These arrangements allowed us to examine the teaching practices that four teachers implement when using models and modelling processes. The analysis of the teaching practices, and of the adjustments the teachers envisage in their practice, allows us to draw out knowledge both for research and for teachers' practice regarding the use of models and the modelling process in secondary-school S&T.
2012-07-01
... of the modelling and simulation community and provide it with implementation guidelines; and provide ... definition; relationship to standards; specification of a CM (conceptual model) management procedure; specification of CM artifacts. Important considerations ... using the present guideline as a reference. • VV&A (verification, validation and acceptance) of CMs must be an integral part of the
Computational approach to estimating the effects of blood properties on changes in intra-stent flow.
Benard, Nicolas; Perrault, Robert; Coisne, Damien
2006-08-01
In this study, various blood rheological assumptions are numerically investigated for the hemodynamic properties of intra-stent flow. Non-Newtonian blood properties have never been implemented in studies of stented coronary flow, although their effects appear essential for a correct estimation and distribution of the wall shear stress (WSS) exerted by the fluid on the internal vessel surface. Our numerical model is based on a full 3D stent mesh; rigid walls and steady inflow conditions are applied. Newtonian behavior, a non-Newtonian model based on the Carreau-Yasuda relation, and a characteristic Newtonian viscosity defined from representative flow parameters are compared in this research. Non-Newtonian flow alters near-wall viscosity values compared with the Newtonian case. Maximal WSS values are located in the central part of the stent pattern, and minimal values are concentrated on the proximal stent wire surface. Increasing the flow rate emphasizes flow perturbations and raises the WSS everywhere except in the interstrut area. Nevertheless, a local quantitative analysis reveals that modelling with a Newtonian blood flow underestimates the WSS, with the clinical consequence of overestimating the restenosis risk area. Introducing a characteristic viscosity appears to be a useful option compared with rheological modelling based on experimental data, saving computer time while giving relevant results for quantitative and qualitative WSS determination.
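The Carreau-Yasuda relation used above gives the apparent viscosity as a function of shear rate. A minimal sketch follows; the parameter values are commonly quoted blood fits from the literature, assumed here for illustration and not necessarily those of this study.

```python
def carreau_yasuda(gamma_dot, eta0=0.056, eta_inf=0.00345,
                   lam=3.313, n=0.3568, a=2.0):
    """Carreau-Yasuda apparent viscosity (Pa.s) of blood at shear rate
    gamma_dot (1/s).

    eta0 / eta_inf : zero-shear and infinite-shear viscosity plateaus
    lam            : relaxation time (s); n, a : shape exponents
    (all values are common literature fits, assumed here)
    """
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot) ** a) ** ((n - 1) / a)

# Shear-thinning: viscosity falls from eta0 toward eta_inf as shear rises.
print(round(carreau_yasuda(0.0), 5))       # → 0.056 (low-shear plateau)
print(carreau_yasuda(1e4) < 0.005)         # → True (near the high-shear plateau)
```

The shear-thinning between the two plateaus is precisely what shifts the near-wall viscosity, and hence the WSS, relative to a constant-viscosity Newtonian computation.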
NASA Astrophysics Data System (ADS)
Varlet, Madeleine
The use of models and modelling is mentioned in the scientific literature as a way of fostering constructivist teaching-learning practices to alleviate learning difficulties in science. A prior study of teachers' relationship to models and modelling is therefore relevant for understanding their teaching practices and for identifying elements which, if taken into account in initial and disciplinary training, can contribute to the development of constructivist science teaching. Several studies have examined these conceptions without distinguishing between the subjects taught, such as physics, chemistry or biology, even though models are not necessarily used or understood in the same way in these different disciplines. Our research focused on the conceptions of secondary-school biology teachers regarding scientific models, some forms of representation of these models, and the ways they are used in class. The results, obtained through a series of semi-structured interviews, indicate that their conceptions of models are globally compatible with the scientifically accepted one, but vary with respect to the forms of representation of models. Examination of these conceptions reveals a limited knowledge of models that varies with the subject taught. Level of education, prior training, teaching experience and a possible compartmentalization of subjects could explain the different conceptions identified. In addition, temporal, conceptual and technical difficulties can hinder their attempts at modelling with students.
Nevertheless, our results support the hypothesis that the conceptions of the teachers themselves about models, their forms of representation and their constructivist approach to teaching are the greatest obstacles to model building in class. Keywords: models and modelling, biology, conceptions, modes of use, constructivism, teaching, secondary school.
Vectored Thrust Digital Flight Control for Crew Escape. Volume 2.
1985-12-01
no. 24. Lecrique, J., A. Rault, M. Tessier and J.L. Testud (1978), "Multivariable Regulation of a Thermal Power Plant Steam Generator," presented ... and Extended Kalman Observers," presented at the Conf. on Decision and Control, San Diego, CA. Testud, J.L. (1977), "Commande Numerique Multivariable du ..."
Turbomachinery Design Using CFD (La Conception des Turbomachines par l’Aerodynamique Numerique).
1994-05-01
... Method for Flow Calculations in Turbomachines", Vrije Univ. Brussel, Dienst Stromingsmechanica, VUB-STR ...
Thompkins, W.T., 1981, "A Fortran Program for Calcu[lating] ..."
... Model Equation for Simulating Flows in Multistage Turbomachinery, ASME paper 85-GT-226, Houston, March
... mung um Profile, MBB-Bericht Nr. UFE 1352, 1977
Finite-temperature quantum cluster methods applied to the Hubbard model
NASA Astrophysics Data System (ADS)
Plouffe, Dany
Since their discovery in the 1980s, high-critical-temperature superconductors have attracted great interest in solid-state physics. Understanding the origin of the phases observed in these materials, such as superconductivity, has been one of the great challenges of theoretical solid-state physics over the past 25 years. One of the mechanisms proposed to explain these phenomena is the strong electron-electron interaction. The Hubbard model is one of the simplest models that accounts for these interactions. Despite its apparent simplicity, some of its features, including its phase diagram, are still not well established, in spite of several theoretical advances in recent years. This study is devoted to the analysis of numerical methods for computing various properties of the Hubbard model as a function of temperature. We describe methods (VCA and CPT) that yield an approximate finite-temperature Green function of an infinite system from the Green function computed on a cluster of finite size. To compute these Green functions, we use techniques that considerably reduce the numerical effort needed to evaluate thermodynamic averages, by greatly reducing the space of states to be considered in the averages. Although this study aims primarily at developing cluster methods for solving the Hubbard model at finite temperature in general, and at studying the basic properties of this model, we apply it to conditions approaching those of high-critical-temperature superconductors. The methods presented here allow us to trace a phase diagram for antiferromagnetism and superconductivity that shows several similarities with that of high-temperature superconductors.
Keywords: Hubbard model, thermodynamics, antiferromagnetism, superconductivity, numerical methods, large matrices
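Cluster methods such as CPT and VCA start from the exact diagonalization of the Hubbard Hamiltonian on a small cluster. As a toy illustration of that building block (not the thesis's code), the two-site model at half filling can be diagonalized directly and checked against the textbook closed form:

```python
import numpy as np

def two_site_hubbard_e0(t, U):
    """Ground-state energy of the two-site Hubbard model with two electrons
    in the Sz = 0 sector.

    Basis: |up down, 0>, |0, up down>, |up, down>, |down, up>.
    Diagonal: on-site repulsion U on the doubly occupied states;
    off-diagonal: hopping -t connecting doublons to singly occupied states.
    """
    H = np.array([[U,  0, -t, -t],
                  [0,  U, -t, -t],
                  [-t, -t, 0,  0],
                  [-t, -t, 0,  0]], dtype=float)
    return np.linalg.eigvalsh(H)[0]       # lowest eigenvalue

# Analytic check: E0 = (U - sqrt(U^2 + 16 t^2)) / 2
t, U = 1.0, 4.0
print(round(two_site_hubbard_e0(t, U), 6))                 # → -0.828427
print(round((U - np.sqrt(U**2 + 16 * t**2)) / 2, 6))       # → -0.828427
```

Real cluster computations do the same thing on larger clusters, then restrict the trace over eigenstates to recover finite-temperature Green functions at a manageable cost.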
NASA Astrophysics Data System (ADS)
Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.
2012-04-01
In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasting, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where considerable human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelling of the empirical error of perfect forecasts, by streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of ensemble forecasts. 
The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological dynamics and processes, i.e. sample heterogeneity: the same streamflow range can correspond to different processes, such as rising limbs or recessions, with different uncertainties. The dynamical approach improves the reliability, skill and sharpness of forecasts and globally reduces confidence-interval width. When compared in detail, the dynamical approach allows a noticeable reduction of confidence intervals during recessions, where uncertainty is relatively lower, and a slight increase of confidence intervals during rising limbs or snowmelt, where uncertainty is greater. The dynamical approach, validated by forecasters' experience that considered the empirical approach not discriminative enough, improved forecasters' confidence and the communication of uncertainties. Montanari, A. and Brath, A. (2004). A stochastic approach for assessing the uncertainty of rainfall-runoff simulations. Water Resources Research, 40, W01106, doi:10.1029/2003WR002540. Schaefli, B., Balin Talamba, D. and Musy, A. (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332, 303-315.
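The quantile-class dressing idea behind the empirical approach can be sketched as follows; all names (`dress_forecast`, the structure of `error_bank`) are illustrative assumptions, not EDF's operational code:

```python
import random

def dress_forecast(members, error_bank, quantile_edges, lead_time, n_draws=5, rng=None):
    """Dress each ensemble member with empirical model errors drawn from the
    sub-sample matching its streamflow quantile class and the lead time.
    Names and data layout are illustrative, not the operational implementation."""
    rng = rng or random.Random(0)
    dressed = []
    for q in members:
        # index of the quantile class this member falls into
        k = sum(1 for edge in quantile_edges if q > edge)
        errors = error_bank[(k, lead_time)]  # archived errors of "perfect" forecasts
        dressed.extend(q + rng.choice(errors) for _ in range(n_draws))
    return dressed
```

Grouping the archived errors by (quantile class, lead time) is what lets the dressed spread vary with flow magnitude and horizon, which is the point of the method.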
2003-03-01
modern combat aircraft and business aircraft. E. Garrigues, Th. Percheron, DASSAULT AVIATION DGT/DTA/IAP, F-92214 Saint-Cloud Cedex, France. 1. Introduction ...flight [data], rigid-body accelerations and structural responses (strain gauges and accelerometers). [Figure: structural prediction adjustments from flight tests]
NASA Astrophysics Data System (ADS)
Boyer, Sylvain
It is estimated that of the 3.7 million workers in Quebec, more than 500,000 are exposed daily to noise levels that can damage the auditory system. When the ambient noise level cannot be lowered, by modifying the noise sources or limiting sound propagation, wearing individual hearing protectors, such as earmuffs, remains the last resort. Although seen as a short-term solution, it is commonly used because it is inexpensive, easy to deploy and adaptable to most operations in noisy environments. However, hearing protectors can be both ill-suited to workers and their environment and uncomfortable, which limits wearing time and reduces their effective protection. To address these difficulties, a research project on hearing protection entitled "Development of tools and methods to improve and better assess the individual hearing protection of workers" was set up in 2010, associating the École de technologie supérieure (ÉTS) and the Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST). As part of this research program, the present doctoral work focuses specifically on hearing protection by means of "passive" earmuff-type protectors, whose use raises three specific problems presented in the following paragraphs. The first specific problem is the discomfort caused, for example, by the static pressure induced by the headband clamping force, which can reduce the wearing time recommended to limit noise exposure. The user should therefore be given a comfortable protector suited to his work environment and activity. The second specific problem is the assessment of the actual protection provided by the protector. 
The REAT (Real Ear Attenuation at Threshold) hearing-threshold method, often regarded as the "gold standard", is used to quantify noise reduction but generally overestimates protector performance. Field measurement techniques, such as F-MIRE (Field Microphone In Real Ear), may in the future be better tools for assessing individual attenuation. While these techniques exist for earplugs, they must be adapted and improved for earmuffs by determining the optimal location of the acoustic sensors and the individual compensation factors that relate the microphone measurement to the measurement that would have been taken at the eardrum. The third specific problem is the optimization of earmuff attenuation to suit the individual and his work environment. Indeed, earmuff design is generally based on empirical concepts and trial-and-error methods on prototypes. Predictive tools have received very little attention so far and deserve further study. Virtual prototyping would make it possible to optimize the design before production, accelerate product development and reduce its cost. The general objective of this thesis is to address these problems through the development of a model of the sound attenuation of an earmuff-type hearing protector. Because of the complexity of the geometry of these protectors, the main modelling method selected a priori is the finite element method (FEM). To reach this general objective, three specific objectives were established and are presented in the following three paragraphs. (Abstract shortened by ProQuest.)
Time Sensitive Course of Action Development and Evaluation
2010-10-01
Applications militaires de la modelisation humaine). RTO-MP-HFM-202. 14. ABSTRACT: The development of courses of action that integrate military with... routes between the capital town C of the province and a neighboring country M. Both roads are historically significant smuggling routes. There were
NASA Astrophysics Data System (ADS)
Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.
2008-06-01
Spaceflight under microgravity causes fundamental biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially on sleep disturbances and circadian rhythms (vigilance-sleep alternation, body temperature, ...). Factors such as space motion sickness, noise or excitement can cause severe sleep disturbances. For stays of longer than four months in space, gradual increases in the planned duration of sleep were reported [1]. The average sleep in orbit was more than 1.5 hours shorter than during control periods on earth, where sleep averaged 7.9 hours [2]. Alertness and calmness yielded a clear circadian pattern of 24 h but with a phase delay of 4 h. Calmness showed a biphasic component (12 h); mean sleep duration was 6.4 h, structured by 3-5 non-REM/REM cycles. Models of the neurophysiologic mechanisms of stress and of the interactions between various physiological and psychological rhythm variables have already been produced with the COSINOR method [3].
1980-11-21
defensive, and both the question and the answer seemed to generate supporting reactions from the audience. Discrete Event Simulation. The session on... R. Toscano / A. Maceri / F. Maceri (Italy): Numerical analysis of some contact problems in membrane theory. 3:40 - 4:00 p.m. COFFEE BREAK... Switzerland: Shallow-depth heat storage: finite element simulation. 3:40 - 4:00 p.m. A. Rizk Abu El-Wafa / M. Tawfik / M.S. Mansour (Egypt): Digital
1993-11-01
are characterized by continuous schlieren visualization of the part of the mixing layer located beneath the jet (Figure 3), as well as by tomoscopy... characterize the waves). These waves seem to originate from the region of the ejector, just... a Mach disk. In Figure 4, one observes the trace of the
Human Modelling for Military Application (Applications militaires de la modelisation humaine)
2010-10-01
techniques (rooted in the mathematics-centered analytic methods arising from World War I analyses by Lanchester [2]). Recent requirements for research and... "Dry Shooting for Airplane Gunners", Popular Science Monthly, January 1919, pp. 13-14. [2] Lanchester, F.W., "Mathematics in Warfare", in The World of
Comparison of different 3D wavefront sensing and reconstruction techniques for MCAO
NASA Astrophysics Data System (ADS)
Bello, Dolores; Vérinaud, Christophe; Conan, Jean-Marc; Fusco, Thierry; Carbillet, Marcel; Esposito, Simone
2003-02-01
The vertical distribution of the turbulence limits the field of view of classical adaptive optics because of anisoplanatism. Multiconjugate adaptive optics (MCAO) uses several deformable mirrors conjugated to different layers in the atmosphere to overcome this effect. In the last few years, many studies and developments have addressed the analysis of the turbulence volume and the choice of wavefront reconstruction techniques. An extensive study of MCAO modelling and performance estimation has been carried out at OAA and ONERA. The Monte Carlo codes developed make it possible to simulate and investigate many aspects: comparison of turbulence analysis strategies (tomography or layer oriented) and comparison of different reconstruction approaches. For instance, in the layer-oriented approach, the control for a given deformable mirror can be either deduced from the whole set of wavefront sensor measurements or derived only from the associated wavefront sensor. Numerical simulations are presented showing the advantages and disadvantages of these different options for several cases depending on the number, geometry and magnitude of the guide stars.
Algorithms for Robust Identification and Control of Large Space Structures. Phase 1.
1988-05-14
Variate Analysis," Proc. Amer. Control Conf., San Francisco, pp. 445-451. RICHALET, J., Rault, A., Tessier, M., and Testud, J.L. (1978), "Multivariable... Rault, J.L. Testud, and J. Papon (1978), "Model Predictive Heuristic Control: Applications to Industrial Processes," Automatica, Vol. 14, pp. 413... Control Conference, Minneapolis, MN, June. TESTUD, J.L. (1979), "Commande Numerique Multivariable du Ballon de Recuperation de Vapeur," Adersa/Gerbios
1991-06-01
intensive systems, including the use of onboard digital computers. Topics include: measurements that are digital in origin, sampling, encoding, transmitting... individuals charged with designing aircraft measuring systems to become better acquainted with new solutions to their requirements. This volume is... concerned with aircraft measuring systems as related to flight test and flight research. Measurements that are digital in origin or that must be
1994-08-01
volume II. The report is accompanied by a set of diskettes containing the appropriate data for all the test cases. These diskettes are available... GERMANY. PURPOSE OF THE TEST: The tests are part of a larger effort to establish a database of experimental measurements for missile configurations
2006-09-01
Control Force Agility, Shared Situational Awareness, Attentional Demand, Interoperability, Network Based Operations, Effect Based Operations, Speed of... Command, Self Synchronization, Reach Back, Reach Forward, Information Superiority, Increased Mission Effectiveness, Humansystems® Team Modelling... communication effectiveness and Distributed Mission Training (DMT) effectiveness. The NASA Ames Centre - Distributed Research Facilities platform could
1992-02-01
CONCLUDING REMARKS: ...secondary flow pattern. Probably both factors are influential. Unfortunately, the present study has examined the secondary... Panels, which are composed of experts appointed by the National Delegates, the Consultant and Exchange Programme and the Aerospace Applications Studies... AGARD CP 352, September 1983, Combustion Problems in Turbine Engines; AGARD CP 353, January 1984, Hazard Studies for Solid Propellant Rocket Motors; AGARD CP
1994-01-01
The Mission of AGARD: According to its Charter, the mission of AGARD is to bring together the leading personalities of the NATO nations in the... advances in the aerospace sciences relevant to strengthening the common defence posture; improving the co-operation among member nations in aerospace... for the physical principles. To construct the relevant equations for a fluid gas consisting of pseudo-particles, I0 is the internal energy due to motion; it
2009-09-01
frequency shallow water scenarios, and DRDC has ready access to a well-established PE model (PECan). In those spectral areas below 1 kHz, where the PE... PCs Personal Computers; PE Parabolic Equation; PECan PE model developed by DRDC; SPADES/ICE Sensor Performance and Acoustic Detection Evaluation
1994-08-01
[garbled table/figure residue, panel A9-10 / A12-11] ...(12). The technique was modified to calculate the drag using the non-intrusive LDV and sidewall pressure measurements rather
1998-12-01
failure detection, monitoring, and decision making.) moderator function. Originally, the output from these... One of the best known OCM implementations, the... imposed by the tasks themselves, the information and equipment provided, the task environment, operator skills and experience, operator strategies, the... problem-solving situation, including the (toward failure) knowledge necessary to generate the right problem-solving strategies, the attention that
Future Modelling and Simulation Challenges (Defis futurs pour la modelisation et la simulation)
2002-11-01
Language School. Figure 2: Location of the simulation center within the MEC. Military operations research section - simulation lab. Military operations... language. This logic can be probabilistic (branching is randomised, which is useful for modelling error), tactical (a branch goes to the task with the... language and a collection of simulation tools that can be used to create human and team behaviour models to meet users' needs. Hence, different ways of
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. Given the huge number of medical images, manual analysis and interpretation have become a tedious task, so automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields (HMRF) to model the segmentation problem. This modelling leads to a classical function-minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used to confront the results objectively. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches perfect segmentation, with a Dice coefficient above 0.9, and it generally outperforms the other methods in the tests conducted.
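The Dice coefficient used as the similarity metric is straightforward to compute; a minimal sketch (the function name and array layout are our own, not the paper's code):

```python
import numpy as np

def dice_coefficient(seg, ref, label):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for one tissue label,
    comparing a segmentation against a reference (e.g. BrainWeb ground truth)."""
    a = (np.asarray(seg) == label)
    b = (np.asarray(ref) == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

DC equals 1 for a perfect match and 0 for disjoint regions, which is why "above 0.9" is read as near-perfect segmentation.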
Frequency-Domain Modelling of Concrete Permittivity for Non-Destructive Testing with Ground-Penetrating Radar (Modelisation frequentielle de la permittivite du beton pour le controle non destructif par georadar)
NASA Astrophysics Data System (ADS)
Bourdi, Taoufik
Ground-penetrating radar (GPR) is an attractive non-destructive testing (NDT) technique for measuring the thickness of concrete slabs and characterizing fractures, owing to its resolution and penetration depth. GPR equipment is becoming easier to use, and interpretation software is becoming more readily accessible. However, several conferences and workshops on the application of GPR in civil engineering have concluded that further research is needed, in particular on the modelling and measurement techniques of the electrical properties of concrete. With better information on the electrical properties of concrete at GPR frequencies, instrumentation and interpretation techniques could be improved more effectively. The Jonscher model has proven effective in geophysics; its use in civil engineering is presented here for the first time. First, we validated the application of the Jonscher model for characterizing the dielectric permittivity of concrete. The results clearly showed that this model can faithfully reproduce the variation of the permittivity of different types of concrete over the GPR frequency band (100 MHz-2 GHz). Second, we demonstrated the value of the Jonscher model by comparing it with other models (Debye and extended Debye) already used in civil engineering. We also showed how the Jonscher model can help predict shielding effectiveness and interpret the waves of the GPR technique. The Jonscher model was found to give a good representation of the variation of the permittivity of concrete over the GPR frequency range considered. 
Moreover, this modelling holds for different types of concrete and different water contents. In a final part, we presented the use of the Jonscher model to estimate the thickness of a concrete slab with the GPR technique in the frequency domain. Keywords: NDT, concrete, GPR, permittivity, Jonscher
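For reference, a common form of Jonscher's "universal dielectric response", as used in geophysical and GPR studies, can be evaluated as below; the exact parametrisation used in the thesis may differ, and the parameter values are illustrative, not fitted to concrete data:

```python
import numpy as np

def jonscher_permittivity(f, eps_inf, chi_r, n, f_r=100e6):
    """Effective relative permittivity under the Jonscher 'universal response'
    model: eps(w) = eps_inf + chi_r * (w/w_r)**(n-1) * (1 - 1j*cot(n*pi/2)).
    eps_inf, chi_r, n and the reference frequency f_r are illustrative fitting
    parameters, not measured concrete values."""
    w, w_r = 2 * np.pi * np.asarray(f, float), 2 * np.pi * f_r
    return eps_inf + chi_r * (w / w_r) ** (n - 1) * (1 - 1j / np.tan(n * np.pi / 2))
```

With 0 < n < 1, the real part decreases slowly with frequency and the imaginary part is negative (lossy), the qualitative behaviour reported for concrete over the 100 MHz-2 GHz band.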
3D Modelling of Urban Terrain (Modelisation 3D de milieu urbain)
2011-09-01
Panel; IST Information Systems Technology Panel; NMSG NATO Modelling and Simulation Group; SAS System Analysis and Studies Panel; SCI Systems Concepts and Integration Panel; SET Sensors and Electronics Technology Panel. These bodies are made up of national representatives as well as... of a part of it may be made for individual use only. The approval of the RTA Information Management Systems Branch is required for more than one
1992-07-01
have become quite common in science and engineering, and will become more so as the demand for reliable data increases, and with it the pace of data... over the last decade. They are called upon to play a greater role in the future, with the evolution of the demand for reliable information and... computational codes. The wind tunnel data contained in the SEADS database were obtained using these forward fuselage models (10%, 4% and 2%) over the Mach
2009-10-01
[plot residue: convergence histories, log(dρ/dt) versus iterations, for the SA, EARSM, EARSM + CC, Hellsten EARSM, Hellsten EARSM + CC and DRSM models] ...VORTEX BREAKDOWN, RTO-TR-AVT-113, 29-13. [garbled equation (1)] As a vortex passes through a normal shock, the tangential velocity is
2009-09-01
involved in R&T activities. RTO reports both to the Military Committee of NATO and to the Conference of National Armament Directors. It comprises a... 11.5.3 Project Description 11-5; Chapter 12 - Technical Evaluation Report 12-1; 12.1 Executive Summary 12-1; 12.2 Introduction 12-2; 12.3... modelling human factors has been slow over the past decade, other forums have been reporting a number of theoretical and applied papers on human behaviour
2009-09-01
ordination with other NATO bodies involved in R&T activities. RTO reports both to the Military Committee of NATO and to the Conference of National... Aims 11-4; 11.5.2 Background 11-4; 11.5.3 Project Description 11-5; Chapter 12 - Technical Evaluation Report 12-1; 12.1 Executive Summary 12-1... track. Although progress in modelling human factors has been slow over the past decade, other forums have been reporting a number of theoretical and
NASA Astrophysics Data System (ADS)
LeBlanc, Luc R.
Composite materials are increasingly used in fields such as aerospace, high-performance cars and sports equipment, to name a few. Studies have shown that exposure to moisture harms the strength of composites by promoting the initiation and propagation of delamination. Of these studies, very few address the effect of moisture on delamination onset under mixed-mode I/II loading, and none addresses the effects of moisture on the mixed-mode I/II delamination growth rate in a composite. The first part of this thesis determines the effects of moisture on delamination growth under mixed-mode I/II loading. Specimens of a unidirectional carbon/epoxy composite (G40-800/5276-1) were immersed in a distilled-water bath at 70°C until saturation. Quasi-static tests covering a range of mode I/II mixities (0%, 25%, 50%, 75% and 100%) were performed to determine the effects of moisture on the delamination resistance of the composite. Fatigue tests were carried out with the same range of mode I/II mixities to determine the effect of moisture on delamination onset and growth rate. The quasi-static results showed that moisture reduces the delamination resistance of a carbon/epoxy composite over the whole range of mode I/II mixities, except in pure mode I, where delamination resistance increases after moisture exposure. Under fatigue loading, moisture accelerates delamination onset and increases the growth rate for all mode I/II mixities. 
The experimental data collected were used to determine which of the static delamination criteria and mixed-mode I/II fatigue delamination growth-rate models proposed in the literature best represent the delamination of the composite studied. A regression curve was used to determine the best fit between the experimental data and the static delamination criteria studied, and a regression surface was used to determine the best fit between the experimental data and the fatigue growth-rate models studied. Based on these fits, the best static delamination criterion is the B-K criterion and the best fatigue growth model is the Kenane-Benzeggagh model. To predict delamination during the design of complex parts, numerical models can be used. Predicting the delamination length of a part under fatigue loading is very important to ensure that an interlaminar crack will not grow excessively and cause the failure of the part before the end of its design life. Following the recent trend, these models are often based on the cohesive-zone approach with a finite element formulation. In the work presented in this thesis, the fatigue delamination growth model of Landry & LaPlante (2012) was improved by adding the treatment of mixed-mode I/II loading and modifying the algorithm that computes the maximum delamination driving force. The cohesive-zone parameters were calibrated from the experimental quasi-static mode I and mode II tests. Numerical simulations of the quasi-static mixed-mode I/II tests, with dry and moist specimens, were compared with the experiments. 
Fatigue simulations were also performed and compared with the experimental delamination growth rates. The numerical results of the quasi-static and fatigue tests showed good correlation with the experimental results over the whole range of mode I/II mixities studied.
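The B-K (Benzeggagh-Kenane) criterion retained for static delamination has a simple closed form; this sketch uses illustrative toughness values, not the measured G40-800/5276-1 data:

```python
def bk_toughness(g_ic, g_iic, eta, mode_mixity):
    """Benzeggagh-Kenane (B-K) mixed-mode fracture toughness:
    Gc = G_Ic + (G_IIc - G_Ic) * (G_II/G_T)**eta,
    where mode_mixity = G_II / G_T is in [0, 1] and eta is a fitted exponent.
    Input values here are illustrative, not the thesis's measured data."""
    return g_ic + (g_iic - g_ic) * mode_mixity ** eta
```

Delamination is predicted when the total energy release rate G_T reaches bk_toughness at the current mode mixity, interpolating smoothly between the pure mode I and pure mode II toughnesses.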
POD and PPP with multi-frequency processing
NASA Astrophysics Data System (ADS)
Roldán, Pedro; Navarro, Pedro; Rodríguez, Daniel; Rodríguez, Irma
2017-04-01
Precise Orbit Determination (POD) and Precise Point Positioning (PPP) are methods for estimating the orbits and clocks of GNSS satellites and the precise positions and clocks of user receivers. These methods are traditionally based on processing the ionosphere-free combination. With this combination, the delay introduced in the signal when passing through the ionosphere is removed, taking advantage of the dependency of this delay on the inverse square of the frequency. It is also possible to process the individual frequencies, but in that case the ionospheric delay must be properly modelled. This modelling is usually very challenging, as the electron content of the ionosphere experiences important temporal and spatial variations. These two options define the two main kinds of processing: dual-frequency ionosphere-free processing, typically used in POD and in certain applications of PPP, and single-frequency processing with estimation or modelling of the ionosphere, mostly used in PPP processing. In magicGNSS, a software tool developed by GMV for POD and PPP, a hybrid approach has been implemented. This approach combines observations from any number of individual frequencies and any number of ionosphere-free combinations of these frequencies. In this way, the ionosphere-free observations allow a better estimation of positions and orbits, while the inclusion of observations from individual frequencies allows the ionospheric delay to be estimated and the noise of the solution to be reduced. It is also possible to include other kinds of combinations, such as the geometry-free combination, instead of processing individual frequencies. The joint processing of all the frequencies for all the constellations requires both the estimation or modelling of the ionospheric delay and the estimation of inter-frequency biases. 
The ionospheric delay can be estimated from the single-frequency or dual-frequency geometry-free observations, but it is also possible to use a priori information based on ionospheric models, on external estimations and on the expected behaviour of the ionosphere. The inter-frequency biases appear because the delay of the signal inside the transmitter and the receiver strongly depends on its frequency. However, it is possible to include constraints on these delays in the estimator, assuming small variations over time. By using different types of combinations, all the available information from GNSS systems can be included in the processing. This is especially interesting for Galileo satellites, which transmit on several frequencies, and for GPS IIF satellites, which transmit on L5 in addition to the traditional L1 and L2. Several experiments have been performed to assess the improvement in POD and PPP performance when using all the constellations and all the available frequencies for each constellation. This paper describes the new approach of multi-frequency processing, including the estimation of the biases and ionospheric delays impacting GNSS observations, and presents the results of the experimentation activities performed to assess the benefits in POD and PPP algorithms.
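The dual-frequency ionosphere-free combination mentioned above can be written compactly; this is a generic sketch with GPS L1/L2 defaults, not magicGNSS code:

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """Dual-frequency ionosphere-free combination of two pseudoranges:
    P_IF = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2).
    Since the first-order ionospheric delay scales as 1/f**2, it cancels
    in this combination. Defaults are the GPS L1/L2 carrier frequencies."""
    return (f1 ** 2 * p1 - f2 ** 2 * p2) / (f1 ** 2 - f2 ** 2)
```

The cost of the cancellation is an amplified noise level (the coefficients sum to more than one in magnitude), which is why the hybrid approach also keeps individual-frequency observations.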
2001-07-01
Major General A C Figgures, Capability Manager (Manœuvre) UK MOD, provided the Conference with a fitting end message encouraging the SE and M&S...SESSION Welcoming Address - ‘Synthetic Environments - Managing the Breakout’ WA by M. Markin Opening Address for NATO M&S Conference OA by G. Sürsal...Keynote Address KN by G.J. Burrows Industry’s Role IR† by M. Mansell The RMCS SSEL I by J.R. Searle SESSION 1: POLICY, STRATEGY & MANAGEMENT A Strategy
2003-03-01
nations, a very thorough examination of current practices. Introduction The Applied Vehicle Technology Panel (AVT) of the Research and Technology...the introduction of new information generated by computer codes required it to be timely and presented in appropriate fashion so that it could...military competition between the NATO allies and the Soviet Union. The second was the introduction of commercial, high capacity transonic aircraft and
1999-08-01
immediately, reducing venous return artifacts during the first beat of the simulation. [garbled figure residue: Figure 4] ...Figure 5: The effect of network complexity. The aortic pressure is shown in Figure 5 during the fifth beat for the networks with one and three... Mechanical Engineering Department, University of Victoria. [19] Huyghe J.M., 1986, "Nonlinear Finite Element Models of the Beating Left
2015-05-01
delivery business model where S&T activities are conducted in a NATO dedicated executive body, having its own personnel, capabilities and infrastructure... SD-4: Design for Securability 5-4; 5.3.2 Recommendations on Simulation Environment Infrastructure 5-5; 5.3.2.1 Recommendation IN-1: Harmonize Critical Data and Algorithms 5-5; 5.3.2.2 Recommendation IN-2: Establish Permanent Simulation Infrastructure 5-5; 5.3.2.3 Recommendation IN-3: Establish
1989-09-01
pyridone). Previous work on pyridinium, pyrazinium or pyrimidinium salts... 2-pyrimidone salts [43] have shown that some... forces. A VIBRATIONAL MOLECULAR FORCE FIELD FOR MACROMOLECULAR MODELLING, Gerard VERGOTEN... from a microscopic point of view are (1) understanding, (2) interpretation of experimental results, (3) semiquantitative estimates of experimental results and (4
1994-09-01
the refractive index n can be determined from a simplified form of the Appleton-... density, temperature, ion composition, ionospheric electric field... see Cannon [1994]. The electron density profile is based upon the underlying neutral composition, temperature and wind, together with electric field... in many of the newer HF prediction decision aids. They also provide a very useful stand-alone... software, NSSDC/WDC-A-R&S 90-19, National Space
NASA Astrophysics Data System (ADS)
Rebaine, Ali
1997-08-01
This work consists of the numerical simulation of two-dimensional laminar and turbulent compressible internal flows, with particular interest in flows in supersonic ejectors. The Navier-Stokes equations are formulated in conservative form and use as independent variables the so-called enthalpic variables: static pressure, momentum and specific total enthalpy. A stable variational formulation of the Navier-Stokes equations is used. It is based on the SUPG (Streamline Upwind Petrov-Galerkin) method and uses a shock-capturing operator for strong gradients. A turbulence model for the simulation of ejector flows is developed. It separates two distinct regions: one close to the solid wall, where the Baldwin-Lomax model is used, and another, far from the wall, where a new formulation based on the Schlichting model for jets is proposed. A technique for computing the turbulent viscosity on an unstructured mesh is implemented. The spatial discretisation of the variational form is carried out with the finite element method using a mixed approximation: quadratic for the momentum and velocity components and linear for the remaining variables. The temporal discretisation is performed with a finite difference method using the implicit Euler scheme. The matrix system resulting from the space-time discretisation is solved with the GMRES algorithm using a diagonal preconditioner. Numerical validations were carried out on several types of nozzles and ejectors; the main validation is the simulation of the flow in the ejector tested at the NASA Lewis research center. 
The results obtained compare very well with previous work and are clearly superior for turbulent flows in ejectors.
1990-01-01
modifiers and added an additional set of modifiers to adjust the average VTOP. The original DECO model made use of waveguide excitation factors and ... ranges far beyond the horizon. The modified refractivity M is defined by

M = N + (h/a) × 10⁶ ≈ N + 0.157 h   (2.1)

where h is the height above the earth's ... [Figure 2.4: N and M profiles for an elevated duct; remaining figure text not recoverable.]
Modelisation and distribution of neutron flux in radium-beryllium source (226Ra-Be)
NASA Astrophysics Data System (ADS)
Didi, Abdessamad; Dadouch, Ahmed; Jai, Otman
2017-09-01
The Monte Carlo N-Particle code (MCNP-6) is used to analyze the thermal, epithermal and fast neutron fluxes of a 3 millicurie radium-beryllium source, in order to determine the qualitative and quantitative composition of various materials by neutron activation analysis. The radium-beryllium neutron source is set up for practical work and research in the nuclear field. The main objective of this work is to characterize the flux profile of radium-beryllium irradiation; this theoretical study helps in designing optimal irradiation conditions and improving the facility's performance for nuclear physics research and education.
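A back-of-the-envelope sketch of the activation equation underlying neutron activation analysis, A = φ·σ·N·(1 − e^(−λ·t_irr)), may clarify what the flux profile is used for. All numbers below are illustrative assumptions (a gold monitor foil), not values from the study.

```python
import math

phi = 1.0e4                   # thermal neutron flux, n/(cm^2 s)  (assumption)
sigma = 98.65e-24             # Au-197 thermal capture cross-section, cm^2
N = 3.0e19                    # number of target atoms in the sample (assumption)
half_life = 2.7 * 24 * 3600   # Au-198 half-life, s
lam = math.log(2) / half_life
t_irr = 24 * 3600             # irradiation time: one day (assumption)

# Induced activity at the end of irradiation, in decays per second.
activity = phi * sigma * N * (1.0 - math.exp(-lam * t_irr))
```

The saturation factor (1 − e^(−λ·t)) shows why irradiating much longer than a few half-lives brings diminishing returns.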
2010-02-01
interdependencies, and then modifying plans according to updated projections. This is currently an immature area where further research is required. The ... crosscutting.html. [7] Zeigler, B.P. and Hammonds, P. (2007). "Modelling and Simulation-Based Data Engineering: Introducing Pragmatics and Ontologies for ..." ... the optimum benefit to be obtained and, while immature, ongoing research needs to be maintained. 20) Use of M&S to support complex operations needs
Modelling the operating history of turbine-generator units
NASA Astrophysics Data System (ADS)
Szczota, Mickael
Because of their ageing fleet, utility managers increasingly need tools to help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to forecast the degradation of its hydroelectric runners and to use that information to classify the generating units. This classification will indicate which generating units are most at risk of a major failure. Cracking linked to fatigue is a predominant degradation mode, and the loading sequence applied to the runner is a parameter that drives crack growth. The aim of this thesis is therefore to build a generator of synthetic loading sequences that are statistically equivalent to the observed history; the simulated sequences will be used as input to a life assessment model. We first describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modelling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then compare three kinds of models: discrete-time Markov chains, discrete-time semi-Markov chains and the moving block bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov chain is not suited to our case and that semi-Markov chains perform better when they include the non-stationarity. The final choice between semi-Markov chains and the moving block bootstrap is left to the user, but with a long-term vision we recommend semi-Markov chains for their flexibility. Keywords: stochastic models, model validation, reliability, semi-Markov chains, Markov chains, bootstrap
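A minimal sketch of the underlying idea (not Hydro-Quebec's model): a discrete-time Markov chain generating a synthetic operating sequence. The three states and the transition matrix P below are made-up illustrations.

```python
import numpy as np

# Hypothetical operating states: 0 = stopped, 1 = partial load, 2 = full load.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def simulate(P, n_steps, start=0, seed=None):
    """Draw a synthetic state sequence of length n_steps from the chain."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_steps - 1):
        # Next state is drawn from the row of P for the current state.
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

seq = simulate(P, 1000, seed=42)
```

A semi-Markov extension, as recommended above, would additionally draw a random holding time in each state instead of advancing one step at a time.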
2003-02-01
... the flow and noise in the diffuser of an industrial gas turbine engine. A steady RANS CFD calculation and experiments were used to identify the gross ... finally, the defence industry was restructuring, demanding that we review our relationship with them. (SYA) KN1-5 Ministers agreed that changes were
2001-12-01
product operator; Ucg = X body-axis velocity at the cg, Uvane = X body-axis velocity at the vane, Vcg = Y body-axis velocity at the cg, Vvane = Y body-axis ... (5)

Ucg = Vtrue cos(βtrue) cos(αtrue)
Vcg = Vtrue sin(βtrue)
Wcg = Vtrue cos(βtrue) sin(αtrue)

... from the definitions of these angles:

Vtrue = sqrt(Ucg² + Vcg² + Wcg²)
αtrue = tan⁻¹(Wcg / Ucg)
βtrue = sin⁻¹(Vcg / Vtrue)   (12)
1985-11-01
vortices with axes perpendicular to the main flow) shed by an airfoil oscillating under dynamic stall conditions. ... aerodynamic ... From this experimental analysis, an attempt at theoretical modelling of the nonlinear effects observed at ... "wall shear on an airfoil undergoing harmonic motion parallel or perpendicular to the undisturbed flow", EUROMECH
Modelisation of an unspecialized quadruped walking mammal.
Neveu, P; Villanova, J; Gasc, J P
2001-12-01
Kinematics and structural analyses were used as basic data to elaborate a dynamic quadruped model that may represent an unspecialized mammal. Hedgehogs were filmed on a treadmill with a cinefluorographic system, providing trajectories of skeletal elements during locomotion. Body parameters such as limb segment mass and length, and segment centres of mass, were measured from cadavers. These biological parameters were compiled to build a virtual quadruped robot. The robot's locomotor behaviour was compared with that of the actual hedgehog to improve the model and disclose the necessary changes. Apart from uses in robotics, the resulting model may help simulate the locomotion of extinct mammals.
Conductivity in the two-dimensional Hubbard model at weak coupling
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often considered the minimal model for the copper-oxide-based high-temperature superconductors (cuprates). On a square lattice, it exhibits the phases common to all cuprates: the antiferromagnetic phase, the superconducting phase and the so-called pseudogap phase. It has no exact solution, but several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized experimentally in the cuprates and are therefore good candidates for validating a theoretical model and better understanding the physics of these materials. This thesis is devoted to computing these properties for the 2D Hubbard model at weak to intermediate coupling. The method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the conductivity expression within the TPSC approach is presented. This expression contains the so-called vertex corrections, which account for correlations between quasiparticles. To make the numerical evaluation of these corrections tractable, algorithms using, among other tools, fast Fourier transforms and cubic splines are developed. Calculations are carried out for the square lattice with nearest-neighbour hopping around the antiferromagnetic quantum critical point. At dopings below the critical point, the optical conductivity shows a bump in the mid-infrared at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, we find insulating behaviour in the pseudogap when vertex corrections are neglected and metallic behaviour when they are included. 
Near the critical point, the resistivity is linear in T at low temperature and becomes progressively proportional to T² at high doping. Some results with longer-range hopping are also presented. Keywords: Hubbard, quantum critical point, conductivity, vertex corrections
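A generic sketch of the two numerical tools named above (cubic splines and fast Fourier transforms), not the TPSC vertex-correction code itself: a spline reconstructs a smooth function from coarse samples, and an FFT evaluates a circular convolution in O(N log N).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic-spline refinement of a coarsely sampled smooth function.
x = np.linspace(0.0, 2.0 * np.pi, 17)
spline = CubicSpline(x, np.sin(x))
x_fine = np.linspace(0.0, 2.0 * np.pi, 200)
err = np.max(np.abs(spline(x_fine) - np.sin(x_fine)))  # small interpolation error

# FFT-based circular convolution; convolving with a unit delay shifts the data.
a = np.array([1.0, 2.0, 3.0, 4.0])
delay = np.array([0.0, 1.0, 0.0, 0.0])
conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(delay)))
```

In correlation-function calculations, this FFT trick is what turns convolution sums over momenta and frequencies from O(N²) into O(N log N).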
Invariant recognition of 3-D objects and SONG correlation
NASA Astrophysics Data System (ADS)
Roy, Sebastien
This thesis proposes solutions to two problems in automatic pattern recognition: invariant recognition of three-dimensional objects from intensity images, and recognition robust to disjoint noise. A system combining angular scanning of images with a feature-space trajectory classifier achieves invariant recognition of three-dimensional objects; robustness to disjoint noise is obtained with the SONG correlation. We achieved recognition invariant to translation, rotation and scale changes of three-dimensional objects from segmented intensity images, using angular scanning and a feature-space trajectory classifier. To obtain translation invariance, the centre of the angular scan coincides with the geometric centre of the image. The angular scan produces a feature vector that is invariant to scale changes of the image and converts rotations about an axis parallel to the line of sight into translations of the signal. The feature-space trajectory classifier represents a rotation about an axis perpendicular to the line of sight as a curve in feature space. Classification is performed by measuring the distance from the feature vector of the image to be recognized to the stored trajectories. Our numerical results show a classification rate reaching 98% on an image bank of 5 military vehicles. The sliced orthogonal nonlinear generalized (SONG) correlation processes the gray levels of an image independently: it sums the linear correlations of the binary images sharing the same gray level. This correlation is equivalent to counting the number of pixels located at the same relative positions with the same intensities in two images. 
We present an opto-electronic implementation of the SONG correlation, based on the joint transform correlator. Numerical and optical experiments show that disjoint noise does not degrade the SONG correlation.
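The pixel-counting equivalence described above can be sketched in a few lines. This is a minimal illustration at zero shift only (the full SONG correlation would correlate each binary slice over all shifts); the tiny 2×2 images are made up.

```python
import numpy as np

def song_zero_shift(img1, img2):
    """Sum, over gray levels g, of matches between the binary slices
    (img == g); at zero shift this counts pixels that agree in both
    position and intensity."""
    total = 0
    for g in np.union1d(img1, img2):
        total += int(np.sum((img1 == g) & (img2 == g)))
    return total

a = np.array([[0, 1],
              [2, 1]])
b = np.array([[0, 1],
              [1, 1]])
# Pixels (0,0), (0,1) and (1,1) share position and gray level -> count is 3.
matches = song_zero_shift(a, b)
```

Because each gray level is treated independently, a noise region with disjoint (non-overlapping) gray levels contributes nothing to the sum, which is the robustness property claimed above.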
Scaling forecast models for wind turbulence and wind turbine power intermittency
NASA Astrophysics Data System (ADS)
Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy
2017-04-01
The intermittency of wind turbine power remains an important issue for the large-scale development of this renewable energy: power peaks injected into the electric grid complicate energy distribution management. A correct forecast of wind power in the short and medium term is therefore needed, given the high unpredictability of the intermittency phenomenon. We adopt a statistical approach through the analysis and characterization of stochastic fluctuations, within the theoretical framework of multifractal modelling of wind velocity fluctuations. We consider data from three wind turbines, two of which use direct-drive technology. These turbines produce energy in real operating conditions and allow us to test our power forecast models at different time horizons. Two forecast models were developed, based on two physical properties observed in the wind and power time series: scaling on the one hand, and intermittency of the wind power increments on the other. The first tool addresses intermittency through a multifractal lognormal fit of the power fluctuations. The second is based on an analogy between the scaling properties of the power and a fractional Brownian motion; indeed, long-term memory is found in both time series. Both models give encouraging results, reproducing the correct tendency of the signal over different time scales. These tools are first steps in the search for efficient forecasting approaches for grid adaptation to wind energy fluctuations.
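An illustrative scaling analysis of the kind the models above rely on (not the paper's code): estimating a Hurst-type exponent H from the second-order structure function S2(τ) = ⟨|x(t+τ) − x(t)|²⟩ ∝ τ^(2H). Here x is ordinary Brownian motion, for which H = 0.5; a wind power series would take its place.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100_000))  # Brownian walk, true H = 0.5

taus = np.array([1, 2, 4, 8, 16, 32, 64])
# Second-order structure function at each time lag tau.
s2 = np.array([np.mean((x[t:] - x[:-t]) ** 2) for t in taus])
# Slope of log S2 vs log tau equals 2H.
H = np.polyfit(np.log(taus), np.log(s2), 1)[0] / 2.0
```

H > 0.5 would indicate the persistent long-term memory reported above, which is what makes a fractional-Brownian-motion analogy useful for forecasting.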
NASA Astrophysics Data System (ADS)
Corbeil Therrien, Audrey
Positron emission tomography (PET) is a precious tool in preclinical research and medical diagnosis. This technique provides a quantitative image of specific metabolic functions through the detection of annihilation photons. Detecting these photons requires two components: a scintillator first converts the energy of the 511 keV photon into visible-spectrum photons, then a photodetector converts the light into an electrical signal. Recently, single-photon avalanche diodes (SPADs) arranged in arrays have attracted much interest for PET. These arrays form sensitive, robust and compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and of the readout electronics must be optimized to reach the best performance for PET. Optimizing the array quickly becomes difficult, because the various parameters interact in complex ways with the avalanche and noise generation processes. Moreover, readout electronics for SPAD arrays are still rudimentary, and it would be profitable to analyse different readout strategies. The most economical way to address these questions is to use a simulator to converge toward the best-performing configuration. This thesis presents the development of such a simulator. It models the behaviour of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources: thermal noise (dark counts), correlated afterpulsing and optical crosstalk. The simulator also makes it possible to test and compare new readout electronics approaches better suited to this type of detector. 
Ultimately, the simulator aims to quantify the impact of the photodetector parameters on the energy and timing resolutions and thereby optimize the performance of the SPAD array. For example, increasing the active area ratio improves performance, but only up to a point: other phenomena tied to the active area, such as thermal noise, then degrade the result, and the simulator lets us find the compromise between these two extremes. Simulations with the initial parameters show a detection efficiency of 16.7%, an energy resolution of 14.2% FWHM and a timing resolution of 0.478 ns FWHM. Finally, although aimed at PET, the proposed simulator can be adapted to other applications by changing the photon source and the performance targets. Keywords: photodetectors, single-photon avalanche diodes, semiconductors, positron emission tomography, simulation, modelling, single-photon detection, scintillators, quenching circuit, SPAD, SiPM, Geiger-mode avalanche photodiodes
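A toy sketch of the first noise source named above, with assumed parameters (not the simulator itself): thermal dark counts in a SPAD array modelled as Poisson-distributed events accumulated over an observation window.

```python
import numpy as np

dark_count_rate = 100e3      # dark counts per second per cell (assumption)
window = 1e-6                # observation window, s (assumption)
n_cells = 10_000             # number of cells in the array (assumption)

rng = np.random.default_rng(1)
# Each cell independently accumulates Poisson(rate * window) dark counts.
dark_counts = rng.poisson(dark_count_rate * window, size=n_cells)
mean_per_cell = dark_counts.mean()   # expectation: 0.1 count per window
```

This is also why a larger active area is a trade-off, as noted above: it raises photon detection efficiency and the dark-count rate together.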
Development of non-intrusive diagnostic techniques using optical tomography
NASA Astrophysics Data System (ADS)
Dubot, Fabien
Over the last two decades, optical diagnostic techniques have developed rapidly, both in industrial processes and in medical imaging. The appeal of these methods lies mainly in the fact that they are completely non-invasive, use radiation sources harmless to humans and the environment, and are relatively inexpensive and easy to implement compared with other imaging techniques. One of these techniques is Diffuse Optical Tomography (DOT). This three-dimensional imaging method characterizes the radiative properties of a semi-transparent medium from near-infrared optical measurements obtained with a set of sources and detectors located on the boundary of the probed domain. It relies on a forward model of light propagation in the medium, which provides the predictions, and on an algorithm minimizing a cost function combining predictions and measurements, which reconstructs the parameters of interest. In this work, the forward model is the diffuse approximation of the radiative transfer equation in the frequency domain, and the parameters of interest are the spatial distributions of the absorption and reduced scattering coefficients. This thesis is devoted to the development of a robust inverse method for solving the DOT problem in the frequency domain. To that end, the work is structured in three parts, which constitute the main axes of the thesis. First, a comparison of the damped Gauss-Newton and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms is proposed in the two-dimensional case. 
Two regularization methods are combined for each algorithm: mesh-based reduction of the control-space dimension together with Tikhonov penalization for the damped Gauss-Newton algorithm, and mesh-based regularization together with Sobolev gradients, uniform or spatially dependent, in the extraction of the cost-function gradient for the BFGS method. The numerical results indicate that BFGS outperforms damped Gauss-Newton in the quality of the reconstructions, the computation time and the ease of selecting the regularization parameter. Second, a study of the quasi-independence of the optimal Tikhonov penalization parameter with respect to the control-space dimension, in inverse problems estimating spatially dependent functions, is carried out. This study follows an observation made in the first part of this work, where the Tikhonov parameter determined by the L-curve method turned out to be independent of the control-space dimension in the under-determined case. This property is demonstrated theoretically and then verified numerically, first on a linear inverse heat conduction problem and then on the nonlinear inverse DOT problem. The numerical verification relies on determining an optimal Tikhonov parameter, defined as the one minimizing the discrepancies between targets and reconstructions. The theoretical demonstration relies on the Morozov discrepancy principle in the linear case, while in the nonlinear case it relies essentially on the hypothesis that the radiative functions to be reconstructed are normally distributed random variables. 
In conclusion, the thesis shows that the Tikhonov parameter can be determined using a parametrization of the control variables on a coarse mesh, thereby reducing computation time. Third, a wavelet-based multiscale inverse method coupled with the BFGS algorithm is developed. This method, which reformulates the original inverse problem as a sequence of inverse subproblems from the largest scale to the smallest using the wavelet transform, copes with the local-convergence property of the optimizer and with the many local minima of the cost function. Numerical results show that the proposed method is more stable with respect to the initial estimate of the radiative properties and yields more accurate final reconstructions than the ordinary BFGS algorithm, while requiring similar computation times. The results of this work are presented in this thesis as four articles: the first has been accepted in the International Journal of Thermal Sciences, the second in Inverse Problems in Science and Engineering, the third in the Journal of Computational and Applied Mathematics, and the fourth has been submitted to the Journal of Quantitative Spectroscopy & Radiative Transfer. Ten other articles have been published in peer-reviewed conference proceedings. These articles are available in pdf format on the t3e research chair's website (www.t3e.info).
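A generic closed-form sketch of Tikhonov regularization, min ‖Ax − b‖² + α‖x‖², on a linear toy problem; the thesis applies this regularization (with the L-curve choice of α) to the much harder nonlinear DOT problem. The matrix and noise level below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))          # toy forward operator
x_true = rng.standard_normal(20)           # "true" parameter field
b = A @ x_true + 0.01 * rng.standard_normal(50)  # noisy measurements

alpha = 1e-2
# Normal equations of the penalized least-squares problem:
# (A^T A + alpha I) x = A^T b
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ b)
error = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the nonlinear DOT setting there is no such closed form; the same penalty term is instead added to the cost function minimized by Gauss-Newton or BFGS.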
NASA Astrophysics Data System (ADS)
Communier, David
In the structural study of an aircraft wing, it is difficult to model faithfully the aerodynamic forces the wing experiences. To simplify the analysis, the theoretical maximum lift of the wing is usually distributed over its main spar or over its ribs. This distribution implies that the whole wing will be stronger than necessary, so the structure is not fully optimized. To overcome this problem, an aerodynamic lift distribution should be applied over the complete wing surface, yielding a much more reliable load distribution. To achieve this, the results of a software package computing the aerodynamic loads on the wing must be coupled with those of a software package used for its design and structural analysis. In this project, the pressure coefficients on the wing are computed with XFLR5, and the design and structural analysis are performed in CATIA V5. XFLR5 allows a fast analysis of a wing based on the analysis of its airfoils; it computes airfoil performance in the same way as XFOIL and offers three calculation methods for the wing: Lifting Line Theory (LLT), the Vortex Lattice Method (VLM) and 3D Panels. In our methodology, we use the 3D Panels method, whose validity was checked in a wind tunnel to confirm the XFLR5 calculations. For the design and finite element analysis of the structure, CATIA V5 is commonly used in the aerospace industry and allows the wing design steps to be automated. This thesis therefore describes a methodology for the aerostructural study of an aircraft wing.
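A hedged sketch of the coupling idea: turning panel pressure coefficients (such as those exported from an XFLR5 3D Panels analysis) into panel forces for a structural model. All numbers below are illustrative assumptions, not the thesis workflow.

```python
import numpy as np

rho, v = 1.225, 30.0                 # air density (kg/m^3), airspeed (m/s)
q = 0.5 * rho * v**2                 # dynamic pressure (Pa)

cp = np.array([-1.2, -0.8, -0.4, -0.1])      # panel pressure coefficients
area = np.array([0.05, 0.05, 0.05, 0.05])    # panel areas (m^2)

# Pressure difference on each panel: p - p_inf = q * cp; the minus sign
# makes suction (negative cp) an upward force on the upper surface.
panel_forces = -q * cp * area        # force per panel (N)
total_force = panel_forces.sum()     # resultant to transfer to the FEM model
```

Applying `panel_forces` panel-by-panel to the structural mesh, rather than lumping `total_force` on the spar, is exactly the refinement argued for above.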
Sustaining Tunisian SMEs' Competitiveness in the Knowledge Society
NASA Astrophysics Data System (ADS)
Del Vecchio, Pasquale; Elia, Gianluca; Secundo, Giustina
The paper aims to contribute to the debate on the knowledge and digital divide affecting countries' competitiveness in the knowledge society. A survey based on qualitative and quantitative data collection was performed to analyze the level of ICT and e-business adoption among Tunisian SMEs. The results show that increasing SME competitiveness requires investment in all components of intellectual capital: human capital (the knowledge, skills and abilities of people using ICTs), structural capital (supportive infrastructure such as buildings, software, processes, patents, trademarks and proprietary databases) and social capital (relations and collaboration inside and outside the company). To this end, the LINCET project ("Laboratoire d'Innovation Numerique pour la Competitivité de l'Entreprise Tunisienne") is finally proposed as a coherent initiative to foster the growth of all components of intellectual capital for the benefit of the competitiveness of Tunisian SMEs.
NASA Astrophysics Data System (ADS)
Bergeron, Alain
This research aims at the optical implementation of neural networks. Two different architectures are proposed. The first is an associative memory that associates an arbitrary output with any object while preserving information about its position. The second, a neural classifier for robotic control, identifies an input and sorts it into different categories, with an output compatible with standard digital systems. A modular approach is favoured for implementing these architectures, with the correlator as the basic module. Additional modules are introduced to carry out the neural operations properly. The first is an optoelectronic threshold, which implements the nonlinear function essential to neural networks; the second is an opto-digital encoder, useful for object classification. The problem of recording the memory is addressed using global iterative encoding.
NASA Astrophysics Data System (ADS)
Harvey, Jean-Philippe
In this work, we study the possibility of calculating and evaluating, with a high degree of precision, the Gibbs energy of complex multiphase equilibria in which chemical ordering is explicitly and simultaneously considered in the thermodynamic description of solid (short-range and long-range order) and liquid (short-range order) metallic phases. The cluster site approximation (CSA) and the cluster variation method (CVM) are implemented in a new technique for minimizing the Gibbs energy of multicomponent and multiphase systems, in order to describe the thermodynamic behaviour of metallic solid solutions showing strong chemical ordering. The modified quasichemical model in the pair approximation (MQMPA) is also implemented in the new minimization algorithm presented in this work, to describe the thermodynamic behaviour of metallic liquid solutions. The constrained minimization technique implemented here is a sequential quadratic programming method based on an exact Newton's method (i.e. the use of exact second derivatives in the determination of the Hessian of the objective function), combined with a line search to identify a direction of sufficient decrease of the merit function. The implementation of a new algorithm for the constrained minimization of the Gibbs energy is justified by the difficulty of identifying, in specific cases, the correct multiphase assemblage of a system in which the thermodynamic behaviour of the equilibrium phases is described by one of the models quoted above using the FactSage software (e.g. solid_CSA+liquid_MQMPA; solid1_CSA+solid2_CSA). 
After a rigorous validation of the constrained Gibbs energy minimization algorithm against several assessed binary and ternary systems from the literature, the CVM and CSA models used to describe the energetic behaviour of metallic solid solutions in systems with key industrial applications, such as the Cu-Zr and Al-Zr systems, are parameterized using fully consistent thermodynamic and structural data generated by a Monte Carlo (MC) simulator also implemented in the framework of this project. In this MC simulator, the modified embedded atom model in the second-nearest-neighbour formalism (MEAM-2NN) is used to describe the cohesive energy of each studied structure. A new Al-Zr MEAM-2NN interatomic potential, needed to evaluate the cohesive energy of the condensed phases of this system, is presented in this work. The thermodynamic integration (TI) method implemented in the MC simulator allows the evaluation of the absolute Gibbs energy of the considered solid or liquid structures. The original implementation of the TI method allowed us to evaluate theoretically, for the first time, all the thermodynamic mixing contributions (i.e. mixing enthalpy and mixing entropy) of metallic liquids (Cu-Zr and Al-Zr) and of a solid solution (the face-centered cubic (FCC) Al-Zr solid solution) described by the MEAM-2NN. Thermodynamic and structural data obtained from MC and molecular dynamics simulations are then used to parameterize the CVM for the Al-Zr FCC solid solution and the MQMPA for the Al-Zr and Cu-Zr liquid phases, respectively. The extended thermodynamic study of these systems allows the introduction of a new type of configuration-dependent excess parameter in the definition of the thermodynamic function of solid solutions described by the CVM or the CSA. These parameters greatly improve the precision of these thermodynamic models, based on experimental evidence found in the literature.
A new parameterization approach of the MQMPA model of metallic liquid solutions is presented throughout this work. In this new approach, calculated pair fractions obtained from MC/MD simulations are taken into account as well as configuration-independent volumetric relaxation effects (regular like excess parameters) in order to parameterize precisely the Gibbs energy function of metallic melts. The generation of a complete set of fully consistent thermodynamic, physical and structural data for solid, liquid, and stoichiometric compounds and the subsequent parameterization of their respective thermodynamic model lead to the first description of the complete Al-Zr phase diagram in the range of composition [0 ≤ XZr ≤ 5 / 9] based on theoretical and fully consistent thermodynamic properties. MC and MD simulations are performed for the Al-Zr system to define for the first time the precise thermodynamic behaviour of the amorphous phase for its entire range of composition. Finally, all the thermodynamic models for the liquid phase, the FCC solid solution and the amorphous phase are used to define conditions based on thermodynamic and volumetric considerations that favor the amorphization of Al-Zr alloys.
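A toy illustration of constrained Gibbs-energy minimization, here for an ideal binary solution with g/RT = x ln x + (1−x) ln(1−x); the algorithm described above is a custom SQP with exact Hessians and the CSA/CVM/MQMPA models, far beyond this sketch, and the off-the-shelf SLSQP solver is used only as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize

def gibbs_rt(n):
    """Ideal mixing Gibbs energy (in units of RT) for mole numbers n."""
    x = n / n.sum()
    return float(np.sum(n.sum() * x * np.log(x)))

# Minimize g subject to a fixed total of one mole, n1 + n2 = 1.
res = minimize(gibbs_rt, x0=np.array([0.3, 0.7]), method="SLSQP",
               bounds=[(1e-9, 1.0), (1e-9, 1.0)],
               constraints={"type": "eq", "fun": lambda n: n.sum() - 1.0})
# With only ideal mixing entropy, the minimum is the equimolar mixture,
# g/RT = -ln 2.
```

Real multiphase equilibria add one set of mole-number variables per candidate phase plus element-balance constraints, which is where identifying the correct phase assemblage becomes hard.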
Charge Transport in Carbon Nanotubes-Polymer Composite Photovoltaic Cells
Ltaief, Adnen; Bouazizi, Abdelaziz; Davenas, Joel
2009-01-01
We investigate the dark and illuminated current density-voltage (J/V) characteristics of poly(2-methoxy-5-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (MEH-PPV)/single-walled carbon nanotube (SWNT) composite photovoltaic cells. Using an exponential band-tail model, the conduction mechanism is analysed for polymer-only and composite devices in terms of space-charge-limited current (SCLC) conduction, and the power parameters and threshold voltages are determined. Devices made from MEH-PPV:SWNTs (1:1) composites showed a photoresponse with an open-circuit voltage Voc of 0.4 V, a short-circuit current density Jsc of 1 µA/cm² and a fill factor FF of 43%. We modelled the organic photovoltaic devices with an equivalent circuit, from which we calculated the series and shunt resistances.
LLR data analysis and impact on lunar dynamics from recent developments at OCA LLR Station
NASA Astrophysics Data System (ADS)
Viswanathan, Vishnu; Fienga, Agnes; Courde, Clement; Torre, Jean-Marie; Exertier, Pierre; Samain, Etienne; Feraudy, Dominique; Albanese, Dominique; Aimar, Mourad; Mariey, Hervé; Viot, Hervé; Martinot-Lagarde, Gregoire
2016-04-01
Since late 2014, the OCA LLR station has been able to range at infrared wavelength (1064 nm). IR ranging improves both the temporal and spatial coverage of the LLR observations. IR detection also permits a densification of normal points, including on the L1 and L2 retroreflectors, thanks to a better signal-to-noise ratio. This contributes to better modelling of the lunar libration. The hypothesis of lunar dust and environmental effects, suggested by the chromatic behavior noticed on returns from the L2 retroreflector, is discussed. In addition, data analysis shows that accounting for retroreflector tilt and using a calibration profile in the normal-point deduction algorithm improves the precision of normal points, thereby impacting lunar dynamical models and interior physics.
NASA Astrophysics Data System (ADS)
Abou Chakra, Charbel; Somma, Janine; Elali, Taha; Drapeau, Laurent
2017-04-01
Climate change and its negative impact on water resources are well described. For countries like Lebanon, facing major population growth and already decreasing precipitation, effective water resources management is crucial. Continuous and systematic monitoring over long periods of time is therefore an important activity for investigating drought-risk scenarios for the Lebanese territory. Snow cover on the Lebanese mountains is the most important water reserve. Consequently, systematic observation of snow cover dynamics plays a major role in supporting hydrologic research with accurate data on snow cover volumes over the melting season. Over the last 20 years, few studies have addressed the Lebanese snow cover; they focused on estimating the snow-covered area using remote sensing and ground measurements, without obtaining accurate maps of the sampled locations. Indeed, estimating both snow cover area and volume is difficult because of the very high variability of snow accumulation and the topographic heterogeneity of the slopes of the Lebanese mountain chains. Therefore, measuring the snow cover relief in its three-dimensional aspect and computing its Digital Elevation Model is essential for estimating snow cover volume. Despite the need to cover the whole Lebanese territory, we favoured an experimental terrestrial topographic site approach, given the cost of high-resolution satellite imagery, its limited accessibility and its acquisition restrictions; modelling the snow cover at the national scale is also very challenging. We therefore selected a representative witness sinkhole located at Ouyoun el Siman for systematic and continuous observations based on a topographic approach using a total station. After four years of continuous observations, we established the relation between snow melt rate, date of total melting and the discharge of neighbouring springs.
Consequently, we are able to forecast, early in the season, the dates of total snowmelt and of low water flows in the springs, which are essentially fed by snowmelt water. Simulations were run to predict the snow level between two sampled dates; they provided promising results for extrapolation to the national scale.
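The DEM-based volume estimation described above reduces, per grid cell, to integrating the positive difference between the snow-surface and bare-ground elevation models. A minimal sketch (the grids and the cell size are illustrative):

```python
def snow_volume(ground_dem, snow_dem, cell_area):
    """Snow-pack volume as the per-cell positive difference between the
    snow-surface DEM and the bare-ground DEM, times the cell area.
    Both DEMs are nested lists of elevations on the same grid."""
    total = 0.0
    for g_row, s_row in zip(ground_dem, snow_dem):
        for g, s in zip(g_row, s_row):
            total += max(s - g, 0.0) * cell_area  # ignore bare cells
    return total
```

Differencing two total-station surveys of the same sinkhole between dates gives the melted volume the same way.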
Implementation en VHDl/FPGA d'afficheur video numerique (AVN) pour des applications aerospatiales
NASA Astrophysics Data System (ADS)
Pelletier, Sebastien
The goal of this project is to develop a video controller in the VHDL language to replace the specialized component currently used at CMC Electronique. An in-depth survey of trends and current practice in the field of video controllers was carried out in order to define the system specifications. The techniques for storing and displaying images are explained in order to carry the project to completion. The new controller was developed on an electronic platform comprising an FPGA, a VGA port and memory for storing the data. It is programmable and occupies little space in an FPGA, which allows it to fit into any new low-cost mass-market technology. Because it is modular and configurable, it adapts readily to any display resolution. In the short term, this project will allow improved control of the specifications and quality standards tied to avionics constraints.
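A routine task for such a controller is deriving the pixel clock for each display resolution from the total (active plus blanking) pixel counts and the refresh rate. A small sketch of that calculation, using the standard 640×480 VGA timing as an example:

```python
def pixel_clock_mhz(h_active, h_front, h_sync, h_back,
                    v_active, v_front, v_sync, v_back, refresh_hz):
    """Pixel clock (MHz) needed for a display mode: total pixels per
    line times total lines per frame times the refresh rate."""
    h_total = h_active + h_front + h_sync + h_back
    v_total = v_active + v_front + v_sync + v_back
    return h_total * v_total * refresh_hz / 1e6
```

For 640×480 at 60 Hz (totals 800×525) this gives 25.2 MHz, close to the 25.175 MHz nominal VGA clock.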
A Simplified and Reliable Damage Method for the Prediction of the Composites Pieces
NASA Astrophysics Data System (ADS)
Viale, R.; Coquillard, M.; Seytre, C.
2012-07-01
Structural engineers are often faced with test results on composite structures that are considerably tougher than predicted. To reduce this frequent gap, a survey of several extensive synthesis works on prediction methods and failure criteria was carried out. This inquiry dealt with the plane stress state only. All classical methods have strong and weak points with respect to practicality and reliability. The main conclusion is that, in the plane stress case, the best usual industrial methods give rather similar predictions; but in general they do not explain the often large discrepancies with respect to the tests, mainly in cases of strong stress gradients or of biaxial laminate loadings. It seems that only methods that consider the complexity of composite damage (the so-called physical methods, or Continuum Damage Mechanics, CDM) bring a clear improvement over the usual methods. The only drawback of these methods is their relative intricacy, especially under industrial time pressure. A method offering an approximate but simplified representation of the CDM phenomenology is presented. It was compared to tests and to other methods: it brings a fair improvement in the correlation with tests over the usual industrial methods, and it gives results very similar to the painstaking CDM methods and very close to the test results. Several examples are provided. In addition, this method is really thrifty with respect to material characterization as well as modelling and computation effort.
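Two of the classical plane-stress criteria the survey compares can be stated in a few lines. This is a minimal reference sketch assuming a single strength value per direction; real laminate checks distinguish tension and compression strengths.

```python
def tsai_hill_index(s1, s2, t12, X, Y, S):
    """Tsai-Hill failure index for a unidirectional ply in plane stress
    (s1: fibre-direction stress, s2: transverse, t12: in-plane shear;
    X, Y, S: corresponding strengths). Failure is predicted at index 1."""
    return (s1 / X) ** 2 - (s1 * s2) / X ** 2 + (s2 / Y) ** 2 + (t12 / S) ** 2

def max_stress_ok(s1, s2, t12, X, Y, S):
    """Maximum-stress criterion: each stress component is checked
    independently against its own strength."""
    return abs(s1) < X and abs(s2) < Y and abs(t12) < S
```

The interactive (Tsai-Hill) and non-interactive (maximum-stress) criteria diverge precisely under the biaxial loadings the abstract mentions.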
Contribution to study of interfaces instabilities in plane, cylindrical and spherical geometry
NASA Astrophysics Data System (ADS)
Toque, Nathalie
1996-12-01
This thesis proposes several hydrodynamic-instability experiments which are studied numerically and theoretically. The experiments are in plane and cylindrical geometry. Their X-ray radiographs show the evolution of an interface between two media crossed by a detonation wave. These materials are initially solid; under the shock wave they become liquid or remain between the two phases, solid and liquid. The numerical study aims to simulate, with the codes EAD and Ouranos, the interface instabilities that appear in the experiments. The experimental radiographs and the numerical pictures are in fairly good agreement. The theoretical study proposes modelling a spatio-temporal part of the experiments to obtain the quantitative growth of perturbations at the interfaces and in the flows. The models are linear, in plane, cylindrical and spherical geometry. They precede the upcoming study of the transition between linear and nonlinear growth of instabilities in multifluid flows crossed by shock waves.
NASA Astrophysics Data System (ADS)
Privas, E.; Archier, P.; Bernard, D.; De Saint Jean, C.; Destouche, C.; Leconte, P.; Noguère, G.; Peneliau, Y.; Capote, R.
2016-02-01
A new IAEA Coordinated Research Project (CRP) aims to test, validate and improve the IRDF library. Among the isotopes of interest, modelling the 238U capture and fission cross sections is a challenging task. A new description of neutron-induced reactions on 238U in the fast energy range is in progress within the framework of an IAEA evaluation consortium. The Nuclear Data group of Cadarache participates in this effort using the 238U spectral indices measurements and Post Irradiated Experiments (PIE) carried out in the fast reactors MASURCA (CEA Cadarache) and PHENIX (CEA Marcoule). Such a collection of experimental results provides reliable integral information on the (n,γ) and (n,f) cross sections. This paper presents the Integral Data Assimilation (IDA) technique of the CONRAD code used to propagate the uncertainties of the integral data onto the 238U cross sections of interest for dosimetry applications.
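An integral data assimilation step of the kind described above can be sketched as the standard generalised-least-squares (Bayesian) update used by adjustment codes. This is a generic textbook formulation, not the actual CONRAD implementation:

```python
import numpy as np

def gls_update(x, C, G, y, V):
    """One generalised-least-squares adjustment step: prior parameters x
    with covariance C, integral measurements y with covariance V, and
    sensitivity matrix G (the linearised model y ~ G x). Returns the
    posterior parameters and their reduced covariance."""
    S = G @ C @ G.T + V                  # covariance of the residual
    K = np.linalg.solve(S, G @ C).T      # gain C G^T S^-1 (S symmetric)
    x_post = x + K @ (y - G @ x)         # pull parameters toward the data
    C_post = C - K @ G @ C               # uncertainty reduction
    return x_post, C_post
```

The posterior covariance shrinks wherever the integral experiments are sensitive, which is how the spectral-index measurements constrain the (n,γ) and (n,f) cross sections.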
Methodes de caracterisation des proprietes thermomecaniques d'un acier martensitique =
NASA Astrophysics Data System (ADS)
Ausseil, Lucas
The goal of this study is to develop methods for measuring the thermomechanical properties of a martensitic steel during rapid heating. These data feed existing finite-element models with experimental values. AISI 4340 steel is used for this purpose; this steel, notably used in gear wheels, has very attractive mechanical properties, which can be modified through heat treatment. The Gleeble 3800 thermomechanical simulator is used; in principle, it can reproduce all the conditions encountered in manufacturing processes. The dilatometry tests carried out in this project yield the exact temperatures of the austenitic and martensitic phase transformations. Tensile tests were also used to determine the yield strength of the material in the austenitic range from 850 °C to 1100 °C. The effect of deformation on the transformation start temperature is shown qualitatively. A numerical simulation is also carried out to understand the phenomena occurring during the tests.
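Extracting transformation temperatures from dilatometry data, as described above, amounts to detecting where the strain-temperature curve departs from the linear thermal-expansion baseline. A simplified sketch; the baseline length and deviation threshold are illustrative choices, not the study's procedure:

```python
def transformation_onset(temps, strains, fit_n=10, tol=1e-4):
    """Estimate the onset temperature of a phase transformation from a
    dilatometry curve: fit a straight line to the first fit_n points
    (pure thermal expansion) and return the first temperature at which
    the measured strain deviates from that line by more than tol."""
    ts, ss = temps[:fit_n], strains[:fit_n]
    n = float(fit_n)
    st, sy = sum(ts), sum(ss)
    stt = sum(t * t for t in ts)
    sty = sum(t * s for t, s in zip(ts, ss))
    slope = (n * sty - st * sy) / (n * stt - st * st)  # least squares
    intercept = (sy - slope * st) / n
    for t, s in zip(temps, strains):
        if abs(s - (slope * t + intercept)) > tol:
            return t
    return None  # no transformation detected in the scanned range
```

On heating, the austenite transformation shows up as a contraction relative to the baseline; the same test applied to the cooling branch flags the martensite start.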
Modelisation de l'architecture des forets pour ameliorer la teledetection des attributs forestiers
NASA Astrophysics Data System (ADS)
Cote, Jean-Francois
The quality of indirect measurements of canopy structure, from in situ and satellite remote sensing, rests on knowledge of vegetation canopy architecture. Technological advances in ground-based, airborne and satellite remote sensing can now significantly improve the effectiveness of measurement programs on forest resources. The structure of a vegetation canopy describes the position, orientation, size and shape of the canopy elements. The complexity of the canopy in forest environments greatly limits our ability to characterize forest structural attributes. Architectural models have been developed to help interpret canopy structural measurements obtained by remote sensing. Recently, terrestrial LiDAR systems, or TLiDAR (Terrestrial Light Detection and Ranging), have been used to gather information on the structure of individual trees or forest stands. TLiDAR allows the extraction of 3D structural information under the canopy at the centimetre scale. The methodology proposed in my Ph.D. thesis is a strategy to overcome the weakness in the structural sampling of vegetation cover. The main objective of the Ph.D. is to develop an architectural model of the vegetation canopy, called L-Architect (LiDAR data to vegetation Architecture), focused on the ability to document forest sites and to derive information on canopy structure from remote sensing tools. Specifically, L-Architect reconstructs the architecture of individual conifer trees from TLiDAR data. Quantitative evaluation of L-Architect consisted of investigating (i) the structural consistency of the reconstructed trees and (ii) their radiative coherence, by including the reconstructed trees in a 3D radiative transfer model. A methodology was then developed to quasi-automatically reconstruct the structure of individual trees with an optimization algorithm using TLiDAR data and allometric relationships.
L-Architect thus provides an explicit link between the TLiDAR range measurements and the structural attributes of individual trees. L-Architect was finally applied to model the architecture of the forest canopy for a better characterization of vertical and horizontal structure with airborne LiDAR data. This project provides a means of answering requests for detailed canopy architectural data, which are difficult to obtain, and of reproducing a variety of forest covers. Given the importance of architectural models, L-Architect makes a significant contribution to improving the capacity for parameter inversion of vegetation cover in optical and lidar remote sensing. Keywords: architectural modelling, terrestrial lidar, forest canopy, structural parameters, remote sensing.
NASA Astrophysics Data System (ADS)
Mougenot, Bernard
2016-04-01
The Mediterranean region is affected by water scarcity. Some countries, such as Tunisia, have reached the limit of 550 m3/year/capita owing to overexploitation of scarce water resources for irrigation, domestic uses and industry. Many programs aim to evaluate strategies for improving water consumption at the regional level. In central Tunisia, on the Merguellil catchment, we develop integrated water-resources models based on social investigations, ground observations and remote sensing data. The main objective is to close the water budget at the regional level and to estimate irrigation and water pumping, so as to test scenarios with end users. Our work benefits from French, bilateral and European projects (ANR, MISTRALS/SICMed, FP6, FP7, GMES/GEOLAND-ESA) and from network projects such as JECAM and AERONET, for which the Merguellil site is a reference. This site has specific characteristics, associating irrigated and rainfed crops and mixing cereals, market gardening and orchards, and will be proposed as a new environmental observing system connected to the OMERE, TENSIFT and OSR systems in Tunisia, Morocco and France respectively. We show here an original and large set of ground and remote sensing data, mainly acquired from 2008 to the present, to be used for calibration/validation of water-budget processes and integrated models for the present situation and for scenarios: - Ground data: meteorological stations; water budget at the local scale: flux tower, soil fluxes, soil and surface temperature, soil moisture, drainage, flow, water level in lakes, aquifer; vegetation parameters on selected fields each month (LAI, height, biomass, yield); land cover three times per year; bare-soil roughness; irrigation and pumping estimates; soil texture. - Remote sensing data: products from multiple platforms (MODIS, SPOT, LANDSAT, ASTER, PLEIADES, ASAR, COSMO-SkyMed, TerraSAR-X…), multiple wavelengths (solar, microwave and thermal) and multiple resolutions (0.5 m to 1 km).
Ground observations are used (1) to calibrate soil-vegetation-atmosphere models at the field scale on different compartments, for irrigated and rainfed land, during a limited time (seasons or a set of dry and wet years), and (2) to calibrate and validate, in particular, evapotranspiration derived from multi-wavelength satellite data at the watershed level in relationship with the aquifer conditions: pumping and recharge rates. We will point out some examples.
Prediction du profil de durete de l'acier AISI 4340 traite thermiquement au laser
NASA Astrophysics Data System (ADS)
Maamri, Ilyes
Surface heat treatments are processes that aim to give the core and the surface of mechanical parts different properties. They improve wear and fatigue resistance by hardening critical surface regions through short, localized heat input. Among the processes that stand out for their surface power density, laser surface heat treatment offers fast, localized and precise thermal cycles while limiting the risk of unwanted distortion. The mechanical properties of the hardened zone obtained with this process depend on the physicochemical properties of the treated material and on several process parameters. To exploit the possibilities of this process adequately, strategies must be developed to control and adjust the parameters so as to produce the desired characteristics of the hardened surface accurately, without resorting to the classically long and costly trial-and-error process. The objective of this project is therefore to develop models to predict the hardness profile in the heat treatment of AISI 4340 steel parts. To understand the behaviour of the process and evaluate the effects of the various parameters on treatment quality, a sensitivity study was conducted, based on a structured experimental design combined with proven statistical analysis techniques. The results of this study identified the most relevant variables to use for modelling. Following this analysis, and with a view to building a first model, two modelling techniques were considered: multiple regression and neural networks. Both techniques led to models of acceptable quality, with an accuracy of about 90%.
To improve the performance of the neural-network models, two new approaches based on the geometric characterization of the hardness profile were considered. Unlike the first models, which predict the hardness profile from the process parameters alone, the new models combine the same parameters with geometric attributes of the hardness profile to reflect treatment quality. The resulting models show that this strategy leads to very promising results.
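The multiple-regression route mentioned in the abstract can be sketched as an ordinary least-squares fit of a hardened-case response against process parameters. The variables and coefficients below are illustrative placeholders, not those identified in the study:

```python
import numpy as np

def fit_depth_model(powers, speeds, depths):
    """Fit a linear model depth ~ a*power + b*speed + c by ordinary
    least squares. powers/speeds/depths are 1-D arrays of observations
    from the experimental design; returns the coefficients (a, b, c)."""
    A = np.column_stack([powers, speeds, np.ones(len(powers))])
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coeffs
```

In practice interaction and quadratic terms are added as extra columns of the design matrix, which is how a structured experimental design feeds directly into the regression.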
Codigestion of solid wastes: a review of its uses and perspectives including modeling.
Mata-Alvarez, Joan; Dosta, Joan; Macé, Sandra; Astals, Sergi
2011-06-01
The last two years have witnessed a dramatic increase in the number of papers published on the subject of codigestion, highlighting the relevance of this topic within anaerobic digestion research. Consequently, it seems appropriate to undertake a review of codigestion practices starting from the late 1970s, when the first papers related to this concept were published, and continuing to the present day, demonstrating the exponential growth in the interest shown in this approach in recent years. Following a general analysis of the situation, state-of-the-art codigestion is described, focusing on the two most important areas as regards publication: codigestion involving sewage sludge and the organic fraction of municipal solid waste (including a review of the secondary advantages for wastewater treatment plants related to biological nutrient removal), and codigestion in the agricultural sector, that is, including agricultural and farm wastes as well as energy crops. Within these areas, a large number of oversized digesters appear which can be used to codigest other substrates, resulting in economic and environmental advantages. Although the situation may be changing, there is still a need for good examples on an industrial scale, particularly with regard to wastewater treatment plants, in order to extend this beneficial practice. In the last section, a detailed analysis of papers addressing the important aspect of modelling is included. This analysis covers the first codigestion models to be developed as well as recent applications of the standardised anaerobic digestion model ADM1 to codigestion. (This review includes studies ranging from laboratory to industrial scale.)
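A simple class of codigestion models expresses the methane yield of a mixture as the weighted sum of first-order contributions from each substrate. This is a minimal sketch of that idea only; ADM1 itself is a far more detailed biochemical model, and the parameter values below are invented:

```python
import math

def codigestion_methane(t, fractions, b0, k):
    """Cumulative methane yield of a substrate mixture at time t under
    first-order kinetics: each substrate i contributes its VS fraction
    fractions[i], ultimate yield b0[i] and first-order rate k[i]."""
    return sum(f * b * (1.0 - math.exp(-kk * t))
               for f, b, kk in zip(fractions, b0, k))
```

As t grows, the yield tends to the VS-weighted average of the ultimate yields, which is the usual first check of a codigestion blend.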
Realisation et Applications D'un Laser a Fibre a L'erbium Monofrequence
NASA Astrophysics Data System (ADS)
Larose, Robert
The incorporation of rare-earth ions into the glass matrix of an optical fibre has enabled the emergence of all-fibre amplifying components. The goal of this thesis is, on the one hand, to analyse and model such a device and, on the other, to build and characterize a fibre amplifier and a fibre oscillator. Using a custom-made, highly erbium-doped fibre, a tunable fibre laser is built that operates both in a multi-longitudinal-mode regime with a linewidth of 1.5 GHz and as a single-frequency source with a linewidth of 70 kHz. The laser is then used to characterize a Bragg grating written by photosensitivity in an optical fibre. The tuning technique also allows the laser to be locked to the bottom of an acetylene resonance; the laser then tracks the line centre with an error of less than 1 MHz, correcting the mechanical drifts of the cavity.
Modelisation des emissions de particules microniques et nanometriques en usinage
NASA Astrophysics Data System (ADS)
Khettabi, Riad
Machining parts emits particles of microscopic and nanometric sizes that can be hazardous to health. The goal of this work is to study these particle emissions for the purpose of prevention and reduction at the source. The approach is both experimental and theoretical, at the microscopic and macroscopic scales. The work begins with tests to determine the influence of the material, the tool and the machining parameters on particle emissions. A new parameter characterizing the emissions, named the Dust Unit, is then developed and a predictive model is proposed. This model is based on a new hybrid theory that combines energy, tribology and plastic-deformation approaches and includes tool geometry, material properties, cutting conditions and chip segmentation. It was validated in turning on four materials: Al6061-T6, AISI 1018, AISI 4140 and grey cast iron.
Regard epistemique sur une evolution conceptuelle en physique au secondaire
NASA Astrophysics Data System (ADS)
Potvin, Patrice
The thesis, which is in continuity with Legendre's (1993) work, deals with qualitative understanding of physics notions at the secondary level. It attempts to identify and to label, in the verbalizations of 12- to 16-year-old students, the tendencies that guide their cognitive itineraries through the exploration of problem situations. The working hypotheses concern modelling, conceptions and p-prims. These last objects are seen, in DiSessa's epistemological perspective, as a type of habit that influences the determination of links between the parameters of a problem; in other words, they coordinate logically and mathematically. The methodology is based on explicitation interviews, a type of interview that elicits verbalizations involving an "intuitive sense" of mechanics. Twenty students were invited to share their evocations as they explored the logic of a computerized microworld. This microworld was programmed with the "Interactive Physics(TM)" software and is made of five different situations involving speed, acceleration, mass, force and inertia, presented to the students from the least to the most complex. An analysis of the verbalizations of five students shows the existence of elements that play a role in modelling and in the qualitative construction of comprehension, as well as in its qualitative/quantitative articulation. Results indicate the presence of easily discernible coordinative habits. P-prims appear to play an important part in the construction of models and in the determination of links between the variables of a problem. The analysis of the results shows that conceptions are not such important pieces in comprehension; as such, they seem phenotypic. The analysis also reveals the difficulty of properly understanding the inverse relation (1/x) and its asymptotic nature.
The p-prim analysis also establishes the possibility of analyzing not only efficient and inefficient intuitions, but also the cognitive itineraries of students working to construct the logic of the movement of a "ball" as a whole. The implications of the thesis are, among others, practical: it becomes possible to imagine sequences of learning and teaching physics that are based on the consideration of p-prims despite the implicit nature of these objects. This is a truly constructivist practice which establishes bridges between novice and expert knowledge, because there are p-prims in both. The thesis thus acknowledges a perspective of learning inscribed in "continuity". It also proposes a fertile theoretical ground for the comprehension of physics.
Algorithmes de couplage RANS et ecoulement potentiel
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the chosen solution must satisfy numerous criteria in many fields, such as structures, aerodynamics, stability and control, performance and safety, while meeting strict schedules and minimizing costs. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimization environments are being developed by the various aerospace companies. Different methods involving different levels of modelling are needed for the different phases of project development. During the definition and preliminary-design phases, fast methods are needed to study the candidates efficiently. Developing methods that improve the accuracy of existing ones while keeping computational cost low yields a higher level of fidelity in the early phases of a project and thus greatly reduces the associated risks. In aerodynamics, the development of viscous/inviscid coupling algorithms upgrades linear inviscid computational methods into nonlinear methods that account for viscous effects. These methods make it possible to characterize the viscous flow over configurations and to predict, among other things, stall mechanisms and the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential-flow method and two-dimensional viscous section data. Existing methods are implemented and their limits identified. An original method is then developed and validated.
Results on an elliptical wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared to higher-fidelity data on configurations from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration was used to study the potential-flow method's modelling of high-lift surfaces, showing that camber can be taken into account in the viscous data alone.
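The essence of viscous/inviscid coupling can be illustrated on a single wing section: the 3D potential model supplies an induced angle of attack, the 2D viscous polar supplies cl at the effective angle, and the two are iterated to consistency. A minimal fixed-point sketch, not the thesis algorithm; the elliptic-loading induced-angle formula and the relaxation factor are assumptions:

```python
import math

def couple_section(alpha_geo, cl_viscous, aspect_ratio, e=0.9,
                   relax=0.3, tol=1e-8, max_iter=500):
    """Fixed-point viscous/inviscid coupling for one section: the 3D
    model gives the induced angle alpha_i = CL/(pi*e*AR), the viscous
    polar gives cl at alpha_eff = alpha_geo - alpha_i; iterate with
    under-relaxation until the two are consistent."""
    cl = 0.0
    for _ in range(max_iter):
        alpha_i = cl / (math.pi * e * aspect_ratio)
        cl_new = cl_viscous(alpha_geo - alpha_i)   # 2-D viscous lookup
        if abs(cl_new - cl) < tol:
            return cl_new
        cl += relax * (cl_new - cl)                # under-relaxation
    return cl
```

With a nonlinear (stalling) polar the same loop captures the lift rollover, which is why under-relaxation matters: the plain fixed point diverges near maximum lift.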
Modelling a radiology department service using a VDL integrated approach.
Guglielmino, Maria Gabriella; Celano, Giovanni; Costa, Antonio; Fichera, Sergio
2009-01-01
The healthcare industry is facing several challenges, such as cost reduction and quality improvement of the provided services. Engineering studies can be very useful in supporting organizational and management processes. Healthcare service efficiency depends on strong collaboration between clinical and engineering experts, especially when it comes to analyzing the system and its constraints in detail and, subsequently, to deciding on the reengineering of some key activities. The purpose of this paper is to propose a case study showing how a mix of representation tools allows the manager of a radiology department to solve some human and technological resource reorganization issues arising from the introduction of a new technology and a new portfolio of services. In order to simulate the activities within the radiology department and examine the relationship between human and technological resources, different visual diagrammatic language (VDL) techniques were implemented to gain knowledge of the heterogeneous factors related to healthcare service delivery. In particular, flow charts, IDEF0 diagrams and Petri nets were successfully integrated with each other as modelling tools. The simulation study performed through the application of the aforementioned VDL techniques suggests reorganizing the nurse activities within the radiology department. The reorganization of a healthcare service, and in particular of a radiology department, by means of combined flow charts, IDEF0 diagrams and Petri nets is a poorly investigated topic in the literature. This paper demonstrates how flow charts and IDEF0 can help people working within the department understand the weak points of their organization, and how they constitute an efficient base of knowledge for the implementation of a Petri net aimed at improving departmental performance.
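A place/transition Petri net of the kind used in such studies reduces to markings (token counts on places) and transition firings. A minimal sketch with an illustrative radiology-flavoured transition; the place names are invented, not taken from the paper:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds at least
    the required number of tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire one transition: consume the pre-condition tokens and
    produce the post-condition tokens, returning the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

Replaying sequences of firings from an initial marking is the discrete-event simulation that exposes resource bottlenecks, e.g. a "nurse_free" place that is empty most of the time.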
Étude de la réponse photoacoustique d'objets massifs en 3D
NASA Astrophysics Data System (ADS)
Séverac, H.; Mousseigne, M.; Franceschi, J. L.
1996-11-01
In sectors where reliability is of capital importance, such as microelectronics or materials physics, it is particularly attractive to obtain information on material behaviour without resorting to a destructive method (chemical analyses or other mechanical tests). The non-destructive testing method presented here is based on the generation of waves by the impact of a laser beam focused on the surface of a sample, without reaching the ablation regime. The study of the propagation of the various waves in three-dimensional space provides quantitative measurements for analysing the response of the materials used. Modelling the thermoelastic phenomena allowed a rigorous analytical approach and gave rise to simulation software written in Turbo Pascal for more general studies.
NASA Astrophysics Data System (ADS)
Gemme, Frederic
The aim of the present research project is to increase the amount of fundamental knowledge regarding the process by getting a better understanding of the physical phenomena involved in friction stir welding (FSW). Such knowledge is required to improve the process in the context of industrial applications. In order to do so, the first part of the project is dedicated to a theoretical study of the process, while the microstructure and the mechanical properties of welded joints obtained in different welding conditions are measured and analyzed in the second part. The combination of the tool rotating and translating movements induces plastic deformation and heat generation of the welded material. The material thermomechanical history is responsible for metallurgical phenomena occurring during FSW such as recrystallization and precipitate dissolution and coarsening. Process modelling is used to reproduce this thermomechanical history in order to predict the influence of welding on the material microstructure. It is helpful to study heat generation and heat conduction mechanisms and to understand how joint properties are related to them. In the current work, a finite element numerical model based on solid mechanics has been developed to compute the thermomechanical history of the welded material. The computation results were compared to reference experimental data in order to validate the model and to calibrate unknown physical parameters. The model was used to study the effect of the friction coefficient on the thermomechanical history. Results showed that contact conditions at the workpiece/tool interface have a strong effect on relative amounts of heat generated by friction and by plastic deformation. The comparison with the experimental torque applied by the tool for different rotational speeds has shown that the friction coefficient decreases when the rotational speed increases. 
Consequently, heat generation is far more important near the material/tool interface and the material deformation is shallower, increasing the probability of a lack of penetration. The variation of thermomechanical conditions with rotational speed is responsible for the variation of the nugget shape, as recrystallization conditions are not reached in the same volume of material. The second part of the research project was dedicated to characterizing the microstructure and mechanical properties of the welded joints. Sound joints were obtained with a manufacturing procedure involving process-parameter optimization and quality control of joint integrity. Five different combinations of rotational and advancing speeds were studied. Microstructure observations showed that the rotational speed affects recrystallization conditions through the variation of the contact conditions at the material/tool interface. On the other hand, the advancing speed has a strong effect on the precipitation state in the heat-affected zone (HAZ): the heat input increases when the advancing speed decreases, and the material softening in the HAZ is then more pronounced. Mechanical testing of the welded joints showed that the fatigue resistance increases when the rotational speed increases and the advancing speed decreases. The fatigue resistance of FSW joints mainly depends on the ratio of the advancing speed to the rotational speed, called the welding pitch k. When the welding pitch is high (k ≥ 0.66 mm/rev), the fatigue resistance is governed by crack initiation at the root of circular grooves left by the tool on the weld surface; the size of these grooves is directly related to the welding pitch. When the welding pitch is low (k ≤ 0.2 mm/rev), the heat input is high and the fatigue resistance is limited by the HAZ softening. The fatigue resistance is optimized when k lies in the 0.25-0.30 mm/rev range.
Outside that range, the presence of small lateral lips is critical. The results of the characterization part of the project showed that the effect of the applied vertical force on the formation of lateral lips warrants further investigation. Eliminating the lateral lip, which could be achieved with a more precise adjustment of the vertical force, could improve the fatigue resistance. Eliminating both the lateral lips and the circular grooves left by the tool, possibly through an appropriate surfacing technique, could improve the fatigue resistance without reducing the advancing speed. (Abstract shortened by UMI.)
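The fatigue-controlling parameter identified above, the welding pitch k (advancing speed divided by rotational speed), can be sketched in a few lines. The thresholds are the ones quoted in this abstract; the function names and example speeds are illustrative, not the thesis code:

```python
def welding_pitch(advancing_speed_mm_min, rotational_speed_rev_min):
    """Welding pitch k in mm/rev: tool advance per tool revolution."""
    return advancing_speed_mm_min / rotational_speed_rev_min

def fatigue_limiting_mechanism(k):
    """Classify the fatigue-limiting mechanism using the thresholds
    quoted in the abstract (illustrative helper, not the thesis code)."""
    if k >= 0.66:
        return "crack initiation at surface grooves"
    if k <= 0.2:
        return "HAZ softening from high heat input"
    if 0.25 <= k <= 0.30:
        return "near-optimal pitch"
    return "intermediate regime"

# 120 mm/min advance at 400 rev/min gives k = 0.3 mm/rev
k = welding_pitch(120.0, 400.0)
mechanism = fatigue_limiting_mechanism(k)
```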
Gas adsorption on microporous materials: modeling, thermodynamics and applications
NASA Astrophysics Data System (ADS)
Richard, Marc-Andre
2009-12-01
Our work on gas adsorption in microporous materials is part of the research effort aimed at increasing the efficiency of on-board hydrogen storage for vehicles. Our objective was to study the possibility of using adsorption to improve the efficiency of small-scale hydrogen liquefaction systems. We also evaluated the performance of a cryogenic hydrogen storage system based on physisorption. Since we are dealing with particularly wide temperature ranges and high pressures in the supercritical region of the gas, we first had to work on the modeling and thermodynamics of adsorption. Representing the amount of gas adsorbed as a function of temperature and pressure with a semi-empirical model is a useful tool for determining the mass of gas adsorbed in a system, and also for computing the thermal effects associated with adsorption. We adapted the Dubinin-Astakhov (D-A) model to fit adsorption isotherms of hydrogen, nitrogen and methane on activated carbon at high pressure and over a wide range of supercritical temperatures, assuming an invariant adsorption volume. With five regression parameters (including the adsorption volume Va), the model we developed represents very well the experimental adsorption isotherms of hydrogen (30 to 293 K, up to 6 MPa), nitrogen (93 to 298 K, up to 6 MPa) and methane (243 to 333 K, up to 9 MPa) on activated carbon. We computed the internal energy of the adsorbed phase from the model using solution thermodynamics without neglecting the adsorption volume. We then presented the mass and energy conservation equations for an adsorption system and validated our approach by comparing simulations with adsorption and desorption tests.
In addition to the internal energy, we evaluated the entropy, the differential energy of adsorption and the isosteric heat of adsorption. We studied the performance of an adsorption-based hydrogen storage system for vehicles. The hydrogen storage capacity and thermal performance of a 150 L tank containing Maxsorb MSC-30(TM) activated carbon (specific surface area ~3000 m2/g) were studied over a temperature range of 60 to 298 K and at pressures up to 35 MPa. The system was considered globally, without focusing on a particular design. It is possible to store 5 kg of hydrogen at pressures of 7.8, 15.2 and 29 MPa at temperatures of 80, 114 and 172 K respectively, when the residual hydrogen is recovered at 2.5 bar by heating. Simulating the thermal phenomena allowed us to analyze the cooling required during filling, the heating required during discharge, and the dormancy time. We developed a hydrogen liquefaction cycle based on adsorption with mechanical compression (AMC) and evaluated its feasibility. The objective was to significantly increase the efficiency of small-scale hydrogen liquefaction systems (less than one tonne/day) without increasing their capital cost. We adapted the AMC refrigeration cycle so that it could later be added to a hydrogen liquefaction cycle. We then simulated idealized AMC refrigeration cycles. Even under these ideal conditions, the specific refrigeration is low. Moreover, the maximum theoretical efficiency of these refrigeration cycles is about 20 to 30% of the ideal. We experimentally realized an AMC refrigeration cycle with the nitrogen/activated carbon pair. (Abstract shortened by UMI.)
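The modified Dubinin-Astakhov isotherm described above can be sketched as follows. The functional form (adsorption potential A = RT ln(P0/P), characteristic energy split into enthalpic and entropic parts E = α + βT, exponent m = 2) follows the description in this abstract, but the parameter values below are purely illustrative, not the fitted ones:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def dubinin_astakhov_amount(P, T, n_max, P0, alpha, beta, m=2.0):
    """Modified D-A isotherm for supercritical adsorption (sketch):
        n = n_max * exp(-(A / E)**m)
    with A = R*T*ln(P0/P) the adsorption potential and
    E = alpha + beta*T the temperature-dependent characteristic
    energy. Parameters here are illustrative, not fitted values."""
    A = R * T * math.log(P0 / P)   # adsorption potential, J/mol
    E = alpha + beta * T           # characteristic energy, J/mol
    return n_max * math.exp(-(A / E) ** m)

# Illustrative call: hydrogen-like conditions at 77 K and 1 MPa
n = dubinin_astakhov_amount(P=1e6, T=77.0, n_max=70.0, P0=1.5e9,
                            alpha=3000.0, beta=18.0)
```

As the pressure rises toward P0 the adsorption potential shrinks and the adsorbed amount approaches n_max, which is the qualitative behaviour the fitted model reproduces over the wide supercritical range quoted above.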
NASA Astrophysics Data System (ADS)
Sylla, Daouda
Defined as a process that reduces the production potential of soil or the usefulness of natural resources, soil degradation is a major environmental problem: it affects over 41% of the land, and over 80% of the people affected by this phenomenon live in developing countries. The general objective of the present project is the characterisation of different types of land use and land cover and the detection of their spatio-temporal changes from radar data (ERS-1, RADARSAT-1 and ENVISAT), for a spatio-temporal modeling of environmental vulnerability to soil degradation in a semi-arid area. Because the radar signal is highly sensitive to the observation conditions of the sensor and the target, the radar images were first partitioned by angular configuration (23° and [33°-35°-47°]) and by environmental conditions (wet and dry). A good characterisation and temporal tracking of the four types of land use and land cover of interest are obtained, with different levels of contrast depending on the incidence angles and environmental conditions. In addition to the pixel-based approaches used for change detection (image differencing, principal component analysis), a monitoring of land cover based on an object-oriented approach, focused on two types of land cover, is developed. The method allows a detailed mapping of bare-soil occurrences as a function of environmental conditions. Finally, using different sources of information, a model of environmental vulnerability to soil degradation is built for southwestern Niger using the Dempster-Shafer fusion rule. The resulting decision maps reach overall accuracies of 93% and 91%, with Kappa values of 86% and 84%, for dry and wet conditions respectively. They are then used to produce a global map of the environmental vulnerability to soil degradation in this semi-arid area.
Key-words: Environmental vulnerability to soil degradation; data fusion; radar images; land use changes; semi-arid environment; South-west of Niger.
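The map-accuracy statistics quoted above (overall accuracy and Kappa) come from a standard confusion-matrix computation. A minimal sketch, with hypothetical class counts rather than the thesis data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference
    classes, columns: classified pixels). Kappa measures agreement
    beyond what class proportions alone would produce by chance."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(col) for col in zip(*confusion)]
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy 2-class matrix (hypothetical counts): 90% overall accuracy
cm = [[45, 5],
      [5, 45]]
kappa = cohens_kappa(cm)
```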
Weather and seasonal climate prediction for South America using a multi-model superensemble
NASA Astrophysics Data System (ADS)
Chaves, Rosane R.; Ross, Robert S.; Krishnamurti, T. N.
2005-11-01
This work examines the feasibility of weather and seasonal climate predictions for South America using the multi-model synthetic superensemble approach for climate, and the multi-model conventional superensemble approach for numerical weather prediction, both developed at Florida State University (FSU). The effect on seasonal climate forecasts of the number of models used in the synthetic superensemble is investigated. It is shown that the synthetic superensemble approach for climate and the conventional superensemble approach for numerical weather prediction can reduce the errors over South America in seasonal climate prediction and numerical weather prediction. For climate prediction, a suite of 13 models is used. The forecast lead time is 1 month for the climate forecasts, which consist of precipitation and surface temperature forecasts. The multi-model ensemble comprises four versions of the FSU Coupled Ocean-Atmosphere Model, seven models from the Development of a European Multi-model Ensemble System for Seasonal to Interannual Prediction (DEMETER), a version of the Community Climate Model (CCM3), and a version of the Predictive Ocean Atmosphere Model for Australia (POAMA). The results show that conditions over South America are appropriately simulated by the Florida State University Synthetic Superensemble (FSUSSE) in comparison to observations, and that the skill of this approach increases with the use of additional models in the ensemble. When compared to observations, the forecasts are generally better than those from both a single climate model and the multi-model ensemble mean, for the variables tested in this study. For numerical weather prediction, the conventional Florida State University Superensemble (FSUSE) is used to predict the mass and motion fields over South America.
Predictions of mean sea level pressure, 500 hPa geopotential height, and 850 hPa wind are made with a multi-model superensemble comprised of six global models for the period January, February, and December of 2000. The six global models are from the following forecast centers: FSU, Bureau of Meteorology Research Center (BMRC), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), Naval Research Laboratory (NRL), and Recherche en Prevision Numerique (RPN). Predictions of precipitation are made for the period January, February, and December of 2001 with a multi-analysis-multi-model superensemble where, in addition to the six forecast models just mentioned, five additional versions of the FSU model are used in the ensemble, each with a different initialization (analysis) based on different physical initialization procedures. On the basis of observations, the results show that the FSUSE provides the best forecasts of the mass and motion field variables to forecast day 5, when compared to both the models comprising the ensemble and the multi-model ensemble mean during the wet season of December-February over South America. Individual case studies show that the FSUSE provides excellent predictions of rainfall for particular synoptic events to forecast day 3.
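The core of the superensemble idea is to regress a weighted combination of model forecasts against observations over a training period, then apply those weights to independent forecasts. A minimal least-squares sketch (the actual FSUSE training is more elaborate, with per-gridpoint weights and a separate training window; all names and toy data below are illustrative):

```python
import numpy as np

def superensemble_weights(train_forecasts, train_obs):
    """Least-squares weights from a training period.
    train_forecasts: (n_times, n_models) model anomalies;
    train_obs: (n_times,) observed anomalies."""
    w, *_ = np.linalg.lstsq(train_forecasts, train_obs, rcond=None)
    return w

def superensemble_forecast(forecast_anomalies, weights, climatology=0.0):
    """Weighted combination of model anomalies added back to climatology."""
    return climatology + forecast_anomalies @ weights

# Toy training set: model A is biased high, model B is damped and noisy
rng = np.random.default_rng(1)
truth = rng.standard_normal(200)
models = np.column_stack([truth + 0.5,
                          0.5 * truth + 0.1 * rng.standard_normal(200)])
anoms = models - models.mean(axis=0)          # remove each model's bias
w = superensemble_weights(anoms, truth - truth.mean())
pred = superensemble_forecast(anoms, w, climatology=truth.mean())
rmse_super = np.sqrt(np.mean((pred - truth) ** 2))
rmse_mean = np.sqrt(np.mean((models.mean(axis=1) - truth) ** 2))
```

Because the regression removes each model's bias and discounts the noisy model, the weighted combination beats the plain ensemble mean, which is the effect the abstract reports for the FSUSE.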
NASA Astrophysics Data System (ADS)
Paradis, Alexandre
The principal objective of the present thesis is to elaborate a computational model describing the mechanical properties of NiTi under different loading conditions. Secondary objectives are to build an experimental database of NiTi under stress, strain and temperature in order to validate the versatility of the new model proposed herewith. The simulation model currently used at the Laboratoire sur les Alliages a Memoire et les Systemes Intelligents (LAMSI) of ETS behaves well in quasi-static loading. However, with the same model, dynamic loading does not allow one to include degradation. The goal of the present thesis is to build a model capable of describing such degradation in a relatively accurate manner. Some experimental testing and results are presented. In particular, new results on the behaviour of NiTi paused during cycling are presented in Chapter 2. A model based on Likhachev's micromechanical model is developed in Chapter 3; good agreement is found with experimental data. Finally, an adaptation of the model is presented in Chapter 4, allowing it to eventually be implemented into commercial finite-element software.
Modeling of diffusion on metallic surfaces: from the adatom to growth processes
NASA Astrophysics Data System (ADS)
Boisvert, Ghyslain
This thesis is devoted to the study of surface diffusion processes, with the ultimate goal of understanding, and modeling, the growth of a thin film. Mastering growth is of prime importance given its role in the miniaturization of electronic circuits. We study here the surfaces of the noble metals and of the late transition metals. We first consider the diffusion of a single adatom on a metallic surface. Among other results, we have shown that correlations between successive events appear when the temperature is comparable to the diffusion barrier, i.e., diffusion can no longer be described as a random walk. We propose a simple phenomenological model that reproduces the simulation results well. These calculations also allowed us to show that diffusion obeys the Meyer-Neldel rule, which states that, for an activated process, the prefactor increases exponentially with the barrier. In addition, this work clarifies the physical origin of this rule. By comparing dynamical and static results, we find that the barrier extracted from the dynamical calculations is essentially the same as that obtained from a much simpler static approach. The barrier can therefore be obtained with more accurate, i.e., ab initio, methods, such as density functional theory, which are unfortunately also much more computationally demanding. This is what we did for several metallic systems. Our results with the latter approach compare very well with experiment. We examined the Pt(111) surface at greater length. This surface exhibits many interesting features, such as the non-hexagonal equilibrium shape of islands and two different adsorption sites for the adatom.
Moreover, previous ab initio calculations failed to confirm the equilibrium shape and greatly overestimated the barrier. Our calculations, more complete and cast in a formalism better suited to this kind of problem, correctly predict the equilibrium shape, which is in fact due to a different relief of the surface stress at the two types of steps that form the island edges. Our value for the barrier is also strongly reduced when the forces on the surface atoms are relaxed, bringing the theoretical result much closer to the experimental value. Our calculations for copper show that the diffusion of small islands during growth cannot be neglected in that case, casting doubt on the interpretation of the experimental measurements. (Abstract shortened by UMI.)
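The Meyer-Neldel rule discussed in this abstract (the prefactor grows exponentially with the barrier) can be written down in a few lines. The parameter values below are illustrative, not results from the thesis:

```python
import math

KB = 8.617333e-5  # Boltzmann constant in eV/K

def diffusion_coefficient(T, Ea, D00=1e-3, E_MN=0.1):
    """Arrhenius surface diffusion with a Meyer-Neldel prefactor:
        D0 = D00 * exp(Ea / E_MN)     (prefactor grows with the barrier)
        D  = D0  * exp(-Ea / (KB*T))  (activated hopping)
    D00 (cm^2/s) and the Meyer-Neldel energy E_MN (eV) are
    illustrative values, not fitted to the thesis data."""
    D0 = D00 * math.exp(Ea / E_MN)
    return D0 * math.exp(-Ea / (KB * T))

# At the isokinetic temperature T_iso = E_MN / KB, the prefactor
# increase exactly compensates the Boltzmann factor, so D becomes
# independent of the barrier height -- the signature of the rule.
T_iso = 0.1 / KB
```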
Modeling of sound wave propagation in a complex natural environment
NASA Astrophysics Data System (ADS)
L'Esperance, Andre
This work is devoted to outdoor sound propagation in a complex natural environment, i.e., in the presence of real conditions of wind, temperature gradient and atmospheric turbulence. More specifically, this work has two objectives. First, it aims to develop a heuristic sound propagation model (MHP) that takes into account the full set of meteorological and acoustic phenomena influencing outdoor sound propagation. Second, it aims to identify under what circumstances, and to what extent, meteorological conditions affect sound propagation. The work is divided into five parts. After a brief introduction stating the motivations of this study (Chapter 1), Chapter 2 reviews previous work in outdoor sound propagation. This chapter also presents the basics of geometrical acoustics, from which the acoustic part of the heuristic propagation model was developed, and describes how refraction and atmospheric turbulence can be handled within ray theory. Chapter 3 presents the heuristic propagation model (MHP) developed in this work. The first section of this chapter describes the acoustic propagation model, which assumes a linear sound-speed gradient and is based on a hybrid solution combining geometrical acoustics and residue theory. The second section of Chapter 3 deals more specifically with the modeling of the meteorological aspects and the determination of the sound-speed profiles and fluctuation indices associated with the meteorological conditions. Section 3 of this chapter describes how the resulting sound-speed profiles are linearized for the calculations in the acoustic model, and finally Section 4 gives the general trends obtained with the model.
Chapter 4 describes the measurement campaigns carried out at Rock Spring (Pennsylvania, United States) during the summer of 1990 and at Bouin (Vendee, France) during the autumn of 1991. The Rock Spring campaign focused more specifically on refraction effects, whereas the Bouin campaign focused on turbulence effects. Section 4.1 describes the equipment and the processing of the meteorological data in each case, and Section 4.2 does the same for the acoustic results. Finally, Chapter 5 compares the experimental results with those given by the MHP, for both the meteorological and the acoustic results. Comparisons with another model (the Fast Field Program) are also presented.
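The linear sound-speed gradient assumed by the acoustic model above has a convenient geometric consequence: rays become circular arcs. A minimal sketch of the standard relation (the variable names are illustrative; this is textbook geometrical acoustics, not the MHP itself):

```python
def ray_curvature_radius(c0, gradient):
    """Under a linear sound-speed profile c(z) = c0 + g*z, geometrical
    acoustics gives circular ray paths; for a horizontally launched
    ray the radius of curvature is R = c0 / |g| (downward-refracting
    when the effective gradient is negative, upward when positive)."""
    if gradient == 0:
        return float("inf")  # straight rays in a homogeneous medium
    return c0 / abs(gradient)

# Example: c0 = 340 m/s and g = -0.1 s^-1 (a temperature lapse)
# give a ray curvature radius of c0/|g| = 3400 m
R = ray_curvature_radius(340.0, -0.1)
```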
NASA Astrophysics Data System (ADS)
Pham, Trinh Hung
Monitoring the hydrological behavior of a large tropical watershed following a forest-cover variation plays an important role in water-resource management planning as well as in sustainable forest management. Traditional methods in forest hydrology studies are experimental watersheds, upstream-downstream comparisons, experimental plots, statistical regional analysis and watershed simulation. For large watersheds, those methods are limited by the monitoring time, the lack of input data (especially on forest cover) and the difficulty of extrapolating results accurately. Moreover, there is still a scientific debate in forest ecology on the relation between water and forest. This debate stems from geographical differences among published studies in study zones, experimental watershed sizes and applied methods, which lead to differing conclusions on the influence of tropical forest-cover change on outlet discharge and yearly runoff in large watersheds. In order to overcome the limitations of current methods, to address the difficulty of acquiring forest-cover data and to better understand the relation between tropical forest-cover change and the evolution of the hydrological behavior of a large watershed, it is necessary to develop a new approach based on digital remote sensing. We used the Dong Nai watershed as a case study. Results show that fusing TM and ETM+ Landsat image series with hydro-meteorological data allows us to observe and detect flooding trends and flooding peaks after an intensive forest-cover change from 16% to 20%. Flooding frequency and flooding peaks clearly decreased as forest cover increased from 1983 to 1990. The influence of tropical forest cover on hydrological behavior varies with the geographical location of the watershed.
There is a significant relation between forest-cover evolution and environmental factors such as the runoff coefficient (R = 0.87) and the yearly precipitation (R = 0.93).
Design, modeling and simulation of an offshore VSC-HVDC system
NASA Astrophysics Data System (ADS)
Benhalima, Seghir
Wind energy is recognized worldwide as a proven technology to meet the growing demand for green, sustainable energy. Several research projects have been carried out to exploit this stochastic energy source and integrate it with conventional energy sources without affecting the performance of existing electrical grids. Offshore, wind energy has great potential, so production of this energy is expected to increase worldwide. To extract this energy optimally, the wind farm must be connected to the grid via a voltage source converter acting as an interface. To minimise the losses due to energy transport over very long distances, High Voltage Direct Current transmission based on Voltage Source Converters (VSC-HVDC) is used. To achieve this goal, a new topology is designed with a new control algorithm based on control of the power generated by the wind farm, DC voltage regulation, and synchronization between the wind farm and the NPC-based VSC-HVDC link. The proposed topology and its control technique are validated using MATLAB/Simulink. The results are promising: the THD is less than 5% and the power factor is close to one.
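The figure of merit quoted above, total harmonic distortion (THD), is a standard ratio of the harmonic content to the fundamental. A minimal sketch with a toy harmonic spectrum (the amplitudes are illustrative, not the simulated converter output):

```python
import math

def total_harmonic_distortion(harmonic_rms):
    """THD from RMS harmonic amplitudes, harmonic_rms[0] being the
    fundamental V1:  THD = sqrt(V2^2 + V3^2 + ...) / V1."""
    v1 = harmonic_rms[0]
    distortion = math.sqrt(sum(v * v for v in harmonic_rms[1:]))
    return distortion / v1

# Toy spectrum: fundamental 1.0 pu with small 5th and 7th harmonics,
# the characteristic harmonics of a six-pulse converter
thd = total_harmonic_distortion([1.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.03])
meets_5_percent = thd < 0.05   # the 5% criterion cited in the abstract
```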
MONET: multidimensional radiative cloud scene model
NASA Astrophysics Data System (ADS)
Chervet, Patrick
1999-12-01
All cloud fields exhibit variable structures (bulges) and heterogeneities in water distribution. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types), etc. For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or on the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining specifications of new imaging sensors.
Vibroacoustic study of a shell-floor-cavity system with application to a simplified fuselage
NASA Astrophysics Data System (ADS)
Missaoui, Jemai
The objective of this work is to develop semi-analytical models to study the structural, acoustic and vibro-acoustic behavior of a shell-floor-cavity system. The connection between the shell and the floor is modeled using the concept of artificial stiffness. This flexible modeling concept facilitates the choice of the expansion functions describing the motion of each substructure. The results of this study provide an understanding of the basic physical phenomena encountered in an aircraft structure. An integro-modal approach is developed to compute the acoustic modal characteristics. It discretizes the irregular cavity into acoustic sub-cavities whose expansion bases are known a priori. This physically based approach has the advantage of being both efficient and accurate. Its validity was demonstrated using results available in the literature. A vibro-acoustic model is developed to analyze and understand the structural and acoustic effects of the floor in this configuration. The validity of the results, in terms of resonances and transfer functions, is verified against experimental measurements performed in the laboratory.
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial-range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a-priori filtered data generated from direct numerical simulations (DNS). We find that the LES data generally agree very well with the filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial-range properties at any fixed LES resolution.
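The Smagorinsky closure used for the LES data above can be written down compactly. This is the classical eddy-viscosity form, a sketch for orientation rather than the authors' solver:

```python
import numpy as np

def smagorinsky_sgs_stress(grad_u, delta, Cs=0.17):
    """Classical Smagorinsky model for the deviatoric SGS stress:
        S_ij   = 0.5 * (du_i/dx_j + du_j/dx_i)   resolved strain rate
        nu_t   = (Cs * delta)**2 * |S|,  |S| = sqrt(2 S_ij S_ij)
        tau_ij = -2 * nu_t * S_ij
    grad_u: 3x3 resolved velocity-gradient tensor at one point;
    delta: filter width; Cs: Smagorinsky constant (typical value)."""
    S = 0.5 * (grad_u + grad_u.T)
    S_mag = np.sqrt(2.0 * np.sum(S * S))
    nu_t = (Cs * delta) ** 2 * S_mag
    return -2.0 * nu_t * S

# Pure shear example: du/dy = 1, all other gradients zero
g = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
tau = smagorinsky_sgs_stress(g, delta=0.1)
```

Because tau is slaved pointwise to the resolved strain rate, its correlations with resolved velocity increments are exactly the kind of quantity the structure-function hierarchy in the abstract probes.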
NASA Astrophysics Data System (ADS)
Boissonneault, Maxime
Circuit quantum electrodynamics is a promising architecture for quantum computing as well as for studying quantum optics. In this architecture, one or more superconducting qubits, playing the role of atoms, are coupled to one or more resonators playing the role of optical cavities. In this thesis, I study the interaction between a single superconducting qubit and a single resonator, allowing, however, the qubit to have more than two levels and the resonator to have a Kerr nonlinearity. I am particularly interested in reading out the qubit state and improving this readout, in the back-action of the measurement process on the qubit, and in probing the quantum properties of the resonator with the qubit. To do so, I use a reduced analytical model that I derive from the complete description of the system, mainly through unitary transformations and an adiabatic elimination. I also use a home-made numerical library that efficiently simulates the evolution of the complete system. I compare the predictions of the reduced analytical model and the results of numerical simulations with experimental results obtained by the quantronics group at CEA-Saclay. These results are those of the spectroscopy of a superconducting qubit coupled to a driven nonlinear resonator. In a low spectroscopy-power regime, the reduced model correctly predicts the position and width of the qubit line. The position of the line undergoes Lamb and Stark shifts, and its width is dominated by measurement-induced dephasing. I show that, for typical circuit QED parameters, quantitative agreement requires a nonlinear-response model of the intra-resonator field, such as the one developed here.
In a high spectroscopy-power regime, sidebands appear; they are caused by quantum fluctuations of the intra-resonator electromagnetic field around its equilibrium value. These fluctuations arise from squeezing of the electromagnetic field due to the resonator nonlinearity, and observing their effect through qubit spectroscopy is a first. Following the quantitative success of the reduced model, I show that two parameter regimes marginally improve the dispersive readout of a qubit with a linear resonator, and significantly improve a bifurcation readout with a nonlinear resonator. I explain the operation of a qubit readout in a linear resonator developed by an experimental group at Yale University. This readout, which exploits the qubit-induced nonlinearities, has a high fidelity, but requires a very high power and is destructive. In all these cases, the multi-level structure of the qubit proves crucial for the readout. By suggesting ways to improve the readout of superconducting qubits, and by quantitatively describing the physics of a multi-level system coupled to a driven nonlinear resonator, the results presented in this thesis are relevant both to the use of the circuit QED architecture for quantum computing and to quantum optics. Keywords: circuit quantum electrodynamics, quantum computing, measurement, superconducting qubit, transmon, Kerr nonlinearity
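The dispersive readout discussed in this abstract rests on the qubit-state-dependent pull of the resonator frequency. A minimal sketch of the standard expressions, including the well-known multilevel (transmon) correction that illustrates why the qubit's higher levels matter; the numerical values are illustrative, not the thesis parameters:

```python
def dispersive_shift(g, delta, anharmonicity=None):
    """Dispersive pull of the resonator frequency per qubit state.
    Two-level qubit:        chi = g**2 / delta
    Transmon-like qubit:    chi = (g**2 / delta) * alpha / (delta + alpha)
    g: qubit-resonator coupling, delta: qubit-resonator detuning,
    alpha: (negative) anharmonicity, all in the same angular units."""
    chi0 = g ** 2 / delta
    if anharmonicity is None:
        return chi0
    return chi0 * anharmonicity / (delta + anharmonicity)

# Illustrative numbers in MHz: g = 100, delta = 1000, alpha = -300
chi_2lvl = dispersive_shift(100.0, 1000.0)
chi_transmon = dispersive_shift(100.0, 1000.0, anharmonicity=-300.0)
```

The transmon value is smaller in magnitude and opposite in sign to the naive two-level result for these parameters, a simple illustration of the abstract's point that the multi-level structure of the qubit is crucial for the measurement.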
NASA Astrophysics Data System (ADS)
Coulibaly, Issa
As the main source of drinking water for the municipality of Edmundston, the Iroquois/Blanchette watershed is of capital importance to the city, hence the constant efforts deployed to preserve its water quality. Several studies have been carried out there. The most recent identified pollution threats of various origins, including those associated with climate change (e.g. Maaref 2012). Given the climate-change impacts projected for New Brunswick, the Iroquois/Blanchette watershed could be strongly affected, and in various ways. Several impact scenarios are possible, notably risks of flooding, erosion and pollution through increased precipitation and runoff. In view of all these potential threats, the objective of this study is to assess the potential impacts of climate change on erosion and pollution risks at the scale of the Iroquois/Blanchette watershed. To do so, the Canadian version of the Revised Universal Soil Loss Equation, RUSLE-CAN, and the hydrological model SWAT (Soil and Water Assessment Tool) were used to model erosion and pollution risks in the study area. The data used for this work come from varied sources (remote sensing, pedological, topographic, meteorological, etc.). The simulations were carried out in two distinct steps, first under current conditions with 2013 chosen as the reference year, then for 2025 and 2050. The results show an upward trend in sediment production in the coming years.
The maximum annual production increases by 8.34% and 8.08% in 2025 and 2050 respectively under our most optimistic scenario, and by 29.99% in 2025 and 29.72% in 2050 under the most pessimistic scenario, relative to 2013. As for pollution, the observed concentrations (sediment, nitrate and phosphorus) evolve with climate change. The maximum sediment concentration decreases in 2025 and 2050 relative to 2013: from 11.20 mg/L in 2013, it falls to 9.03 mg/L in 2025 and then to 6.25 mg/L in 2050. The maximum nitrate concentration is also expected to decrease over the years, more markedly in 2025: from 4.12 mg/L in 2013, it falls to 1.85 mg/L in 2025 and then to 2.90 mg/L in 2050. The phosphorus concentration, by contrast, increases in the coming years relative to 2013, from 0.056 mg/L in 2013 to 0.234 mg/L in 2025 and then 0.144 mg/L in 2050.
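The erosion model cited above belongs to the RUSLE family, whose core is a simple multiplicative equation. A minimal sketch (the factor values are illustrative, not the thesis data; RUSLE-CAN adds Canadian-specific factor estimation on top of this form):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Revised Universal Soil Loss Equation: A = R * K * LS * C * P
    R:  rainfall-runoff erosivity factor
    K:  soil erodibility factor
    LS: slope length and steepness factor
    C:  cover-management factor
    P:  support-practice factor
    Returns mean annual soil loss A (e.g. t/ha/yr with metric factors)."""
    return R * K * LS * C * P

# Illustrative factors for a reference year
A_ref = rusle_soil_loss(R=800.0, K=0.03, LS=1.2, C=0.2, P=1.0)
# A wetter climate scenario raises erosivity R by 10%; soil loss
# scales linearly with it when the other factors are held fixed
A_wet = rusle_soil_loss(R=880.0, K=0.03, LS=1.2, C=0.2, P=1.0)
```

This linear dependence on R is why increased precipitation in the climate scenarios translates directly into the higher sediment production reported above.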
NASA Astrophysics Data System (ADS)
Mijiyawa, Faycal
This study adapts wood-fibre-reinforced thermoplastic composites to gears, manufactures new generations of gears, and predicts the thermal behavior of these gears. After an extensive literature review on thermoplastics (polyethylene and polypropylene) reinforced with wood fibres (birch and aspen), and on the formulation and thermomechanical behavior of plastic-composite gears, a link was drawn with the present doctoral thesis. Indeed, many studies on the formulation and characterization of wood-fibre composites have already been carried out, but none addressed the manufacture of gears. The various formulation techniques drawn from the literature made it easier to obtain a composite with nearly the same properties as the plastics (nylon, acetal, etc.) used in gear design. The formulation of the wood-fibre-reinforced thermoplastics was carried out at the Centre de recherche en materiaux lignocellulosiques (CRML) of the Universite du Quebec a Trois-Rivieres (UQTR), in collaboration with the Mechanical Engineering Department, by compounding the composites on a two-roll Thermotron-C.W. Brabender machine (model T-303, Germany); parts were then fabricated by thermocompression. The thermoplastics used in this thesis are polypropylene (PP) and high-density polyethylene (HDPE), reinforced with birch and aspen fibres. Because wood fibre and thermoplastic are incompatible, a chemical treatment with a coupling agent was applied to increase the mechanical properties of the composites.
For the polypropylene/wood composites: (1) The elastic moduli and tensile strengths of the PP/birch and PP/aspen composites increase linearly with fibre content, with or without coupling agent (maleated polypropylene, MAPP). Moreover, fibre-plastic adhesion is improved using only 3% MAPP, which increases the maximum stress, although no significant effect is observed on the elastic modulus. (2) Overall, the tensile properties of the polypropylene/birch, polypropylene/aspen and polypropylene/birch/aspen composites are very similar. Wood-plastic composites (WPCs), particularly those with 30% and 40% fibre, have higher elastic moduli than some plastics used in gear applications (e.g. nylon). For the polyethylene/wood composites with 3% maleated polyethylene (MAPE): (1) Tensile tests: the elastic modulus rises from 1.34 GPa to 4.19 GPa for the HDPE/birch composite and to 3.86 GPa for the HDPE/aspen composite. The maximum stress rises from 22 MPa to 42.65 MPa for HDPE/birch and to 43.48 MPa for HDPE/aspen. (2) Flexural tests: the elastic modulus rises from 1.04 GPa to 3.47 GPa for HDPE/birch and to 3.64 GPa for HDPE/aspen. The maximum stress rises from 23.90 MPa to 66.70 MPa for HDPE/birch and to 59.51 MPa for HDPE/aspen. (3) Poisson's ratio, determined by acoustic impulse, is around 0.35 for all HDPE/wood composites. (4) TGA thermal degradation tests show that the composites have a thermal stability intermediate between that of the wood fibres and that of the HDPE matrix.
(5) Wettability (contact angle) tests show that adding wood fibres does not significantly reduce water contact angles, because the wood fibres (birch or aspen) appear to be enveloped by the matrix at the composite surface, as shown by scanning electron microscope (SEM) images. (6) The Lavengood-Goettler model best predicts the elastic modulus of the thermoplastic/wood composites. (7) HDPE reinforced with 40% birch is best suited for gear manufacturing, since shrinkage during mould cooling is smaller. Numerical simulation predicts the equilibrium temperature well at 500 rpm, whereas at 1000 rpm the model diverges. (Abstract shortened by ProQuest.)
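As a rough consistency check on moduli like those reported above, a modified rule of mixtures is a common baseline for short-fibre composites (the thesis itself finds the Lavengood-Goettler model more accurate; the fibre modulus and efficiency factor below are illustrative assumptions, and the fibre weight fraction is used loosely as a volume fraction):

```python
def rule_of_mixtures(e_fiber, e_matrix, vf, efficiency=0.4):
    """Modified rule of mixtures for a short-fibre composite modulus (GPa).
    `efficiency` lumps fibre length and orientation effects into one factor."""
    return efficiency * e_fiber * vf + e_matrix * (1.0 - vf)

# Illustrative inputs: neat HDPE at 1.34 GPa (reported in the abstract),
# an assumed wood-fibre modulus of 18 GPa, 40% fibre content.
e_est = rule_of_mixtures(e_fiber=18.0, e_matrix=1.34, vf=0.4)  # ~3.68 GPa
```

The estimate lands in the same range as the measured 3.86 to 4.19 GPa at 40% fibre, which is all such a one-parameter baseline can be expected to do.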
NASA Astrophysics Data System (ADS)
Ait Hammou, Zouhair
This study addresses the design of a hybrid heat-exchanger storage unit (AECH) for the simultaneous management of solar and electrical energy. A mathematical model based on the energy conservation equations is presented. It is developed to test various storage materials, including solid/liquid phase-change materials and sensible-heat storage materials. A computer code is implemented and validated against analytical and numerical results from the literature. In parallel, a reduced-scale experimental prototype was built in the laboratory to validate the code. Simulations are carried out to study the effects of the design parameters and storage materials on the thermal behaviour of the AECH and on electrical energy consumption. Simulation results over four winter months show that n-octadecane paraffin and capric acid are two desirable candidates for energy storage for residential heating. Using these two materials in the AECH reduces electrical energy consumption by 32% and flattens the peak-demand problem, since 90% of the electrical energy is consumed during off-peak hours. Moreover, under a preferential tariff, the cost calculation shows that a consumer adopting this system benefits from a 50% reduction in the electricity bill.
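The appeal of a phase-change material over purely sensible storage can be seen from a simple energy balance; a sketch with typical handbook values for n-octadecane (melting point near 28 °C, latent heat near 244 kJ/kg, specific heat near 2 kJ/kg·K — illustrative figures, not values from the thesis):

```python
def pcm_energy(mass, cp_s, cp_l, latent, t0, t_melt, t1):
    """Heat stored (kJ) when warming a phase-change material from t0 to t1
    through its melting point: sensible (solid) + latent + sensible (liquid)."""
    assert t0 <= t_melt <= t1
    return mass * (cp_s * (t_melt - t0) + latent + cp_l * (t1 - t_melt))

# Approximate handbook values for n-octadecane (illustrative only):
e = pcm_energy(mass=100.0, cp_s=2.0, cp_l=2.2, latent=244.0,
               t0=20.0, t_melt=28.0, t1=40.0)  # ~28640 kJ for 100 kg
```

Most of the stored heat here comes from the latent term, which is what lets a compact unit shift consumption to off-peak hours.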
NASA Astrophysics Data System (ADS)
Filali, Bilai
Graphene, as an advanced carbon nanostructure, has recently attracted a deluge of scholarly interest because of its outstanding mechanical, electrical and thermal properties. There are several practical ways to synthesize graphene, such as mechanical exfoliation, chemical vapor deposition (CVD), and anodic arc discharge. In this thesis a method of graphene synthesis in plasma is discussed, in which synthesis is driven by erosion of the anode material. This graphene synthesis method is one of the most practical, as it can provide a high production rate. High-purity graphene flakes have been synthesized with an anodic arc method at a pressure of about 500 torr. Raman spectroscopy, Scanning Electron Microscopy (SEM), Atomic Force Microscopy (AFM) and Transmission Electron Microscopy (TEM) have been utilized to characterize the synthesis products. Arc-produced graphene and commercially available graphene were compared using these instruments; they differ in the number of layers, the thickness of each layer and the shape of the structure itself. The temperature dependence of the synthesis procedure has been studied. It has been found that graphene can be produced on a copper foil substrate at temperatures near the melting point of copper; however, decreasing the substrate temperature transforms the synthesized graphene into amorphous carbon. Glow discharge was utilized to functionalize graphene. SEM and EDS observations indicated an increase of oxygen content in the graphene after its exposure to the glow discharge.
NASA Astrophysics Data System (ADS)
Vignéras-Lefebvre, V.; Miane, J. L.; Parneix, J. P.
1993-03-01
A multiple-scattering model of the behaviour of heterogeneous structures, made of spherical particles and subjected to microwave fields, is presented. First, we present the principle of scattering by a single sphere, using Mie's equations expressed in the form of a transfer matrix. This matrix, generalized to a distribution of particles, captures the average behaviour of the material. Within the Rayleigh limit, the value of the effective permittivity thus obtained is compared to experimental results.
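In the Rayleigh (long-wavelength) limit mentioned above, multiple-scattering treatments of dilute spherical inclusions reduce to the classical Maxwell Garnett mixing rule; a minimal sketch of that standard result (not the transfer-matrix derivation of the article):

```python
def maxwell_garnett(eps_m, eps_p, f):
    """Effective permittivity of spherical particles (permittivity eps_p,
    volume fraction f) embedded in a host matrix (eps_m), Rayleigh limit."""
    num = eps_p + 2.0 * eps_m + 2.0 * f * (eps_p - eps_m)
    den = eps_p + 2.0 * eps_m - f * (eps_p - eps_m)
    return eps_m * num / den

eps_eff = maxwell_garnett(eps_m=2.0, eps_p=10.0, f=0.2)  # between 2 and 10
```

The rule interpolates correctly between the two endpoints (f = 0 gives the host, f = 1 the particle material), which is a quick sanity check for any effective-medium code.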
Three-dimensional modelling and inversion in gravity and electrical prospecting
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve large and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks, and the problem is solved by calculating the model parameters, i.e. the density of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The code is tested on synthetic data, and the advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used; for multiple bodies, the best combination of constraints tested seems to be flatness and minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential.
The modelling generates a huge matrix of Green's functions, which is stored using pyramidal compression. The third method consists in interpreting electrical potential measurements with a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
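The constrained least-squares idea behind such a gravity inversion can be sketched in one dimension: a linear forward operator mapping block densities to surface gravity, plus a flatness penalty folded into the normal equations (the geometry, point-mass kernel and trade-off weight below are toy assumptions, not the thesis setup):

```python
import numpy as np

G = 6.674e-11                               # gravitational constant
xs = np.linspace(0.0, 100.0, 21)            # surface station positions (m)
xb = np.linspace(5.0, 95.0, 10)             # block centres (m)
depth = 20.0

# Forward operator: vertical gravity at each station from a unit mass
# concentrated at each block centre (point-mass stand-in for a prism).
A = np.array([[G * depth / ((x - b) ** 2 + depth ** 2) ** 1.5
               for b in xb] for x in xs])

m_true = np.zeros(10)
m_true[4:6] = 1.0e6                         # two anomalous blocks (kg)
d = A @ m_true                              # synthetic "observed" data

# Flatness constraint: first differences between neighbouring blocks,
# blended into the normal equations with a small trade-off weight.
D = np.diff(np.eye(10), axis=0)
lam = 1e-29
m_est = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ d)
```

Increasing `lam` trades data fit for smoothness, which is the Lagrangian balance the abstract refers to; the real code works on 3D prisms with several constraints at once.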
NASA Astrophysics Data System (ADS)
Abdellaoui, Amr
This research project presents a complete modelling process for the effects of GIC on the Hydro-Quebec power system for system planning studies. The advantage of the presented method is that it enables planning engineers to simulate the effects of geomagnetic disturbances on the Hydro-Quebec system under different conditions and contingencies within a reasonable calculation time frame. This GIC modelling method has been applied to the Hydro-Quebec system, and an equivalent HQ DC model has been developed. A numerical method for calculating DC sources from a non-uniform geoelectric field has been developed and implemented on the HQ DC model. Harmonics and increased reactive power losses of saturated transformers have been defined as a function of GIC through a binary search algorithm using a chosen HQ magnetization curve. The evolution in time of each transformer's saturation according to its effective GIC has been evaluated using analytical formulas. The reactive power losses of saturated transformers have been modelled in the PSS/E HQ network as constant reactive current loads assigned to the corresponding transformer buses. Finally, time-domain simulations have been performed with PSS/E taking into account transformer saturation times. This has been achieved by integrating the HQ DC model results and the analytical calculations of transformer saturation times into an EMTP load model; an interface links the EMTP load model to the HQ PSS/E network. Different aspects of GIC effects on the Hydro-Quebec system have been studied, including the influence of uniform and non-uniform geoelectric fields, the comparison of reactive power losses of the 735 kV HQ system with those of the Montreal network, the risks to voltage levels, and the importance of the dynamic reactive power reserve. This dissertation presents a new GIC modelling approach for power system planning and operations purposes.
This methodology could be further enhanced, particularly the treatment of transformer saturation times; more research remains to be pursued in this area.
Operational coupled atmosphere - ocean - ice forecast system for the Gulf of St. Lawrence, Canada
NASA Astrophysics Data System (ADS)
Faucher, M.; Roy, F.; Desjardins, S.; Fogarty, C.; Pellerin, P.; Ritchie, H.; Denis, B.
2009-09-01
A fully interactive coupled atmosphere-ocean-ice forecasting system for the Gulf of St. Lawrence (GSL) has been running in experimental mode at the Canadian Meteorological Centre (CMC) for the last two winter seasons. The goal of this project is to provide more accurate weather and sea-ice forecasts over the GSL and adjacent coastal areas by including atmosphere-ocean-ice interactions in the CMC operational forecast system, using a formal coupling strategy between two independent modelling components. The atmospheric component is the Canadian operational GEM model (Côté et al. 1998) and the oceanic component is the ocean-ice model for the Gulf of St. Lawrence developed at the Maurice Lamontagne Institute (IML) (Saucier et al. 2003, 2004). The coupling between the two models is achieved by exchanging surface fluxes and variables through MPI communication. The re-gridding of the variables is done with a package developed at the Recherche en Prevision Numerique centre (RPN, Canada). Coupled atmosphere-ocean-ice forecasts are issued once a day based on 00 GMT data. Results for the past two years have demonstrated that the coupled system produces improved forecasts in and around the GSL during all seasons, showing that atmosphere-ocean-ice interactions are indeed important even for short-term Canadian weather forecasts. This has important implications for other coupled modelling and data assimilation partnerships in progress involving EC, the Department of Fisheries and Oceans (DFO) and the Department of National Defence (DND). Following this experimental phase, it is anticipated that this GSL system will be the first fully interactive coupled system to be implemented at CMC.
NASA Astrophysics Data System (ADS)
Queirós, S. M. D.; Tsallis, C.
2005-11-01
The GARCH algorithm is the most renowned generalisation of Engle's original proposal for modelling returns, the ARCH process. Both are characterised by a time-dependent and correlated variance, or volatility. Besides a memory parameter, b (present in ARCH), and an independent and identically distributed noise, ω, GARCH involves another parameter, c, such that for c = 0 the standard ARCH process is recovered. In this manuscript we use a generalised noise following a distribution characterised by an index q_n, such that q_n = 1 recovers the Gaussian distribution. Matching low statistical moments of the GARCH distribution for returns with a q-Gaussian distribution obtained by maximising the entropy S_q = (1 - Σ_i p_i^q)/(q - 1), the basis of nonextensive statistical mechanics, we obtain a single analytical connection between q and (b, c, q_n) which turns out to be remarkably good when compared with computational simulations. With this result we also derive an analytical approximation for the stationary distribution of the (squared) volatility. Using a generalised Kullback-Leibler relative entropy based on S_q, we also analyse the degree of dependence between successive returns, z_t and z_{t+1}, of GARCH(1,1) processes. This degree of dependence is quantified by an entropic index, q_op. Our analysis points to the existence of a unique relation between the three entropic indexes q_op, q and q_n of the problem, independent of the values of (b, c).
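The process described can be sketched directly; a minimal GARCH(1,1) simulator with Gaussian noise (the q_n = 1 case, not the authors' generalised-noise implementation), where setting c = 0 recovers ARCH(1):

```python
import random

def simulate_garch11(n, a0, b, c, seed=0):
    """z_t = sigma_t * omega_t, sigma_t^2 = a0 + b*z_{t-1}^2 + c*sigma_{t-1}^2,
    with omega_t i.i.d. standard Gaussian. Requires b + c < 1 (stationarity)."""
    rng = random.Random(seed)
    var = a0 / (1.0 - b - c)        # start at the stationary variance
    z = []
    for _ in range(n):
        omega = rng.gauss(0.0, 1.0)
        z.append(var ** 0.5 * omega)
        var = a0 + b * z[-1] ** 2 + c * var
    return z

returns = simulate_garch11(10000, a0=0.3, b=0.1, c=0.6)
```

For these parameters the stationary variance is a0/(1 - b - c) = 1, so the sample variance of a long run should sit near 1; matching the higher moments of such runs against a q-Gaussian is what yields the q(b, c, q_n) connection studied in the paper.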
NASA Astrophysics Data System (ADS)
Benard, Pierre
We present a study of the magnetic fluctuations in the normal phase of the superconducting copper oxide La_{2-x}Sr_{x}CuO_4. The compound is modelled by the two-dimensional Hubbard Hamiltonian with a second-neighbour hopping term (the tt'U model). The model is studied within the GRPA (Generalized Random Phase Approximation), including the renormalization of the Hubbard interaction by Brueckner-Kanamori diagrams. In the approach presented in this work, the maxima of the magnetic structure factor observed in neutron scattering experiments are associated with the lattice 2k_F anomalies of the structure factor of the non-interacting two-dimensional electron gas. These anomalies arise from scattering between particles located at points of the Fermi surface where the Fermi velocities are tangent, and lead to divergences whose nature depends on the geometry of the Fermi surface near those points. These results are then applied to the tt'U model, of which the usual tU Hubbard model is a special case. In most cases, the interactions do not determine the positions of the structure-factor maxima. The role of the interaction is to enhance the intensity of the structures of the magnetic structure factor associated with the magnetic instability of the system; these structures are often already present in the imaginary part of the non-interacting susceptibility. The intensity ratio between the absolute maxima and the other structures of the magnetic structure factor makes it possible to determine the ratio U_{rn}/U_c, which measures the proximity of a magnetic instability. The phase diagram is then studied to delimit the range of validity of the approximation. 
After a discussion of the collective modes and of the effect of a non-zero imaginary part of the self-energy, the origin of the energy scale of the magnetic fluctuations is examined. It is then shown that the three-band model predicts the same positions for the structures of the magnetic structure factor as the one-band model, in the limit where the hybridization of the oxygen orbitals of the Cu-O_2 planes and the second-neighbour hopping amplitude vanish. It is further found that the effect of the oxygen-orbital hybridization is well modelled by the second-neighbour hopping term. Although they correctly describe the qualitative behaviour of the maxima of the magnetic structure factor, the three-band and one-band models do not yield positions of these structures consistent with experimental measurements if the band is assumed rigid, i.e. if the Hamiltonian parameters are independent of the strontium concentration; this may be caused by a dependence of the Hamiltonian parameters on the strontium concentration. Finally, the results are compared with neutron scattering experiments and with other theories, in particular those of Littlewood et al. (1993) and Q. Si et al. (1993). Comparison with the experimental results for the lanthanum compound suggests that the Fermi liquid has a disconnected Fermi surface and lies close to an incommensurate magnetic instability.
NASA Astrophysics Data System (ADS)
Leroy, Pierre
The objective of this thesis is to conduct a thorough numerical and experimental analysis of the smart foam concept, in order to highlight the physical mechanisms and the technological limitations of the active control of acoustic absorption. A smart foam is made of an absorbing material with an embedded actuator able to compensate for the lack of effectiveness of this material at low frequencies (<500 Hz). In this study, the absorbing material is a melamine foam and the actuator is a piezoelectric PVDF film. A 3D finite element model coupling poroelastic, acoustic, elastic and piezoelectric fields is proposed. The model uses volume and surface quadratic elements and the improved (u,p) formulation; an orthotropic porous element is proposed, and the power balance in the porous medium is established. This model is a powerful and general tool allowing the modelling of all hybrid configurations combining poroelastic and piezoelectric fields. Three smart foam prototypes were built with the aim of validating the numerical model and setting up experimental active control. The comparison of numerical calculations and experimental measurements shows the validity of the model for the passive aspects, the transducer behaviour, and the control configurations. The active control of acoustic absorption is carried out at normal incidence under a plane-wave assumption in the frequency range 0-1500 Hz. The minimisation criterion is the reflected pressure measured by a unidirectional microphone. Three control cases were tested: offline control with a sum of pure tones, and adaptive control with the nFX-LMS algorithm for a pure tone and for random broadband noise. The results reveal the possibility of absorbing a pressure of 1 Pa at 100 Hz with 100 V, and broadband noise of 94 dB with about a hundred Vrms above 250 Hz. These results were obtained with a mean foam thickness of 4 cm. The control ability of the prototypes is directly connected to the acoustic flow.
An important limitation for broadband control comes from the high distortion level through the system in the low- and high-frequency ranges (<500 Hz, >1500 Hz). The use of the numerical model, supplemented by an analytical study, made it possible to clarify the action mode and the dissipation mechanisms in smart foams. The PVDF moves with the same phase and amplitude as the residual incident pressure that is not dissipated in the foam. Viscous dissipation is therefore very weak at low frequencies and becomes more important at high frequencies. The wave that has not been dissipated in the porous material is transmitted by the PVDF into the back cavity. The perspectives of this study are, on the one hand, the improvement of the model and the prototypes and, on the other hand, the widening of the field of research to the control of acoustic transmission and the acoustic radiation of surfaces. The model could be improved by integrating viscoelastic elements able to account for the behaviour of the adhesive layer between the PVDF and the foam. A model of electro-elastomer materials would also have to be implemented in the code; this new type of actuator could make it possible to exceed the displacement limitations of PVDF. Finally, with industrial integration in mind, it would be interesting to seek configurations able to maximise acoustic absorption while simultaneously limiting the transmission and radiation of surfaces.
On the importance of periodic orbits: detection and applications
NASA Astrophysics Data System (ADS)
Doyon, Bernard
The set of Unstable Periodic Orbits (UPOs) of a chaotic system is intimately related to its dynamical properties. From the (in principle infinite) set of UPOs hidden in phase space, one can obtain important dynamical quantities such as the Lyapunov exponents, the invariant measure, the topological entropy and the fractal dimension. In quantum chaos (i.e. the study of quantum systems whose classical limit is chaotic), these same UPOs bridge the classical and quantum behaviour of non-integrable systems. Locating these fundamental cycles is a difficult problem. This thesis first addresses the detection of UPOs in chaotic systems. A comparative study of two recent algorithms is presented. We extend both methods in order to apply them to various systems, including dissipative and conservative continuous flows. An analysis of the convergence rate of the algorithms is also carried out to identify the strengths and limits of these numerical schemes. The detection methods we use rely on a particular transformation of the initial dynamics. This trick inspired an alternative method for targeting and stabilizing an arbitrary periodic orbit in a chaotic system. Targeting is generally combined with control methods to stabilize a given cycle quickly, and usually requires knowing the position and stability of the cycle in question. The new targeting method we present does not require a priori knowledge of the position and stability of the periodic orbits. It could be a complementary tool to current targeting and control methods.
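A standard building block of the detection schemes discussed is a Newton search for cycles of a map; a minimal sketch that locates the period-1 UPO (fixed point) of the Hénon map via g(p) = f(p) - p with a finite-difference Jacobian (the Hénon map is a stock textbook example, not necessarily one of the thesis systems):

```python
def henon(p, a=1.4, b=0.3):
    """Classic Hénon map (x, y) -> (1 - a*x^2 + y, b*x)."""
    x, y = p
    return (1.0 - a * x * x + y, b * x)

def newton_fixed_point(f, p0, tol=1e-12, max_iter=50, h=1e-7):
    """Locate a fixed point of a 2D map f (a period-1 UPO) by Newton's
    method on g(p) = f(p) - p, with a finite-difference Jacobian of g."""
    x, y = p0
    for _ in range(max_iter):
        gx = f((x, y))[0] - x
        gy = f((x, y))[1] - y
        if abs(gx) + abs(gy) < tol:
            break
        # finite-difference Jacobian of g
        j11 = (f((x + h, y))[0] - (x + h) - gx) / h
        j12 = (f((x, y + h))[0] - x - gx) / h
        j21 = (f((x + h, y))[1] - y - gy) / h
        j22 = (f((x, y + h))[1] - (y + h) - gy) / h
        det = j11 * j22 - j12 * j21
        # Newton step: p <- p - J^{-1} g
        x -= (j22 * gx - j12 * gy) / det
        y -= (-j21 * gx + j11 * gy) / det
    return x, y

xf, yf = newton_fixed_point(henon, (0.5, 0.2))
```

Longer cycles are found the same way by replacing f with its n-fold composition; the difficulty the thesis tackles is that the number of UPOs grows exponentially with n while Newton basins shrink, which is what motivates the transformed-dynamics detection algorithms.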
Transverse nonlinear effects in planar waveguides
NASA Astrophysics Data System (ADS)
Dumais, Patrick
Transverse nonlinear effects due to the non-resonant optical Kerr effect are studied in two types of planar-geometry waveguides. First (Chapter 2), the emission of spatial solitons from a channel waveguide is studied historically, analytically and numerically, with a view to designing and fabricating the device in AlGaAs, in the spectral region below half the band gap of this material, i.e. around 1.5 microns. The device, as designed, includes a multiple-quantum-well structure. Local disordering of this structure allows a local variation of the Kerr coefficient in the guide, which leads to the emission of a spatial soliton above a threshold optical power. The experimental observation of an intensity-dependent change in the field profile at the output of the fabricated guide is presented. Second (Chapter 3), a technique for measuring the Kerr coefficient in a planar waveguide is presented. It consists in measuring the change in transmission through a mask placed at the guide output as a function of the peak intensity at the input of the planar guide. A method for determining the optimal conditions for the sensitivity of the measurement is presented and illustrated with several examples. Finally, the realization of an optical parametric oscillator based on a periodically poled lithium niobate crystal is presented. The theory of optical parametric oscillators is outlined, with emphasis on the generation of intense pulses at wavelengths around 1.5 microns from a Ti:sapphire laser, in order to obtain a source for the soliton-emission experiments.
Impact of upper-level fine-scale structures in the deepening of a Mediterranean "hurricane"
NASA Astrophysics Data System (ADS)
Claud, C.; Chaboureau, J.-P.; Argence, S.; Lambert, D.; Richard, E.; Gauthier, N.; Funatsu, B.; Arbogast, P.; Maynard, K.; Hauchecorne, A.
2009-09-01
Subsynoptic-scale vortices that have been likened to tropical cyclones or polar lows (Medicanes) are occasionally observed over the Mediterranean Sea. They are usually associated with strong winds and heavy precipitation and can thus have highly destructive effects in densely populated regions; only precise forecasting of such systems could mitigate these effects. In this study, the role of an upper-level Potential Vorticity (PV) maximum approaching the vicinity of a Medicane, which appeared early in the morning of 26 September 2006 over the Ionian Sea and moved north-eastwards affecting Apulia, is evaluated using the anelastic non-hydrostatic model Méso-NH initialized with forecasts from ARPEGE, the French operational forecasting system. To this end, in a first step, high-resolution PV fields were determined using a semi-Lagrangian advection model, MIMOSA (Modelisation Isentrope du transport Meso-echelle de l'Ozone Stratospherique par Advection). MIMOSA PV fields at and around 320 K for 25 September 2006 at 1800 UTC clearly show a stratospheric intrusion in the form of a filament crossing the UK, western Europe and the Tyrrhenian Sea. The MIMOSA fields show a number of details that do not appear in ECMWF analysed PV fields, in particular an area of high PV values just west of Italy over the Tyrrhenian Sea. While the overall structure of the filament is well described by the ARPEGE analysis, the high PV values in the Tyrrhenian Sea close to the coast of Italy are missing. To take these differences into account, the ARPEGE upper-level fields were corrected through a PV inversion guided by the MIMOSA fields. The PV modifications in ARPEGE lead to a deeper system and improved rain fields (both in location and intensity) when evaluated against ground-based observations. In a second step, Méso-NH simulations coupled with corrected and non-corrected ARPEGE forecasts were performed.
The impact of the corrections on the intensity, the trajectory and the associated precipitation was evaluated using in situ and satellite observations, in the latter case through a model-to-satellite approach. When the PV corrections are applied, the track of the simulated Medicane is closer to the observed one. The deepening of the low is also better reproduced, even if it is over-estimated (982 hPa instead of 986 hPa), as is the precipitation. This study confirms the role of fine-scale upper-level structures in the short-range forecasting of sub-synoptic vortices over the Mediterranean Sea. It also suggests that ensemble prediction models should include perturbations related to upper-level coherent structures.
Long-term archiving of the annotated three-dimensional digital mock-up
NASA Astrophysics Data System (ADS)
Kheddouci, Fawzi
The use of engineering drawings in the development of mechanical products, both for the exchange of engineering data and for archiving, is common industry practice. Traditionally, paper has been the means of meeting those needs. However, these practices have evolved in favour of computerized tools and methods for the creation, diffusion and preservation of data involved in the development of aeronautical products, whose life cycles can exceed 70 years. It is therefore necessary to redefine how to maintain this data in a context where engineering drawings are being replaced by the 3D annotated digital mock-up. This thesis addresses the issue of long-term archiving of 3D annotated digital mock-ups, which include geometric and dimensional tolerances as well as other notes and specifications, in compliance with the requirements formulated by the aviation industry, including regulatory and legal requirements. First, we review the requirements imposed by the aviation industry in the context of long-term archiving of 3D annotated digital mock-ups, and then consider alternative solutions. We begin by identifying the theoretical approach behind the choice of a conceptual model for long-term digital archiving. We then evaluate, among the proposed alternatives, an archiving format that guarantees the preservation of the integrity of the 3D annotated model (geometry, tolerances and other metadata) and its sustainability. The evaluation of 3D PDF PRC as a potential archiving format is carried out on a sample of 185 CATIA V5 3D models (parts and assemblies) provided by industrial partners, guided by a set of criteria including the transfer of geometry, 3D annotations, views, captures, and part positioning in assemblies. The results indicate that the exact geometry is successfully maintained when transferring CATIA V5 models to 3D PDF PRC.
Concerning the transfer of 3D annotations, we observed degradation in their display on the 3D model. This problem can, however, be solved by converting the native model to STEP first, and then to 3D PDF PRC. Given current tools, 3D PDF PRC is considered a potential solution for the long-term archiving of 3D annotated models of individual parts; however, it is currently not deemed adequate for archiving assemblies. The practice of 2D drawing will thus remain, in the short term, for assemblies.
NURBS-based geometric modelling for the aerodynamic design of aircraft wings
NASA Astrophysics Data System (ADS)
Bentamy, Anas
The constant evolution of computer science gives rise to many research areas, especially in computer-aided design. This study contributes to the advancement of numerical methods in engineering computer-aided design, specifically in aerospace. NURBS-based geometric modelling has been applied successfully to generate a parametric wing surface for aerodynamic design while satisfying manufacturing constraints. The goal of providing a smooth geometry described by few parameters has been achieved: a wing design including ruled surfaces at the leading-edge slat and at the flap, and curved central surfaces whose intrinsic geometric properties come from conic curves, requires 130 control points and 15 geometric design variables. The 3D character of the wing needs to be analyzed with surface-investigation techniques in order to judge the visual aspect and detect any sign inversion in both parametrization directions u and v. Color mapping of the Gaussian curvature proves to be a very effective visualization tool. The automation of the construction has been attained using a heuristic optimization algorithm, simulated annealing; its relatively fast convergence confirms its practical interest for engineering problems. The robustness of the geometric model has been tested successfully on an academic inverse design problem. The results obtained suggest multiple possible applications, from an extension to a complete geometric description of an airplane to interaction with other disciplines in a preliminary aeronautical design process.
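The heuristic named above can be sketched generically; a toy simulated-annealing loop on a one-variable multimodal objective (the thesis applies the same scheme to about 15 NURBS design variables; the cooling schedule, step size and objective here are arbitrary illustrative choices):

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=1):
    """Generic simulated annealing over one design variable: random
    perturbations, always accept downhill moves, accept uphill moves
    with Boltzmann probability exp(-delta/T), geometric cooling."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)
        c_new = cost(x_new)
        if c_new < c or rng.random() < math.exp(-(c_new - c) / t):
            x, c = x_new, c_new
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy multimodal objective with its global minimum near x = 2.
best_x, best_c = anneal(lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(8 * x),
                        x0=-3.0)
```

The high-temperature phase lets the search escape the local wells created by the sine term, which is the property that makes the method attractive for multimodal geometric design problems.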
Development of a precision multimodal surgical navigation system for lung robotic segmentectomy
Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe
2018-01-01
Minimally invasive sublobar anatomical resection is becoming more and more popular for managing early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique among minimally invasive techniques: RATS can better integrate multiple streams of information, including advanced imaging, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated on using the da Vinci System™. 3D modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the Tile Pro multi-display input of the da Vinci console. Our experience is based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. We have progressively reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and has demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual-reality experience by exploring 3D imaging models at the robotic console. The act of operating is being transformed, and the surgeon now oversees a complex system that improves decision making. PMID:29785294
Mechanical behavior and modelisation of Ti-6Al-4V titanium sheet under hot stamping conditions
NASA Astrophysics Data System (ADS)
Sirvin, Q.; Velay, V.; Bonnaire, R.; Penazzi, L.
2017-10-01
The Ti-6Al-4V titanium alloy is widely used in the manufacture of aeronautical and automotive parts. In aeronautics, this alloy is employed for its excellent mechanical behavior combined with low density, outstanding corrosion resistance and good mechanical properties up to 600°C. It is used in particular for fuselage frames and, on the pylon, for the primary structure (machined from forged blocks) and the secondary structure in sheet form. In this last case, sheet-metal forming can be carried out in several ways: at room temperature by a drawing operation, at very high temperature (≃900°C) by superplastic forming (SPF), and at intermediate temperature (≥750°C) by hot forming (HF). To reduce production costs and environmental impact, shortening cycle times and lowering temperature levels are both relevant. This study focuses on modelling the behavior of the Ti-6Al-4V alloy at temperatures above room temperature, to obtain greater formability, and below SPF conditions, to reduce tooling and energy costs. The displacement-field measurement obtained by Digital Image Correlation (DIC) relies on an innovative surface-preparation pattern suited to high-temperature exposure. Material parameters are identified to define a model able to predict the mechanical behavior of the Ti-6Al-4V alloy under hot-stamping conditions, and the identified plastic hardening model is introduced in an FEM code to simulate the forming of an omega-shaped part.
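The abstract does not state which constitutive law was identified; purely as an illustration, hot forming of titanium sheet is often described by a strain- and strain-rate-hardening viscoplastic relation of the Norton-Hoff type, whose parameters are the kind one would identify from such DIC measurements:

```latex
\sigma = K(T)\,\varepsilon^{\,n}\,\dot{\varepsilon}^{\,m}
```

where $\sigma$ is the flow stress, $\varepsilon$ the plastic strain, $\dot{\varepsilon}$ the strain rate, $K(T)$ a temperature-dependent consistency, and $n$, $m$ the hardening and rate-sensitivity exponents.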
[Relapse: causes and consequences].
Thomas, P
2013-09-01
Relapse after a first episode of schizophrenia is the recurrence of acute symptoms after a period of partial or complete remission. Because of its variable presentations, there is no operational definition of relapse able to model the outcome of schizophrenia and measure how treatment modifies the disease. Follow-up studies based on proxies such as hospital admission revealed that 7 out of 10 patients relapse after a first episode of schizophrenia. The effectiveness of antipsychotic medication in relapse prevention has been widely demonstrated, and recent studies argue for advantages of atypical over first-generation antipsychotics. Non-adherence to antipsychotics, together with addictions, represents the main cause of relapse, well ahead of less consensual factors such as premorbid functioning, duration of untreated psychosis and associated personality disorders. The consequences of relapse are multiple: psychological, biological and social. Pharmaco-clinical studies have demonstrated that treatment response decreases with each relapse. Relapse, even the first one, contributes to worsening the outcome of the disease and reduces general functioning. Accepting the idea of continuing treatment is a complex decision in which the psychiatrist plays a central role alongside patients and their families. The development of integrated actions on modifiable risk factors, such as psychosocial support, addictive comorbidities, access to care and the therapeutic alliance, should be promoted. Relapse prevention is a major goal of the treatment of first-episode schizophrenia. It is based on adherence to maintenance treatment, identification of prodromes, active information of families and therapeutic patient education. Copyright © 2013 L'Encéphale. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Vazquez Rascon, Maria de Lourdes
This thesis focuses on the implementation of a participatory and transparent decision-making tool for wind farm projects. The tool is based on an argumentative framework reflecting the value systems of the stakeholders involved in these projects, and it combines two multicriteria methods, multicriteria decision aiding (MCDA) and participatory geographic information systems (GIS), making it possible to represent these value systems by criteria and indicators to be evaluated. The stakeholders' value systems allow the inclusion of environmental, economic and socio-cultural aspects of wind energy projects and thus a sustainable-development vision of wind projects. This vision is analyzed using the 16 principles of Quebec's Sustainable Development Act. Four specific objectives were set to ensure a logical progression of the work and the development of a successful tool: designing a methodology coupling MCDA and participatory GIS, testing the developed methodology on a case study, carrying out a robustness analysis to address strategic issues, and analyzing the strengths, weaknesses, opportunities and threats of the developed methodology. Achieving the first objective yielded a decision-making tool called Territorial Intelligence Modeling for Energy Development (the TIMED approach). The TIMED approach is represented visually by a figure expressing the idea of co-constructed decisions, with all stakeholders at the centre of the methodology. TIMED is composed of four modules: multicriteria decision analysis, participatory geographic information systems, active involvement of the stakeholders, and scientific knowledge/local knowledge. The integration of these four modules allows different wind-turbine implementation scenarios to be analyzed in order to choose the best one through a participatory and transparent decision-making process that takes stakeholders' concerns into account.
The second objective enabled TIMED to be tested in an ex-post study of a wind farm in operation since 2006. Eleven people took part in this test, representing four stakeholder categories: the private sector, the public sector, experts and civil society. The test allowed us to analyze the context in which wind projects are currently developed in Quebec. The concerns of some stakeholders about situations not considered in the current context were explored through a third objective, which allowed simulations taking strategic-level assumptions into account; examples of such strategic levels are the communication tools used to approach the host community and the type of ownership of the wind farm. Finally, for the fourth objective, a SWOT analysis conducted with eight experts verified the extent to which the TIMED approach succeeded in constructing four spaces for participatory decision-making: physical, intellectual, emotional and procedural. From these facts, 116 strengths, 28 weaknesses, 32 constraints and 54 opportunities were identified.
The contributions, applications, limitations and extensions of this research include: providing a participatory decision-making methodology that takes socio-cultural, environmental and economic variables into account; holding reflection sessions on an operating wind farm; the MCDA knowledge acquired by the participants involved in testing the proposed methodology; taking the physical, intellectual, emotional and procedural spaces into account to articulate a participatory decision; using the proposed methodology for renewable energy sources other than wind; the need for an interdisciplinary team to apply the methodology; access to quality data; access to information technologies; the right to public participation; the neutrality of experts; the relationships between experts and non-experts; cultural constraints; improvement of the designed indicators; the implementation of a Web platform for participatory decision-making; and the writing of a manual on the use of the developed methodology. Keywords: wind farm, multicriteria decision, geographic information systems, TIMED approach, sustainable wind energy project development, renewable energy, social participation, robustness concern, SWOT analysis.
Fibre-optic CO{2} sensor based on molecular absorption at 4.3 μm
NASA Astrophysics Data System (ADS)
Bendamardji, S.; Alayli, Y.; Huard, S.
1996-04-01
This paper describes a remote optical-fibre sensor for carbon dioxide detection by molecular absorption in the mid-infrared (4.3 μm), corresponding to the fundamental mode ν3. To overcome the strong attenuation of optical fibre at this wavelength, we used an opto-powering technique that shifts the working wavelength from 4.3 μm to 860 nm and allows standard 50/125 fibre to link the measurement site to the monitoring site. The absorption was simulated through an original modelling of the absorption spectrum, and the resulting calibration curves allow the sensor to detect partial pressures above 100 μbar with a minimum error margin of 100 μbar, which is acceptable for the intended application: the sensor is designed to monitor and regulate the CO{2} level in enriched agricultural greenhouses.
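The calibration rests on molecular absorption along the sensing path; as a sketch of the underlying relation (the absorption coefficient and path length below are placeholder values, not figures from the paper), the partial pressure can be recovered by inverting the Beer-Lambert law:

```python
import math

def partial_pressure(i_ratio, alpha=1e-4, path_cm=10.0):
    """Invert the Beer-Lambert law I/I0 = exp(-alpha * p * L) for the CO2
    partial pressure p (in ubar; alpha in 1/(ubar*cm), an assumed value)."""
    return -math.log(i_ratio) / (alpha * path_cm)

# Synthetic intensity ratio corresponding to p = 150 ubar
p = partial_pressure(math.exp(-1e-4 * 10.0 * 150.0))
```

A real calibration curve would fold in the spectral shape of the ν3 band rather than a single effective coefficient.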
NASA Astrophysics Data System (ADS)
Losseau, Romain
The ongoing energy transition is about to bring important changes in the way we use and manage energy. In this view, smart grids are expected to play a significant part through the use of intelligent storage techniques. Initiated in 2014, the SmartDesc project follows this trend to create an innovative load-management program by exploiting the thermal storage of the electric water heaters already present in residential households. The device control algorithms rely on the recent theory of mean field games to achieve a decentralized control of the water heater temperatures producing an aggregate optimal trajectory, designed to smooth the electric demand of a neighborhood. Currently, this theory does not include the power and temperature constraints imposed by the tank heating system or required for the user's safety and comfort. A trajectory violating these constraints would therefore not be feasible and would not produce the forecast load smoothing. This master's thesis presents a method to detect the non-feasibility of a target trajectory, based on the Kolmogorov equations associated with the controlled electric water heaters, and suggests a way to correct it so as to make it achievable under constraints. First, a model of the water heaters under temperature constraints, based on partial differential equations, is presented. A numerical scheme is then developed to simulate it and applied to the mean field control. The results of the mean field control with and without constraints are compared, and non-feasibilities of the target trajectory are highlighted when violations occur. The last part of the thesis is dedicated to an accelerated version of the mean field algorithm and to a method for correcting the target trajectory so as to enlarge the set of achievable profiles as much as possible.
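The Kolmogorov (Fokker-Planck) equations mentioned above govern how the population density of water-heater temperatures evolves; a minimal explicit finite-difference sketch for a pure-diffusion version with no-flux boundaries (the grid, coefficients and the absence of a drift term are all simplifying assumptions, not the thesis's scheme) is:

```python
def diffuse(density, d=0.1, dx=1.0, dt=0.1, steps=100):
    """Explicit FTCS scheme for dp/dt = d * d2p/dx2 with no-flux
    (reflecting) boundaries; total probability mass is conserved."""
    p = list(density)
    r = d * dt / dx ** 2  # stability requires r <= 0.5
    for _ in range(steps):
        q = p[:]
        for i in range(len(p)):
            left = p[i - 1] if i > 0 else p[i]            # mirror at walls
            right = p[i + 1] if i < len(p) - 1 else p[i]
            q[i] = p[i] + r * (left - 2 * p[i] + right)
        p = q
    return p

# Density initially concentrated on the middle temperature bin
p = diffuse([0.0] * 4 + [1.0] + [0.0] * 4)
```

The controlled model would add a state-dependent drift (the heating decision) to this diffusion, and feasibility checks amount to verifying that the density never needs to cross the constraint boundaries.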
NASA Astrophysics Data System (ADS)
Nguimbus, Raphael
Determining the impact of the controllable and uncontrollable factors that influence the sales volumes of retail outlets selling homogeneous and highly substitutable products is at the heart of this thesis. The aim is to estimate a set of stable, asymptotically efficient coefficients uncorrelated with the random site-specific effects of gasoline stations in the Montreal market (Quebec, Canada) over the period 1993--1997. The econometric model specified and tested isolates a set of four variables: the posted retail price of regular gasoline at a site, the site's service capacity during peak hours, the hours of service, and the number of competing sites within a two-kilometre radius of the site. These four factors influence gasoline sales at service stations. Panel data and robust estimation methods (a minimum-distance estimator) are used to estimate the parameters of the sales model. We start from the general hypothesis that each site develops an attraction force that draws motorist customers and enables it to generate sales. This attraction capacity varies from site to site, owing to the combination of the marketing effort and the competitive environment around the site. The notions of neighbourhood and spatial competition explain the behavior of the decision-makers who manage the sites. The goal of this thesis is to develop a decision-support tool (an analytical model) enabling managers of service-station chains to allocate commercial resources to their outlets efficiently.
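The thesis estimates its coefficients with a minimum-distance estimator; as a simpler, hedged illustration of the panel-data logic (removing site-specific effects before estimating a common slope), the classical within (fixed-effects) estimator for a single regressor can be sketched as:

```python
def within_estimator(y_by_site, x_by_site):
    """Fixed-effects (within) slope estimate: demean y and x within each
    site, then pool the demeaned data into a one-regressor OLS slope."""
    num = den = 0.0
    for ys, xs in zip(y_by_site, x_by_site):
        ybar = sum(ys) / len(ys)
        xbar = sum(xs) / len(xs)
        for y, x in zip(ys, xs):
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    return num / den

# Two sites with the same price slope (-2) but different site effects
beta = within_estimator([[10, 8, 6], [25, 23, 21]], [[1, 2, 3], [1, 2, 3]])
```

Demeaning sweeps out each site's fixed attraction level, so the slope reflects only the within-site response, here to price.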
Study of carrier dynamics in silicon nanowires by terahertz spectroscopy
NASA Astrophysics Data System (ADS)
Beaudoin, Alexandre
This thesis presents a study of the electrical conduction properties and the temporal dynamics of charge carriers in silicon nanowires probed by terahertz radiation. Unintentionally doped and n-doped silicon nanowires are compared for different configurations of the experimental setup. Terahertz transmission spectroscopy measurements show that the presence of dopants in the nanowires can be detected through their absorption of terahertz radiation (˜1--12 meV). The difficulties of modelling the transmission of an electromagnetic pulse through a nanowire system are also discussed. Differential detection, a modification of the terahertz spectroscopy system, is tested and its performance compared with the standard characterization setup; instructions and recommendations for implementing this type of measurement are included. The results of an optical-pump/terahertz-probe experiment are also presented. In this experiment, the charge carriers temporarily created by absorption of the optical pump (λ ˜ 800 nm) in the nanowires (the photocarriers) add to the carriers initially present and therefore increase the absorption of the terahertz radiation. First, the anisotropy of the absorption of both the terahertz radiation and the optical pump by the nanowires is demonstrated. Second, the photocarrier recombination time is studied as a function of the number of injected photocarriers, and a hypothesis explaining the behaviors observed for the undoped and n-doped nanowires is presented. Third, the photoconductivity is extracted for the undoped and n-doped nanowires over a range of 0.5 to 2 THz; a fit of the photoconductivity provides an estimate of the number of dopants in the n-doped nanowires. Keywords: nanowire, silicon, terahertz, conductivity, spectroscopy, photoconductivity.
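The abstract mentions that fitting the extracted photoconductivity yields a dopant-density estimate; a common starting point for such fits is the Drude model (the carrier density, scattering time and effective mass below are illustrative assumptions, not values from the thesis):

```python
import math

def drude_sigma(freq_thz, n_cm3=1e17, tau_fs=50.0):
    """Complex Drude conductivity sigma(w) = (n e^2 tau / m*) / (1 - i w tau),
    with an assumed silicon electron effective mass m* = 0.26 m_e."""
    e = 1.602176634e-19        # elementary charge, C
    m_e = 9.1093837015e-31     # electron mass, kg
    m_eff = 0.26 * m_e
    n = n_cm3 * 1e6            # carrier density converted to 1/m^3
    tau = tau_fs * 1e-15       # scattering time, s
    w = 2.0 * math.pi * freq_thz * 1e12
    return (n * e * e * tau / m_eff) / (1.0 - 1j * w * tau)

s0 = drude_sigma(0.0)   # DC limit, purely real
s1 = drude_sigma(1.0)   # response at 1 THz
```

Nanowire photoconductivities are often better described by Drude-Smith-type corrections for carrier backscattering, but the fit parameters (density and scattering time) play the same role.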
Practical characterization of quantum systems and 2D self-correcting quantum memories
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier
This thesis tackles two major problems of quantum information: How can a quantum system be characterized efficiently? How can quantum information be stored? It is therefore divided into two distinct parts linked by common technical elements, each of independent interest and self-contained. Practical characterization of quantum systems. Quantum computing requires very fine control of quantum systems composed of several particles, for example atoms confined in an electromagnetic trap or electrons in a semiconductor device. Characterizing such a quantum system consists in obtaining information about its state through experimental measurements. However, each measurement perturbs the quantum system and must therefore be performed after the system has been re-prepared in an identical fashion. The desired information is then reconstructed numerically from the full experimental data set. Experiments performed so far have aimed at reconstructing the complete quantum state of the system, in particular to demonstrate the ability to prepare entangled states, in which the particles exhibit non-local correlations. However, the tomography procedure currently in use is only practicable for systems composed of a small number of particles, so finding characterization methods for large systems is urgent. In this thesis, we propose two more targeted theoretical approaches to characterize a quantum system with only a reasonable experimental and numerical effort. The first consists in estimating the distance between the state produced in the laboratory and the target state the experimenter intended to prepare. We present a so-called certification protocol that requires fewer resources than tomography and is very efficient for several classes of states important for quantum information processing.
The second approach, called variational tomography, proposes to reconstruct the state by restricting the search to a variational class rather than the immense space of all possible states. Since a variational state is described by a small number of parameters, a small number of experiments may suffice to identify the variational parameters of the experimental state. We show that this is the case for two widely used variational classes, matrix product states (MPS) and the multi-scale entanglement renormalization ansatz (MERA). 2D self-correcting quantum memories. A self-correcting quantum memory is a physical system that preserves quantum information for a macroscopic amount of time; it would be the quantum equivalent of the hard drives and flash memories of today's computers, and such a device would be of great interest for quantum computing. A self-correcting quantum memory is initialized by preparing a ground state, i.e., a stationary state of lowest energy. In order to store quantum information, several distinct ground states are needed, each corresponding to a different value of the memory; more precisely, the ground space must be degenerate. In this thesis, we focus on systems of particles arranged on a two-dimensional (2D) lattice, like pieces on a chessboard, which are easier to build than 3D systems. We identify two criteria for self-correction. First, the quantum memory must be stable against perturbations from the environment, for example the application of an external magnetic field; this leads us to consider 2D topological systems, whose degrees of freedom are intrinsically robust to local environmental perturbations. Second, the quantum memory must be robust against a thermal environment.
One must ensure that thermal excitations do not bring two distinct ground states to the same excited state, otherwise the information is lost. Our main result shows that no 2D topological system is self-correcting: the environment can change the ground state by randomly moving small packets of energy around, a mechanism consistent with the intuition that every topological system admits localized excitations, or quasiparticles. The interest of this result is twofold. On the one hand, it guides the search for a self-correcting system by showing that it must either (i) be three-dimensional, which is hard to realize experimentally, or (ii) rely on new protection mechanisms going beyond energetic considerations. On the other hand, this result is a first step toward a formal proof of the existence of quasiparticles in any topological system.
NASA Astrophysics Data System (ADS)
Bejaoui, Najoua
Pressurized water reactors (PWRs) constitute the largest fleet of nuclear reactors in operation around the world. Although these reactors have been studied extensively by designers and operators using efficient numerical methods, some calculation weaknesses remain unresolved, given the geometric complexity of the core, such as the analysis of the neutron flux behavior at the core-reflector interface. The standard calculation scheme is a two-step process. In the first step, a detailed calculation at the assembly level with reflective boundary conditions provides homogenized cross-sections for the assemblies, condensed to a reduced number of energy groups; this step is called the lattice calculation. The second step uses the homogenized properties of each assembly to compute reactor properties at the core level; this step is called the full-core (or whole-core) calculation. The decoupling of these two steps introduces a methodological bias, particularly at the core-reflector interface: the periodicity hypothesis used to generate the cross-section libraries becomes less pertinent for assemblies adjacent to the reflector, which is generally represented by one of two models, an equivalent reflector or albedo matrices. The reflector slows down neutrons leaving the reactor and returns them to the core. This effect leads to two fission peaks in the fuel assemblies located at the core-reflector interface, the fission rate increasing because of the greater proportion of re-entrant neutrons; this change in the neutron spectrum reaches deep inside the fuel located on the periphery of the core. To remedy this, we simulated a peripheral assembly reflected with a TMI-PWR reflector and developed an advanced calculation scheme that takes the environment of the peripheral assemblies into account and generates equivalent neutronic properties for the reflector.
This scheme was tested on a core without control mechanisms, loaded with fresh fuel. The results show that an explicit representation of the reflector, together with a calculation of the peripheral assembly using our advanced scheme, corrects the energy spectrum at the core interface and increases the predicted peripheral power by up to 12% compared with the reference scheme.
How to anticipate the assessment of the public health benefit of new medicines?
Massol, Jacques; Puech, Alain; Boissel, Jean-Pierre
2007-01-01
The Public Health Benefit (PHB) of new medicines is a recent and French-specific criterion (October 1999 decree) which is often only partially documented in the transparency files owing to a lack of timely information. At the time of the first reimbursement application for a new medicine to the "Transparency Committee", the file is based exclusively on data from randomised clinical trials. These data come from a global clinical development plan designed long before the new medicine's submission for reimbursement, and this plan does not systematically provide the data needed to assess the PHB. One therefore easily understands the difficulty of anticipating and documenting this recent French criterion. In France, the PHB is both one of the necessary criteria for the reimbursement submission and an indicator for national health policy management. Its assessment also helps to identify the needs and objectives of post-registration studies (nowadays within the responsibilities of the "Drug Economics Committee"). The assessment of the PHB criterion is carried out after, and in addition to, the marketing authorization process. To understand how to anticipate the assessment of a new medicine's PHB, one must consider how it differs from the preliminary step of marketing authorization. Whereas the evaluation for marketing authorization seeks to determine whether the new medicine could be useful in a specific indication, the PHB assessment aims at quantifying the therapeutic benefit in a population, taking into account the reference treatments in that population. A new medicine receives a marketing authorization based on the data of the registration file, which provides information on the clinical benefit of the new medicine in the trial populations and in the context of the trials. The PHB, on the other hand, looks at the effects of the new medicine at the scale of the general population, in real practice.
The PHB components of a new medicine at first submission are the expected response of the new medicine to a public health need, the expected benefit for the health status of the population, and ultimately the expected impact on the health care system. The benefit of a new medicine for the health status of a population is based on public health criteria, which can be morbidity-mortality or quality-of-life criteria. However, few registration files contain these public health criteria from the outset, and the predictive value of the surrogate criteria used in the trials is not always precisely assessed; it is thus difficult to quantify the expected benefit on these public health criteria. Moreover, the data that would make it possible to quantify the new medicine's effects according to the various characteristics of the target population are rarely available. Similarly, the French epidemiological data related to the indication of the new medicine are often not available at the time of the assessment; it is therefore difficult to evaluate the expected number of events that could be avoided if the new medicine reached the market. The authors suggest adapting the clinical development plan for a better documentation of the PHB. They specifically recommend integrating into the trial endpoints criteria that are relevant in terms of public health, and ensuring good heterogeneity of the trial populations. They also suggest starting early enough to collect reliable national epidemiological data and the elements necessary to assess the transposability of the trial results to the French population (ability to target the patients to be treated, adaptation of the healthcare system, etc.). Concerning the epidemiological data, the authors consider that the needs are covered in various ways depending on the disease.
To meet the needs of evaluating new medicines' target populations in specific indications, they recommend using ad hoc studies as much as needed. In addition, epidemiological studies designed for market purposes with an acceptable methodology should not be systematically rejected, but deserve to be presented. To assess the magnitude of the expected theoretical benefit of a new medicine in a population, the authors underline the necessity of having access to study results with criteria related to this objective, and suggest first defining and listing these criteria by disease. Regarding the representativity of the populations, it would be advisable, but unrealistic, to include in trials a population 100% representative of the population to be treated. The effect of the new medicine must therefore be modelled (the "effect model") to be evaluated in the general population. Yet to obtain a reliable effect model, the study population must be sufficiently heterogeneous, which legitimates the demand to ensure good population heterogeneity when decisions about trial methodology are made. When the criteria assessed during the development plan do not correspond to the PHB criteria, the only way to evaluate the number of events related to the PHB criterion is, again, modelling. However, modelling is only possible when the scientific literature has established a reliable correlation between the two types of criteria; in that case, the model should be applied to a French target population to assess the expected benefit. In conclusion, the possibilities of estimating the expected benefit of a new medicine for the health status of a specific population are currently limited. These limitations are regrettable because such an estimate is feasible without disrupting the development plans.
The authors' general recommendations for updating development plans seem especially appropriate, as the additions would benefit not only France but all health authorities wishing to assess the expected benefit of a new medicine in their territories. Anticipating the lack of clinical and epidemiological data, and of data enabling evaluation of the transposability of trial results to real clinical practice, is a sine qua non condition for improving PHB assessment. These needs should be anticipated early enough by pharmaceutical companies, which could for this purpose meet the health authorities and the heads of French public health policy in a consultation. Finally, because of the PHB's universal dimension, it is suggested that the necessary actions and publications be initiated so that the PHB can be acknowledged at the European level.
Modeling and optimization of energy systems using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Hounkonnou, Sessinou M. William
Optimization of thermal and nuclear plants has many economic as well as environmental advantages. The search for new operating points and the use of new tools to achieve this kind of optimization are therefore the subject of many studies. In this spirit, this project aims to optimize energy systems, specifically the secondary loop of the Gentilly-2 nuclear plant, using the extractions of the high- and low-pressure turbines as well as the extraction of the mixture coming from the steam generator. A detailed thermodynamic model of the various pieces of equipment of the secondary loop, such as the feedwater heaters, the moisture separator-reheater, the deaerator, the condenser and the turbine, is developed. We use Matlab (version R2007b, 2007) with a library for the thermodynamic properties of water and steam (XSteam for Matlab, Holmgren, 2006). A model of the secondary loop is then obtained by assembling the different pieces of equipment. Simulation of the equipment and of the complete cycle allowed us to derive two objective functions, namely the net output and the efficiency, which evolve in opposite directions as the extractions vary. Owing to the complexity of the problem, we use a method based on genetic algorithms for the optimization. More precisely, we used a tool developed at the Institut de genie nucleaire named BEST (Boundary Exploration Search Technique), written in VBA* (Visual Basic for Applications), for its ability to converge quickly and to carry out a more exhaustive search at the boundary of the optimal solutions. DDE (Dynamic Data Exchange) is used to link the simulator and the optimizer. The results show that several combinations of extractions still exist that yield a better operating point and improve the performance of the Gentilly-2 secondary loop. *Trademark of Microsoft
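The trade-off described above, net output and efficiency moving in opposite directions as the extraction fractions vary, is what motivates a Pareto-based evolutionary search. The following sketch is a minimal multi-objective genetic algorithm in Python; the two objective functions are toy surrogates, not the thermodynamic model of the abstract, and BEST itself is a VBA tool, so everything here is purely illustrative.

```python
import random

# Toy surrogate objectives (assumptions, NOT the Gentilly-2 thermodynamic model):
# bleeding more steam to the feedwater heaters lowers net turbine output but
# raises cycle efficiency, so the two objectives conflict.
def net_power(x):
    return 100.0 * (1.0 - 0.3 * sum(x) / len(x))   # decreasing in extraction

def efficiency(x):
    return 0.30 + 0.08 * sum(x) / len(x)           # increasing in extraction

def dominates(fa, fb):
    """Pareto dominance for maximization of both objectives."""
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

def pareto_front(pop):
    objs = [(net_power(x), efficiency(x)) for x in pop]
    return [x for x, f in zip(pop, objs)
            if not any(dominates(g, f) for g in objs if g != f)]

def evolve(n_var=3, pop_size=40, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_var)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(n_var)
            child = p1[:cut] + p2[cut:]                    # one-point crossover
            i = rng.randrange(n_var)
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))  # mutation
            children.append(child)
        merged = pop + children
        survivors = pareto_front(merged)                   # elitist: keep nondominated set
        while len(survivors) < pop_size:
            survivors.append(rng.choice(merged))
        pop = survivors[:pop_size]
    return pareto_front(pop)

front = evolve()   # list of nondominated extraction-fraction vectors
```

Because each toy objective is monotone in the mean extraction fraction, every surviving individual traces the power-efficiency trade-off curve, which is the kind of boundary a tool like BEST explores.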
Development and modelisation of a hydro-power conversion system based on vortex induced vibration
NASA Astrophysics Data System (ADS)
Lefebure, David; Dellinger, Nicolas; François, Pierre; Mosé, Robert
2016-11-01
The Vortex Induced Vibration (VIV) phenomenon causes mechanical issues for bluff bodies immersed in fluid flows and has therefore been studied by numerous authors. Moreover, increasing demand for energy calls for the development of alternative, complementary and renewable energy solutions. The main idea of the EauVIV project is to exploit VIV rather than suppress it. When rounded objects are immersed in a fluid flow, vortices form and are shed on their downstream side, creating a pressure imbalance that results in an oscillatory lift. A converter module consists of a rigid cylinder elastically mounted on end springs, undergoing flow-induced motion when exposed to a transverse fluid flow. The shed vortices induce cyclic lift forces in alternating directions on the circular bar and cause the cylinder to vibrate up and down. An experimental prototype was developed and tested in a free-surface water channel and is already able to recover energy from free-stream velocities between 0.5 and 1 m.s-1. However, the large number of parameters (stiffness, damping coefficient, flow velocity, etc.) governing its performance requires optimization, and we chose to develop a complete three-dimensional numerical model. The 3D numerical model was developed to represent the real system's behavior and improve it through, for example, the addition of parallel cylinders. The model was built in three phases. The first consists in establishing a 2D model to choose the turbulence model and quantify the dependence of the oscillation amplitudes on the mesh size. The second corresponds to a 3D simulation, first with the cylinder at rest and then with vertical oscillation. The third and final phase is a comparison between the experimental system's dynamic behavior and its numerical model.
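A first-order picture of the converter module is a mass-spring-damper cylinder driven by a harmonic lift at the vortex-shedding frequency f_s = St*U/D. The sketch below integrates that one-degree-of-freedom model; all parameter values are assumptions, and real VIV involves lock-in and wake feedback (often captured with wake-oscillator models) that this forced-lift picture deliberately ignores.

```python
import math

def simulate_viv(U=0.8, D=0.05, m=2.0, k=50.0, c=0.5, St=0.2,
                 rho=1000.0, CL=0.8, span=0.3, dt=1e-3, T=20.0):
    """One-DOF sketch: cylinder of mass m on springs of stiffness k with
    damping c, driven by a harmonic lift at the vortex-shedding frequency
    f_s = St*U/D. All values are assumed round numbers."""
    fs = St * U / D                           # vortex-shedding frequency [Hz]
    F0 = 0.5 * rho * U**2 * D * span * CL     # lift-force amplitude [N]
    y, v, t, ymax = 0.0, 0.0, 0.0, 0.0
    while t < T:
        F = F0 * math.sin(2.0 * math.pi * fs * t)
        a = (F - c * v - k * y) / m           # Newton's second law
        v += a * dt                           # semi-implicit Euler step
        y += v * dt
        ymax = max(ymax, abs(y))
        t += dt
    return ymax

amp = simulate_viv()   # peak transverse displacement [m]
```

Sweeping U, k or c in this toy model already shows why the prototype's many parameters call for systematic optimization: the response amplitude peaks sharply when the shedding frequency approaches the natural frequency sqrt(k/m)/(2*pi).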
NASA Astrophysics Data System (ADS)
Basiuk, V.; Huynh, P.; Merle, A.; Nowak, S.; Sauter, O.; Contributors, JET; the EUROfusion-IM Team
2017-12-01
The neoclassical tearing modes (NTM) increase the effective heat and particle radial transport inside the plasma, leading to a flattening of the electron and ion temperature and density profiles at a location determined by the rational surface of the safety factor q (Hegna and Callen 1997 Phys. Plasmas 4 2940). In a burning plasma such as ITER, this NTM-induced transport increase could significantly reduce fusion performance and even lead to a disruption. Validating models describing NTM-induced transport in present experiments is thus important to help quantify this effect on future devices. In this work, we apply an NTM model to an integrated simulation of current, heat and particle transport on JET discharges using the European Transport Simulator. In this model, the heat and particle radial transport coefficients are modified by a Gaussian function locally centred at the NTM position and characterized by a full width proportional to the island size through a constant parameter adapted to obtain the best simulations of the experimental profiles. In the simulation, the NTM model is turned on at the same time as the mode is triggered in the experiment. The island evolution is itself determined by the modified Rutherford equation, using self-consistent plasma parameters determined by the transport evolution. The simulation reproduces the experimental measurements within the error bars, before and during the NTM. A small discrepancy is observed in the radial location of the island, due to a shift of the computed q = 3/2 surface compared to the experimental one. To explain this small shift (up to about 12% with respect to the position inferred from the experimental electron temperature profiles), sensitivity studies of the NTM location as a function of the initialization parameters are presented. First results validate both the transport model and the transport modification calculated by the NTM model.
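The transport modification described above can be sketched directly: a background diffusivity plus a Gaussian bump centred at the island position, with full width proportional to the island size. The bump amplitude and the proportionality constant below are placeholders; in the work summarized here, the constant is adapted to best reproduce the experimental profiles.

```python
import math

def chi_with_ntm(rho, chi0, rho_ntm, w_island, c_width=1.5, delta_chi=5.0):
    """Radial diffusivity: background chi0 plus a Gaussian bump centred at the
    island position rho_ntm, with full width at half maximum proportional to
    the island size w_island. c_width and delta_chi are placeholder values."""
    sigma = c_width * w_island / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    return chi0 + delta_chi * math.exp(-0.5 * ((rho - rho_ntm) / sigma) ** 2)

# Diffusivity profile on a normalized-radius grid, island assumed at rho = 0.6
profile = [chi_with_ntm(r / 100, 1.0, 0.6, 0.05) for r in range(101)]
```

Far from the island the profile returns to the background value, so only the region around the rational surface sees the enhanced transport that flattens the temperature and density profiles.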
NASA Astrophysics Data System (ADS)
Quiers, M.; Gateuille, D.; Perrette, Y.; Naffrechoux, E.; David, B.; Malet, E.
2017-12-01
Soils are a key compartment of hydrosystems, especially in karst aquifers, which are characterized by fast hydrologic responses to rainfall. In a steady state, soils are efficient filters protecting karst water from pollution. But agricultural or forestry land uses can alter or even reverse this role, so that soils act as pollution sources rather than pollution filters. To manage water quality together with human activities in karst environments, new tools and procedures designed to monitor the fate of soil organic matter are needed. This study reports two complementary methods applied in a mountain karst system impacted by human activities and environmental stresses. Continuous monitoring of water fluorescence coupled with point sampling was analyzed by chemometric methods, allowing discrimination of the type of organic matter transferred through the karst system over the year (winter/summer) and across hydrological stages. As a main result, the modelled organic carbon flux is dominated by a colloidal or particulate fraction during high waters, and by a mainly dissolved fraction during low waters, demonstrating a change of organic carbon source. To confirm this result, a second method was used, based on the observation of Polycyclic Aromatic Hydrocarbon (PAH) profiles. Two previous studies (Perrette et al. 2013, Schwarz et al. 2011) reached opposite conclusions about the fate of PAHs from soil to groundwater. This opposition makes PAH profiles (low-molecular-weight, less hydrophobic compounds versus high-molecular-weight, more hydrophobic ones) a potential indicator of soil erosion. We validate that use by the analysis of these PAH profiles at low and high waters (floods). These results demonstrate, if needed, the high vulnerability of karst systems to soil erosion, and propose a new proxy to record soil erosion in groundwater and in natural archives such as stalagmites or sediments.
Modeling of adaptive composite materials fitted with shape memory alloy actuators
NASA Astrophysics Data System (ADS)
Simoneau, Charles
Technological development of structures capable of adapting themselves to different operating conditions is increasing in many areas of research, such as aerospace. Indeed, numerous works are now oriented toward the design of adaptive aircraft wings, where the goal is to enhance the aerodynamic properties of the wing. Following this approach, the work carried out in this master's thesis presents the steps leading to a numerical model that can predict the behavior of an adaptive panel and, eventually, of an adaptive aircraft wing. First, the adaptive panel of this project was designed from a carbon-epoxy composite, acting as host structure, in which shape memory alloy (SMA) wires, acting as actuators, were embedded. The SMA actuators were embedded asymmetrically through the panel thickness in order to generate a bending moment when the wires are activated. To model such a structure, it was first shown that a numerical model composed only of solid finite elements could represent the panel. However, a second numerical model composed of shell, beam and link finite elements showed that identical results can be obtained with far fewer nodes (the first model comprised more than 300 000 nodes, compared with 1 000 for the second). The shell-beam-link combination was therefore chosen. Second, a constitutive relation was needed to model the particular behavior of SMAs. In the present work, a uniaxial version of Likhachev's model is used. Thanks to its fairly straightforward mathematical formulation, this material law is able to model the main functional properties of SMAs, including the two-way shape memory effect (TWSME) at zero stress obtained after a thermomechanical training treatment.
The last step was to compare the results of the numerical simulations with those obtained with a prototype in which 19 actuators were embedded in a 425 mm x 425 mm composite panel. Various load cases were performed. However, during the experimental tests, it was found that the measured actuator temperature was systematically underestimated. Consequently, when comparing the radius of curvature (rho) of the panel as a function of the activation temperature (T) of the actuators, an offset (in temperature) between the numerically and experimentally obtained curves is observable. Aside from this technological difficulty, the experimental and numerical results are very similar, and the numerical model can therefore be used to predict the behavior of an adaptive panel. In addition, one of the main advantages of this numerical model is its versatility: it was shown that a "warping" of the panel could be achieved by controlling each actuator independently. Future work should now focus on the temperature measurement, while considering improvement of the numerical model and the possibility of modeling an initially curved adaptive panel whose shape could resemble an aircraft wing.
Samouda, Hanen; Ruiz-Castell, Maria; Bocquet, Valery; Kuemmerle, Andrea; Chioti, Anna; Dadoun, Frédéric; Kandala, Ngianga-Bakwin; Stranges, Saverio
2018-01-01
The analyses of geographic variations in the prevalence of major chronic conditions, such as overweight and obesity, are an important public health tool to identify "hot spots" and inform the allocation of funding for policy and health promotion campaigns, yet they are rarely performed. Here we aimed to explore, for the first time in Luxembourg, potential geographic patterns in overweight/obesity prevalence in the country, adjusted for several demographic, socioeconomic, behavioural and health status characteristics. Data came from 720 men and 764 women, 25-64 years old, who participated in the European Health Examination Survey in Luxembourg (2013-2015). To investigate the geographical variation, a geo-additive semi-parametric mixed model and Bayesian modelling based on Markov Chain Monte Carlo techniques for inference were used. Large disparities in the prevalence of overweight and obesity were found between municipalities, with the highest rates of obesity in 3 municipalities located in the South-West of the country. The Bayesian approach also underlined a nonlinear effect of age on overweight and obesity in both genders (significant in men) and highlighted the following risk factors: 1. country of birth, for overweight in men born in a non-European country (Posterior Odds Ratio (POR): 3.24 [1.61-8.69]) and women born in Portugal (POR: 2.44 [1.25-4.43]); 2. low educational level (secondary or below), for overweight (POR: 1.66 [1.06-2.72]) and obesity (POR: 2.09 [1.05-3.65]) in men; 3. single marital status, for obesity in women (POR: 2.20 [1.24-3.91]); 4. fair (men: POR: 3.19 [1.58-6.79]; women: POR: 2.24 [1.33-3.73]) to very bad health perception (men: POR: 15.01 [2.16-98.09]), for obesity; 5. sleeping more than 6 hours, for obesity in unemployed men (POR: 3.66 [2.02-8.03]).
Protective factors were: 1. single marital status, against overweight (POR: 0.60 [0.38-0.96]) and obesity (POR: 0.39 [0.16-0.84]) in men; 2. being widowed, against overweight in women (POR: 0.30 [0.07-0.86]), as well as a non-European country of birth (POR: 0.49 [0.19-0.98]), tertiary education (POR: 0.34 [0.18-0.64]), moderate alcohol consumption (POR: 0.54 [0.36-0.90]) and aerobic physical activity (POR: 0.44 [0.27-0.77]), against obesity in women. A double burden of environmental exposure due to historic mining and industrial activities and past economic vulnerability in the South-West of the country may have contributed to the higher prevalence of obesity found in this region. Other demographic, socioeconomic, behavioural and health status covariates may have been involved as well.
NASA Astrophysics Data System (ADS)
Zidane, Shems
This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale-height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a diverse variety of targets: agricultural land, water bodies, urban areas and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets, and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at 3 different altitudes into apparent reflectance, and the insertion of the plume optics into an atmospheric correction model, permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results were consistent with the validation reflectances derived from the lowest-altitude radiances, effectively confirming the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are of course taken into account in most atmospheric correction models, but these assume monotonically decreasing aerosol loading with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, the existing models must be adapted to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model.
The main inputs of this study were those normally used in an atmospheric correction: the apparent at-sensor radiance and the aerosol optical depth (AOD) acquired using ground-based sun photometry. The procedure employed a standard atmospheric correction code (CAM5S, for Canadian Modified 5S, derived from the 5S radiative transfer model in the visible and near infrared); however, we also used other parameters and data to adapt it and correctly model the special atmospheric situation that affected the multi-altitude images acquired during the St. Jean field campaign. We then developed a modeling protocol for these atmospheric perturbations in which auxiliary data complemented our main data set. This allowed the development of a robust and simple methodology adapted to this atmospheric situation. The auxiliary data, i.e. meteorological data, LIDAR profiles, various satellite images and sun photometer retrievals of the scattering phase function, were sufficient to accurately model the observed plume in terms of an unusual vertical distribution. This distribution was transformed into an aerosol optical depth profile that replaced the standard profile employed in the CAM5S atmospheric correction model. Based on this model, a comparison between the apparent ground reflectances obtained after atmospheric correction and validation values of R*(0) obtained from the lowest-altitude data showed that the error between the two was less than 0.01 rms. This correction was shown to be a significantly better estimate of the surface reflectance than that obtained using the standard atmospheric correction model.
Significant differences were nevertheless observed in the non-standard solution: these were mainly caused by the difficulties of the acquisition conditions, by disparities attributable to inconsistencies in the co-sampling/co-registration of different targets from three different altitudes, and possibly by modeling and/or calibration errors. There is accordingly room for improvement in our approach to dealing with such conditions. The modeling and forecasting of such a disturbance is explicitly described in this document, our goal being to permit the establishment of a better protocol for the acquisition of more suitable supporting data. The originality of this study stems from a new approach for incorporating a plume structure into an operational atmospheric correction model, and from demonstrating that this approach is a significant improvement over one that ignores the perturbations in the vertical profile while employing the correct overall AOD. The profile model we employed was simple and robust but captured sufficient plume detail to achieve significant improvements in atmospheric correction accuracy. The overall process of addressing all the problems encountered in the analysis of our aerosol perturbation helped us build an appropriate methodology for characterizing such events based on freely distributed data accessible to the scientific community. This makes our study adaptable and exportable to other types of non-standard atmospheric events. Keywords: non-standard atmospheric perturbation, multi-altitude apparent radiances, smoke plume, Gaussian plume modelization, radiance fit, AOD, CASI
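Two of the quantitative steps mentioned above can be sketched: the conversion of at-sensor radiance to apparent reflectance, and the replacement of the scale-height AOD profile with a Gaussian plume layer. Both functions below are minimal illustrations with assumed values (plume height, layer width, irradiance); they are not the CAM5S implementation.

```python
import math

def apparent_reflectance(L, E0, theta_s_deg, d=1.0):
    """At-sensor radiance L [W m-2 sr-1 um-1] to apparent reflectance:
    rho* = pi * L * d^2 / (E0 * cos(theta_s)), with E0 the exoatmospheric
    solar irradiance and d the Earth-Sun distance in AU."""
    return math.pi * L * d**2 / (E0 * math.cos(math.radians(theta_s_deg)))

def aod_above(z_km, tau_total, z_c=2.0, sigma_km=0.4):
    """AOD remaining above altitude z for a Gaussian plume layer centred at
    z_c (assumed values), replacing the usual exponential scale-height profile."""
    frac_below = 0.5 * (1.0 + math.erf((z_km - z_c) / (sigma_km * math.sqrt(2.0))))
    return tau_total * (1.0 - frac_below)

r_app = apparent_reflectance(50.0, 1850.0, 30.0)
tau_above_sensor = aod_above(3.17, 0.5)   # little aerosol left above 3.17 km
```

With the plume layer assumed centred at 2 km, almost all of the total AOD lies below the 3.17 km flight line, consistent with the observation that the highest-altitude reflectances were above most of the plume.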
NASA Astrophysics Data System (ADS)
Leger, Michel T.
Energy-intensive human activities such as heavy automobile use, overconsumption of goods and excessive electricity use contribute to climate change and other environmental problems. Although several studies report that people are increasingly aware of their impact on the planet's climate, the same studies indicate that, in general, people continue to behave in non-ecological ways. Whether at school or in the community, many researchers in environmental education believe that a well-intentioned person is capable of adopting more environmentally respectful behaviours. The goal of this thesis was to understand the process by which climate change mitigation behaviours are integrated into families. To this end, we set two objectives: 1) to describe the competencies and processes that favour the adoption of climate change mitigation behaviours in families, and 2) to describe the factors and family dynamics that facilitate and limit the adoption of such behaviours in families. Families were invited to try out personal and collective climate change mitigation behaviours so as to integrate more ecological lifestyles. Over a period of eight months, we followed their experience of change in order to better understand how the change process unfolds in families that voluntarily decide to adopt climate change mitigation behaviours. After providing them with some basic knowledge about climate change, we observed the families' lived experience of change over eight months of trials, using reflective journals, explicitation interviews and the researcher's journal. The thesis comprises three scientific articles.
In the first article, we present a literature review on environmental behaviour change. We also explore the family as a functional system, so as to better understand this context of environmental action, which to our knowledge has been little studied. In the second article, we present our research results concerning the influencing factors observed as well as the competencies displayed during the process of adopting new environmental behaviours in three families. Finally, the third article presents the results of the case of a fourth family whose members have long lived ecological lifestyles. Within a grounded theory analysis approach, the study of this model case allowed us to deepen the conceptual categories identified in the second article, so as to produce a model of the integration of environmental behaviours in the family context. The conclusions drawn from the literature review allowed us to identify the elements that could influence the adoption of environmental behaviours in families. The review also provided a better understanding of the various factors that can affect the adoption of environmental behaviours and, finally, allowed us to better delineate the phenomenon of behaviour change in the context of the family considered as a system. Applying an inductive analysis process to our qualitative data, the results of our multi-case study indicated that two conceptual constructs appear to influence the adoption of environmental behaviours in families: 1) biospheric values shared within the family, and 2) the competencies collectively brought to bear during the trial of new environmental behaviours.
Our model of the change process in families also indicates that a collaborative family dynamic and the presence of an external support group are two conceptual elements that tend to influence the two main constructs and thereby tend to increase the chances of integrating new environmental behaviours in families. In conclusion, we present the limitations of our research as well as avenues for future research. In particular, we recommend that schools welcome students' families in environmental education activities where students' brothers, sisters and parents can learn together at school. For example, we recommend conducting action research in environmental education on the intergenerational learning of new behaviours in the family context. Keywords: environmental education, family environmental behaviour, family behaviour change, biospheric values, action competencies.
NASA Astrophysics Data System (ADS)
Daïf, A.; Ali Chérif, A.; Bresson, J.; Sarh, B.
1995-10-01
The vaporization of one or two multi-component fuel droplets in a hot air stream is presented. A thermal wind tunnel with an experimental channel was designed for this study. First, a comparison between experimental results and numerical data is presented for the case of an isolated multi-component droplet. The numerical method is based on solving the heat and mass transfer equations between the droplet and the gas stream. The model includes the effect of Stefan flow, the variation of the thermophysical properties of the components in both phases, and a non-unity Lewis number in the vapour film. Beyond the deeper analysis afforded by the comparison between calculation and experiment, the experimental results show the micro-explosion phenomenon observed in the liquid phase of a multi-component droplet at low temperature. The experimental case of two droplets in interaction, whether of pure fuel or of a mixture, is also presented.
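As a baseline for the evaporation behaviour discussed above, a single-component droplet in a hot stream classically follows the d-squared law, from which multi-component effects such as micro-explosion appear as deviations. A minimal sketch, with an assumed evaporation constant:

```python
import math

def droplet_diameter(d0, K, t):
    """Classical d^2 law for a single-component droplet: d(t)^2 = d0^2 - K*t,
    with K the evaporation constant [m^2/s]; returns 0 once fully evaporated."""
    d2 = d0**2 - K * t
    return math.sqrt(d2) if d2 > 0.0 else 0.0

def droplet_lifetime(d0, K):
    """Time at which the squared diameter reaches zero."""
    return d0**2 / K

# 100-micron droplet with an assumed K of 1e-7 m^2/s
tau_life = droplet_lifetime(100e-6, 1e-7)
d_mid = droplet_diameter(100e-6, 1e-7, tau_life / 2)
```

For a multi-component droplet, the heat and mass transfer equations must instead be solved per species, which is precisely where differential volatility can superheat trapped light components and trigger micro-explosion.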
Design and optimization of a composite skin for an adaptive wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. It is in this context that project MDO-505, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born. The objective of this project is to design an active adaptive wing that improves laminarity and thus reduces the aircraft's fuel consumption and emissions. The research carried out made it possible to design and optimize an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimization method was developed with the objective of minimizing the mass of the composite skin while ensuring that, through active control of the morphing surface, it conforms to the desired aerodynamic profiles. The optimization process also included strength, stability and stiffness constraints on the composite skin. Following optimization, the optimized skin was simplified to ease manufacturing and to comply with Bombardier Aerospace design rules. This optimization process produced a composite skin whose shape deviations or errors were greatly reduced, so as to best match the optimized aerodynamic profiles. Aerodynamic analyses based on these shapes predicted good improvements in laminarity. Subsequently, a series of analytical validations was performed to verify the structural integrity of the composite skin following the methods generally used by Bombardier Aerospace. First, a comparative finite element analysis validated that the stiffness of the adaptive wing was equivalent to that of the original wing section.
The finite element model was then coupled with calculation spreadsheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an in-house tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loads and material allowables.
Numerical models of plasmonic nanoparticle-assisted optical stimulation of neurons
NASA Astrophysics Data System (ADS)
Le Hir, Nicolas
Laser stimulation of neurons has emerged in recent years as an alternative to more traditional artificial stimulation techniques. Unlike those techniques, light stimulation does not require direct interaction with the organic tissue, as is the case for electrode stimulation, and does not require genetic manipulation, as is the case for optogenetic methods. More recently, nanoparticle-assisted light stimulation of neurons has emerged as a complement to purely optical stimulation. The use of complementary nanoparticles increases the spatial precision of the process and reduces the fluence needed to observe the phenomenon. This stems from the interaction properties between the nanoparticles and the laser beam, such as the nanoparticles' absorption properties. Two main phenomena are observed. In some cases, a membrane depolarization, or an action potential, occurs. In other experiments, a calcium influx into the neuron is detected through an increase in the fluorescence of a protein sensitive to calcium concentration. Some stimulations are global, that is, a perturbation propagates to the whole neuron: this is the case for an action potential. Others, by contrast, are local and do not propagate to the whole cell. While global light stimulation is made possible by relatively well-mastered techniques such as optogenetics, a purely local stimulation is more difficult to achieve. Yet it appears that nanoparticle-assisted light stimulation methods may, under certain conditions, offer this possibility.
This would be of great help for conducting new studies on neuron function, offering new experimental possibilities in addition to current ones. However, the physical mechanism underlying the light stimulation of neurons, as well as the one underlying nanoparticle-assisted light stimulation, is not yet fully understood. Hypotheses have been formulated regarding this mechanism: it could be photothermal, photomechanical, or photochemical. Several mechanisms may also be at work jointly, given the variety of observations. The literature does not converge on this point, and the existence of a mechanism common to the different situations has not been demonstrated.
Analysis of the energy interactions between an arena and its refrigeration system
NASA Astrophysics Data System (ADS)
Seghouani, Lotfi
This thesis is part of a strategic project on arenas funded by NSERC (the Natural Sciences and Engineering Research Council of Canada), whose main goal is the development of a numerical tool capable of estimating and optimizing energy consumption in arenas and curling rinks. Our work follows on from work already carried out by DAOUD et al. (2006, 2007), who developed a transient 3D model (AIM) of the Camilien Houde arena in Montreal that computes heat fluxes through the building envelope as well as temperature and humidity distributions over a typical meteorological year. In particular, it computes heat fluxes through the ice sheet due to convection, radiation and condensation. We first developed a model of the structure beneath the ice (BIM) that accounts for its 3D geometry, the different layers, transient effects, heat gains from the ground below and around the arena under study, and the brine inlet temperature in the concrete slab. The BIM was then coupled to the AIM. In a second stage, we developed a quasi-steady-state model of the refrigeration system (REFSYS) for the arena under study, based on a combination of thermodynamic relations, heat transfer correlations and relations derived from data available in the manufacturer's catalogue. Finally, the AIM+BIM and the REFSYS were coupled within the TRNSYS software environment. Several parametric studies were undertaken to evaluate the effects of climate, brine temperature, ice thickness, etc., on the arena's energy consumption. A few strategies to reduce this consumption were also studied.
The considerable heat recovery potential at the condensers, which can reduce the energy required by the arena's ventilation system, was highlighted. Keywords: arena, refrigeration system, energy consumption, energy efficiency, ground conduction, annual performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benoist, P.
The calculation of diffusion coefficients in a lattice necessitates the knowledge of a correct method of weighting the free paths of the different constituents. An unambiguous definition of this weighting method is given here, based on the calculation of leakages from a zone of a reactor. The formulation obtained, which is both simple and general, reduces the calculation of diffusion coefficients to that of collision probabilities in the different media; it reveals in the expression for the radial coefficient the series of the terms of angular correlation (cross terms) recently shown by several authors. This formulation is then used to calculate the practical case of a classical type of lattice composed of a moderator and a fuel element surrounded by an empty space. Analytical and numerical comparison of the expressions obtained with those inferred from the theory of BEHRENS shows up the importance of several new terms, some of which are linked with the transparency of the fuel element. Cross terms up to the second order are evaluated. A practical formulary is given at the end of the paper. (author)
0D/1D modeling of soot particle emissions in aeronautical gas turbines
NASA Astrophysics Data System (ADS)
Bisson, Jeremie
Because of more stringent regulations on aircraft particle emissions, as well as strong uncertainties about their formation and their effects on the atmosphere, a better understanding of particle microphysical mechanisms and their interactions with engine components is required. This thesis focuses on the development of a 0D/1D combustion model with soot production in an aeronautical gas turbine. A major objective of this study is to assess the quality of soot particle emission predictions for different flight configurations. The model should eventually allow parametric studies on current or future engines with minimal computation time. The model represents the combustor, the turbines and the nozzle with a chemical reactor network (CRN) coupled with detailed combustion chemistry for kerosene (Jet A-1) and a soot particle dynamics model using the method of moments. The CRN was applied to the CFM56-2C1 engine during flight configurations of the LTO (Landing-Take-Off) cycle, as in the APEX-1 study on aircraft particle emissions. The model was mainly validated against gas turbine thermodynamic data and pollutant concentrations (H2O, COx, NOx, SOx) measured in the same study. Once this first validation was completed, the model was used to compute the mass- and number-based emission indices of the soot particle population and its average diameter. Overall, the model is representative of the thermodynamic conditions and succeeds in predicting the emissions of major pollutants, particularly at high power. Concerning soot particulate emissions, the model's ability to predict simultaneously the emission indices and the mean diameter has been partially validated. Indeed, the mass emission indices remain higher than the experimental results, particularly at high power.
These differences in particulate emission indices may result from uncertainties in the thermodynamic parameters of the CRN and in the mass air flow distribution in the combustion chamber. The analysis of the number-based emission index profile along the CRN also highlights the need to revise the nucleation model used and to consider implementing a particle aggregation mechanism in the future.
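The method-of-moments treatment mentioned above can be illustrated with a minimal two-moment sketch: only the number density M0 and mass density M1 are tracked, with nucleation adding particles of a fixed mass and coagulation reducing the particle number while conserving mass. All rate constants and sizes below are illustrative placeholders, not the thesis's detailed kinetics.

```python
import numpy as np

RHO_SOOT = 1800.0   # soot density, kg/m^3
m_nuc = 2.0e-23     # mass of a nucleated particle, kg (assumed)
J_nuc = 1.0e16      # nucleation rate, particles/m^3/s (assumed)
k_coag = 1.0e-15    # coagulation kernel, m^3/s (assumed)

def step(M0, M1, dt):
    """Advance the two moments one explicit-Euler step."""
    dM0 = J_nuc - k_coag * M0**2   # nucleation minus coagulation losses
    dM1 = J_nuc * m_nuc            # coagulation conserves total mass
    return M0 + dM0 * dt, M1 + dM1 * dt

def mean_diameter(M0, M1):
    """Mean spherical diameter implied by the first two moments."""
    m_mean = M1 / M0
    return (6.0 * m_mean / (np.pi * RHO_SOOT)) ** (1.0 / 3.0)

# Integrate over 1 s of residence time, starting from fresh nuclei.
M0, M1 = 1.0e12, 1.0e12 * m_nuc
for _ in range(10000):
    M0, M1 = step(M0, M1, 1.0e-4)
```

With these rates the number density relaxes toward the balance between nucleation and coagulation, sqrt(J_nuc / k_coag), while the mean diameter grows as coagulation concentrates mass in fewer particles.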
Numerical simulation of ice accretion on a wind turbine blade
NASA Astrophysics Data System (ADS)
Fernando, Villalpando
The wind energy industry is growing steadily, and an excellent place for the construction of wind farms is northern Quebec. This region has huge wind energy production potential, as the cold temperatures increase air density and with it the available wind energy. However, some issues associated with arctic climates cause production losses on wind farms. Icing conditions occur frequently, as high air humidity and freezing temperatures cause ice to build up on the blades, resulting in wind turbines operating suboptimally. One of the negative consequences of ice accretion is degradation of the blade's aerodynamics, in the form of a decrease in lift and an increase in drag. Also, the ice grows unevenly, which unbalances the blades and induces vibration. This reduces the expected life of some of the turbine components. If the ice accretion continues, the ice can reach a mass that endangers the wind turbine structure, and operation must be suspended in order to prevent mechanical failure. To evaluate the impact of ice on the profits of wind farms, it is important to understand how ice builds up and how much it can affect blade aerodynamics. In response, researchers in the wind energy field have attempted to simulate ice accretion on airfoils in refrigerated wind tunnels. Unfortunately, this is an expensive endeavor, and researchers' budgets are limited. However, ice accretion can be simulated more cost-effectively and with fewer limitations on airfoil size and air speed using numerical methods. Numerical simulation is an approach that can help researchers acquire knowledge in the field of wind energy more quickly. For years, the aviation industry has invested time and money developing computer codes to simulate ice accretion on aircraft wings. Nearly all these codes are restricted to use by aircraft developers, and so they are not accessible to researchers in the wind engineering field. 
Moreover, these codes have been developed to meet aeronautical industry specifications, which differ from those that must be met in the wind energy industry. Among these differences are the following: wind turbines operate at subsonic speeds; the chords and angles of attack of wind turbine blades are smaller than those of aircraft wings; and a wind turbine can operate with a larger ice mass on its blades than an aircraft can. So, it is important to provide wind energy researchers with tools specifically validated against the operating parameters of a wind turbine. The main goal of this work is to develop a methodology to simulate ice accretion in 2D using Fluent and Matlab, commercial software programs that are available at nearly all research institutions. In this study, we used Gambit, previously the companion meshing tool of Fluent, which has since been replaced by ICEM. We decided to stay with Gambit because we were already deeply involved with the meshing procedure for our ice accretion simulations at the time Gambit was removed from the market. We validate the methodology against experimental data consisting of iced airfoil contours obtained in a refrigerated wind tunnel using the parameters of actual icing conditions recorded in northern Quebec. The methodology consists of four steps: airfoil meshing, droplet trajectory calculation, thermodynamic model application, and airfoil contour updating. The total simulation time is divided into several time steps, for each of which the four steps are performed until the total time has elapsed. The time step length depends on the icing conditions. (Abstract shortened by UMI.)
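The four-step accretion loop described above can be sketched schematically. The toy example below grows rime ice on a circular cylinder rather than an airfoil, with an assumed cosine collection efficiency on the windward side; none of the names or numbers come from the thesis's Fluent/Matlab implementation.

```python
import numpy as np

# Illustrative rime-ice assumption: every impinging droplet freezes on impact.
RHO_ICE = 880.0   # rime ice density, kg/m^3 (assumed)
LWC = 0.4e-3      # liquid water content, kg/m^3 (assumed)
V_INF = 20.0      # wind speed, m/s (assumed)

def collection_efficiency(theta):
    """Step 2 (simplified): droplets impinge only on the windward half."""
    return np.maximum(np.cos(theta), 0.0)

def ice_thickness(beta, dt):
    """Step 3 (rime limit of a Messinger-type heat/mass balance)."""
    return beta * LWC * V_INF * dt / RHO_ICE

def accrete(radius, theta, t_total, dt):
    """Steps 1-4 repeated over the icing time; returns the local radii."""
    r = np.full_like(theta, radius)   # step 1: the 'mesh' is a set of polar nodes
    t = 0.0
    while t < t_total:
        beta = collection_efficiency(theta)   # step 2: collection efficiency
        r += ice_thickness(beta, dt)          # steps 3-4: grow and update contour
        t += dt
    return r

theta = np.linspace(-np.pi, np.pi, 181)
r = accrete(0.05, theta, t_total=3600.0, dt=60.0)  # 1 h of icing on a 5 cm cylinder
```

The leeward half of the contour stays at the clean radius, while the stagnation point accumulates the full impinging water flux, reproducing the characteristic windward ice horn profile in its crudest form.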
NASA Astrophysics Data System (ADS)
Altamirano, Felipe Ignacio Castro
This dissertation focuses on the problem of designing rates in the utility sector. It is motivated by recent developments in the electricity industry, where renewable generation technologies and distributed energy resources are becoming increasingly relevant. Both technologies disrupt the sector in unique ways. While renewables make grid operations more complex, and potentially more expensive, distributed energy resources enable consumers to interact in two directions with the grid. Both developments present challenges and opportunities for regulators, who must adapt their techniques for evaluating policies to the emerging technological conditions. The first two chapters of this work make the case for updating existing techniques to evaluate tariff structures. They also propose new methods that are more appropriate given the prospective technological characteristics of the sector. The first chapter constructs an analytic tool based on a model that captures the interaction between pricing and investment. In contrast to previous approaches, this technique allows consistent comparison of portfolios of rates while enabling researchers to model the supply side of the sector in significantly greater detail. A key theoretical implication of the model underlying this technique is that, by properly updating the portfolio of tariffs, a regulator could induce the welfare-maximizing adoption of distributed energy resources and enrollment in rate structures. We develop an algorithm to find globally optimal solutions of this model, which is a nonlinear mathematical program. The results of a computational experiment show that the performance of the algorithm dominates that of commercial nonlinear solvers. In addition, to illustrate the practical relevance of the method, we conduct a cost-benefit analysis of implementing time-variant tariffs in two electricity systems, California and Denmark.
Although portfolios with time-varying rates create value in both systems, these improvements differ enough to advise very different policies. While in Denmark time-varying tariffs appear unattractive, they at least deserve further consideration in California. This conclusion is beyond the reach of previous rate-analysis techniques, as they do not capture the interplay between an intermittent supply and a price-responsive demand. While useful, the method we develop in the first chapter has two important limitations. One is the lack of transparency of the parameters that determine demand substitution patterns and demand heterogeneity; the other is the narrow range of rate structures that can be studied with the technique. Both limitations stem from taking a demand function as a primitive. Following an alternative path, in the second chapter we develop a technique based on a pricing model whose fundamental building block is the consumer utility maximization problem. Because researchers do not have to limit themselves to problems with unique solutions, this approach significantly increases the flexibility of the model and, in particular, addresses the limitations of the technique we develop in the first chapter. This gain in flexibility decreases the practicality of our method, since the underlying model becomes a bilevel problem. To be able to handle realistic instances, we develop a decomposition method based on a nonlinear variant of the Alternating Direction Method of Multipliers, which combines conic and mixed-integer programming. A numerical experiment shows that the performance of the solution technique is robust to instance sizes and a wide combination of parameters. We illustrate the relevance of the new method with another applied analysis of rate structures. Our results highlight the value of being able to model distributed energy resources in detail.
They also show that ignoring transmission constraints can have meaningful impacts on the analysis of rate structures. In addition, we conduct a distributional analysis, which shows how our method permits regulators and policy makers to study the impacts of a rate update on a heterogeneous population. While a switch in rates could have a positive impact on households in the aggregate, it could benefit some more than others, and even harm some customers. Our technique makes it possible to anticipate these impacts, letting regulators decide among rate structures with considerably more information than would be available with alternative approaches. In the third chapter, we conduct an empirical analysis of rate structures in California, which is currently undergoing a rate reform. To contribute to the ongoing regulatory debate about the future of rates, we analyze in depth a set of plausible tariff alternatives. In our analysis, we focus on a scenario in which advanced metering infrastructure and home energy management systems are widely adopted. Our modeling approach allows us to capture a wide variety of temporal and spatial demand substitution patterns without the need to estimate a large number of parameters. (Abstract shortened by ProQuest.)
NASA Astrophysics Data System (ADS)
Tremblay, Jose-Philippe
Avionics systems have evolved continuously since the advent of digital technologies at the turn of the 1960s. After passing through several development paradigms, these systems have followed the Integrated Modular Avionics (IMA) approach since the early 2000s. Unlike earlier methods, this approach is based on a modular design, the sharing of generic resources among several systems, and more extensive use of multiplexed buses. Most of the concepts used by the IMA architecture, although already well known in distributed computing, represent a marked change from earlier models in the avionics world. They come on top of the stringent constraints of classical avionics, such as determinism, real-time operation, certification and high reliability targets. The adoption of the IMA approach triggered a re-examination of many aspects of the design, certification and implementation of an IMA system in order to take full advantage of it. This re-examination, slowed by avionics constraints, is still under way and continues to offer opportunities to develop new tools, methods and models at every level of the implementation process of an IMA system. In the context of proposing and validating a new IMA architecture for a generic on-board aircraft sensor network, we identified several aspects of the traditional approaches to building this type of architecture that could be improved. To remedy some of the identified shortcomings, we proposed a validation approach based on a reconfigurable hardware platform, as well as a new redundancy management approach for meeting reliability targets.
Unlike the more limited static tools that satisfy the needs of federated architecture design, our validation approach is specifically developed to facilitate the design of an IMA architecture. Within this thesis, three main axes of original contributions emerged from the work carried out in pursuit of the research objectives stated above. The first axis is the proposal of a hierarchical sensor network architecture based on the core model of the IEEE 1451 standard. This standard facilitates the integration of smart sensors and actuators into any control system through standardized, generic interfaces.
NASA Astrophysics Data System (ADS)
Paradis, Pierre-Luc
Global energy consumption is still increasing year after year, even as various initiatives are set up to decrease fossil fuel dependency. In Canada, 80% of residential-sector energy is used for space heating and domestic hot water heating. This heat could be provided by solar thermal technologies, despite a few difficulties originating from the cold climate. The aim of this project is to design an evacuated-tube solar thermal collector using air as the working fluid. First, the needs and specifications of the product are clearly established. Then, three collector concepts are presented. The first relies on the standard evacuated tube. The second uses a new tube technology that is open at both ends. The third uses heat pipes to extract heat from the tubes. Based on the needs and specifications as criteria, the concept involving tubes open at both ends was selected as the best option. In order to simulate the performance of the collector, a model of the heat exchanges in an evacuated tube was developed in four steps. The first step is a steady-state model intended to calculate the stagnation temperature of the tube for a fixed solar radiation, outside temperature and wind speed. In the second step, the model is generalized to transient conditions in order to validate it against an experimental setup; a root mean square error of 2% was then calculated. The two remaining steps are intended to calculate the temperature of the airflow leaving the tube. In the same way, a first steady-state model is developed and then generalized to the transient mode. Validation against an experimental setup then gave a root mean square error of 0.2%. Finally, a pre-industrial prototype intended to work in an open loop for fresh-air preheating is presented. During the project, the explosion of the both-ends-open evacuated tube under overheating conditions prevented the construction of a real prototype for testing.
Different paths for further work are also identified. One involves CFD simulation of the uniformity of the airflow inside the collector; another is an analysis of the design using a design-of-experiments plan.
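The validation figures quoted above (2% and 0.2%) are relative root-mean-square errors between simulated and measured temperature series; a minimal sketch of the metric, with made-up data, is:

```python
import numpy as np

def relative_rmse(simulated, measured):
    """Root-mean-square error normalized by the mean measured value."""
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rmse = np.sqrt(np.mean((simulated - measured) ** 2))
    return rmse / np.mean(measured)

# Hypothetical stagnation-temperature comparison (kelvin), for illustration only:
sim = [410.0, 452.0, 471.0, 463.0]
mes = [405.0, 458.0, 465.0, 470.0]
err = relative_rmse(sim, mes)
```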
NASA Astrophysics Data System (ADS)
Mathevet, T.; Joel, G.; Gottardi, F.; Nemoz, B.
2017-12-01
The aim of this communication is to present analyses of climate variability and change based on snow water equivalent (SWE) observations, reconstructions (1900-2016) and scenarios (2020-2100) for about a hundred snow courses distributed across the French Alps. This issue has become particularly important over the past decade in regions where snow variability has a large impact on water resource availability, poor snow conditions in ski resorts and artificial snow production. As a water resources manager in French mountainous regions, EDF (the French hydropower company) has developed and managed a hydrometeorological network since 1950. A recent data rescue effort allowed the digitization of long-term manual SWE measurements from about a hundred snow courses in the French Alps. EDF has also been operating an automatic SWE sensor network, complementary to the snow course network. Based on numerous SWE observation time series and a snow accumulation and melt model (Garavaglia et al., 2017), continuous daily historical SWE time series have been reconstructed over the 1950-2016 period. These reconstructions have been extended back to 1900 using 20CR reanalyses (ANATEM method, Kuentz et al., 2015) and forward to 2100 using IPCC climate change scenarios. Considering various mountainous areas within the French Alps, this communication focuses on: (1) long-term (1900-2016) analyses of the variability and trends of total precipitation, air temperature, snow water equivalent, snow line altitude and snow season length; (2) the long-term variability of the hydrological regime of snow-dominated watersheds; and (3) future trends (2020-2100) under IPCC climate change scenarios. Comparing the historical period (1950-1984) to the recent period (1984-2016), quantitative results for a region in the northern Alps (Maurienne) show an increase in air temperature of 1.2 °C, an increase in snow line altitude of 200 m, a reduction in SWE of 200 mm/year and a reduction in snow season length of 15 days.
These analyses will be extended from the north to the south of the Alps, over a region spanning 200 km. Characterizing the increase in snow line altitude and the reduction in SWE is particularly important at local and watershed scales. This long-term change in snow dynamics in mountainous regions affects both ski resorts and artificial snow production developments, as well as the management of multi-purpose dam reservoirs.
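The period comparison reported above (1950-1984 versus 1984-2016 means) can be sketched on a synthetic annual series; the step decline imposed below is illustrative, not the observed Maurienne data.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2017)
# Synthetic SWE series (mm/year) with an imposed decline after 1984,
# standing in for a reconstructed snow-course record.
swe = np.where(years < 1984, 800.0, 600.0) + rng.normal(0.0, 50.0, years.size)

hist = swe[(years >= 1950) & (years <= 1984)].mean()    # historical-period mean
recent = swe[years > 1984].mean()                       # recent-period mean
change = recent - hist                                  # mean SWE change, mm/year
```

The same split-and-difference applies unchanged to air temperature, snow line altitude or snow season length series.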
NASA Astrophysics Data System (ADS)
Aboutajeddine, Ahmed
Micromechanical scale-transition models, which determine the effective properties of heterogeneous materials from their microstructure, are considered in this work. The objective is to account for the presence of an interphase between the matrix and the reinforcement in classical micromechanical models, and to reconsider the basic approximations of these models in order to treat multiphase materials. A new micromechanical model is thus proposed to account for the presence of a thin elastic interphase when determining effective properties. This model was constructed using the integral equation, Hill's interfacial operators and the Mori-Tanaka method. The expressions obtained for the overall moduli and for the fields in the coating are analytical. The basic approximation of this model is subsequently improved in a new model that addresses coated inclusions with either a thin or a thick coating. The resolution relies on a double homogenization performed at the level of the coated inclusion and of the material. This new approach makes it possible to fully grasp the implications of the modeling approximations. The results obtained are then exploited in the solution of the Hashin assemblage. In this way, several classical micromechanical models of different origins are unified and connected, in this work, to Hashin's geometric representation. Beyond making it possible to fully appreciate the relevance of each model's approximation within this single framework, the correct extension of these models to multiphase materials becomes possible. Several analytical and explicit models are then proposed, based on solutions of the Hashin assemblage at different orders.
One of the explicit models appears as a direct correction of the Mori-Tanaka model in the cases where the latter fails to give good results. Finally, this corrected Mori-Tanaka model is used together with Hill's operators to build a scale-transition model for materials with an elastoplastic interphase. The effective constitutive law obtained is incremental in nature and is coupled with the plasticity relation of the interphase. Simulations of mechanical tests for several properties of the plastic interphase made it possible to establish coating profiles that give the material better overall behavior.
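For reference, the classical two-phase Mori-Tanaka estimate that the thesis corrects and extends can be written down for the effective bulk modulus with spherical inclusions; the material values below are illustrative, and the coated-inclusion and multiphase extensions are not reproduced here.

```python
def mori_tanaka_bulk(K_m, G_m, K_i, f):
    """Classical Mori-Tanaka effective bulk modulus for spherical inclusions.

    K_m, G_m: matrix bulk and shear moduli; K_i: inclusion bulk modulus;
    f: inclusion volume fraction.
    """
    denom = 1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0)
    return K_m + f * (K_i - K_m) / denom

# Example: stiff spheres (K = 41 GPa) in an epoxy-like matrix
# (K = 4.8 GPa, G = 1.6 GPa) at 30% volume fraction (values assumed).
K_eff = mori_tanaka_bulk(4.8, 1.6, 41.0, 0.3)
```

For stiff inclusions in a softer matrix this estimate coincides with the Hashin-Shtrikman lower bound, which is one reason the Hashin assemblage provides a natural geometric setting for unifying such models.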
In-flight calibration/validation of the ENVISAT/MWR
NASA Astrophysics Data System (ADS)
Tran, N.; Obligis, E.; Eymard, L.
2003-04-01
Retrieval algorithms for the wet tropospheric correction, integrated vapor and liquid water contents, and atmospheric attenuations of the backscattering coefficients in Ku and S band have been developed using a database of geophysical parameters from the global analyses of a meteorological model, together with the corresponding brightness temperatures and backscattering cross-sections simulated by a radiative transfer model. The meteorological data correspond to 12-hour predictions from the European Centre for Medium-range Weather Forecasts (ECMWF) model. Relationships between satellite measurements and geophysical parameters are determined using a statistical method. The quality of the retrieval algorithms therefore depends on the representativeness of the database, the accuracy of the radiative transfer model used for the simulations, and finally on the quality of the inversion model. The database has been built using the latest version of the ECMWF forecast model, which has been run operationally since November 2000. The 60 levels in the model allow a complete description of the troposphere/stratosphere profiles, and the horizontal resolution is now half a degree. The radiative transfer model is the emissivity model developed at the Université Catholique de Louvain [Lemaire, 1998], coupled to an atmospheric model [Liebe et al., 1993] for gaseous absorption. For the inversion, we have replaced the classical log-linear regression with a neural network inversion. For Envisat, the backscattering coefficient in Ku band is used in the different algorithms to take surface roughness into account, as is done with the 18 GHz channel in the TOPEX algorithms or with an additional wind speed term in the ERS2 algorithms. The in-flight calibration/validation of the Envisat radiometer has been performed by tuning three internal parameters (the transmission coefficient of the reflector, the sky horn feed transmission coefficient and the main antenna transmission coefficient).
First, an adjustment of the ERS2 brightness temperatures to the simulations based on the 2000/2001 version of the ECMWF model was applied. Then, Envisat brightness temperatures were calibrated against these adjusted ERS2 values. The advantages of this calibration approach are that: i) such a method provides the relative discrepancy with respect to the simulation chain, and the results, obtained simultaneously for several radiometers (we repeat the same analysis with the TOPEX and JASON radiometers), can be used to detect significant calibration problems (more than 2-3 K); ii) the retrieval algorithms have been developed using the same meteorological model (the 2000/2001 version of the ECMWF model) and the same radiative transfer model as the calibration process, ensuring consistency between calibration and retrieval processing, after which the retrieval parameters are optimized; iii) the calibration of the Envisat brightness temperatures against the 2000/2001 version of the ECMWF model, as well as the recommendation to use the same model as a reference to correct the ERS2 brightness temperatures, allow the use of the same retrieval algorithms for the two missions, providing continuity between them; iv) compared with other calibration methods (such as systematically calibrating an instrument or its products against those of a previous mission), this method is more satisfactory, since improvements in technology, modeling and retrieval processing are taken into account. For the validation of the brightness temperatures, we use either a direct comparison with measurements provided by other instruments in similar channels, or monitoring over stable areas (coldest ocean points, stable continental areas).
Validation of the wet tropospheric correction can also be provided by comparison with other radiometer products, but the only true validation relies on the comparison between in-situ measurements (from radiosondes) and retrieved products at coincident times and locations.
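The classical log-linear retrieval form that the neural network inversion replaces can be sketched as follows; the channel choice and the coefficients below are illustrative placeholders, not the operational Envisat/MWR values.

```python
import numpy as np

def wet_tropo_correction(tb_23, tb_36, coeffs=(-2.0, 0.5, -0.3)):
    """Log-linear wet path delay sketch (m, negative toward the surface).

    tb_23, tb_36: brightness temperatures (K) of the 23.8 and 36.5 GHz
    channels; coeffs: regression coefficients fitted on a simulation
    database (the values here are assumed for illustration).
    """
    c0, c1, c2 = coeffs
    return c0 + c1 * np.log(280.0 - tb_23) + c2 * np.log(280.0 - tb_36)
```

A neural network inversion trained on the same simulation database plays the same role as this regression, but can capture the nonlinear dependence of the correction on the brightness temperatures.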
NASA Astrophysics Data System (ADS)
Chastenay, Pierre
Since the Quebec Education Program came into effect in 2001, Quebec classrooms have again been teaching astronomy. Unfortunately, schools are ill-equipped to teach complex astronomical concepts, most of which occur outside school hours and over long periods of time. Furthermore, many astronomical phenomena involve celestial objects travelling through three-dimensional space, which we cannot access from our geocentric point of view. The lunar phases, a concept prescribed in secondary cycle one, fall into that category. Fortunately, schools can count on support from the planetarium, a science museum dedicated to presenting ultra-realistic simulations of astronomical phenomena in fast time and at any hour of the day. But what type of planetarium will support schools? Recently, planetariums also underwent their own revolution: they switched from analogue to digital, replacing geocentric opto-mechanical projectors with video projectors that offer the possibility of travelling virtually through a completely immersive simulation of the three-dimensional Universe. Although research into planetarium education has focused little on this new paradigm, certain of its conclusions, based on the study of analogue planetariums, can help us develop a rewarding teaching intervention in these new digital simulators. But other sources of inspiration will be cited, primarily the teaching of science, which views learning no longer as the transfer of knowledge, but rather as the construction of knowledge by the learners themselves, with and against their initial conceptions. The conception and use of constructivist learning environments, of which the digital planetarium is a fine example, and the use of simulations in astronomy will complete our theoretical framework and lead to the conception of a teaching intervention focusing on the lunar phases in a digital planetarium and targeting students aged 12 to 14. 
This teaching intervention was initially tested as part of development research (didactic engineering) aimed at improving it, both theoretically and practically, through multiple iterations in its "natural" environment, in this case an inflatable digital planetarium six metres in diameter. We present the results of our first iteration, completed with six children aged 12 to 14 (four boys and two girls), whose conceptions about the lunar phases were recorded before, during and after the intervention through group interviews, questionnaires, group exercises and recordings made throughout the activity. The evaluation was essentially qualitative, based on the traces obtained throughout the session, in particular within the planetarium itself. This material was then analyzed to validate the theoretical concepts underlying the design of the teaching intervention and to reveal possible ways to improve it. We noted that the intervention did change most participants' conceptions about the lunar phases, and we also identified ways to boost its effectiveness in the future.
Navigation of an autonomous vehicle around an asteroid
NASA Astrophysics Data System (ADS)
Dionne, Karine
Planetary exploration missions use spacecraft to acquire the scientific data that advance our knowledge of the solar system. Since the 1990s, these missions have targeted not only planets but also smaller celestial bodies such as asteroids. From a navigation-system standpoint, these bodies pose a particular challenge because their dynamic environment is complex. A space probe must react quickly to the gravitational perturbations at play, or its safety could be compromised. Because communication delays with Earth can often reach several tens of minutes, software allowing greater operational autonomy must be developed for this type of mission. This thesis presents an autonomous navigation system that determines the position and velocity of a satellite in orbit around an asteroid. It is an adaptive extended Kalman filter with three degrees of freedom. The proposed system relies on optical imagery to detect previously mapped landmarks, which may be craters, boulders, or any physical feature discernible by the camera. The research focuses on the state-estimation techniques specific to autonomous navigation; suitable image-processing software is therefore assumed to exist. The main research contribution is the inclusion, at each estimation cycle, of a range measurement to improve navigation performance. An adaptive state estimator is required to process these measurements because their accuracy varies over time owing to pointing error. Secondary contributions concern the observability analysis of the system and a sensitivity analysis of six main design parameters.
Simulation results show that adding one range measurement per update cycle significantly improves navigation performance. It reduces the estimation error and the periods of non-observability, and counters the dilution of precision of the measurements. The sensitivity analyses confirm the contribution of the range measurements to the overall reduction of the estimation error over a wide range of design parameters. They also indicate that mapping error is a critical parameter for the performance of the navigation system developed. Keywords: state estimation, adaptive Kalman filter, optical navigation, lidar, asteroid, numerical simulations.
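The core of the filter described above can be illustrated with a toy example. The sketch below is a 1-D extended Kalman filter with a single range measurement, not the 3-DOF flight filter: the landmark at the origin, the constant-velocity model and all noise values are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, z_range, R_range, dt=1.0, q=1e-4):
    """One predict/update cycle of a toy 1-D EKF.

    State x = [position, velocity]; the measurement is a range to a
    landmark at the origin, so h(x) = |position| (linearized below).
    R_range is the (possibly time-varying) measurement variance, which
    an adaptive filter would re-estimate from the innovation sequence.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the scalar range measurement
    h = abs(x[0])
    H = np.array([[np.sign(x[0]), 0.0]])                  # Jacobian of h
    S = (H @ P @ H.T).item() + R_range                    # innovation variance
    K = (P @ H.T) / S                                     # Kalman gain (2x1)
    innov = z_range - h
    x = x + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P, innov, S

# Example: converge toward a true position of 10 m from a biased guess
x, P = np.array([8.0, 0.0]), np.eye(2)
for _ in range(20):
    x, P, innov, S = ekf_step(x, P, z_range=10.0, R_range=0.5)
```

With a constant, noise-free range of 10 m, the estimate settles near [10, 0]; an adaptive variant would additionally scale `R_range` from the innovation statistics.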
Magnetophotoluminescence of single nitrogen dyads in gallium arsenide
NASA Astrophysics Data System (ADS)
Ouellet-Plamondon, Clauderic
Toward the goal of an efficient quantum light source there are many candidates, ranging from lasers to quantum dots. One of them is the single nitrogen dyad in GaAs. This nanostructure is composed of two nearest-neighbour nitrogen atoms substituting for two arsenic atoms. Since both species have the same valence, the combined effect of the electronegativity and the small size of the nitrogen atoms forms a potential well that attracts an electron. A hole then binds to the electron via the Coulomb interaction, creating a bound exciton at the dyad whose luminescence can be studied. In this work, we present an experimental study of the fine structure of the emission from single nitrogen dyads. The photoluminescence measurements are performed with a high-resolution confocal microscope under magnetic fields of up to 7 T. The spatial resolution, combined with the sample's surface density of nitrogen dyads, allows the properties of individual dyads to be studied. Since the C2v symmetry of the dyad lifts the degeneracy of the excitonic levels even without a magnetic field, four or five transitions are observed, depending on the orientation of the dyad with respect to the observation axis. Using a Hamiltonian that takes into account the exchange interaction, the local crystal field and the Zeeman effect, the energies of the excitonic states as well as their transition probabilities are modelled. This model reproduces the linear polarization of the emitted photons and is used to determine a range of acceptable values for the g-factor of the bound electron, as well as the isotropic and anisotropic factors of the interaction of the weakly bound hole with the magnetic field. Furthermore, from the diamagnetic shift, the radius of the electron wavefunction is evaluated at 16.2 Å, confirming that it is strongly localized at the dyad. Among all the dyads studied, a certain number had an emission strikingly different from those usually observed.
In a first case, the environment perturbed the excitonic states, leaving only the two higher-energy states observable. In a second case, an additional depolarized transition is observed at lower energy. We show that this transition is associated with a charged exciton, indicating for the first time that these nanostructures can bind multiple charges like their larger epitaxial and colloidal counterparts. This work provides a better understanding of excitons bound to a nitrogen dyad and opens the way to many applications.
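The kind of spin Hamiltonian used above can be sketched numerically. The toy model below keeps only an isotropic electron-hole exchange term and the Zeeman terms (the local crystal field is omitted), and all parameter values are illustrative, not fitted to GaAs:N dyads.

```python
import numpy as np

# Pauli matrices and 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exciton_levels(B, J=0.2, ge=2.0, gh=1.0, mu_B=0.05788):
    """Eigenenergies (meV) of a toy electron-hole spin Hamiltonian:
    isotropic exchange J (meV) plus Zeeman terms along z (B in tesla,
    mu_B in meV/T). Parameter values are illustrative placeholders."""
    H = (J / 4.0) * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
    H += 0.5 * mu_B * B * (ge * np.kron(sz, I2) + gh * np.kron(I2, sz))
    return np.linalg.eigvalsh(H)          # ascending, real for Hermitian H

E0 = exciton_levels(0.0)   # zero field: one singlet plus a threefold triplet
E7 = exciton_levels(7.0)   # 7 T: the Zeeman terms lift the triplet degeneracy
```

At zero field the exchange term alone gives a singlet at -3J/4 and a triplet at +J/4; at 7 T four distinct lines appear, mirroring the multi-line spectra discussed above.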
Assessment of a Sahelian pastoral ecosystem: the contribution of geomatics (Oursi, Burkina Faso)
NASA Astrophysics Data System (ADS)
Kabore, Seraphine Sawadogo
The main objective of this research is the development of an architecture for integrating socio-bio-geographical data and satellite data into a Geographic Information System (GIS) to support decision-making in a semi-arid environment in northern Burkina Faso. It addresses the fundamental question of interpreting the effects of climatic and socioeconomic factors on the pastoral environment. The research rested on several working hypotheses: the possibility of using a simulation model, a multicriteria approach, and remote-sensing data within a GIS framework. The spatiotemporal evolution of the productivity parameters of the environment was evaluated with a dynamic approach based on the model of Wu et al. (1996), which models the interactions between climate, the physical environment, plants and animals to better quantify primary biomass. Four parameters were added to this model through a fuzzy, multicriteria approach to account for the socioeconomic dimension of pastoral productivity (the major contribution of this research): health, education, agriculture and water. Remote sensing (SPOT imagery) was used to define the primary production from which simulations were run over 10 years. The results show a good correlation between in-situ primary biomass and the computed biomass for both models, with the modified model performing markedly better (a factor of four) in high-productivity zones, where the rate of agricultural overexploitation is high. Given the spatial variability of in-situ primary production, the simulation errors (8 to 11%) are acceptable and demonstrate the relevance of the approach, thanks to the use of GIS for spatializing and integrating the various model parameters.
The recommended types of secondary production (milk production for 7 months or meat production for 6 months) are based on the needs of the tropical livestock unit (TLU) and on the available forage, which is of poor quality in the dry season. In both scenarios a forage deficit is observed. Two types of transhumance are proposed to ensure sustainable production: rational exploitation of the pastoral units according to an annual rotation plan, and medium-term protection of degraded zones to allow regeneration. The potential transhumance zones were determined according to the acceptable limits for sustainable exploitation of Sahelian environments defined by Kessler (1994), namely 0.2 TLU per hectare.
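The fuzzy multicriteria integration of the four socio-economic parameters can be sketched as a weighted aggregation. The membership breakpoints and weights below are illustrative placeholders, not the calibration of the study.

```python
def fuzzify(value, low, high):
    """Linear fuzzy membership: 0 below `low`, 1 above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def suitability(health, education, agriculture, water,
                weights=(0.2, 0.2, 0.3, 0.3)):
    """Weighted fuzzy aggregation of four socio-economic criteria
    (each normalized to [0, 1]) into a single suitability index.
    Weights and breakpoints are illustrative, not the thesis values."""
    mu = (fuzzify(health, 0.0, 1.0),
          fuzzify(education, 0.0, 1.0),
          fuzzify(agriculture, 0.0, 1.0),
          fuzzify(water, 0.0, 1.0))
    return sum(w * m for w, m in zip(weights, mu))

s = suitability(0.8, 0.5, 0.6, 0.9)   # one grid cell's index in [0, 1]
```

In a GIS workflow this index would be computed per raster cell and combined with the biomass model output.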
NASA Astrophysics Data System (ADS)
Ecoffet, Robert; Maget, Vincent; Rolland, Guy; Lorfevre, Eric; Bourdarie, Sébastien; Boscher, Daniel
2016-07-01
We have developed a series of instruments for energetic particle measurements, associated with the "MEX" component test beds. The aim of this program is to check and improve space radiation engineering models and techniques. The first series of instruments, "ICARE", has flown on the MIR space station (SPICA mission), the ISS (SPICA-S mission) and the SAC-C low Earth polar orbiting satellite (ICARE mission, 2001-2011) in cooperation with the Argentinian space agency CONAE. A second series of instruments, "ICARE-NG", was and is flown as: - the CARMEN-1 mission on CONAE's SAC-D (650 km, 98°, 2011-2015), along with three "SODAD" space micro-debris detectors; - the CARMEN-2 mission on the JASON-2 satellite (CNES, JPL, EUMETSAT, NOAA; 1336 km, 66°, 2008-present), along with JAXA's LPT energetic particle detector; - the CARMEN-3 mission on the JASON-3 satellite in the same orbit as JASON-2, launched 17 January 2016, along with an "AMBRE" plasma detector and JAXA's LPT again. The ICARE-NG spectrometer is composed of a set of three fully depleted silicon solid-state detectors used in single and coincidence modes. The on-board measurements consist of accumulating energy-loss spectra in the detectors over a programmable accumulation period. The spectra are generated through signal-amplitude classification using 8-bit ADCs, resulting in 128- or 256-channel histograms. The discriminator reference levels, amplifier gain and accumulation time are programmable to allow on-board tuning optimization. Ground-level calibrations were performed at ONERA-DESP using radioactive sources emitting alpha particles in order to determine the exact correspondence between channel number and particle energy. To obtain the response functions to particles, a detailed sectoring analysis of the satellite combined with GEANT-4/MCNP-X calculations was performed to characterize the geometrical factors of each detector for protons as well as for electrons at different energies.
The "MEX" component test bed is equipped with two different types of active dosimeters: P-MOS silicon dosimeters and OSL (optically stimulated luminescence) dosimeters. These provide independent measurements of ionizing and displacement damage doses and consolidate the spectrometer observations. The data sets obtained cover more than one solar cycle. Dynamics of the radiation belts and the effects of solar particle events, coronal mass ejections and coronal holes were observed. Spectrometer measurements and dosimeter readings were used to evaluate current engineering models and helped in developing improved ones, along with "space weather" radiation belt indices. The presentation will provide a comprehensive review of detector features and mission results.
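The channel-to-energy calibration step described above can be sketched as a linear least-squares fit over known alpha peaks. The peak channels and energies below are invented placeholders, not ICARE-NG calibration data.

```python
import numpy as np

def calibrate(channels, energies_mev):
    """Least-squares linear fit E = gain * channel + offset from
    calibration peaks of known energy (e.g. alpha lines of a lab
    source seen at known ADC channels)."""
    gain, offset = np.polyfit(channels, energies_mev, 1)
    return gain, offset

# Hypothetical alpha peaks observed at these ADC channels
gain, offset = calibrate([40, 80, 120], [2.0, 4.0, 6.0])

# Energy axis for a 256-channel histogram, in MeV
energy = gain * np.arange(256) + offset
```

The same fit, repeated per detector and gain setting, maps every histogram channel to a deposited-energy bin.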
Evaluation of bone quality using ultrasonic guided waves
NASA Astrophysics Data System (ADS)
Abid, Alexandre
Characterizing the mechanical properties of cortical bone is an area of interest for orthopedic research. This characterization can provide essential information for determining fracture risk, detecting microcracks, or screening for osteoporosis. The two main current techniques are Dual-energy X-ray Absorptiometry (DXA) and Quantitative Computed Tomography (QCT). Neither is optimal: the effectiveness of DXA is questioned in the orthopedic community, while QCT requires radiation doses that are problematic for a screening tool. Ultrasonic guided waves have been used for many years in industry to detect cracks and to characterize the geometry and mechanical properties of cylinders, pipes and other structures. They are also cheaper to deploy than DXA and involve no ionizing radiation, which makes them promising for probing the mechanical properties of bone. For less than a decade, many research laboratories have been working to transpose these techniques to medicine by propagating ultrasonic guided waves in bone. The work presented here aims to demonstrate the potential of ultrasonic guided waves for tracking the evolution of the mechanical properties of cortical bone. It begins with a general introduction to ultrasonic guided waves and a literature review of techniques for applying them to bone. The article written during my master's degree is then presented. Its objective is to excite and detect guided-wave modes that are sensitive to the deterioration of the mechanical properties of cortical bone.
This work uses finite-element modeling of wave propagation in two cylindrical bone models, each consisting of a peripheral layer of cortical bone filled with either trabecular bone or bone marrow. The two models provide two geometries, one suited to circumferential and one to longitudinal propagation of the guided waves. The results, in which three different modes could be identified, are compared with experimental data obtained on bone phantoms and with theory. The sensitivity of each mode to the various mechanical-property parameters is then studied, leading to conclusions about the potential of each mode for predicting fracture risk or the presence of microcracks.
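The sensitivity of guided-wave dispersion to cortical stiffness can be illustrated with the thin-plate flexural (A0-like) approximation. This closed-form estimate is only a stand-in for the finite-element models of the thesis, and the material values are generic cortical-bone-like numbers, not fitted data.

```python
import numpy as np

def flexural_phase_velocity(f, E, h, rho, nu=0.3):
    """Thin-plate (Euler-Bernoulli) estimate of the flexural phase
    velocity c_p = sqrt(omega) * (D / (rho*h))^(1/4), valid well below
    the first Lamb-mode cutoffs. A stiffness loss shifts the whole
    dispersion curve downward."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # bending stiffness, N*m
    omega = 2.0 * np.pi * f
    return np.sqrt(omega) * (D / (rho * h))**0.25

f = 50e3                                            # 50 kHz
c_healthy = flexural_phase_velocity(f, E=18e9, h=3e-3, rho=1900.0)
c_degraded = flexural_phase_velocity(f, E=12e9, h=3e-3, rho=1900.0)
```

Because c_p scales as E^(1/4), a one-third stiffness loss lowers the phase velocity by roughly 10%, the kind of shift a mode-sensitivity study looks for.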
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aérospatiale du Québec) projects aimed at developing on-board approaches for damage detection in aeronautical structures. Its originality lies in the development and validation of a new method for detecting, quantifying and localizing a notch in a lap-joint structure using propagating vibration waves. The first part reviews the state of the art on damage identification in the context of Structural Health Monitoring (SHM) and on lap-joint modeling. Chapter 3 develops a wave-propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission-line model (TLM) is built to represent a one-dimensional (1D) joint. The 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane incident wavefront perpendicular to the joint. A parametric identification method is then developed that allows calibration of the pristine lap-joint model, followed by detection and characterization of a notch in the joint. The method is coupled with an algorithm performing an exhaustive search of the entire parameter space, which makes it possible to extract an uncertainty zone around the optimal model parameters. A sensitivity study of the identification is also carried out. Numerous measurements on 1D and 2D lap joints allow the repeatability of the results and the variability of different damage cases to be studied. The results first show that the proposed detection method is very effective and can track damage growth.
Very good notch quantification and localization results were obtained on the various joints tested (1D and 2D). Using Lamb waves is expected to extend the validity range of the method to smaller damage. This work primarily targets in-situ monitoring of lap-joint structures, but other defect types (such as disbonds) and more complex structures can also be envisioned. Keywords: lap joint, in-situ monitoring, damage localization and characterization.
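The exhaustive parameter-space search with an uncertainty zone can be sketched as follows. The two-parameter "joint" model below is a toy stand-in for the TLM, and the tolerance defining the uncertainty zone is an illustrative choice.

```python
import numpy as np

def exhaustive_search(model, measured, grids, tol=1.05):
    """Brute-force search over a 2-D parameter grid. Returns the best
    parameter set, its misfit, and the 'uncertainty zone': all sets
    whose misfit is within `tol` times the minimum. `model(params)`
    must return a response comparable to `measured`."""
    candidates = [(a, b) for a in grids[0] for b in grids[1]]
    errs = [np.linalg.norm(model(p) - measured) for p in candidates]
    best_err = min(errs)
    best = candidates[errs.index(best_err)]
    zone = [p for p, e in zip(candidates, errs) if e <= tol * best_err]
    return best, best_err, zone

# Toy 'joint' response depending on notch depth d and position x0
t = np.linspace(0, 1, 50)
model = lambda p: p[0] * np.sin(2 * np.pi * t) + p[1] * t
measured = model((0.30, 0.70))                      # synthetic "measurement"
grids = (np.linspace(0, 1, 21), np.linspace(0, 1, 21))
best, err, zone = exhaustive_search(model, measured, grids)
```

On noise-free data the search recovers the generating parameters; with real measurements the zone widens and quantifies the identification uncertainty.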
Study of dehydroxylated-rehydroxylated smectites by SAXS
NASA Astrophysics Data System (ADS)
Muller, F.; Pons, C.-H.; Papin, A.
2002-07-01
Montmorillonite and beidellite are dioctahedral 2:1 phyllosilicates. The weakness of the bonding between layers allows the intercalation of water molecules (arranged in layers) in the interlayer space. The samples studied consist of cv layers (cv for vacant octahedral sites in cis positions). They were dehydroxylated, which is accompanied by the migration of the octahedral cations from the former trans-octahedra to empty cis sites, so that the layers become tv (vacant site in trans position). To characterize the stacking of the layers, SAXS (Small Angle X-ray Scattering) analyses were performed in the natural state (N) and after a dehydroxylation-rehydroxylation cycle (R). Modeling of the SAXS patterns for the Na-exchanged samples in the N state shows that the layers stack into particles with well-defined interlayer distances d_{001}, corresponding to 0, 1 and 2 water layers. The dehydroxylation-rehydroxylation cycle increases the proportion of interlayer distances with zero water layers and the disorder in the stacking. The decrease of the disorder parameter with the proportion of tetrahedral charge in the N and R samples shows that the distribution of the water layers depends on the localization of the charge deficit.
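The interference function underlying such stacking models can be sketched for a finite stack with mixed hydration states. The spacings and abundances below are typical literature values for smectites, not the fitted parameters of this study.

```python
import numpy as np

def stack_intensity(q, N=10, d_layers=(9.8, 12.4, 15.2),
                    weights=(0.2, 0.5, 0.3)):
    """Interference function of a finite stack of N layers, averaged
    over hydration states with basal spacings d (angstrom, roughly 0,
    1 and 2 water layers) and abundances `weights`. Uses the N-slit
    function sin^2(N*q*d/2) / sin^2(q*d/2)."""
    intensity = np.zeros_like(q)
    for d, w in zip(d_layers, weights):
        phase = q * d / 2.0
        s = np.where(np.abs(np.sin(phase)) < 1e-12,
                     float(N**2),
                     (np.sin(N * phase) / np.sin(phase)) ** 2)
        intensity += w * s
    return intensity

q = np.linspace(0.05, 1.0, 2000)        # scattering vector, 1/angstrom
I_q = stack_intensity(q)
q_peak = q[np.argmax(I_q)]              # strongest 001 reflection
```

The dominant 001 peak sits at 2*pi/d of the most abundant hydration state; shifting weight toward the zero-water-layer spacing, as after the R cycle, moves and broadens the apparent peak.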
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, together with the design experience and experimental results accumulated by companies. This methodology can prove costly in case of failure, and even when it succeeds, significant performance improvements are often difficult or impossible to obtain. This project proposes the development and validation of a design methodology based on the meridional (through-flow) calculation for the preliminary design of mixed-flow turbomachines and on computational fluid dynamics (CFD) for detailed design. The meridional calculation method underlying the proposed design process is presented first. The theoretical framework is developed and, since the meridional calculation remains fundamentally iterative, the computational process is also presented, including the numerical methods used to solve the governing equations. The meridional code written during this master's project is validated against a meridional algorithm developed by the author of the method and against numerical simulation results from a commercial code. The turbomachine design methodology developed in this study is then presented as a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology comprises three steps: the meridional calculation for preliminary sizing, followed by 2D cascade simulations for detailed blade design, and finally a 3D numerical analysis for validation and fine optimization of the geometry.
The meridional results are also compared with the 3D simulation results in order to validate the use of the meridional calculation as a preliminary sizing tool.
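A full streamline-curvature through-flow solver is beyond a short example, but the Euler work relation at the heart of meridional pre-sizing can be illustrated. All inputs below are illustrative, not the Venmar specifications.

```python
import math

def euler_pressure_rise(rpm, r1, r2, c_theta1, c_theta2, rho=1.2, eta=0.85):
    """Euler work estimate used at the pre-sizing stage: the total
    pressure rise of a fan stage from the change in angular momentum
    across the rotor, dh0 = U2*Ctheta2 - U1*Ctheta1, scaled by an
    assumed efficiency. All values are illustrative."""
    omega = rpm * 2.0 * math.pi / 60.0      # shaft speed, rad/s
    U1, U2 = omega * r1, omega * r2         # blade speeds at inlet/outlet radii
    dh0 = U2 * c_theta2 - U1 * c_theta1     # specific work, J/kg
    return eta * rho * dh0                  # total pressure rise, Pa

# Axial inflow (no inlet swirl), modest outlet swirl
dp = euler_pressure_rise(rpm=3000, r1=0.05, r2=0.08,
                         c_theta1=0.0, c_theta2=12.0)
```

A meridional code applies this balance streamline by streamline; the single-line estimate above bounds the achievable stage pressure rise before any blade geometry exists.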
Experimental and numerical characterization of gaseous synthetic fuel flames
NASA Astrophysics Data System (ADS)
Ouimette, Pascale
The goal of this research is to characterize, experimentally and numerically, laminar flames of syngas fuels made of hydrogen (H2), carbon monoxide (CO) and carbon dioxide (CO2). More specifically, the secondary objectives are: 1) to understand the effects of CO2 concentration and H2/CO ratio on NOx emissions, flame temperature, visible flame height and flame appearance; 2) to analyze the influence of the H2/CO ratio on the flame structure; and 3) to compare and validate different H2/CO kinetic mechanisms used in a CFD (computational fluid dynamics) model over different H2/CO ratios. The thesis is thus divided into three chapters, each corresponding to a secondary objective. For the first part, the experiments showed that adding CO2 lowers the flame temperature and EINOx for all equivalence ratios, while increasing the H2/CO ratio has no influence on flame temperature but increases EINOx for equivalence ratios lower than 2. Concerning flame appearance, a low CO2 concentration in the fuel or a high H2/CO ratio gives the flame an orange color, explained by a high level of CO in the combustion by-products. The observed constant flame temperature upon addition of CO, which has a higher adiabatic flame temperature, is mainly due to increased radiative heat loss by CO2. Because the NOx emissions of H2/CO/CO2 flames are mainly a function of flame temperature, itself a function of the H2/CO ratio, the rest of the thesis concentrates on measuring and predicting species in the flame, since a good prediction of species and heat release will enable prediction of NOx emissions. Thus, for the second part, different H2/CO fuels are tested and the major species are measured by Raman spectroscopy. As expected, the maximal measured H2O concentration decreases with addition of CO to the fuel, while the central CO2 concentration increases.
However, at 20% of the visible flame height and for all fuels tested, the measured CO2 concentration is lower than its stoichiometric value while the measured H2O has already reached its stoichiometric concentration. The slower chemical reactions producing CO2, compared with those forming H2O, could explain this difference. For the third part, a numerical model is created for a partially premixed flame of 50% H2 / 50% CO. This model compares different combustion mechanisms and shows that a reduced kinetic mechanism shortens simulation times while preserving the quality of the results of more complex kinetic schemes. The numerical model, which includes radiative heat losses, is also validated over a large range of fuels, from 100% H2 to 5% H2 / 95% CO. The most important recommendation of this work is to add a NOx mechanism to the numerical model in order to eventually determine an optimal fuel. It would also be necessary to validate the model over a wide range of parameters such as equivalence ratio, initial temperature and initial pressure.
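The CO2-dilution trend on flame temperature can be illustrated with a lumped adiabatic energy balance. This constant-cp sketch neglects radiation, so, unlike the measurements above, it does not reproduce the insensitivity to the H2/CO ratio; its inputs are illustrative and it only captures trends, not absolute temperatures.

```python
def adiabatic_flame_temp(x_h2, x_co, x_co2, T0=298.0, cp=38.0):
    """Lumped estimate of the adiabatic flame temperature of an
    H2/CO/CO2 fuel mix (mole fractions) burned stoichiometrically in
    air, using molar lower heating values and one mean product cp
    (J/mol/K). Radiation is neglected."""
    dH_h2, dH_co = 241.8e3, 283.0e3          # LHV per mole of H2, CO (J/mol)
    q = x_h2 * dH_h2 + x_co * dH_co          # heat release per mole of fuel mix
    o2 = 0.5 * (x_h2 + x_co)                 # stoichiometric O2 requirement
    # products: H2O + CO2 formed + inert CO2 carried through + N2 from air
    n_products = x_h2 + x_co + x_co2 + 3.76 * o2
    return T0 + q / (cp * n_products)

T_pure = adiabatic_flame_temp(1.0, 0.0, 0.0)     # pure H2
T_diluted = adiabatic_flame_temp(0.5, 0.0, 0.5)  # half the fuel replaced by CO2
```

The inert CO2 both removes heat release and adds thermal mass to the products, which is the dilution effect measured in the first part of the thesis.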
NASA Astrophysics Data System (ADS)
Rioux, David
Metallic nanoparticles (NPs) constitute a research area that has been booming in recent decades. Among them, plasmonic NPs, composed of noble metals such as gold and silver, are the best known and possess extraordinary optical properties. Their ability to strongly absorb and scatter light in a specific band of visible wavelengths gives them a very intense coloration, and they strongly concentrate light near their surface upon illumination. These properties can be exploited in a variety of applications, from biomedical imaging to detection and even improving the performance of solar cells. Although gold and silver are the most widely used materials for plasmonic NPs, it has long been known that their alloys have equally interesting optical properties, with the added benefit that their color can be controlled by the gold-silver ratio of the alloy. Nevertheless, gold-silver alloy NPs are not frequently used in applications, probably because their synthesis with good size control had not yet been demonstrated. Many applications, including imaging, require NPs that scatter light strongly, and large NPs (50 nm and more) are often needed since they scatter light more efficiently. However, the synthesis methods used until now to produce gold-silver alloy NPs yield sizes smaller than 30 nm or very polydisperse samples, making them unattractive for these applications. The potential of gold-silver alloy NPs therefore rests on the ability to manufacture them with a sufficiently large diameter and good size control. It is also important to be able to predict the optical properties of gold-silver alloy nanostructures in advance, to help guide the design of these structures according to the intended properties. This requires knowledge of the dielectric function of the alloys as a function of their composition.
Although the dielectric function has been measured experimentally several times, tabulated data are often limited to a few specific compositions, and an analytical model would be more useful. This thesis focuses on the study and modeling of the optical properties of gold-silver alloy NPs, on their synthesis, and on an application example: using these NPs as cell markers for multiplexed scattering imaging. The first part deals with the dielectric function of gold-silver alloys, with the aim of developing an analytical model that computes the dielectric function for an arbitrary alloy composition. The model considers the contributions of the free and bound electrons of the metal. The free-electron contribution is calculated with the Drude model, while the bound-electron contribution was modeled by studying the shape of the interband transitions derived from the gold and silver band structures. A parameterized model incorporating these two contributions was developed, with the composition dependence entering through the evolution of the parameters with composition. The model was validated by comparing experimental extinction spectra of alloy NPs with spectra calculated by Mie theory using the dielectric functions given by the model. The model also proved very useful for predicting the optical properties of, and characterizing, the NPs produced by a new synthesis method developed during this PhD project. This method allowed the synthesis of spherical gold-silver alloy NPs with controlled size and composition while maintaining a narrow size distribution. The technique relies on the combination of two known methods: the first, used for the synthesis of small alloy NPs, is based on the chemical co-reduction of gold and silver salts in aqueous solution; the second, used for the synthesis of gold or silver NPs of controlled size, is the seed-mediated growth method.
Using this new approach, the synthesis of gold-silver alloy NPs with sizes controlled between 30 and 150 nm has been demonstrated. The synthesized NPs do not have a homogeneous composition: they exhibit a gold-rich core and a silver-rich surface. This inhomogeneous composition affects the optical properties of the smallest particles (~30 nm) by broadening the plasmon peak and making it asymmetrical, but its effect is considerably less important for larger particles (~60 nm and more), whose measured plasmon peak is similar to that predicted for a homogeneous particle. This new synthesis method thus provides the ability to synthesize high-quality alloy NPs for applications requiring controlled size and a precise plasmon peak position. These NPs were used in scattering imaging and their potential as cell markers was studied. It was shown that darkfield imaging, a standard technique for scattering imaging, is not optimal for observing NPs on cells because of the strong scattering signal of the latter. An alternative approach based on detecting the backscattering of the NPs was proposed; it provides better contrast because the backscattered signal of the NPs is much stronger than that of the cells. A semi-quantitative study of the contrast of the NPs relative to cells explains why the backscattering approach is more promising than darkfield imaging for cell labeling. Overall, this thesis covers many aspects of gold-silver alloy NPs, from the theoretical understanding of their optical properties to the development of the synthesis method and an application example. It also paves the way for further research on optimizing the particle synthesis method and on their use in imaging and other applications.
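The free-electron part of such a dielectric-function model can be sketched with the Drude term. The plasma frequencies, damping constants and the linear interpolation in gold fraction below are illustrative stand-ins for the fitted composition dependence, and the interband (bound-electron) contribution is omitted.

```python
import numpy as np

def drude_epsilon(wavelength_nm, wp_ev, gamma_ev, eps_inf=1.0):
    """Free-electron (Drude) dielectric function:
    eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w), with photon energy
    w = hc/lambda in eV. The full alloy model adds an interband term."""
    hbar_c = 1239.84                          # eV*nm
    w = hbar_c / np.asarray(wavelength_nm)    # photon energy, eV
    return eps_inf - wp_ev**2 / (w**2 + 1j * gamma_ev * w)

def alloy_drude(wavelength_nm, x_gold):
    """Linear interpolation of illustrative Drude parameters between
    silver-like (x=0) and gold-like (x=1) endpoints."""
    wp = (1 - x_gold) * 9.0 + x_gold * 8.6        # eV, illustrative
    gamma = (1 - x_gold) * 0.02 + x_gold * 0.07   # eV, illustrative
    return drude_epsilon(wavelength_nm, wp, gamma)

eps_550 = alloy_drude(550.0, x_gold=0.5)   # green light, 50/50 alloy
```

In the visible range the real part is large and negative (the metallic, plasmon-supporting regime) and the imaginary part positive (loss); feeding such eps values into Mie theory yields the extinction spectra used for validation.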
NASA Astrophysics Data System (ADS)
Dolez, Patricia
The research carried out in this doctoral project led to the development of an AC-loss measurement method intended for the study of high-critical-temperature superconductors. The principles of the method were inspired by earlier work on conventional superconductors, in order to offer an alternative to the electrical technique, which at the start of this thesis suffered from results that varied with the position of the voltage taps on the sample surface, and in order to measure AC losses under conditions simulating the reality of future industrial applications of superconducting tapes. In particular, the method is calorimetric, with simultaneous in-situ calibration. Its validity was verified theoretically and experimentally. First, measurements on Bi-2223 samples sheathed in silver or silver-gold alloy were compared with the theoretical predictions of Norris, indicating the predominantly hysteretic nature of the AC losses in our samples. Second, an in-situ electrical measurement gave results in excellent agreement with our calorimetric method. We also compared the current and frequency dependence of the AC losses of a sample before and after it was damaged. These measurements suggest a relation between the exponent of the power law describing the current dependence of the losses and the longitudinal inhomogeneities of the critical current induced by the damage.
De plus, la variation en frequence montre qu'au niveau des grosses fractures transverses creees par l'endommagement dans le coeur supraconducteur, le courant se partage localement de maniere a peu pres equivalente entre les quelques grains de matiere supraconductrice qui restent fixes a l'interface coeur-enveloppe, et le revetement en alliage d'argent. L'interet d'une methode calorimetrique par rapport a la technique electrique, plus rapide, plus sensible et maintenant fiable, reside dans la possibilite de realiser des mesures de pertes ac dans des environnements complexes, reproduisant la situation presente par exemple dans un cable de transport d'energie ou dans un transformateur. En particulier, la superposition d'un courant dc en plus du courant ac habituel nous a permis d'observer experimentalement, pour la premiere fois a notre connaissance, un comportement particulier des pertes ac en fonction de la valeur du courant dc decrit theoriquement par LeBlanc. Nous avons pu en deduire la presence d'un courant d'ecrantage Meissner de 16 A, ce qui nous permet de determiner les conditions dans lesquelles une reduction du niveau de pertes ac pourrait etre obtenue par application d'un courant dc, phenomene denomme "vallee de Clem".
NASA Astrophysics Data System (ADS)
Martinez, Nicolas
Currently Canada, and more particularly Quebec, has a multitude of remote sites whose electrification relies essentially on diesel generators, mainly because of their distance from the central electrical distribution grid. Although considered a reliable and continuous source, the use of diesel generators is becoming increasingly problematic from an energy, economic, and environmental point of view. In order to address this and to propose a more efficient, less expensive, and more environmentally friendly supply method, the use of renewable energies has become indispensable. Various studies have shown that coupling these energies with diesel generators, forming hybrid systems, appears to be one of the best solutions. Among them, the wind-diesel hybrid system with compressed air storage (SHEDAC) stands out as an optimal configuration for the electrification of remote sites. Indeed, several studies have highlighted the efficiency of compressed air storage, compared with other storage technologies, as a complement to a wind-diesel hybrid system. More precisely, this system is made up of the following subsystems: wind turbines, diesel generators, and a compression and compressed-air storage chain, the stored air then being used to supercharge the generators. This process reduces fuel consumption while increasing the share of renewable energy in the electricity production. To date, various research projects have demonstrated the efficiency of such a system and presented a variety of possible configurations. In this thesis, an energy sizing software tool is developed with the aim of standardizing the energy approach to this technology. 
This tool is intended as an innovation in the field, since it is currently impossible to size a SHEDAC system with existing tools. A dedicated state of the art combined with a validation of the results was carried out in order to deliver a reliable and efficient software tool. With a view to installing a SHEDAC system at a remote site in the Nord-du-Québec region, the software was then used to perform an energy study identifying the optimal solution to implement. Finally, using the tools and results obtained previously, new operating strategies are presented to show how the system could be optimized to meet various technical constraints. The content of this thesis is presented in the form of three original articles, submitted to peer-reviewed scientific journals, and a dedicated chapter presenting the new operating strategies. They report the work described in the preceding paragraph and support the conclusion that a SHEDAC system can be used effectively and relevantly at a northern remote site.
Lithium niobate at high temperature for ultrasonic applications
NASA Astrophysics Data System (ADS)
De Castilla, Hector
The objective of this master's work in applied sciences is to find and then study a piezoelectric material that could potentially be used in high-temperature ultrasonic transducers. Such transducers are currently limited to operating temperatures below 300°C because of the piezoelectric element they contain. Overcoming this limitation would enable non-destructive ultrasonic testing at high temperature. With good electromechanical properties and a high Curie temperature (1200°C), lithium niobate (LiNbO3) is a good candidate. However, some studies claim that chemical processes such as the appearance of ionic conductivity or the emergence of a new phase preclude its use in ultrasonic transducers above 600°C. Yet other, more recent studies have shown that it can generate ultrasound up to 1000°C and that no conductivity was visible. A hypothesis therefore emerged: ionic conductivity is present in lithium niobate at high temperature (>500°C), but it only weakly affects its properties at high frequencies (>100 kHz). A characterization of lithium niobate at high temperature is therefore necessary to verify this hypothesis. For this purpose, the resonance method was employed. It allows most of the electromechanical coefficients to be characterized with a simple electrochemical impedance spectroscopy measurement and a model that explicitly relates the material properties to the impedance spectrum: the task is to find the model coefficients that best superimpose the model on the experimental measurements. An experimental bench was built to control the temperature of the samples and measure their electrochemical impedance. Unfortunately, the models currently used for the resonance method are imprecise in the presence of coupling between vibration modes. 
This means that several samples of different shapes are needed in order to isolate each main vibration mode. Moreover, these models do not properly account for harmonics and shear modes. A new analytical model covering the entire frequency spectrum was therefore developed to predict shear resonances, harmonics, and couplings between modes. Nevertheless, some resonance modes and some couplings are still not modelled. The characterization of square samples was carried out up to 750°C. The results confirm the promise of lithium niobate. The piezoelectric coefficients are stable with temperature, and the elasticity and permittivity behave as expected. A thermoelectric effect similar in signature to ionic conductivity was observed, which prevents the impact of the latter from being quantified. Although further studies are needed, the intensity of the resonances at 750°C seems to indicate that lithium niobate can be used for high-frequency (>100 kHz) ultrasonic applications.
Modelling and simulation of a VSC-MMC HVDC link
NASA Astrophysics Data System (ADS)
Saad, Hani Aziz
High-voltage direct current (HVDC) transmission systems are rapidly expanding in the world. Two main factors are responsible for this expansion. The first is related to the difficulty of building new overhead lines to ensure the development of high-voltage AC grids, which makes the use of underground cables more common. However, the use of such cables is limited in length to a few tens of km because of the capacitive current generated by the cable itself. Beyond this length limit, the solution is usually to transmit in DC. The second factor is related to the development of offshore wind power plants, which require connecting powers of several hundred MW to the mainland grid by cables whose lengths can reach several hundred km, and which consequently require an HVDC transmission system. Several HVDC projects are currently planned and developed by the French transmission system operator RTE. One such project is the INELFE interconnection, with a capacity of 2,000 MW, between France and Spain. This thesis has been funded by RTE in order to model and simulate modern HVDC interconnections in off-line and real-time modes. The delivered simulation tools are used to examine targeted HVDC system performance and the risks of abnormal interactions with surrounding power systems. The particularity of the INELFE HVDC system is the use of a dedicated control system that largely determines the dynamic behaviour of the system for both large disturbances (faults on the network) and small perturbations (power step changes). Various VSC topologies, including the conventional two-level, multi-level diode-clamped, and flying-capacitor multi-level converters, have been proposed and reported in the literature. However, due to the complexity of controls and practical limitations, VSC-HVDC installations have been limited to two-level and three-level diode-clamped converters. 
Recently, the development of the modular technology called MMC (Modular Multilevel Converter [Siemens] - [Alstom]) or CTL (Cascaded Two-Level topology [ABB]) has made it possible to overcome these limitations. This topology consists of several sub-modules connected in series. Each sub-module contains two IGBTs with antiparallel diodes and a capacitor that acts as energy storage. The control of these IGBTs connects and disconnects the capacitor from the circuit. A group of several sub-modules in series forms an arm. On the AC side, each phase consists of two arms. Reactors are included in series with each arm in order to limit the fault current. The large number of IGBTs in MMCs creates significant computational problems in electromagnetic-transient-type (EMT-type) simulation tools. Detailed MMC models include the representation of thousands of IGBT (Insulated Gate Bipolar Transistor) switches and must use small numerical integration time steps to accurately represent fast and multiple simultaneous switching events. This becomes even more complex for real-time simulation. The computational burden introduced by such models highlights the need to develop more efficient models. A current trend is based on simplified models capable of delivering sufficient accuracy for EMT-type simulations; however, the validity range of such models must be carefully evaluated. The first objective of this thesis is to model HVDC-MMC transmission systems in EMT-type programs for off-line and real-time simulation. To fulfill this objective, different modelling approaches are presented, then the control system used for HVDC-MMC links is developed, and finally the implementations of MMC models using both CPU (Central Processing Unit) and FPGA (Field-Programmable Gate Array) technologies for real-time simulation are presented. 
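The arm structure described above can be illustrated with a minimal nearest-level modulation sketch, one common textbook way of choosing how many sub-module capacitors to insert per arm. This is an assumption for illustration only, not the INELFE control scheme or any of the thesis's models.

```python
import numpy as np

def nearest_level_counts(v_ref, n_sm):
    """Nearest-level modulation for one MMC phase leg.

    v_ref -- desired AC output voltage, normalized to [-1, 1]
    n_sm  -- number of sub-modules per arm
    Returns (n_upper, n_lower): how many sub-module capacitors are
    inserted in the upper and lower arms. Their sum is always n_sm,
    so the inserted capacitors together support the DC-bus voltage
    while their difference synthesizes the AC waveform.
    """
    n_lower = int(np.clip(round((1 + v_ref) / 2 * n_sm), 0, n_sm))
    return n_sm - n_lower, n_lower

# Staircase synthesis of one sine period with 8 sub-modules per arm
counts = [nearest_level_counts(np.sin(2 * np.pi * k / 16), 8)
          for k in range(16)]
```

With hundreds of sub-modules per arm, as in real installations, the staircase approximates a sine closely, which is why detailed switching models become so expensive to simulate.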
The contributions are useful for researchers and engineers using transient simulation tools for the modelling and analysis of power systems including HVDC-MMC links. The HVDC links currently planned or constructed in France are embedded in highly meshed networks and may have a significant impact on their operation and performance. Therefore, the second objective of this thesis is to perform modal analysis and parametric studies to assess the risks of abnormal interactions between several HVDC links inserted in meshed AC networks.
NASA Astrophysics Data System (ADS)
Metiche, Slimane
The growing demand for poles for electricity and telecommunications networks has made it necessary to use innovative, environmentally friendly materials. Most of the utility poles in Canada and around the world are made of traditional materials such as wood, concrete, or steel. Manufacturers and researchers have various motivations for considering other solutions, among them the limited length of wooden poles and the vulnerability of concrete and steel poles to weathering. New composite-material poles are good candidates in this respect; however, their structural behaviour is not known, and in-depth theoretical and experimental studies are needed before their large-scale deployment. An intensive research program comprising several experimental, analytical, and numerical projects is under way at the Université de Sherbrooke to evaluate the short- and long-term behaviour of these new fibre-reinforced polymer (FRP) poles. The present thesis is part of this effort: our research aims to evaluate the flexural behaviour of new tapered tubular poles made of composite materials by filament winding, through a theoretical study and a series of full-scale bending tests, in order to understand the structural behaviour of these poles, optimize their design, and propose a design procedure for users. The FRP poles studied in this thesis are made of an epoxy resin reinforced with E-glass fibres. Each pole type consists mainly of three zones whose geometric properties (thickness, diameter) and mechanical properties differ from one zone to the next. 
The difference between these properties is due to the number of layers used in each zone and to the fibre orientation of each layer. A total of twenty-three prototypes of different dimensions were tested in bending up to failure. Two types of glass fibre with different linear densities were used to evaluate the effect of fibre type on flexural behaviour. A new test setup capable of testing all types of FRP poles was designed and built according to the recommendations of ASTM D 4923-01 and ANSI C 136.20-2005. An analytical model based on linear-elastic beam theory is proposed in this thesis. This model predicts with good accuracy the experimental load-deflection behaviour and the maximum deflection at the tip of FRP poles made up of several zones with different geometric and mechanical characteristics. A design procedure for FRP poles, based on the experimental results obtained in this thesis, is also proposed. The results obtained here will support the development and improvement of practical design rules for designers and manufacturers of FRP poles. The benefits of this research are both economic and technological: the results constitute a database that will contribute to the development of design standards, and hence to the optimization of the materials used, and will serve to validate future experimental results and theoretical models.
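The beam-theory approach mentioned above can be sketched numerically. A minimal example, under the simplifying assumption of a piecewise-constant flexural stiffness EI per zone (the thesis's poles are tapered, so this is only an illustration, not the proposed model), integrates the Euler-Bernoulli curvature twice to obtain the tip deflection of a cantilevered pole.

```python
import numpy as np

def tip_deflection(length, load, zones):
    """Tip deflection of a cantilevered pole under a transverse load
    at its free end, by double trapezoidal integration of the
    Euler-Bernoulli curvature M(x) / (EI(x)).

    zones -- list of (x_start, x_end, EI) tuples covering [0, length],
             EI in N*m^2, constant within each zone.
    """
    x = np.linspace(0.0, length, 2001)
    ei = np.empty_like(x)
    for x0, x1, val in zones:
        ei[(x >= x0) & (x <= x1)] = val
    m = load * (length - x)              # bending moment, fixed end at x = 0
    curvature = m / ei
    dx = np.diff(x)
    slope = np.concatenate(
        ([0.0], np.cumsum((curvature[1:] + curvature[:-1]) / 2 * dx)))
    defl = np.concatenate(
        ([0.0], np.cumsum((slope[1:] + slope[:-1]) / 2 * dx)))
    return defl[-1]
```

For a single uniform zone this reproduces the classical closed form P*L^3 / (3*EI), which is a convenient sanity check before adding multiple zones.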
NASA Astrophysics Data System (ADS)
Samson, Thomas
We propose a method for obtaining an expression for the Hall conductivity of two-dimensional electronic structures, and we examine it in the zero-temperature limit in order to verify the quantum Hall effect. We are mainly interested in the integer quantum Hall effect and in the fractional effects below one. The system considered is an electron gas in weak interaction with the impurities of the sample. The electron gas model consists of a two-dimensional gas of spinless electrons exposed perpendicularly to a uniform magnetic field. The latter is described by the vector potential A defined in the Dingle, or symmetric, gauge. Following the second-quantization formalism, the Hamiltonian of this gas is represented in the basis of the one-body Dingle states |n,m> and thus expressed in terms of the corresponding creation and annihilation operators a†_nm and a_nm. We further assume that the electrons of the Dingle ground level interact with one another via the Coulomb potential. The method relies on an N-body master equation, quantum and statistical in nature, satisfying the second law of thermodynamics. From it we obtain a system of differential equations called the quantum hierarchy of equations, whose solution allows us to determine a one-body equation, the quantum Boltzmann equation, governing the evolution of the statistical average of the off-diagonal operator a†_nm a_n'm' under the action of the applied electric field E(t). Its solution, Tr(ρ(t) a†_nm a_n'm'), defines the convolution relation between the Hall current density J_H(t) and the electric field E(t), and the Laplace-Fourier transform of its kernel provides the desired expression for the Hall conductivity. 
For an occupation factor (number of electrons divided by the degeneracy of the Dingle states) greater than one, that is, in the absence of electron-electron interaction, it is easy to evaluate this conductivity in the zero-temperature limit and to show that it tends to one of the quantized values q e²/h, in accordance with the integer quantum Hall effect. However, for an occupation factor below one, that is, in the presence of electron-electron interaction, we cannot evaluate this limit and obtain the expected results, because one of the terms involved cannot be determined. Nevertheless, since that term is statistical in nature, it can easily be expressed in terms of the propagator of the electron gas, for which an expression must now be determined in the fractional quantum Hall regime. After showing that perturbation theory, based on Wick's theorem and Feynman diagram techniques, cannot accomplish this task correctly, we propose a second method. It relies on the functional-integral formalism and on a generalized Hubbard-Stratonovich transformation that replaces the two-body interaction with an effective one-body interaction. The final expression obtained, although not completely resolved, should lend itself to a good analytical approximation or, at worst, to numerical evaluation.
Laboratory measurements of electrical resistivity versus water content on small soil cores
NASA Astrophysics Data System (ADS)
Robain, H.; Camerlynck, C.; Bellier, G.; Tabbagh, A.
2003-04-01
The assessment of soil water content variations increasingly relies on geophysical methods, which are non-invasive and allow high spatial sampling. Among these methods, DC electrical imaging is gaining ground. DC electrical resistivity indeed shows strong seasonal variations that depend principally on soil water content. Nevertheless, the widely used empirical Archie's law [1], which links resistivity to void saturation and water conductivity, is not well suited to soil materials with high clay content. Furthermore, the shrinking and swelling properties of soil materials have to be considered. It is therefore relevant to develop new laboratory experiments in order to establish a relation between electrical resistivity and water content that takes into account the rheological and granulometric specificities of soil materials. The experimental device developed in the IRD laboratory simultaneously monitors (i) the water content, (ii) the electrical resistivity, and (iii) the volume of a small cylindrical soil core (100 cm3) placed in a temperature-controlled incubator (30°C). It provides both the shrinkage curve of the soil core (void volume versus water content) and the electrical resistivity versus water content curve. Modelling the shrinkage curve gives, for each moisture state, the water contained respectively in macro- and micro-voids [2], and then allows a generalized Archie-like law to be proposed, as follows: 1/Rs = 1/(Fma.Rma) + 1/(Fmi.Rmi), with Fi = Ai/(Vi^Mi.Si^Ni), where Rs is the soil resistivity; Fma and Fmi are the so-called "formation factors" for macro- and micro-voids, respectively; Rma and Rmi are the resistivities of the water contained in macro- and micro-voids, respectively; Vi is the volume and Si the saturation of macro- and micro-voids, respectively; and Ai, Mi, and Ni are adjustment coefficients. The variations of Rmi are calculated, assuming that Rma is constant. 
Indeed, the rise in ionic concentration in the water may be neglected during the drainage of the macro-voids, as it corresponds to a small quantity of water for the studied samples. Soil solid components are generally electrical insulators, so the conduction of electrical current rests on two phenomena occurring in the water: (i) volume conduction, controlled by the electrolyte concentration in the water and the geometrical characteristics of the macro-void network; (ii) surface conduction, controlled by the diffuse double layer, which depends on the solid-liquid interactions, the specific surface of clay minerals, and the geometry of particle contacts. For the water contained in macro-voids the dominant phenomenon seems to be volume conduction, while for the water contained in micro-voids it seems to be surface conduction. This hypothesis satisfactorily explains the shape of the electrical resistivity versus water content curves obtained for three different oxisols with clayey, clayey-sandy, and sandy-clayey textures. [1] Archie G.E. 1942. The electrical resistivity log as an aid in determining some reservoir characteristics. Trans. AIME, 146, 54-67. [2] Braudeau E. et al. 1999. New device and method for soil shrinkage curve measurement and characterization. S.S.S.A.J., 63(3), 525-535.
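The generalized Archie-like law quoted above can be evaluated directly. A minimal sketch, with all coefficient values left to the caller, since the abstract does not give the fitted Ai, Mi, Ni values:

```python
def formation_factor(v, s, a, m, n):
    """Generalized Archie 'formation factor' F = A / (V^M * S^N)
    for one void family (macro or micro)."""
    return a / (v**m * s**n)

def soil_resistivity(v_ma, s_ma, r_ma, v_mi, s_mi, r_mi,
                     coeffs_ma, coeffs_mi):
    """Soil resistivity Rs from the parallel-conductance law
    1/Rs = 1/(Fma*Rma) + 1/(Fmi*Rmi): macro- and micro-void
    water act as two conductors in parallel."""
    f_ma = formation_factor(v_ma, s_ma, *coeffs_ma)
    f_mi = formation_factor(v_mi, s_mi, *coeffs_mi)
    return 1.0 / (1.0 / (f_ma * r_ma) + 1.0 / (f_mi * r_mi))
```

The parallel form makes the physical reading explicit: whichever void family offers the lower resistance Fi*Ri dominates the measured soil resistivity.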
NASA Astrophysics Data System (ADS)
Baland, Rose-Marie; Van Hoolst, Tim; Tobie, Gabriel; Dehant, Véronique
2015-04-01
Besides being the largest natural satellite known in the Solar System, Ganymede most likely also has the most differentiated internal structure of all satellites. Ganymede is thought to have an external water/ice layer subdivided into three sublayers: an outer ice shell, a global liquid water ocean, and a high-pressure ice mantle. The presence of a water layer is supported by the possible detection of an induced magnetic field with the Galileo spacecraft. The metallic core is divided into a solid (inner core) and a liquid (outer core) part. Between the water/ice and metallic layers, a rock mantle is expected. The JUpiter ICy moons Explorer (JUICE) mission led by ESA is planned to be launched in 2022. The spacecraft is expected to enter orbit around Ganymede in September 2032. The Ganymede tour will alternate elliptical and circular phases at different altitudes. The circular phases at altitudes of a few hundred kilometers are dedicated partly to the study of the internal structure, such as the determination of the extent and composition of the ocean and of the surface ice shell. The payload of the spacecraft comprises the radio science package 3GM (Gravity and Geophysics of Jupiter and the Galilean Moons), which will be used to measure the Doppler effect on radio links between the orbiter and the Earth, links that are affected by the gravity field of Ganymede. The gravity field of Ganymede is the sum of the static hydrostatic field (related to the secular Love number kf), of the periodically varying field due to tidal deformations (related to the tidal Love number k2 and the tidal dissipation factor Q), of the periodically varying field due to changes in the rotation state (variations in the rotation rate and in the orientation of the rotation axis), and of the non-hydrostatic field that may be due to mass anomalies. 
The tidal and rotation parameters depend on the internal structure of the satellite (density, size, and rheological properties of the different layers) in a non-trivial way. Our aim is to assess which internal structure quantities of Ganymede can be retrieved from Doppler effect measurements. The Doppler effect is modelled by the relative radial velocity between the orbiter and the terrestrial observer, considering the tides and the rotation state of Ganymede, together with the strong attraction exerted directly by Jupiter on the orbiter. The model neglects some effects such as a possible atmospheric drag and a possible non-hydrostatic part of the gravity field. We aim to answer questions such as 'Is it possible to separate the tidal and rotational signals?' and 'What is the optimal orbital configuration?'. The inversion of the simulated noisy data is done by the least-squares method. An interesting configuration has to maximise the effects of the tides and of the rotation on the Doppler signal, in order to maximise the constraints inferred on the internal structure of Ganymede. It also has to correspond to a fairly stable quasi-circular orbit, as required for the circular phases of the Ganymede tour.
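The question of separating tidal and rotational signals can be illustrated with a toy linear least-squares inversion: two sinusoids at distinct known periods are recovered from noisy simulated data. The amplitudes, noise level, and the 3-day "rotational" period are illustrative assumptions (only the 7.155-day value corresponds to Ganymede's orbital period); the actual inversion is performed on a full radial-velocity model.

```python
import numpy as np

# Simulated radial-velocity residuals: a "tidal" and a "rotational"
# sinusoid at different known periods, plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 2000)            # days
f_tide, f_rot = 1.0 / 7.155, 1.0 / 3.0      # cycles/day
signal = (1.0 * np.cos(2 * np.pi * f_tide * t)
          + 0.4 * np.sin(2 * np.pi * f_rot * t))
data = signal + rng.normal(0.0, 0.05, t.size)

# Linear least squares: one design-matrix column per basis function.
# The two signals separate cleanly as long as the columns are not
# degenerate over the observation window.
design = np.column_stack([np.cos(2 * np.pi * f_tide * t),
                          np.sin(2 * np.pi * f_tide * t),
                          np.cos(2 * np.pi * f_rot * t),
                          np.sin(2 * np.pi * f_rot * t)])
coeffs, *_ = np.linalg.lstsq(design, data, rcond=None)
```

When the two periods approach each other, or the observation window shortens, the design-matrix columns become nearly collinear and the recovered amplitudes degrade, which is the toy analogue of the orbital-configuration question posed above.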
Transport of EPR pairs in mesoscopic structures
NASA Astrophysics Data System (ADS)
Dupont, Emilie
In this thesis we are particularly interested in the propagation of delocalized and localized EPR pairs,1 and in the influence of a superconductor on the transport of these pairs. After an introduction to this study and to its scientific context, quantum information processing, in Chapter 1 we review the system consisting of two normal quantum dots surrounded by two superconducting leads. This allows us to introduce a calculation method that will be reused later, and also to find the Josephson current produced by this system when it is turned into a dc SQUID by the addition of an auxiliary junction. The SQUID makes it possible to measure the spin state (singlet or triplet), and can be formed from other systems that we study afterwards. In Chapter 2 we recall the detailed study of an Andreev entangler carried out by a Basel group. The T-matrix, which gives the current in the cases where the electrons are spatially separated or not, is studied in detail for use in the following chapter. Chapter 3 is devoted to the influence of noise on the operation of the Andreev entangler. This noise modifies the form of the current, leading to different operating conditions for the entangler. Indeed, the noise present on the quantum dots can disturb the transport of EPR pairs through the degrees of freedom. We show that, because of the "entanglement" between the charge of the pair and the noise, the pair is destroyed at long times. However, the most important result is that the noise disturbs more strongly the transport of delocalized pairs, which involves a two-particle Breit-Wigner resonance; the parasitic transport involves only a one-particle Breit-Wigner resonance. 
In Chapter 4 we return to the system of two quantum dots surrounded by two superconducting leads, but here the two quantum dots are also superconducting. We obtain the effective Hamiltonian in the same way as before, as well as the form of the current. In the case where the voltage between the two leads is zero, we compare with experiment and find that the results agree better with it under the hypothesis of a bath modelling the noise on one of the leads. Finally, in the last chapter, we use both a charge qubit and a spin qubit surrounded by two superconducting leads. We can then measure the influence of the superconductor and see whether it is possible to create entangled electron pairs here and to arrive at a quantum pendulum. 1There exist systems that produce pairs of particles ejected simultaneously in opposite directions, which make it possible to test the Einstein-Podolsky-Rosen paradox. Each particle of the pair is in an indeterminate state. If the respective states of the two particles are measured, systematically complementary results are obtained, at random: 0-1 or 1-0. Quantum mechanics explains that the two particles thus produced constitute a single system, an EPR pair.
Determination of the Atmospheric Parameters of DB-Type White Dwarf Stars
NASA Astrophysics Data System (ADS)
Beauchamp, Alain
1995-01-01
White dwarf stars whose visible spectra are dominated by strong neutral helium lines are subdivided into three classes: DB (neutral helium lines only), DBA (neutral helium and hydrogen lines), and DBZ (neutral helium and heavy-element lines). We analyze three samples of observed spectra of these types of white dwarfs. The samples consist, respectively, of 48 spectra in the visible range (3700-5100 A), 24 in the ultraviolet (1200-3100 A), and four in the red part of the visible (5100-6900 A). Among the objects of the visible sample, we identify four new DBAs, as well as two new DBZs previously classified as DB. The analysis allows us to determine the atmospheric parameters spectroscopically, namely the effective temperature, the surface gravity, and, in the case of DBAs, the relative hydrogen abundance N(H)/N(He). For objects hotter than ~15,000 K, the surface gravity determination is reliable, and we obtain the stellar masses with a theoretical mass-radius relation. The demands of analyzing these objects required major improvements in the modelling of their atmospheres and of the radiative flux distributions they emit. We have included in the model atmospheres, for the first time to our knowledge, the effects due to the He2+ molecular ion, as well as the equation of state of Hummer and Mihalas (1988), which accounts for perturbations between particles in the computation of the populations of the various atomic levels. We treat convection within the framework of mixing-length theory. 
Three grids of LTE (local thermodynamic equilibrium) model atmospheres were produced for a set of effective temperatures, surface gravities, and hydrogen abundances covering the properties of the stars in our samples; they are characterized by different parametrizations of mixing-length theory, called ML1, ML2, and ML3, respectively. We computed a grid of synthetic spectra with the same parametrizations as the model-atmosphere grid. Our treatment of neutral-helium line broadening was significantly improved over previous studies. On the one hand, we account for the line broadening produced by interactions between the emitter and neutral particles (resonance and van der Waals broadening) in addition to that produced by charged particles (Stark broadening). On the other hand, we computed the Stark profiles ourselves with the best broadening theories available for the majority of the observed lines; these profiles surpass in quality what has been published to date. We computed the mass distribution of DB stars hotter than 15,000 K. The DB mass distribution is very narrow, with about three quarters of the stars within the interval 0.55-0.65 M☉. The mean mass of DB stars is 0.58 M☉ with sigma = 0.07. The main difference between the DB and DA mass distributions is the small proportion of DBs in the wings of the distribution, which implies that DAs less massive than ~0.4 M☉ and more massive than ~0.8 M☉ do not convert into DBs. The most massive objects in our sample are of type DBA, which suggests that high mass favours the visibility of hydrogen. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Bredin, Nathalie
The relentless generation of waste calls for the development of new disposal technologies, such as energy recovery. A well-suited route for recovering energy from waste is the cement-making process. The cement industry is one of the largest consumers of energy. Preparing cement requires temperatures of around 1450°C to react the raw meal, a powdery material composed mainly of limestone, clay, and shale. The combustion gases have a long residence time in the kilns, and the operating mode of the kilns produces a scrubbing effect in which acid gases are leached by the alkaline raw meal. These properties make the cement process a good candidate for energy recovery from waste. Wastes of interest to cement plants include, among others, used oils, spent solvents, and scrap tires, whose calorific value is comparable to that of coal. The releases from cement plants to the atmosphere are mainly gaseous emissions. The impact of recovering energy from scrap tires on the gaseous emissions of the Saint-Laurent cement plant in Joliette was therefore studied using atmospheric dispersion modeling. The model used, ISC-ST2 (Industrial Source Complex - Short Term), is of Gaussian type. Analysis of the maximum hourly concentrations and of averages over various periods (1 hour, 8 hours, ..., 1 year) of the gaseous emissions in ambient air shows that they fall below the standards issued by the Quebec government and by the Montreal Urban Community. A study of the geographic distribution of pollutants, based on annual concentrations within a 5 km radius of the plant, shows that using tires as a substitute fuel has only a small impact on ground-level contaminant concentrations. 
The study of the Gaussian dispersion model led to a new tool: the correlation between the lateral and vertical pollutant concentrations in the plume. Applied to Turner's atmospheric dispersion scheme for rural terrain and to Briggs's schemes for urban and rural terrain, this tool shows that, in rural terrain, the dispersion scheme proposed by Briggs yields greater dispersion than Turner's for the neutral and unstable classes and less for the stable classes. Comparing the Briggs schemes (rural and urban) confirms that dispersion is greater in urban terrain than in rural terrain.
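The ISC-ST2 model described above is of Gaussian type. As a minimal sketch, the ground-reflected Gaussian plume equation can be combined with Briggs-style open-country dispersion coefficients (the class-C coefficients below are the commonly tabulated ones, not values taken from this study):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (g/m^3).

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack
    height (m); sigma_y, sigma_z: dispersion parameters (m) evaluated
    at the downwind distance of interest.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

def briggs_rural_C(x):
    """Briggs open-country sigmas (m) for stability class C at distance x (m)."""
    sigma_y = 0.11 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.08 * x * (1 + 0.0002 * x) ** -0.5
    return sigma_y, sigma_z

# Ground-level concentration 1 km downwind of a 50 m stack
sy, sz = briggs_rural_C(1000.0)
c_centerline = gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                              sigma_y=sy, sigma_z=sz)
c_offaxis = gaussian_plume(Q=100.0, u=5.0, y=200.0, z=0.0, H=50.0,
                           sigma_y=sy, sigma_z=sz)
```

Comparing rural and urban coefficient sets at the same distance is how the dispersion schemes above can be contrasted.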
[Therapeutic monitoring: analytic, pharmacokinetic and clinical aspects].
Marquet, P
1999-01-01
This paper gives an overview of present aspects and future prospects of therapeutic drug monitoring (TDM). The main aims of TDM are to avoid therapeutic failures due to poor compliance or too low a dose of a given drug, as well as adverse or toxic effects due to an excessive dose. The drugs routinely monitored vary by country but are generally few, and for some drugs only patients at risk or belonging to particular sub-populations need TDM. Pre-analytical management is necessary, comprising correct information for the physician on the nature of the sample to collect and on the clinical data needed for interpretation, the recording of these data, and control of sample routing and storage conditions. Nowadays, drug analyses are essentially performed using immunochemical techniques, which are rapid and easy to operate but limited to a small number of drugs, and chromatographic methods, which are more specific, adaptable to almost any therapeutic drug, and increasingly accessible both financially and technically. The interpretation of analytical results is a most important part of TDM; it requires knowledge of the clinical data, the precise collection time, and any co-administered treatments, together with a previously defined therapeutic range or target concentration adapted to the population to which the patient belongs; the limitations of the analytical technique used must also be considered. Clinical pharmacokinetics is a further step in the use of analytical results, allowing the prediction of an effective dose and administration schedule in one step, using a limited number of blood samples and generally a Bayesian estimation algorithm, readily available through commercial software dedicated to a few drugs in different reference populations. 
The pharmacokinetic characteristics of different populations and the validation of Bayesian estimation have also been published for a number of drugs, sometimes by pharmaceutical companies following phase I and II clinical trials, even taking various physiopathological covariables into account, but mostly by independent researchers using smaller populations. The efficiency and cost of routine TDM are questionable when it is prescribed with no clinical information or even no indication of administration and sampling times. On the contrary, several studies have reported that clinical pharmacokinetics significantly improved patient outcome and was cost-saving, particularly in terms of duration of hospitalization. In the author's opinion, TDM will in the near future be mainly dedicated to drugs used to treat life-threatening diseases, such as anti-HIV, anticancer, and immunosuppressive drugs, and perhaps also biotechnological peptides or proteins, because of cost considerations. TDM will probably also be used preferentially in target populations characterized by higher risk or greater pharmacokinetic variability. Very sensitive, specific, and partly automated separative techniques, such as liquid chromatography-tandem mass spectrometry, may become more common than immunochemical methods, owing to higher flexibility and improved sample throughput. Clinical pharmacokinetics may spread to a larger number of drugs and patients, thanks to larger available reference populations accounting for a number of covariables, computerized data collection, and simplified modeling. TDM will therefore mainly be performed in hospitals, with an essentially clinical role for the pharmacists or pharmacologists involved and routine use of recent and efficient technologies by the TDM laboratory technical staff.
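The Bayesian estimation mentioned above combines sparse patient samples with population priors. A minimal sketch, assuming a one-compartment IV bolus model, log-normal priors, and a coarse grid search in place of the commercial algorithms the paper refers to:

```python
import math

def conc(t, dose, CL, V):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-(CL/V)*t)."""
    return dose / V * math.exp(-CL / V * t)

def map_estimate(times, obs, dose, prior_CL, prior_V,
                 cv_prior=0.3, cv_res=0.15):
    """Coarse grid-search MAP estimate of clearance CL and volume V.

    The posterior balances a proportional residual-error term against
    log-normal priors centered on the population values."""
    best, best_obj = None, float("inf")
    for CL in [prior_CL * (0.5 + 0.02 * i) for i in range(51)]:
        for V in [prior_V * (0.5 + 0.02 * j) for j in range(51)]:
            res = sum((math.log(c) - math.log(conc(t, dose, CL, V))) ** 2
                      for t, c in zip(times, obs)) / (2 * cv_res ** 2)
            prior = (math.log(CL / prior_CL) ** 2
                     + math.log(V / prior_V) ** 2) / (2 * cv_prior ** 2)
            if res + prior < best_obj:
                best, best_obj = (CL, V), res + prior
    return best

# Three "observed" concentrations simulated with CL = 5 L/h, V = 50 L
# after a 500 mg dose; priors are deliberately offset (hypothetical values)
times = [1.0, 4.0, 8.0]
obs = [conc(t, 500.0, 5.0, 50.0) for t in times]
CL_hat, V_hat = map_estimate(times, obs, 500.0, prior_CL=4.0, prior_V=40.0)
```

The MAP estimate lands between the population prior and the value implied by the samples, which is exactly why a limited number of blood samples suffices.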
NASA Astrophysics Data System (ADS)
Maurice, Elsa
Fossil fuels are a scarce energy resource. Since the industrial revolution, mankind has used and abused non-renewable energies, which are responsible for much environmental damage. Energy production is one of the main challenges for global sustainable development. In our society we are witnessing an exponential increase in the use of Information and Communication Technology (ICT) systems such as the Internet, phone calls, etc. ICT development enables the creation and optimization of many smart systems and the pooling of services, and it also helps mitigate climate change. However, because of their electricity consumption, ICT are also responsible for some greenhouse gas (GHG) emissions: 3% of the total. This gives the sector an incentive to change in order to limit its GHG emissions. To properly evaluate and optimize ICT services, it is necessary to use evaluation methods suited to the specificity of these systems. Currently, the methods used to evaluate GHG emissions are not adapted to dynamic systems, which include ICT systems. Variations in electricity production over a day or even a month are not yet taken into account. This problem is far from restricted to the modeling of GHG emissions; it extends to the overall variation in the production and consumption of electricity. The Life Cycle Assessment (LCA) method provides useful and complete tools to analyze environmental impacts but, as with GHG computation methods, it needs to be adapted dynamically. In the ICT framework, the first step in solving this LCA problem is to be able to model the variation in time of electricity production. This master's thesis introduces a new way to include the time variation of electricity consumption and production in LCA methods. First, it builds a historical hourly database of the electricity production and imports-exports of three Canadian provinces: Alberta, Ontario, and Quebec. 
It then develops a time-dependent model to predict their electricity consumption. The study is carried out for a project implementing a "cloud computing" service between these provinces. The consumption model then provides the information needed to choose the best place and time to run ICT services such as Internet messaging or server maintenance. This first implementation of a time parameter brings more precision and insight to LCA data. Disaggregating electricity inventory flows in LCA refines the modeled effects of electricity production, both historically and in real time. Short-term predictions of Quebec's electricity exports and imports were also computed in this thesis, with the goal of anticipating and optimizing ICT service use in real time. The origin of a kilowatt-hour consumed in Quebec depends on the import-export balance with neighboring jurisdictions; this balance depends mainly on the price of electricity, the weather, and Quebec's energy needs. This makes it possible to plot a time-varying estimate of the environmental consequences of consuming a kilowatt-hour in Quebec, which can then be used to limit the GHG emissions of ICT services such as cloud computing or smart grids. A smart trade-off between electricity consumption and environmental issues will lead to more efficient sustainable development.
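The core of a time-varying LCA inventory is a generation-weighted emission intensity per hour. A minimal sketch (the emission factors and the two-hour mix are illustrative placeholders, not data from the thesis):

```python
# Illustrative life-cycle emission factors, g CO2-eq per kWh generated
FACTORS = {"hydro": 10, "gas": 450, "coal": 900, "wind": 15}

def hourly_intensity(mix_by_hour):
    """mix_by_hour: list of dicts {source: MWh generated that hour}.

    Returns the generation-weighted grid intensity (g CO2-eq/kWh)
    for each hour, so a kWh consumed at night need not carry the
    same burden as one consumed at peak."""
    out = []
    for mix in mix_by_hour:
        total = sum(mix.values())
        out.append(sum(FACTORS[s] * mwh for s, mwh in mix.items()) / total)
    return out

day = [
    {"hydro": 900, "gas": 100},               # night: mostly hydro
    {"hydro": 600, "gas": 300, "coal": 100},  # peak: more thermal
]
intensities = hourly_intensity(day)
```

Scheduling a flexible ICT load (e.g., server maintenance) into the low-intensity hours is then a simple argmin over this series.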
NASA Astrophysics Data System (ADS)
Lasri, Abdel-Halim
In this development research, we designed, developed, and field-tested an interactive simulator to foster the learning of the probabilistic laws involved in Mendelian genetics. This computer environment is meant to let students run simulated experiments, using statistics and probability as mathematical tools to model the transmission of hereditary traits. The didactic approach is essentially oriented toward the quantitative methods involved in experimenting with hereditary factors. By incorporating into the simulator the "cognitive lens" ("Lunette cognitive") principle of Nonnon (1986), the student was placed in a situation where he or she could synchronize the perception of the iconic (concrete) and symbolic (abstract) representations of Mendel's probabilistic laws. Using this environment, we led students to identify the hereditary trait(s) of the parents to be crossed, to predict the probable phenotypic frequencies of the offspring resulting from the cross, to observe the statistical results and their fluctuation on the frequency histogram, to compare these results with the predictions, to interpret the data, and to select further experiments accordingly. The steps of the inductive approach are favored from the beginning to the end of the proposed activities. The simulator and its accompanying documents were designed from some twenty guiding principles and an action model, which derive from psychological, didactic, and technological theoretical considerations. The research describes the structure of the various parts making up the simulator. 
Its architecture is built around a central unit, the "Principale", whose links and ramifications with the other units give the whole simulator its flexibility and ease of use. The "Genetique" simulator, at the prototype stage, and its documentation underwent two trials: one functional, the other empirical. The functional trial, conducted with a group of expert teachers, identified shortcomings in the material so that the necessary adjustments could be made. The empirical trial, conducted with a group of eleven (11) secondary-school students, aimed, on the one hand, to test the ease of use of the "Genetique" simulator and its accompanying documents and, on the other hand, to verify whether the participants derived pedagogical benefits from this environment. Three techniques were used to collect the empirical trial data. The analysis of the results allowed a critical review of the concrete products of this research and the necessary modifications to both the simulator and the accompanying documents. It also allowed us to conclude that our interactive simulator fosters an inductive approach that lets students appropriate Mendel's probabilistic laws. Finally, the conclusion outlines research directions for later studies, particularly those interested in developing simulators, so as to integrate concrete and abstract representations presented in real time. The diskettes of the "Genetique" simulator and the accompanying documents are appended to this research.
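The simulated experiments the students run amount to Monte Carlo sampling of a Mendelian cross, with observed phenotype frequencies fluctuating around the theoretical ratio. A minimal sketch of a monohybrid cross (not the simulator's actual code, which is not described at this level):

```python
import random

def cross(parent1, parent2, n, rng):
    """Simulate n offspring of a monohybrid cross.

    Each parent contributes one randomly chosen allele from its
    two-character genotype string (e.g. "Aa"). Returns the observed
    fraction showing the dominant phenotype (at least one 'A')."""
    dominant = 0
    for _ in range(n):
        genotype = rng.choice(parent1) + rng.choice(parent2)
        if "A" in genotype:
            dominant += 1
    return dominant / n

rng = random.Random(42)
# Aa x Aa: Mendel's laws predict a 3:1 dominant:recessive phenotype ratio
freq = cross("Aa", "Aa", 10_000, rng)
```

Repeating the call with different seeds or smaller `n` reproduces the histogram fluctuation the students are asked to observe and interpret.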
Numerical analysis of microplasticity at grain boundaries in FCC metallic polycrystals
NASA Astrophysics Data System (ADS)
Andriamisandratra, Mamiandrianina
Fatigue failure still affects many metallic parts subjected in service to repeated loading. At the scale of the microstructure, grain boundaries are known to play an important role in the fatigue resistance of the material through the hardening they provide. However, the grain boundaries themselves, or the zone in their vicinity, have often been identified as initiation sites for fatigue cracks, particularly in face-centered cubic (FCC) metals. In order to characterize the micromechanical behavior near different types of grain boundary, the interface behavior under monotonic uniaxial tension was modeled by the finite element method using a crystal plasticity law. Several bicrystalline crystallographic configurations were then simulated and their behavior analyzed under monotonic axial tension. The validity of the model was restricted to small strains (<5%). Four important criteria governing crystal mechanical behavior were identified: the elastic stiffness, the Schmid factors of the two most favorable slip systems, and the ratio between these two highest Schmid factors, which reflects the propensity for single or multiple slip. Tension simulations on single crystals made it possible to understand the individual influence of each criterion on the macroscopic behavior (stresses and strains) and the microscopic behavior (crystallographic slip). The bicrystal computations then revealed the particular activation, at the grain boundary, of certain slip systems that are a priori unfavorable. This phenomenon was associated with the need to ensure mechanical compatibility of deformation on either side of the interface. 
The strain profile along the longitudinal direction of the specimen showed a systematic drop in strain at the boundary, whose intensity increases with the angular misorientation between the two grains. The heterogeneity of the strain in each cross-section of the specimen is mainly related to the strongly anisotropic character of plasticity and turns out to be more pronounced when deformation is accommodated by single slip. Finally, a bicrystal case exhibiting microscopic compatibility of slip traces in the grain-boundary plane was studied. However, no correlation with the slip profile was found; a macroscopic cause is more likely at the origin of the observed profile. Keywords: grain boundaries, localization, crystal plasticity, finite elements, bicrystal.
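The Schmid factors invoked above can be computed directly from the crystal orientation: for each of the 12 octahedral {111}<110> slip systems of an FCC crystal, m = |cos φ · cos λ| with φ and λ the angles of the loading axis to the slip-plane normal and slip direction. A minimal sketch:

```python
import itertools
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def fcc_slip_systems():
    """The 12 octahedral {111}<110> slip systems of an FCC crystal:
    four {111} plane normals, each containing three <110> directions."""
    planes = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]
    dirs = [(1, -1, 0), (1, 1, 0), (1, 0, -1), (1, 0, 1),
            (0, 1, -1), (0, 1, 1)]
    return [(n, d) for n in planes for d in dirs
            if sum(a * b for a, b in zip(n, d)) == 0]

def max_schmid(load):
    """Highest Schmid factor m = |cos(phi) * cos(lambda)| over all systems."""
    load = normalize(load)
    best = 0.0
    for n, d in fcc_slip_systems():
        cos_phi = abs(sum(a * b for a, b in zip(normalize(n), load)))
        cos_lam = abs(sum(a * b for a, b in zip(normalize(d), load)))
        best = max(best, cos_phi * cos_lam)
    return best
```

For a <100> tensile axis the maximum Schmid factor is 1/√6 ≈ 0.408, shared by eight systems; the ratio between the two highest factors, used as a criterion above, falls out of the same sorted list.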
NASA Astrophysics Data System (ADS)
Pollender-Moreau, Olivier
This document presents, within a conceptual framework, a chaining method linking the various steps needed to simulate an aircraft from its geometric data and mass properties. Using the case of the Hawker 800XP business jet from Hawker Beechcraft, it is shown, via data, a batch-processing workflow, and a simulation platform, how to (1) model the geometry of an aircraft as several surfaces, (2) compute the aerodynamic forces according to a technique known as
NASA Astrophysics Data System (ADS)
Miquel, Benjamin
The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential to the protection of human lives. These analyses also aim at limiting the structural damage caused by an earthquake so as to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that internal displacements are caused not only by the earthquake but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods for performing such dynamic analyses of hydraulic structures. For the existing complex methods, attention is given to the difficulties arising from their use, in particular the use of transmitting boundary conditions to simulate the semi-infinite extent of reservoirs. A procedure was developed to estimate the error these boundary conditions can introduce into finite element dynamic analyses. Depending on their formulation and location, we showed that they can considerably affect the response of fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that they rest on numerous simplifications that can affect the predicted dynamic behavior of such systems. One of the main objectives of this thesis was therefore to develop new simplified methods that are more accurate than the existing ones. First, a new spectral analysis method is proposed, with expressions for the fundamental frequency of fluid-structure systems, the key parameter of spectral analysis. We show that this technique can easily be implemented in a spreadsheet or program and that its computation time is nearly instantaneous. 
Compared with more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such analyses show that some areas of the studied structure undergo excessive stresses, time-history analyses give a better estimate of the extent of these zones as well as of the timing of these excessive stresses. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this is generally sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods were developed for fluid-structure systems to address these observations as well as the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. It includes the flexibility of structures and foundations, the dynamic effect of higher vibration modes, and variations in the water levels. The method was then extended to beam structures in contact with fluids. These developments also extended the existing analytical formulations of the dynamic properties of a dry beam to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems and of beam-like structures in contact with fluid. Finally, a Modified Accelerogram Method (MAM) was developed to transform the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can then be applied directly to the dry structure (i.e., without water) to compute the dynamic response of the fluid-structure system. 
This original technique can include numerous parameters that influence the dynamic response of such systems and allows the fluid-structure interaction to be treated analytically while keeping the advantages of finite element modeling.
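The basic mechanism behind the fundamental-frequency expressions discussed above is that hydrodynamic loads act like an added mass, lowering the dry frequency. A minimal sketch using Westergaard's classical rigid-dam approximation, m_a(z) = (7/8)·ρ_w·√(H·z) per unit area, on a single-degree-of-freedom idealization (this is a textbook simplification, not the thesis's method):

```python
import math

RHO_W = 1000.0  # water density, kg/m^3

def westergaard_added_mass(H, n=1000):
    """Total Westergaard added mass per unit width (kg/m) over depth H,
    integrated numerically (midpoint rule); z is depth below the surface."""
    dz = H / n
    return sum(7 / 8 * RHO_W * math.sqrt(H * (i + 0.5) * dz) * dz
               for i in range(n))

def fundamental_frequency(k, m_struct, m_added=0.0):
    """SDOF idealization: f = (1/2*pi) * sqrt(k / (m + m_a))."""
    return math.sqrt(k / (m_struct + m_added)) / (2 * math.pi)

# Hypothetical stiffness/mass values for illustration
H = 30.0  # reservoir depth (m)
m_a = westergaard_added_mass(H)
f_dry = fundamental_frequency(k=5e8, m_struct=4e5)
f_wet = fundamental_frequency(k=5e8, m_struct=4e5, m_added=m_a)
```

The closed-form integral is (7/12)·ρ_w·H², a convenient check; the wet frequency is always below the dry one, which is the key parameter shift the simplified spectral method captures.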
NASA Astrophysics Data System (ADS)
Desrosiers, Marc
The Gaspé Peninsula, at the eastern tip of Quebec, has a rich prehistoric archaeological heritage extending from 9,000 to 450 years before present. Archaeology has revealed numerous prehistoric sites in the region, especially in the north of the peninsula, a finding attributable in part to the presence of lithic raw-material sources and potential circulation routes, but also to the uneven spatial distribution of research efforts. Studying the prehistoric archaeological potential of the Gaspé Peninsula, the first step of the archaeological process, is therefore necessary to orient future work in the region. Three representative sectors of the peninsula were studied to develop a new approach to potential assessment: Sainte-Anne-des-Monts, Lac Sainte-Anne, and New Richmond. Together these sectors stretch across the peninsula along a north-south axis and correspond to a possible prehistoric circulation route. The evaluation of the archaeological potential of these study areas depends in particular on the geographic context of the peninsula. That context is complex, especially since deglaciation, which triggered a succession of landscape transformations. Relative sea-level variations, the transition from tundra to forest, and other environmental changes conditioned the way prehistoric populations adapted to the territory. The potential study also depends on the existing archaeological knowledge of the region, which conditions the selection of environmental variables representative of known sites. By confronting the geographic and archaeological data, we proposed settlement patterns for the chronocultural periods of Gaspesian prehistory. These patterns illustrate how prehistoric populations occupied the territory. 
Applying these settlement patterns in a GIS (Geographic Information System) then makes it possible to model the potential occupation of the territory over the periods of Gaspesian prehistory. The modeling results are promising: the known archaeological sites in the Sainte-Anne-des-Monts sector coincide with the high-potential areas proposed by the model, although a geomorphological validation of the model remains necessary at a later stage. The two other sectors had no dated archaeological sites against which the model could be validated. The proposed tool is all the more interesting in that it allows large territories with poorly evaluated archaeological potential, such as the Gaspé Peninsula, to be studied rapidly.
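In GIS terms, applying settlement patterns as environmental variables amounts to a weighted overlay: each raster cell receives a potential score from weighted criteria. A minimal sketch (the criteria, weights, and thresholds below are invented for illustration; the thesis derives its own from the characteristics of known sites):

```python
def score_cell(dist_to_water_m, slope_deg):
    """Hypothetical two-criterion potential score for one raster cell."""
    s = 0.0
    s += 0.6 * (1.0 if dist_to_water_m < 200 else
                0.5 if dist_to_water_m < 500 else 0.0)
    s += 0.4 * (1.0 if slope_deg < 5 else
                0.5 if slope_deg < 10 else 0.0)
    return s  # 0 (low) .. 1 (high potential)

def potential_map(dist_raster, slope_raster):
    """Cell-by-cell weighted overlay of two co-registered rasters."""
    return [[score_cell(d, s) for d, s in zip(dr, sr)]
            for dr, sr in zip(dist_raster, slope_raster)]

# Toy 2x2 rasters: distance to water (m) and slope (degrees)
dist = [[150, 600], [300, 100]]
slope = [[2, 12], [8, 3]]
pm = potential_map(dist, slope)
```

Validation then consists of checking that known, dated sites fall in high-scoring cells, which is the test performed for the Sainte-Anne-des-Monts sector.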
NASA Astrophysics Data System (ADS)
Sawadogo, Teguewinde
This study focuses on the modeling of fluidelastic instability induced by two-phase cross-flow in the tube bundles of steam generators. The steam generators of CANDU-type nuclear power plants, for example, designed in Canada by AECL and operated worldwide, have thousands of tubes assembled in bundles that ensure heat exchange between the internal circuit of heated heavy water coming from the reactor core and the external circuit of light water evaporated and directed toward the turbines. The main objective of this research project is to extend theoretical models of fluidelastic instability to two-phase flow, validate the models, and develop a computer program for simulating flow-induced vibrations in tube bundles. The quasi-steady model was investigated within the scope of this project. The time delay between the structure's motion and the fluid forces it generates was extensively studied in two-phase flow. The study was conducted for a rotated triangular tube array. First, experimental measurements of the unsteady and quasi-static fluid forces (in the lift direction) acting on a tube subjected to two-phase flow were carried out. Quasi-static fluid force coefficients were measured at the same Reynolds number, Re = 2.8×10⁴, for void fractions ranging from 0% to 80%. The derivative of the lift coefficient with respect to the quasi-static dimensionless displacement in the lift direction was deduced from these measurements. This derivative is one of the most important parameters of the quasi-steady model because, together with the time delay, it generates the negative fluid damping that causes the instability. The derivative was found to be positive in liquid flow and negative in two-phase flow. It appeared to vanish at 5% void fraction, challenging the ability of the quasi-steady model to predict fluidelastic instability in that case. However, stability tests conducted at 5% void fraction clearly showed fluidelastic instability. 
Stability tests were conducted in the second stage of the project to validate the theoretical model. The two-phase damping, the added mass, and the critical velocity for fluidelastic instability were measured in two-phase flow. A viscoelastic damper was designed to vary the damping of the flexible tube and thus measure the critical velocity over a certain range of the mass-damping parameter. A new formulation of the added mass as a function of the void fraction was proposed. This formulation agrees better with the experimental results because it takes into account the reduction of the void fraction in the vicinity of the tubes in a rotated triangular tube array. The experimental data were used to validate the theoretical results of the quasi-steady model. The validity of the quasi-steady model for two-phase flow was confirmed by the good agreement between its results and the experimental data. The time-delay parameter determined in the first stage of the project significantly improved the theoretical results, especially for high void fractions (90%). However, the model could not be verified for void fractions lower than or equal to 50% because of the limited capacity of the water pump. Further studies are consequently required to clarify this point. Nevertheless, the model can be used to simulate flow-induced vibrations in steam generator tube bundles, as their most critical parts operate at high void fractions (≥ 60%). Having verified the quasi-steady model for high void fractions in two-phase flow, the third and final stage of the project was devoted to the development of a computer code for simulating the flow-induced vibrations of a steam generator tube subjected to fluidelastic and turbulence forces. This code was based on the ABAQUS finite element code for solving the equation of motion of the fluid-structure system, together with the development of a subroutine in which the fluid forces are calculated and applied to the tube. (Abstract shortened by UMI.)
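The critical velocity and mass-damping parameter measured above are classically related through a Connors-type stability criterion, V_c/(f·d) = K·√(m·δ/(ρ·d²)), the usual starting point before refinements such as the quasi-steady model. A minimal sketch (the value of the empirical constant K is illustrative only):

```python
import math

def critical_velocity(f, d, m, delta, rho, K=3.0):
    """Connors-type critical pitch velocity V_c (m/s).

    f: tube natural frequency (Hz), d: tube diameter (m),
    m: tube mass per unit length (kg/m), delta: logarithmic
    decrement of damping, rho: fluid density (kg/m^3),
    K: empirical instability constant (illustrative value)."""
    mdp = m * delta / (rho * d * d)   # mass-damping parameter
    return K * f * d * math.sqrt(mdp)

# Hypothetical tube: raising the damping raises the stability threshold,
# which is why the viscoelastic damper lets the stability map be traced
v_low = critical_velocity(f=30.0, d=0.015, m=0.6, delta=0.05, rho=500.0)
v_high = critical_velocity(f=30.0, d=0.015, m=0.6, delta=0.10, rho=500.0)
```

Sweeping `delta` while measuring `V_c` is, in this simplified picture, how a stability map over the mass-damping parameter is obtained.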
NASA Astrophysics Data System (ADS)
Savard, Stephane
Les premieres etudes d'antennes a base de supraconducteurs a haute temperature critique emettant une impulsion electromagnetique dont le contenu en frequence se situe dans le domaine terahertz remontent a 1996. Une antenne supraconductrice est formee d'un micro-pont d'une couche mince supraconductrice sur lequel un courant continu est applique. Un faisceau laser dans le visible est focalise sur le micro-pont et place le supraconducteur dans un etat hors-equilibre ou des paires sont brisees. Grace a la relaxation des quasiparticules en surplus et eventuellement de la reformation des paires supraconductrices, nous pouvons etudier la nature de la supraconductivite. L'analyse de la cinetique temporelle du champ electromagnetique emis par une telle antenne terahertz supraconductrice s'est averee utile pour decrire qualitativement les caracteristiques de celle-ci en fonction des parametres d'operation tels que le courant applique, la temperature et la puissance d'excitation. La comprehension de l'etat hors-equilibre est la cle pour comprendre le fonctionnement des antennes terahertz supraconductrices a haute temperature critique. Dans le but de comprendre ultimement cet etat hors-equilibre, nous avions besoin d'une methode et d'un modele pour extraire de facon plus systematique les proprietes intrinseques du materiau qui compose l'antenne terahertz a partir des caracteristiques d'emission de celle-ci. Nous avons developpe une procedure pour calibrer le spectrometre dans le domaine temporel en utilisant des antennes terahertz de GaAs bombarde aux protons H+ comme emetteur et detecteur. Une fois le montage calibre, nous y avons insere une antenne emettrice dipolaire de YBa 2Cu3O7-delta . Un modele avec des fonctions exponentielles de montee et de descente du signal est utilise pour lisser le spectre du champ electromagnetique de l'antenne de YBa 2Cu3O7-delta, ce qui nous permet d'extraire les proprietes intrinseques de ce dernier. 
Pour confirmer la validite du modele choisi, nous avons mesure les proprietes intrinseques du meme echantillon de YBa2Cu3O7- delta avec la technique pompe-visible et sonde-terahertz donnant, elle aussi, acces aux temps caracteristiques regissant l'evolution hors-equilibre de ce materiau. Dans le meilleur scenario, ces temps caracteristiques devraient correspondre a ceux evalues grace a la modelisation des antennes. Un bon controle des parametres de croissance des couches minces supraconductrices et de fabrication du dispositif nous a permis de realiser des antennes d'emission terahertz possedant d'excellentes caracteristiques en terme de largeur de bande d'emission (typiquement 3 THz) exploitables pour des applications de spectroscopie resolue dans le domaine temporel. Le modele developpe et retenu pour le lissage du spectre terahertz decrit bien les caracteristiques de l'antenne supraconductrice pour tous les parametres d'operation. Toutefois, le lien avec la technique pompe-sonde lors de la comparaison des proprietes intrinseques n'est pas direct malgre que les deux techniques montrent que le temps de relaxation des porteurs augmente pres de la temperature critique. Les donnees en pompe-sonde indiquent que la mesure du temps de relaxation depend de la frequence de la sonde, ce qui complique la correspondance des proprietes intrinseques entre les deux techniques. De meme, le temps de relaxation extrait a partir du spectre de l'antenne terahertz augmente en s'approchant de la temperature critique (T c) de YBa2Cu 3O7-delta. Le comportement en temperature du temps de relaxation correspond a une loi de puissance qui est fonction de l'inverse du gap supraconducteur avec un exposant 5 soit 1/Delta 5(T). Le travail presente dans cette these permet de mieux decrire les caracteristiques des antennes supraconductrices a haute temperature critique et de les relier aux proprietes intrinseques du materiau qui les compose. 
Moreover, this thesis identifies the parameters to adjust, such as the applied current, the pump power, and the operating temperature, in order to optimize the emission and performance of these superconducting antennas, in particular to maximize their frequency range with a view to applications in terahertz spectroscopy. However, several of the results obtained highlight the difficulty of describing the non-equilibrium state and the need to develop a theory for the superconductor YBa2Cu3O7-delta.
Mesure sans contact d'un panneau d'aile d'avion et analyse numerique pour controle dimensionnel
NASA Astrophysics Data System (ADS)
Sok, Michel Christian
During the manufacturing of the wing skin, inspection steps are essential to ensure conformity and thus allow the wings to deliver the required aerodynamic performance. Currently, given the panel's low stiffness, which prevents the use of traditional inspection methods, this inspection is done manually with a template gauge and a jig. Iteratively, as long as form compliance is not reached, the panel goes through an additional dimensional refinement before being inspected again. Because the jig is accurate, it is very expensive; furthermore, the inspection of panels is time-consuming and monopolizes the jig, which cannot be used in the meantime. Taking this as a starting point, this project seeks to assess the practicability of a methodology based on automating that kind of operation, by integrating into the process non-contact measuring machines capable of numerically acquiring the geometrical shape of the panel. Moreover, the possibility of performing this operation without a jig is also considered, which would leave the jig free for other tasks. The suggested methodology uses numerical simulations to check form compliance. This would ultimately provide a tool to assist the operator by allowing a semi-automated inspection without a jig. The suggested methodology can be described in three steps; however, an additional step is necessary to validate the results it achieves. The first step consists of manually acquiring reference values, which serve as a baseline against which the values obtained during application of the methodology are compared. The second step deals with the numerical acquisition, with a laser scanner, of the object to be inspected, set down on a supporting plate. The third step is the numerical reconstruction of this object with computer-aided design software.
Finally, the last step consists of a numerical inspection of the object to predict form compliance. Considering the large dimensions of the wing skins and of the jigs used in industry, the suggested methodology works within the means available in the laboratory: the objects used have smaller dimensions than those used in industry. For that reason, a simplifying assumption is made that the shot-peening operation has a negligible effect on the evolution of the thickness of the wing skin. Furthermore, the non-contact measurement device is also tested to determine its accuracy under real conditions. These two preliminary studies show that the thickness variation of a plate after shot peening, even with parameters chosen for extreme effect, remains negligible for the practicability study carried out in this thesis. The study of the performance of the REVscan 3D also reveals that this variation would probably be drowned in the uncertainty introduced by the device during numerical acquisition. In this project, only steps two and three are dealt with in depth. The study essentially involves testing the measuring device and the software on their capacity to numerically acquire an object and then to bring it to another stress state with the help of a simulation. Indeed, validating the free-state step is problematic because it is precisely a state that cannot be obtained experimentally. By analogy, it is suggested to pass from one particular stress state to another because, in a simplified way, the free-state step is equivalent to a change of stress state. The study of the results brings to light a particular phenomenon linked to thin plates: a sudden change of form when the plate is in a particular stress state. The software is then no longer able to predict that kind of behavior.
Several tests were carried out to confirm the existence of this phenomenon; they show that the stress magnitude, the point of application of the stresses, and the position of the support points are the most influential parameters. However, even when taking care to avoid this phenomenon during the tests, the degree of accuracy reached by the software is far from sufficient. Indeed, the uncertainty of the results is still too high, and subsequent studies will have to focus on improving them. Currently, the tests performed in this thesis are not enough to validate steps 2 and 3 of the suggested methodology. Nevertheless, the phenomenon highlighted, which can suddenly modify the behavior of thin plates, and the information gathered in these tests establish a base for further research. (Abstract shortened by UMI.)
Teaching Programming to Novices: A Review of Approaches and Tools.
ERIC Educational Resources Information Center
Brusilovsky, P.; And Others
Three different approaches to teaching introductory programming are reviewed: the incremental approach, the sub-language approach, and the mini-language approach. The paper analyzes all three approaches, providing a brief history of each and describing an example of a programming environment supporting each approach. In the incremental approach,…
ERIC Educational Resources Information Center
Iivari, Juhani; Hirschheim, Rudy
1996-01-01
Analyzes and compares eight information systems (IS) development approaches: Information Modelling, Decision Support Systems, the Socio-Technical approach, the Infological approach, the Interactionist approach, the Speech Act-based approach, Soft Systems Methodology, and the Scandinavian Trade Unionist approach. Discusses the organizational roles…
A framework for organizing and selecting quantitative approaches for benefit-harm assessment.
Puhan, Milo A; Singh, Sonal; Weiss, Carlos O; Varadhan, Ravi; Boyd, Cynthia M
2012-11-19
Several quantitative approaches for benefit-harm assessment of health care interventions exist but it is unclear how the approaches differ. Our aim was to review existing quantitative approaches for benefit-harm assessment and to develop an organizing framework that clarifies differences and aids selection of quantitative approaches for a particular benefit-harm assessment. We performed a review of the literature to identify quantitative approaches for benefit-harm assessment. Our team, consisting of clinicians, epidemiologists, and statisticians, discussed the approaches and identified their key characteristics. We developed a framework that helps investigators select quantitative approaches for benefit-harm assessment that are appropriate for a particular decisionmaking context. Our framework for selecting quantitative approaches requires a concise definition of the treatment comparison and population of interest, identification of key benefit and harm outcomes, and determination of the need for a measure that puts all outcomes on a single scale (which we call a benefit and harm comparison metric). We identified 16 quantitative approaches for benefit-harm assessment. These approaches can be categorized into those that consider single or multiple key benefit and harm outcomes, and those that use a benefit-harm comparison metric or not. Most approaches use aggregate data and can be used in the context of single studies or systematic reviews. Although the majority of approaches provides a benefit and harm comparison metric, only four approaches provide measures of uncertainty around the benefit and harm comparison metric (such as a 95 percent confidence interval). None of the approaches considers the actual joint distribution of benefit and harm outcomes, but one approach considers competing risks when calculating profile-specific event rates. Nine approaches explicitly allow incorporating patient preferences. 
The choice of quantitative approaches depends on the specific question and goal of the benefit-harm assessment as well as on the nature and availability of data. In some situations, investigators may identify only one appropriate approach. In situations where the question and available data justify more than one approach, investigators may want to use multiple approaches and compare the consistency of results. When more evidence on relative advantages of approaches accumulates from such comparisons, it will be possible to make more specific recommendations on the choice of approaches.
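The selection logic the framework describes, matching a decision-making context's requirements against each approach's characteristics, can be sketched in code. The catalogue entries below are hypothetical illustrations, not the 16 approaches actually identified in the review; the attribute names are assumptions chosen to mirror the categorization axes the abstract mentions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approach:
    name: str
    multiple_outcomes: bool     # handles multiple key benefit/harm outcomes
    comparison_metric: bool     # yields a single benefit-harm comparison metric
    uncertainty: bool           # reports uncertainty (e.g., a 95% CI) around the metric
    patient_preferences: bool   # can incorporate patient preferences

# Hypothetical catalogue entries -- illustrative only.
CATALOGUE = [
    Approach("single-outcome effect ratio", False, True, False, False),
    Approach("multi-criteria decision analysis", True, True, False, True),
    Approach("probabilistic simulation", True, True, True, True),
]

def select(catalogue, **required):
    """Return approaches whose attributes satisfy every required feature."""
    return [a for a in catalogue
            if all(getattr(a, attr) == want for attr, want in required.items())]

# A context needing multiple outcomes on one scale, with uncertainty:
chosen = select(CATALOGUE, multiple_outcomes=True,
                comparison_metric=True, uncertainty=True)
print([a.name for a in chosen])  # -> ['probabilistic simulation']
```

In this toy catalogue, only one entry satisfies all three requirements, mirroring the review's observation that few approaches quantify uncertainty around the comparison metric.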
An Overview of Focal Approaches of Critical Discourse Analysis
ERIC Educational Resources Information Center
Jahedi, Maryam; Abdullah, Faiz Sathi; Mukundan, Jayakaran
2014-01-01
This article aims to present detailed accounts of central approaches to Critical Discourse Analysis. It focuses on the work of three prominent scholars: Fairclough's critical approach, Wodak's discourse-historical approach and Van Dijk's socio-cognitive approach. This study concludes that a combination of these three approaches can be…
More Value to Defining Quality
ERIC Educational Resources Information Center
Van Kemenade, Everard; Pupius, Mike; Hardjono, Teun W.
2008-01-01
There are lots of definitions of quality, and also of quality in education. Garvin (1984) discerns five approaches: the transcendental approach, the product-oriented approach, the customer-oriented approach, the manufacturing-oriented approach and the value-for-money approach. Harvey and Green (1993) give five interrelated concepts of quality as:…
Introduction to Approaches in Music Therapy. Second Edition
ERIC Educational Resources Information Center
Darrow, Alice Ann, Ed.
2008-01-01
The second edition of "Introduction to Approaches in Music Therapy" includes a new introductory chapter that addresses historical perspectives on the approaches, a rationale for the categorization of approaches, and discussion on professional issues related to the use of these approaches. Each of the chapters addressing approaches includes updated…
Dynamic Approaches to Language Processing
ERIC Educational Resources Information Center
Srinivasan, Narayanan
2007-01-01
Symbolic rule-based approaches have been a preferred way to study language and cognition. Dissatisfaction with rule-based approaches in the 1980s lead to alternative approaches to study language, the most notable being the dynamic approaches to language processing. Dynamic approaches provide a significant alternative by not being rule-based and…
Toward a new approach to the study of personality in culture.
Cheung, Fanny M; van de Vijver, Fons J R; Leong, Frederick T L
2011-10-01
We review recent developments in the study of culture and personality measurement. Three approaches are described: an etic approach that focuses on establishing measurement equivalence in imported measures of personality, an emic (indigenous) approach that studies personality in specific cultures, and a combined emic-etic approach to personality. We propose the latter approach as a way of combining the methodological rigor of the etic approach and the cultural sensitivity of the emic approach. The combined approach is illustrated by two examples: the first with origins in Chinese culture and the second in South Africa. The article ends with a discussion of the theoretical and practical implications of the combined emic-etic approach for the study of culture and personality and for psychology as a science.
The anterior approach for the fixation of displaced talar neck fractures--a cadaveric study.
Mullen, Michael; Pillai, Anand; Fogg, Quentin A; Kumar, C Senthil
2013-01-01
Talar neck fractures are rare and are associated with high complication rates. Adequate surgical exposure is essential in the operative management of these challenging injuries. The anterior approach is an alternative to the more commonly described and utilized anterolateral and anteromedial approaches. The main objective was to compare the surface area of talus visible and quality of exposure via the anterior approach, with the anteromedial and anterolateral approaches. An anterior approach was performed on five fresh frozen cadaveric specimens. The surface area of talus visible was measured using an Immersion Digital Microscribe and analyzed with the Rhinoceros 3D graphics package. Standard anterolateral and anteromedial approaches were performed in the same specimens and areas visible measured using the same method. The talar surface area visible using the anterior approach is significantly greater than that visible using the anterolateral approach or anteromedial, without and with medial malleolar osteotomy, as well as combination approaches. The anterior approach offers excellent visualization in the fixation of displaced talar neck fractures. Greater talar surface area is visible using this approach compared to traditional approaches. Copyright © 2013 Elsevier Ltd. All rights reserved.
Erbaugh, James; Agrawal, Arun
2017-11-01
Objectives, assumptions, and methods for landscape restoration and the landscape approach. World leaders have pledged 350 Mha for restoration using a landscape approach. The landscape approach is thus poised to become one of the most influential methods for multi-functional land management. Reed et al (2016) meaningfully advance scholarship on the landscape approach, but they incorrectly define the approach as it exists within their text. This Letter to the Editor clarifies the landscape approach as an ethic for land management, demonstrates how it relates to landscape restoration, and motivates continued theoretical development and empirical assessment of the landscape approach. © 2017 John Wiley & Sons Ltd.
Nooh, Ahmed Mohamed; Abdeldayem, Hussein Mohammed; Ben-Affan, Othman
2017-05-01
The objective of this study was to assess effectiveness and safety of the reverse breech extraction approach in Caesarean section for obstructed labour, and compare it with the standard approach of pushing the fetal head up through the vagina. This randomised controlled trial included 192 women. In 96, the baby was delivered by the 'reverse breech extraction approach', and in the remaining 96, by the 'standard approach'. Extension of uterine incision occurred in 18 participants (18.8%) in the reverse breech extraction approach group, and 46 (47.9%) in the standard approach group (p = .0003). Two women (2.1%) in the reverse breech extraction approach group needed blood transfusion and 11 (11.5%) in the standard approach group (p = .012). Pyrexia developed in 3 participants (3.1%) in the reverse breech extraction approach group, and 19 (19.8%) in the standard approach group (p = .0006). Wound infection occurred in 2 women (2.1%) in the reverse breech extraction approach group, and 12 (12.5%) in the standard approach group (p = .007). Apgar score <7 at 5 minutes was noted in 8 babies (8.3%) in the reverse breech extraction approach group, and 21 (21.9%) in the standard approach group (p = .015). In conclusion, reverse breech extraction in Caesarean section for obstructed labour is an effective and safe alternative to the standard approach of pushing the fetal head up through the vagina.
Market-based approaches to tree valuation
Geoffrey H. Donovan; David T. Butry
2008-01-01
A recent four-part series in Arborist News outlined different appraisal processes used to value urban trees. The final article in the series described the three generally accepted approaches to tree valuation: the sales comparison approach, the cost approach, and the income capitalization approach. The author, D. Logan Nelson, noted that the sales comparison approach...
Kruse, Christine; Rosenlund, Signe; Broeng, Leif; Overgaard, Søren
2018-01-01
The two most common surgical approaches to total hip arthroplasty are the posterior approach and lateral approach. The surgical approach may influence cup positioning and restoration of the offset, which may affect the biomechanical properties of the hip joint. The primary aim was to compare cup position between posterior approach and lateral approach. Secondary aims were to compare femoral offset, abductor moment arm and leg length discrepancy between the two approaches. Eighty patients with primary hip osteoarthritis were included in a randomized controlled trial and assigned to total hip arthroplasty using posterior approach or lateral approach. Postoperative radiographs from 38 patients in each group were included in this study for measurement of cup anteversion and inclination. Femoral offset, cup offset, total offset, abductor moment arm and leg length discrepancy were measured on preoperative and postoperative radiographs in 28 patients in each group. We found that mean anteversion was 5° larger in the posterior approach group (95% CI, -8.1 to -1.4; p = 0.006), while mean inclination was 5° less steep (95% CI, 2.7 to 7.2; p<0.001) compared with the lateral approach group. The posterior approach group had a larger mean femoral offset of 4.3mm (95% CI, -7.4 to -1.3, p = 0.006), mean total offset of 6.3mm (95% CI, -9.6 to -3; p<0.001) and mean abductor moment arm of 4.8mm (95% CI, -7.6 to -1.9; p = 0.001) compared with the lateral approach group. We found a larger cup anteversion but less steep cup inclination in the posterior approach group compared with the lateral approach group. Femoral offset and abductor moment arm were restored after total hip arthroplasty using lateral approach but significantly increased when using posterior approach.
Choi, Y; Jung, C; Chae, Y; Kang, M; Kim, J; Joung, K; Lim, J; Cho, S; Sung, S; Lee, E; Kim, S
2014-01-01
Mapping of drug indications to ICD-10 was undertaken in Korea by a public and a private institution for their own purposes. A different mapping approach was used by each institution, which presented a good opportunity to compare the validity of the two approaches. This study was undertaken to compare the validity of a direct mapping approach and an indirect terminology based mapping approach of drug indications against the gold standard drawn from the results of the two mapping processes. Three hundred and seventy-five cardiovascular reference drugs were selected from all listed cardiovascular drugs for the study. In the direct approach, two experienced nurse coders mapped the free text indications directly to ICD-10. In the indirect terminology based approach, the indications were extracted and coded in the Korean Standard Terminology of Medicine. These terminology coded indications were then manually mapped to ICD-10. The results of the two approaches were compared to the gold standard. A kappa statistic was calculated to see the compatibility of both mapping approaches. Recall, precision and F1 score of each mapping approach were calculated and analyzed using a paired t-test. The mean number of indications for the study drugs was 5.42. The mean number of ICD-10 codes that matched in direct approach was 46.32 and that of indirect terminology based approach was 56.94. The agreement of the mapping results between the two approaches were poor (kappa = 0.19). The indirect terminology based approach showed higher recall (86.78%) than direct approach (p < 0.001). However, there was no difference in precision and F1 score between the two approaches. Considering no differences in the F1 scores, both approaches may be used in practice for mapping drug indications to ICD-10. However, in terms of consistency, time and manpower, better results are expected from the indirect terminology based approach.
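The evaluation metrics named in this abstract (recall, precision, F1 against a gold standard, and a kappa statistic for inter-approach agreement) can be computed over sets of mapped ICD-10 codes. The sketch below is a minimal illustration under assumed data, not the study's actual evaluation pipeline; the ICD-10 codes shown are hypothetical examples.

```python
def precision_recall_f1(predicted, gold):
    """Evaluate a set of mapped ICD-10 codes against the gold-standard set."""
    tp = len(predicted & gold)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def cohens_kappa(a, b, universe):
    """Chance-corrected agreement between two binary code assignments."""
    both = len(a & b)
    neither = len(universe - a - b)
    po = (both + neither) / len(universe)            # observed agreement
    pa, pb = len(a) / len(universe), len(b) / len(universe)
    pe = pa * pb + (1 - pa) * (1 - pb)               # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy example with hypothetical codes for one drug's indications:
gold = {"I10", "I20", "I50"}
direct = {"I10", "I25"}                  # direct mapping result
indirect = {"I10", "I20", "I50", "E78"}  # terminology-based mapping result
print(precision_recall_f1(indirect, gold))  # higher recall than `direct`
```

With these toy sets the indirect mapping reaches full recall at the cost of one spurious code, the same recall-versus-precision trade-off the study reports between the two approaches.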
Barnes, Leslie Fink; Lombardi, Joseph; Gardner, Thomas R; Strauch, Robert J; Rosenwasser, Melvin P
2018-01-01
The aim of this study was to compare the complete visible surface area of the radial head, neck, and coronoid in the Kaplan and Kocher approaches to the lateral elbow. The hypothesis was that the Kaplan approach would afford greater visibility due to the differential anatomy of the intermuscular planes. Ten cadavers were dissected with the Kaplan and Kocher approaches, and the visible surface area was measured in situ using a 3-dimensional digitizer. Six measurements were taken for each approach by 2 surgeons, and the mean of these measurements was analyzed. The mean surface area visible with the lateral collateral ligament (LCL) preserved in the Kaplan approach was 616.6 mm² in comparison with the surface area of 136.2 mm² visible in the Kocher approach when the LCL was preserved. Using a 2-way analysis of variance, the difference between these 2 approaches was statistically significant. When the LCL complex was incised in the Kocher approach, the average visible surface area of the Kocher approach was 456.1 mm² and was statistically less than that of the Kaplan approach. The average surface area of the coronoid visible using a proximally extended Kaplan approach was 197.8 mm². The Kaplan approach affords significantly greater visible surface area of the proximal radius than the Kocher approach.
ERIC Educational Resources Information Center
Bozeman, Barry; Landsbergen, David
1989-01-01
Two competing approaches to policy analysis are distinguished: a credibility approach, and a truth approach. According to the credibility approach, the policy analyst's role is to search for plausible argument rather than truth. Each approach has pragmatic tradeoffs in fulfilling the goal of providing usable knowledge to decision makers. (TJH)
33 CFR 167.1302 - In the approaches to the Strait of Juan de Fuca: Southwestern approach.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of Juan de Fuca: Southwestern approach. 167.1302 Section 167.1302 Navigation and Navigable Waters....1302 In the approaches to the Strait of Juan de Fuca: Southwestern approach. In the southwestern approach to the Strait of Juan de Fuca, the following are established: (a) A separation zone bounded by a...
Domain Approach: An Alternative Approach in Moral Education
ERIC Educational Resources Information Center
Vengadasalam, Chander; Mamat, Wan Hasmah Wan; Mail, Fauziah; Sudramanian, Munimah
2014-01-01
This paper discusses the use of the domain approach in moral education in an upper secondary school in Malaysia. Moral Education needs a creative and an innovative approach. Therefore, a few forms of approaches are used in the teaching-learning of Moral Education. This research describes the use of domain approach which comprises the moral domain…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-13
...-AB55 Traffic Separation Schemes: In the Approaches to Portland, ME; Boston, MA; Narragansett Bay, RI... schemes in the approaches to Portland, ME; in the approaches to Boston, MA; in the approaches to... Coast Guard updates the current regulations for the traffic separation scheme in the approaches to...
Wichlas, Florian; Tsitsilonis, Serafim; Kopf, Sebastian; Krapohl, Björn Dirk; Manegold, Sebastian
2017-01-01
Introduction: The aim of the present study is to develop a heuristic that could replace the surgeon's analysis for the decision on the operative approach of distal radius fractures based on simple fracture characteristics. Patients and methods: Five hundred distal radius fractures operated between 2011 and 2014 were analyzed for the surgeon's decision on the approach used. The 500 distal radius fractures were treated with open reduction and internal fixation through palmar, dorsal, and dorsopalmar approaches with 2.4 mm locking plates or underwent percutaneous fixation. The parameters that should replace the surgeon's analysis were the fractured palmar cortex, and the frontal and the sagittal split of the articular surface of the distal radius. Results: The palmar approach was used for 422 (84.4%) fractures, the dorsal approach for 39 (7.8%), and the combined dorsopalmar approach for 30 (6.0%). Nine (1.8%) fractures were treated percutaneously. The correlation between the fractured palmar cortex and the used palmar approach was moderate (r=0.464; p<0.0001). The correlation between the frontal split and the dorsal approach, including the dorsopalmar approach, was strong (r=0.715; p<0.0001). The sagittal split had only a weak correlation for the dorsal and dorsopalmar approach (r=0.300; p<0.0001). Discussion: The study shows that the surgical decision on the preferred approach is dictated through two simple factors, even in the case of complex fractures. Conclusion: When the palmar cortex is displaced in distal radius fractures, a palmar approach should be used. When there is a displaced frontal split of the articular surface, a dorsal approach should be used. When both are present, a dorsopalmar approach should be used. These two simple parameters could replace the surgeon's analysis for the surgical approach.
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
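The resampling idea underlying a pseudoprofile-style bootstrap of the tissue-to-plasma AUC ratio can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the authors' actual 2-phase algorithm; the data layout and toy values are assumptions.

```python
import random

def trapezoid_auc(times, concs):
    """Linear trapezoidal AUC over the sampled time points."""
    return sum((t1 - t0) * (c0 + c1) / 2
               for (t0, c0), (t1, c1) in zip(zip(times, concs),
                                             zip(times[1:], concs[1:])))

def bootstrap_ratio(data, n_boot=1000, rng=None):
    """Bootstrap the tissue-to-plasma AUC ratio from sparse paired data.

    `data` maps each sampling time to a list of (tissue, plasma) pairs,
    one pair per subject sampled at that time (destructive sampling:
    each subject contributes a single time point).
    """
    rng = rng or random.Random(0)
    times = sorted(data)
    ratios = []
    for _ in range(n_boot):
        # Draw one subject per time point to assemble a pseudoprofile...
        profile = [rng.choice(data[t]) for t in times]
        tissue = [pair[0] for pair in profile]
        plasma = [pair[1] for pair in profile]
        # ...and record the AUC ratio for this replicate.
        ratios.append(trapezoid_auc(times, tissue) / trapezoid_auc(times, plasma))
    ratios.sort()
    k = max(1, int(0.025 * n_boot))
    return ratios[len(ratios) // 2], (ratios[k], ratios[-k - 1])  # median, ~95% interval

# Toy sparse data: three subjects per time point, tissue exactly 2x plasma.
data = {1: [(2.0, 1.0), (2.2, 1.1), (1.8, 0.9)],
        2: [(1.6, 0.8), (1.4, 0.7), (1.8, 0.9)],
        4: [(0.8, 0.4), (1.0, 0.5), (0.6, 0.3)]}
median, ci = bootstrap_ratio(data)
print(round(median, 2))  # -> 2.0
```

Because resampling respects the pairing within each subject, the replicate ratios carry the between-subject variability that the naïve data averaging approach discards, which is why the sampling-based estimates come with an uncertainty interval.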
Radke, Sina; Seidel, Eva-Maria; Eickhoff, Simon B; Gur, Ruben C; Schneider, Frank; Habel, Ute; Derntl, Birgit
2016-02-15
Social rewards are processed by the same dopaminergic-mediated brain networks as non-social rewards, suggesting a common representation of subjective value. Individual differences in personality and motivation influence the reinforcing value of social incentives, but it remains open whether the pursuit of social incentives is analogously supported by the neural reward system when positive social stimuli are connected to approach behavior. To test for a modulation of neural activation by approach motivation, individuals with high and low approach motivation (BAS) completed implicit and explicit social approach-avoidance paradigms during fMRI. High approach motivation was associated with faster implicit approach reactions as well as a trend for higher approach ratings, indicating increased approach tendencies. Implicit and explicit positive social approach was accompanied by stronger recruitment of the nucleus accumbens, middle cingulate cortex, and (pre-)cuneus for individuals with high compared to low approach motivation. These results support and extend prior research on social reward processing, self-other distinctions and affective judgments by linking approach motivation to the engagement of reward-related circuits during motivational reactions to social incentives. This interplay between motivational preferences and motivational contexts might underlie the rewarding experience during social interactions. Copyright © 2015 Elsevier Inc. All rights reserved.
Rawson, Richard A.; Rataemane, Solomon; Rataemane, Lusanda; Ntlhe, Nomvuyo; Fox, Ruthlyn Sodano; McCuller, Jason; Brecht, Mary-Lynn
2012-01-01
This study evaluated the effectiveness of 3 approaches to transferring cognitive behavioral therapy (CBT) to addiction clinicians in the Republic of South Africa (RSA). Clinicians (N = 143) were assigned to 3 training conditions: (1) An in vivo (IV) approach in which clinicians received in-person training and coaching; (2) A distance learning (DL) approach providing training via video conference and coaching through teleconferencing; and (3) A control condition (C) providing a manual and 2-hour orientation. Frequency of use of CBT skills increased significantly with the IV and DL approaches compared to the C approach, and the IV approach facilitated greater use of CBT skills than the DL approach. During the active phase of the study, skill quality declined significantly for clinicians trained in the C condition, whereas those in the DL approach maintained skill quality and those in the IV approach improved skill quality. After coaching was discontinued, clinicians in the IV and DL approaches declined in skill quality. However, those in the IV approach maintained a higher level of skill quality compared to the other approaches. Cost of the IV condition was double that of the DL condition and 10 times greater than the C condition. PMID:23577903
Explicit and Implicit Approach Motivation Interact to Predict Interpersonal Arrogance
Robinson, Michael D.; Ode, Scott; Palder, Spencer L.; Fetterman, Adam K.
2012-01-01
Self-reports of approach motivation are unlikely to be sufficient in understanding the extent to which the individual reacts to appetitive cues in an approach-related manner. A novel implicit probe of approach tendencies was thus developed, one that assessed the extent to which positive affective (versus neutral) stimuli primed larger size estimates, as larger perceptual sizes co-occur with locomotion toward objects in the environment. In two studies (total N = 150), self-reports of approach motivation interacted with this implicit probe of approach motivation to predict individual differences in arrogance, a broad interpersonal dimension previously linked to narcissism, antisocial personality tendencies, and aggression. The results of the two studies were highly parallel in that self-reported levels of approach motivation predicted interpersonal arrogance in the particular context of high, but not low, levels of implicit approach motivation. Implications for understanding approach motivation, implicit probes of it, and problematic approach-related outcomes are discussed. PMID:22399360
Comparison of two head-up displays in simulated standard and noise abatement night visual approaches
NASA Technical Reports Server (NTRS)
Cronn, F.; Palmer, E. A., III
1975-01-01
Situation and command head-up displays were evaluated for both standard and two segment noise abatement night visual approaches in a fixed base simulation of a DC-8 transport aircraft. The situation display provided glide slope and pitch attitude information. The command display provided glide slope information and flight path commands to capture a 3 deg glide slope. Landing approaches were flown in both zero wind and wind shear conditions. For both standard and noise abatement approaches, the situation display provided greater glidepath accuracy in the initial phase of the landing approaches, whereas the command display was more effective in the final approach phase. Glidepath accuracy was greater for the standard approaches than for the noise abatement approaches in all phases of the landing approach. Most of the pilots preferred the command display and the standard approach. Substantial agreement was found between each pilot's judgment of his performance and his actual performance.
Optimization of coupled systems: A critical overview of approaches
NASA Technical Reports Server (NTRS)
Balling, R. J.; Sobieszczanski-Sobieski, J.
1994-01-01
A unified overview is given of problem formulation approaches for the optimization of multidisciplinary coupled systems. The overview includes six fundamental approaches upon which a large number of variations may be made. Consistent approach names and a compact approach notation are given. The approaches are formulated to apply to general nonhierarchic systems. The approaches are compared both from a computational viewpoint and a managerial viewpoint. Opportunities for parallelism of both computation and manpower resources are discussed. Recommendations regarding the need for future research are advanced.
An update on surgical approaches in hip arthroplasty: lateral versus posterior approach.
Mukka, Sebastian S; Sayed-Noor, Arkan S
2014-10-02
In this update we searched the literature on the outcome of the lateral versus posterior approach in hip arthroplasty for osteoarthritis (OA) and femoral neck fracture (FNF) patients. The available evidence shows that the use of the posterior approach in OA patients is associated with lower mortality and better functional outcome, while the use of the lateral approach in FNF patients gives a lower dislocation rate. We therefore recommend the posterior approach in OA patients and the lateral approach in FNF patients.
Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.
Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja
2015-06-01
Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
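The combine-then-expand arithmetic behind both approaches (root-sum-of-squares combination of independent standard uncertainties, then multiplication by the coverage factor k = 2) can be sketched in a few lines; the component values below are invented for illustration and are not taken from the study:

```python
import math

def combined_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(u_c, k=2):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage."""
    return k * u_c

# Top-down style: imprecision from long-term IQC results plus the
# uncertainty of the bias correction against a certified reference material.
u_iqc = 0.08   # SD of 6 months of IQC results (hypothetical, mmol/L)
u_bias = 0.03  # uncertainty of the bias estimate (hypothetical, mmol/L)

u_c = combined_uncertainty([u_iqc, u_bias])
U = expanded_uncertainty(u_c, k=2)
print(round(U, 3))  # expanded uncertainty in mmol/L
```

A bottom-up budget uses the same combination rule, only with many more component terms (calibrator, repeatability, volumetric steps) identified from the measurement function.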
Theory and Methodology in Researching Emotions in Education
ERIC Educational Resources Information Center
Zembylas, Michalinos
2007-01-01
Differing theoretical approaches to the study of emotions are presented: emotions as private (psychodynamic approaches); emotions as sociocultural phenomena (social constructionist approaches); and a third perspective (interactionist approaches) transcending these two. These approaches have important methodological implications in studying…
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. 167.103 Section 167.103 Navigation and Navigable... the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. (a) A separation...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. 167.102 Section 167.102 Navigation and....102 In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. (a) A...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. 167.103 Section 167.103 Navigation and Navigable... the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. (a) A separation...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. 167.103 Section 167.103 Navigation and Navigable... the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. (a) A separation...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. 167.102 Section 167.102 Navigation and....102 In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. (a) A...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. 167.102 Section 167.102 Navigation and....102 In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. (a) A...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. 167.103 Section 167.103 Navigation and Navigable... the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Buzzards Bay approach. (a) A separation...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. 167.102 Section 167.102 Navigation and....102 In the approaches to Narragansett Bay, RI, and Buzzards Bay, MA: Narragansett Bay approach. (a) A...
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
33 CFR 167.1301 - In the approaches to the Strait of Juan de Fuca: Western approach.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of Juan de Fuca: Western approach. 167.1301 Section 167.1301 Navigation and Navigable Waters COAST....1301 In the approaches to the Strait of Juan de Fuca: Western approach. In the western approach to the Strait of Juan de Fuca, the following are established: (a) A separation zone bounded by a line connecting...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait. 167.1322 Section 167.1322 Navigation and Navigable... Coast § 167.1322 In Puget Sound and its approaches: Approaches to Puget Sound other than Rosario Strait...
Life Extending Control. [mechanical fatigue in reusable rocket engines
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to life extending control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the implicit approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Life extending control: A concept paper
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to Life Extending Control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the Implicit Approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Simple approach in understanding interzeolite transformations using ring building units
NASA Astrophysics Data System (ADS)
Suhendar, D.; Buchari; Mukti, R. R.; Ismunandar
2018-04-01
Two general approaches are currently used in understanding interzeolite transformations: a thermodynamic one, represented by framework density (FD), and a kinetic one, represented by structural building units. Two types of structural building units are composite building units (CBUs) and secondary building units (SBUs). This study aims to examine these approaches using interzeolite transformation data available in the literature and to propose a possible alternative. In a number of cases of zeolite transformation, neither the FD approach nor the CBU approach is suitable. The FD approach fails in cases involving parent zeolites with moderate or high FDs, while the CBU approach fails when the parent zeolite lacks the CBUs found in its transformation products. The SBU approach is the most likely to fit, because SBUs are units with the basic form of ring structures, closer to the state and shape of the oligomeric fragments present in zeolite synthesis or dissolution. Thus, a new approach can be considered for understanding interzeolite transformations, namely the ring building unit (RBU) approach. The advantage of the RBU approach is that RBUs can easily be derived from all framework types, whereas several framework types cannot be expressed in SBU form.
Al-Moraissi, Essam Ahmed; Louvrier, Aurélien; Colletti, Giacomo; Wolford, Larry M; Biglioli, Federico; Ragaey, Marwa; Meyer, Christophe; Ellis, Edward
2018-03-01
The purpose of this study was to determine the rate of facial nerve injury (FNI) when performing open reduction and internal fixation (ORIF) of mandibular condylar fractures by different surgical approaches. A systematic review and meta-analysis were performed that included several databases with specific keywords, a reference search, and a manual search for suitable articles. The inclusion criteria were all clinical trials, with the aim of assessing the rate of facial nerve injuries when ORIF of mandibular condylar fractures was performed using different surgical approaches. The main outcome variable was transient facial nerve injury (TFNI) and permanent facial nerve injury (PFNI) according to the fracture levels, namely: condylar head fractures (CHFs), condylar neck fractures (CNFs), and condylar base fractures (CBFs). For studies where there was no delineation between CNFs and CBFs, the fractures were defined as CNFs/CBFs. The dependent variables were the surgical approaches. A total of 3873 patients enrolled in 96 studies were included in this analysis. TFNI rates reported in the literature were as follows: A) For the transoral approach: a) for strictly intraoral 0.72% (1.3% in CNFs and 0% for CBFs); b) for the transbuccal trocar instrumentation 2.7% (4.2% in CNFs and 0% for CBFs); and c) for endoscopically assisted ORIF 4.2% (5% in CNFs, and 4% in CBFs). B) For low submandibular approach 15.3% (26.1% for CNFs, 11.8% for CBFs, and 13.7% for CNFs/CBFs). C) For the high submandibular/angular subparotid approach with masseter transection 0% in CBFs. D) For the high submandibular/angular transmassetric anteroparotid approach 0% (CNFs and CBFs). E) For the transparotid retromandibular approach a) with facial nerve preparation 14.4% (23.9% in CNFs, 11.8% in CBFs and 13.7% for CNFs/CBFs); b) without facial nerve preparation 19% (24.3% for CNFs and 10.5% for CBFs). F) For retromandibular transmassetric anteroparotid approach 3.4% in CNFs/CBFs.
G) For retromandibular transmassetric anteroparotid approach with preauricular extension 2.3% for CNFs/CBFs. H) For preauricular approach a) for the deep subfascial dissection plane 0% in CHFs; b) for the subfascial approach using a traditional preauricular incision 10% (8.5% in CHFs and 11.5% in CNFs). I) For retroauricular approach 3% for CHFs. PFNI rates reported in the literature were as follows: A) for low submandibular approach 2.2%; B) for retromandibular transparotid approach 1.4%; C) for preauricular approach 0.33%; D) for high submandibular approach 0.3%; E) for deep retroparotid approach 1.5%. According to published data for CHFs, a retroauricular approach or deep subfascial preauricular approach was the safest to protect the facial nerve. For CNFs, a transmassetric anteroparotid approach with retromandibular and preauricular extension was the safest approach to decrease the risk of FNI. For CBFs, high submandibular incisions with either a transmassetric anteroparotid approach with retromandibular extension or a transmassetric subparotid approach, followed by intraoral (with or without endoscopic/transbuccal trocar), were the safest approaches with respect to decreased risk of FNI. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Effects of a blended learning approach on student outcomes in a graduate-level public health course.
Kiviniemi, Marc T
2014-03-11
Blended learning approaches, in which in-person and online course components are combined in a single course, are rapidly increasing in health sciences education. Evidence for the relative effectiveness of blended learning versus more traditional course approaches is mixed. The impact of a blended learning approach on student learning in a graduate-level public health course was examined using a quasi-experimental, non-equivalent control group design. Exam scores and course point total data from a baseline, "traditional" approach semester (n = 28) was compared to that from a semester utilizing a blended learning approach (n = 38). In addition, student evaluations of the blended learning approach were evaluated. There was a statistically significant increase in student performance under the blended learning approach (final course point total d = 0.57; a medium effect size), even after accounting for previous academic performance. Moreover, student evaluations of the blended approach were very positive and the majority of students (83%) preferred the blended learning approach. Blended learning approaches may be an effective means of optimizing student learning and improving student performance in health sciences courses.
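The reported effect size is a Cohen's d with pooled standard deviation; a minimal sketch of that computation, using invented course scores rather than the study's raw data:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: standardized mean difference with pooled SD."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical course point totals for blended (a) vs. traditional (b) semesters
d = cohens_d(mean_a=88.0, mean_b=83.0, sd_a=8.5, sd_b=9.0, n_a=38, n_b=28)
print(round(d, 2))
```

By the usual convention, d near 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why 0.57 is described as a medium effect size.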
Weighted least-squares solver for determining pressure from particle image velocimetry data
NASA Astrophysics Data System (ADS)
de Kat, Roeland
2016-11-01
Currently, most approaches to determine pressure from particle image velocimetry data are Poisson approaches or multi-pass marching approaches. However, these approaches deal with boundary conditions in their specific ways, which cannot easily be changed: Poisson approaches enforce boundary conditions strongly, whereas multi-pass marching approaches enforce them weakly. Under certain conditions (depending on the certainty of the data or the availability of reference data along the boundary), both types of boundary condition enforcement have to be used together to obtain the best result. In addition, neither of the approaches takes the certainty of the particle image velocimetry data within the domain into account. Therefore, to address these shortcomings and improve upon current approaches, a new approach is proposed using weighted least-squares. The performance of this new approach is tested on synthetic and experimental particle image velocimetry data. Preliminary results show that a significant improvement can be made in determining pressure fields using the new approach. RdK is supported by a Leverhulme Trust Early Career Fellowship.
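A weighted least-squares solve of the kind proposed reduces to scaling each equation by the square root of its weight before an ordinary least-squares solve; the toy system below is illustrative only (it is not the author's pressure formulation):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||_2 with W = diag(w).

    Rows with higher weight (more certain velocimetry data, or strongly
    enforced boundary conditions) pull the solution harder; low-weight
    rows act like weakly enforced constraints.
    """
    sw = np.sqrt(w)[:, None]  # scale each equation by sqrt(weight)
    x, *_ = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)
    return x

# Toy system: two conflicting measurements of one unknown, weighted 4:1.
# The normal equations give x = (4*10 + 1*14) / (4 + 1) = 10.8.
A = np.array([[1.0], [1.0]])
b = np.array([10.0, 14.0])
x = weighted_least_squares(A, b, w=np.array([4.0, 1.0]))
print(x)
```

Setting a boundary row's weight very high recovers strong (Poisson-like) enforcement, while a small weight recovers weak (marching-like) enforcement, which is the flexibility the abstract argues for.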
Garcia, Luís Filipe; de Oliveira, Luís Caldas; de Matos, David Martins
2016-01-01
This study compared the performance of two statistical location-aware pictogram prediction mechanisms, with an all-purpose (All) pictogram prediction mechanism, having no location knowledge. The All approach had a unique language model under all locations. One of the location-aware alternatives, the location-specific (Spec) approach, made use of specific language models for pictogram prediction in each location of interest. The other location-aware approach resulted from combining the Spec and the All approaches, and was designated the mixed approach (Mix). In this approach, the language models acquired knowledge from all locations, but a higher relevance was assigned to the vocabulary from the associated location. Results from simulations showed that the Mix and Spec approaches could only outperform the baseline in a statistically significant way if pictogram users reuse more than 50% and 75% of their sentences, respectively. Under low sentence reuse conditions there were no statistically significant differences between the location-aware approaches and the All approach. Under these conditions, the Mix approach performed better than the Spec approach in a statistically significant way.
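One common way to realize a Mix-style model is linear interpolation between a location-specific and an all-purpose language model; a minimal unigram sketch, with invented vocabulary and a hypothetical interpolation weight `lam` (the study's actual weighting scheme is not specified here):

```python
from collections import Counter

def unigram_model(corpus):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def mix_probability(word, p_local, p_all, lam=0.7):
    """Interpolate a location-specific model with the all-purpose one.

    The higher lam is, the more the current location's vocabulary
    dominates the prediction; lam = 0 recovers the All approach.
    """
    return lam * p_local.get(word, 0.0) + (1 - lam) * p_all.get(word, 0.0)

# Invented pictogram token streams for two locations
kitchen = ["eat", "drink", "eat", "water"]
school = ["book", "read", "book", "eat"]

p_kitchen = unigram_model(kitchen)
p_all = unigram_model(kitchen + school)

# "eat" is frequent in the kitchen, so the mixed model ranks it higher there
print(mix_probability("eat", p_kitchen, p_all))
```

A Spec-style model would use `p_kitchen` alone, which explains its sensitivity to sentence reuse: with little reuse, the location-specific counts are too sparse to beat the all-purpose model.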
Mortensen, Martin B; Afzal, Shoaib; Nordestgaard, Børge G; Falk, Erling
2015-12-22
Guidelines recommend initiating primary prevention for atherosclerotic cardiovascular disease (ASCVD) with statins based on absolute ASCVD risk assessment. Recently, alternative trial-based and hybrid approaches were suggested for statin treatment eligibility. This study compared these approaches in a direct head-to-head fashion in a contemporary population. The study used the CGPS (Copenhagen General Population Study) with 37,892 subjects aged 40 to 75 years recruited in 2003 to 2008, all free of ASCVD, diabetes, and statin use at baseline. Among the population studied, 42% were eligible for statin therapy according to the 2013 American College of Cardiology/American Heart Association (ACC/AHA) risk assessment and cholesterol treatment guidelines approach, versus 56% with the trial-based approach and 21% with the hybrid approach. Among these statin-eligible subjects, the ASCVD event rate per 1,000 person-years was 9.8, 6.8, and 11.2, respectively. The ACC/AHA-recommended absolute risk score was well calibrated around the 7.5% 10-year ASCVD risk treatment threshold and discriminated better than the trial-based or hybrid approaches. Compared with the ACC/AHA risk-based approach, the net reclassification index for eligibility for statin therapy among 40- to 75-year-old subjects from the CGPS was -0.21 for the trial-based approach and -0.13 for the hybrid approach. The clinical performance of the ACC/AHA risk-based approach for primary prevention of ASCVD with statins was superior to the trial-based and hybrid approaches. Our results indicate that the ACC/AHA guidelines will prevent more ASCVD events than the trial-based and hybrid approaches, while treating fewer people compared with the trial-based approach. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
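The net reclassification index for treatment eligibility compares how events and non-events move between eligibility categories under two strategies; a sketch of the two-category formula, with hypothetical counts rather than the CGPS data:

```python
def net_reclassification_index(up_events, down_events, n_events,
                               up_nonevents, down_nonevents, n_nonevents):
    """Two-category NRI versus a reference strategy.

    'Up' means newly statin-eligible under the alternative approach,
    'down' means no longer eligible. Ideally, events move up (get
    treated) and non-events move down (avoid treatment); a negative
    NRI means the alternative reclassifies in the wrong direction.
    """
    event_term = (up_events - down_events) / n_events
    nonevent_term = (down_nonevents - up_nonevents) / n_nonevents
    return event_term + nonevent_term

# Hypothetical cohort counts, invented for illustration
nri = net_reclassification_index(up_events=30, down_events=70, n_events=500,
                                 up_nonevents=900, down_nonevents=400,
                                 n_nonevents=10000)
print(round(nri, 3))
```

The negative NRIs reported for the trial-based and hybrid approaches (-0.21 and -0.13) indicate that, relative to the ACC/AHA risk-based approach, they shift eligibility away from subjects who go on to have events.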
Accuracy of glenohumeral joint injections: comparing approach and experience of provider.
Tobola, Allison; Cook, Chad; Cassas, Kyle J; Hawkins, Richard J; Wienke, Jeffrey R; Tolan, Stefan; Kissenberth, Michael J
2011-10-01
The purpose of this study was to prospectively evaluate the accuracy of three different approaches used for glenohumeral injections. In addition, the accuracy of the injection was compared to the experience and confidence of the provider. One-hundred six consecutive patients with shoulder pain underwent attempted intra-articular injection either posteriorly, supraclavicularly, or anteriorly. Each approach was performed by an experienced and inexperienced provider. A musculoskeletal radiologist blinded to technique used and provider interpreted fluoroscopic images to determine accuracy. Providers were blinded to these results. The accuracy of the anterior approach regardless of experience was 64.7%, the posterior approach was 45.7%, and the supraclavicular approach was 45.5%. With each approach, experience did not provide an advantage. For the anterior approach, the experienced provider was 50% accurate compared to 85.7%. For the posterior approach, the experienced provider had a 42.1% accuracy rate compared to 50%. The experienced provider was accurate 50% of the time in the supraclavicular approach compared to 38.5%. The providers were not able to predict their accuracy regardless of experience. The experienced providers, when compared to those who were less experienced, were more likely to be overconfident, particularly with the anterior and supraclavicular approaches. There was no statistically significant difference between the 3 approaches. The anterior approach was the most accurate, independent of the experience level of the provider. The posterior approach produced the lowest level of confidence regardless of experience. The experienced providers were not able to accurately predict the results of their injections, and were more likely to be overconfident with the anterior and supraclavicular approaches. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Barragan, Barnard; Love, Lance; Wachtel, Mitchell; Griswold, John A; Frezza, Eldo E
2005-12-01
Laparoscopic treatment of pancreatic pseudocyst allows for definitive drainage with faster recovery. Although many groups have reported their experience with an anterior approach, only a few have done so with a posterior approach. This paper compares the approaches, analyzing their potential benefits and pitfalls. Seven females and one male underwent laparoscopic cystgastrostomy to treat pancreatic pseudocysts. The anterior approach was performed by opening the stomach anteriorly, localizing the pseudocyst ultrasonographically, draining the cyst with a needle and, via the same opening, using a stapler to form a cystgastrostomy. The posterior approach was performed by directly visualizing the posterior gastric wall and the pseudocyst, opening and draining the cyst with a needle, and using a stapler and running sutures for closure. All patients had gallstone pancreatitis. Cystgastrostomy via the anterior approach was used in 4 patients and via the posterior approach in 4 patients. Dense adhesions required one attempted posterior cystgastrostomy to be converted to an anterior approach. The mean age of the anterior group was 38 years (range, 18-58 years) and hospital stay was 6 days (range, 4-8 days); for the posterior group, mean age was 42 years (range, 40-44 years) and length of stay was 3 days (range, 2-4 days). Although both approaches had good results with no complications and short hospital stays, the posterior approach is safer, with a more precise cyst visualization and dissection that permits more tissue to be sent for histopathologic examination. Furthermore, the posterior approach's larger anastomosis would seem to yield fewer occlusions, which are commonly seen with the anterior approach. The anterior approach is easier to learn, but it requires the opening of the anterior stomach and the use of ultrasound.
Theoretical Approaches to Moral/Citizenship Education.
ERIC Educational Resources Information Center
Heslep, Robert D.
Four theoretical approaches to moral/citizenship education are described and compared. Positive and negative aspects of the cognitive-decision, developmental, prosocial, and values approaches are discussed and ways of relating the four approaches to each other are suggested. The first approach, cognitive-decision, is distinctive for its…
Differentiating Performance Approach Goals and Their Unique Effects
ERIC Educational Resources Information Center
Edwards, Ordene V.
2014-01-01
The study differentiates between two types of performance approach goals (competence demonstration performance approach goal and normative performance approach goal) by examining their unique effects on self-efficacy, interest, and fear of failure. Seventy-nine students completed questionnaires that measure performance approach goals,…
A Humanistic Approach to South African Accounting Education
ERIC Educational Resources Information Center
West, A.; Saunders, S.
2006-01-01
Humanistic psychologist Carl Rogers made a distinction between traditional approaches and humanistic "learner-centred" approaches to education. The traditional approach holds that educators impart their knowledge to willing and able recipients; whereas the humanistic approach holds that educators act as facilitators who assist learners…
Moral Conflicts and Religious Convictions: What Role for Clinical Ethics Consultants?
Moskop, John C
2018-05-03
Moral conflicts over medical treatment that are the result of differences in fundamental moral commitments of the stakeholders, including religiously grounded commitments, can present difficult challenges for clinical ethics consultants. This article begins with a case example that poses such a conflict, then examines how consultants might use different approaches to clinical ethics consultation in an effort to facilitate the resolution of conflicts of this kind. Among the approaches considered are the authoritarian approach, the pure consensus approach, and the ethics facilitation approach described in the Core Competencies for Healthcare Ethics Consultation report of the American Society for Bioethics and Humanities, as well as a patient advocate approach, a clinician advocate approach, and an institutional advocate approach. The article identifies clear limitations to each of these approaches. An analysis of the introductory case illustrates those limitations, and the article concludes that deep-seated conflicts of this kind may reveal inescapable limits of current approaches to clinical ethics consultation.
Bioinformatics approaches to predict target genes from transcription factor binding data.
Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael
2017-12-01
Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
California State Univ., Fresno. Dept. of Home Economics.
This competency-based preservice home economics teacher education module on incorporating the consumer approach in homemaking classes is the fourth in a set of four core curriculum modules on consumer approach to homemaking education. (This set is part of a larger series of sixty-seven modules on the Management Approach to Teaching Consumer and…
Influence of study approaches on academic outcomes during pre-clinical medical education.
Ward, Peter J
2011-01-01
Different approaches to study lead to differing academic outcomes. Deep and strategic approaches have been linked to academic success, while surface approaches lead to poorer understanding. This study sought to characterize how the approaches to study used by medical students affected their academic success as measured by three outcomes: cumulative grades at the end of the first year, cumulative grades at the end of the second year, and performance on a medical licensing examination. The approaches and study skills inventory for students was administered to medical students to determine their predominant study approach (deep, strategic, surface) at the beginning of the first year, the end of the first year, and the end of the second year. Each group's mean performance on each outcome measure was compared by ANOVA to find significant differences. For all three outcome measures, a strategic approach to study was associated with high performance and a surface approach with poor performance. Deep approaches were the most popular at all times and were largely associated with adequate performance. Deep approaches to study are sufficient for success in the current paradigm of medical education, but strategic ones may offer a selective advantage to those who use them. Surface approaches to study must be discouraged by instructors through deliberate course design.
Automatic approach/avoidance tendencies towards food and the course of anorexia nervosa.
Neimeijer, Renate A M; de Jong, Peter J; Roefs, Anne
2015-08-01
The aim of the present study was to investigate the role of automatic approach/avoidance tendencies towards food in Anorexia Nervosa (AN). We used a longitudinal approach and tested whether a reduction in eating disorder symptoms is associated with enhanced approach tendencies towards food and whether approach tendencies towards food at baseline are predictive of treatment outcome at one-year follow-up. The Affective Simon Task-manikin version (AST-manikin) was administered to measure automatic approach/avoidance tendencies towards high-caloric and low-caloric food in young AN patients. Percentage underweight and eating disorder symptoms as indexed by the EDE-Q were determined both at baseline and at one-year follow-up. At baseline, anorexia patients showed an approach tendency for low-caloric food but not for high-caloric food, whereas at one-year follow-up they showed an approach tendency for both high- and low-caloric food. Change in approach bias was associated neither with change in underweight nor with change in eating disorder symptoms. Strength of approach/avoidance tendencies was not predictive of percentage underweight. Although approach tendencies increased after one year, they were neither associated with concurrent change in eating disorder symptoms nor predictive of treatment success as indexed by the EDE-Q. This implies that, so far, there is no reason to supplement the conventional treatment of AN, which targets more deliberate processes, with a method designed to directly influence approach/avoidance tendencies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Amanatullah, D F; Masini, M A; Roger, D J; Pagnano, M W
2016-08-01
We wished to quantify the extent of soft-tissue damage sustained during minimally invasive total hip arthroplasty through the direct anterior (DA) and direct superior (DS) approaches. In eight cadavers, the DA approach was performed on one side and the DS approach on the other; a single brand of uncemented hip prosthesis was implanted by two surgeons, each considered expert in his surgical approach. Subsequent reflection of the gluteus maximus allowed the extent of muscle and tendon damage to be measured and the percentage damage to each anatomical structure to be calculated. The DA approach caused substantially greater damage to the gluteus minimus muscle and tendon than the DS approach (t-test, p = 0.049 and 0.003, respectively). The tensor fascia lata and rectus femoris muscles were damaged only in the DA approach. There was no difference between approaches in the amount of damage to the gluteus medius muscle and tendon, piriformis tendon, obturator internus tendon, obturator externus tendon or quadratus femoris muscle. The posterior soft-tissue releases of the DA approach damaged the gluteus minimus muscle and tendon, piriformis tendon and obturator internus tendon. The DS approach caused less soft-tissue damage than the DA approach; however, the clinical relevance is unknown. Further clinical outcome studies, radiographic evaluation of component position, gait analyses and serum biomarker levels are necessary to evaluate and corroborate the safety and efficacy of the DS approach. Cite this article: Bone Joint J 2016;98-B:1036-42. ©2016 The British Editorial Society of Bone & Joint Surgery.
An Overview of the Effectiveness of Adolescent Substance Abuse Treatment Models.
ERIC Educational Resources Information Center
Muck, Randolph; Zempolich, Kristin A.; Titus, Janet C.; Fishman, Marc; Godley, Mark D.; Schwebel, Robert
2001-01-01
Describes current approaches to adolescent substance abuse treatment, including the 12-step treatment approach, behavioral treatment approach, family-based treatment approach, and therapeutic community approach. Summarizes research that assesses the effectiveness of these models, offering findings from the Center for Substance Abuse Treatment's…
Five Stances That Have Got to Go
ERIC Educational Resources Information Center
Zeigler, Earle F.
1973-01-01
The five stances in physical education that have to go are as follows: a) the "shotgun approach" to professional preparation; b) the "athletics uber alles approach"; c) the "women are all right in their place approach"; d) the "body of knowledge approach"; and e) the "password is 'treadmill'" approach.
Anatomic comparison of the endonasal and transpetrosal approaches for interpeduncular fossa access.
Oyama, Kenichi; Prevedello, Daniel M; Ditzel Filho, Leo F S; Muto, Jun; Gun, Ramazan; Kerr, Edward E; Otto, Bradley A; Carrau, Ricardo L
2014-01-01
The interpeduncular cistern, including the retrochiasmatic area, is one of the most challenging regions to approach surgically. Various conventional approaches to this region have been described; however, only the endoscopic endonasal approach via the dorsum sellae and the transpetrosal approach provide ideal exposure with a caudal-cranial view. The authors compared these 2 approaches to clarify their limitations and intrinsic advantages for access to the interpeduncular cistern. Four fresh cadaver heads were studied. An endoscopic endonasal approach via the dorsum sellae with pituitary transposition was performed to expose the interpeduncular cistern. A transpetrosal approach was performed bilaterally, combining a retrolabyrinthine presigmoid and a subtemporal transtentorium approach. Water balloons were used to simulate space-occupying lesions. "Water balloon tumors" (WBTs), inflated to 2 different volumes (0.5 and 1.0 ml), were placed in the interpeduncular cistern to compare visualization using the 2 approaches. The distances between cranial nerve (CN) III and the posterior communicating artery (PCoA) and between CN III and the edge of the tentorium were measured through a transpetrosal approach to determine the width of surgical corridors using 0- to 6-ml WBTs in the interpeduncular cistern (n = 8). Both approaches provided adequate exposure of the interpeduncular cistern. The endoscopic endonasal approach yielded a good visualization of both CN III and the PCoA when a WBT was in the interpeduncular cistern. Visualization of the contralateral anatomical structures was impaired in the transpetrosal approach. The surgical corridor to the interpeduncular cistern via the transpetrosal approach was narrow when the WBT volume was small, but its width increased as the WBT volume increased. 
There was a statistically significant increase in the maximum distance between CN III and the PCoA (p = 0.047) and between CN III and the tentorium (p = 0.029) when the WBT volume was 6 ml. Both approaches are valid surgical options for retrochiasmatic lesions such as craniopharyngiomas. The endoscopic endonasal approach via the dorsum sellae provides a direct and wide exposure of the interpeduncular cistern with negligible neurovascular manipulation. The transpetrosal approach also allows direct access to the interpeduncular cistern without pituitary manipulation; however, the surgical corridor is narrow due to the surrounding neurovascular structures and affords poor contralateral visibility. Conversely, in the presence of large or giant tumors in the interpeduncular cistern, which widen the spaces between neurovascular structures, the transpetrosal approach becomes a superior route, whereas the endoscopic endonasal approach may provide limited freedom of movement in the lateral extension.
Outcome Assessment from the Perspective of Psychological Science: The TAIM Approach
ERIC Educational Resources Information Center
Steinke, Pamela; Fitch, Peggy
2011-01-01
In this chapter, the authors outline an approach to assessing complex constructs supported by psychological science and research. This approach is informed by their background as psychologists but is general enough to incorporate other disciplinary approaches as well. They identify this approach as TAIM (Theory, Activities, Indicators, Multiple…
Using a Hybrid Approach to Facilitate Learning Introductory Programming
ERIC Educational Resources Information Center
Cakiroglu, Unal
2013-01-01
In order to facilitate students' understanding in introductory programming courses, different types of teaching approaches were conducted. In this study, a hybrid approach including comment first coding (CFC), analogy and template approaches were used. The goal was to investigate the effect of such a hybrid approach on students' understanding in…
An Analysis of Leadership Theory and Its Application to Higher Education.
ERIC Educational Resources Information Center
Geering, Adrian D.
Leadership theories are reviewed, and ways that college administrators can approach leadership are suggested. After defining leadership and distinguishing it from administration and management, three different approaches to leadership are reviewed: the trait approach, the behavioral approach, and the situational approach. Some emerging views of…
Applying Current Approaches to the Teaching of Reading
ERIC Educational Resources Information Center
Villanueva de Debat, Elba
2006-01-01
This article discusses different approaches to reading instruction for EFL learners based on theoretical frameworks. The author starts with the bottom-up approach to reading instruction, and briefly explains phonics and behaviorist ideas that inform this instructional approach. The author then explains the top-down approach and the new cognitive…
The Place of Grammar in the Language Arts Curriculum.
ERIC Educational Resources Information Center
Einarsson, Robert
The history of grammar instruction includes two approaches: the handbook approach, which is practiced today, and the textbook approach. The handbook approach focuses on rules for correct writing and is an error-based view, while the textbook approach would treat grammar holistically and interpretively and would systematically explain new concepts…
Reflections on John Monaghan's "Computer Algebra, Instrumentation, and the Anthropological Approach"
ERIC Educational Resources Information Center
Blume, Glen
2007-01-01
Reactions to John Monaghan's "Computer Algebra, Instrumentation and the Anthropological Approach" focus on a variety of issues related to the ergonomic approach (instrumentation) and anthropological approach to mathematical activity and practice. These include uses of the term technique; several possibilities for integration of the two approaches;…
Guidelines for Media Selection.
ERIC Educational Resources Information Center
Heeren, Elske; Verwijs, Carla; Moonen, Jef
This paper presents two types of approaches to media selection--rational-choice approaches and social-influence approaches. It is argued that designers should combine the two types of approaches in a bottom-up/top-down media-selection process. As examples of the two types of approaches, two conceptual frameworks are described--task/media fit and…
10 CFR 830.7 - Graded approach.
Code of Federal Regulations, 2013 CFR
2013-01-01
§ 830.7 Graded approach. Energy, DEPARTMENT OF ENERGY NUCLEAR SAFETY MANAGEMENT. Where appropriate, a contractor must use a graded approach to implement the requirements of this part, document the basis of the graded approach...
10 CFR 830.7 - Graded approach.
Code of Federal Regulations, 2012 CFR
2012-01-01
§ 830.7 Graded approach. Energy, DEPARTMENT OF ENERGY NUCLEAR SAFETY MANAGEMENT. Where appropriate, a contractor must use a graded approach to implement the requirements of this part, document the basis of the graded approach...
JiFUNzeni: A Blended Learning Approach for Sustainable Teachers' Professional Development
ERIC Educational Resources Information Center
Onguko, Brown Bully
2014-01-01
The JiFUNzeni blended learning approach is a sustainable approach to the provision of professional development (PD) for those in challenging educational contexts. The JiFUNzeni approach emphasizes training regional experts to create blended learning content, working with appropriate technology while building content repositories. The JiFUNzeni approach was…
Bibliography of Several Approaches to Rhetorical Criticism.
ERIC Educational Resources Information Center
Benoit, William L.; Moeder, Michael D.
An illustrative rather than an exhaustive bibliography on approaches to rhetorical criticism, this update of an earlier publication lists more than 150 selections. The bibliography is divided into sections on: (1) discussions of the Burkean approach; (2) applications of the Burkean approach; (3) discussions of the fantasy theme approach; (4)…
10 CFR 830.7 - Graded approach.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 830.7 Graded approach. Energy, DEPARTMENT OF ENERGY NUCLEAR SAFETY MANAGEMENT. Where appropriate, a contractor must use a graded approach to implement the requirements of this part, document the basis of the graded approach...
10 CFR 830.7 - Graded approach.
Code of Federal Regulations, 2011 CFR
2011-01-01
§ 830.7 Graded approach. Energy, DEPARTMENT OF ENERGY NUCLEAR SAFETY MANAGEMENT. Where appropriate, a contractor must use a graded approach to implement the requirements of this part, document the basis of the graded approach...
Adapting a Framework for Assessing Students' Approaches to Modeling
ERIC Educational Resources Information Center
Bennett, Steven Carl
2017-01-01
We used an "approach to learning" theoretical framework to explicate the ways students engage in scientific modeling. Approach to learning theory suggests that when students approach learning deeply, they link science concepts with prior knowledge and experiences. Conversely, when students engage in a surface approach to learning, they…
Equity, empowerment and different ways of knowing
NASA Astrophysics Data System (ADS)
Boaler, Jo
1997-11-01
This paper considers the experiences of two sets of students who attended schools that taught mathematics in completely different ways. One of the schools used a traditional, textbook approach, and the other used an open, project-based approach. The latter approach produced equity between girls and boys, whereas the textbook approach prompted many of the girls to underachieve. This paper will consider the experiences of girls and boys who followed the project-based approach, reflect upon the sources of equity within this approach and relate the differences between the two approaches to Gilligan's notions of "separate" and "connected" knowing.
Alternate Approaches to Exploration: The Single Crew Module Concept
NASA Technical Reports Server (NTRS)
Chambliss, Joe
2011-01-01
The Cx Program envisioned exploration of the Moon and Mars using an extrapolation of the Apollo approach. If new technology development initiatives are successful, they will provide capabilities that can enable alternate approaches. This presentation provides a brief overview of the Cx approaches for lunar and Mars missions and some of the alternatives that were considered. An alternative approach, referred to as the Single Crew Module (SCM) approach, is then described. The SCM concept employs new technologies in a way that could reduce exploration cost and possibly schedule. Options to the approaches will be presented and discussed.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
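The unit-optimum construction described above can be illustrated with a toy example. This is a hedged sketch of the idea only, not the paper's actual test-problem formulas:

```python
# Hedged sketch (not the paper's construction): a synthetic test problem
# built so every design variable, state variable, and the objective all
# equal 1 at the known optimum, which makes verifying an MDO solver easy.

def objective(x):
    # Quadratic bowl centered at x = (1, ..., 1), shifted so f(optimum) = 1.
    return 1.0 + sum((xi - 1.0) ** 2 for xi in x)

def coupling(x, y_other):
    # Toy "state equation" coupling two disciplines: when the design
    # variables and the other discipline's state are all 1, this state
    # also evaluates to 1, mimicking the unit-optimum property.
    return 0.5 * sum(x) / len(x) + 0.5 * y_other

x_star = [1.0, 1.0, 1.0]
print(objective(x_star))      # objective equals 1 at the optimum
print(coupling(x_star, 1.0))  # state variable equals 1 as well
```

Because the optimum is known in closed form, any single-level, collaborative, or concurrent-subspace run can be checked simply by comparing its answer against the all-ones point.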
NASA Astrophysics Data System (ADS)
Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.
2018-05-01
Considerable effort has been put into developing diagnostic methods for skin cancer. In this paper, two different approaches are addressed for detecting skin cancer in dermoscopy images. The first is a global approach that uses global features for classifying skin lesions, whereas the second is a local approach that uses local features. The aim of this paper is to select the best approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97% for the global approach, and sensitivity, specificity, precision, and accuracy of about 100% for the local approach. These results show that the local approach achieved acceptable accuracy and performed better than the global approach for skin cancer lesion classification.
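The figures reported above are standard confusion-matrix metrics. As a sketch of the arithmetic, with invented counts chosen only for illustration (not the study's data):

```python
# Standard classification metrics from a confusion matrix.
# The counts below are made up for illustration.

def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    precision   = tp / (tp + fp)          # positive predictive value
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, precision, accuracy

# e.g. 48 malignant lesions detected, 2 missed, 146 benign correctly rejected
sens, spec, prec, acc = metrics(tp=48, tn=146, fp=0, fn=2)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"precision={prec:.2f} accuracy={acc:.2f}")
```

Note that with zero false positives, specificity and precision are both exactly 1, which is why the paper can report several metrics at 100% while sensitivity stays below it.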
Minimally invasive surgery of the anterior skull base: transorbital approaches
Gassner, Holger G.; Schwan, Franziska; Schebesch, Karl-Michael
2016-01-01
Minimally invasive approaches are becoming increasingly popular to access the anterior skull base. With interdisciplinary cooperation, in particular endonasal endoscopic approaches have seen an impressive expansion of indications over the past decades. The more recently described transorbital approaches represent minimally invasive alternatives with a differing spectrum of access corridors. The purpose of the present paper is to discuss transorbital approaches to the anterior skull base in the light of the current literature. The transorbital approaches allow excellent exposure of areas that are difficult to reach like the anterior and posterior wall of the frontal sinus; working angles may be more favorable and the paranasal sinus system can be preserved while exposing the skull base. Because of their minimal morbidity and the cosmetically excellent results, the transorbital approaches represent an important addition to established endonasal endoscopic and open approaches to the anterior skull base. Their execution requires an interdisciplinary team approach. PMID:27453759
NASA Astrophysics Data System (ADS)
Samsudin, Syafiza Saila; Ujang, Suriyati; Sahlan, Nor Fasiha
2016-06-01
This study was conducted on Year 3 students at Sekolah Kebangsaan Air Putih, Kuantan, using a constructivist approach to the simplest-fraction topic in Mathematics. Students were divided into two groups: the experimental group was taught using the constructivist approach, whereas the control group was taught using the traditional approach. The study aimed to determine the effectiveness of the constructivist learning approach for the topic of simplest fractions, and to compare student achievement between the constructivist and traditional approaches. Pre-test, post-test, questionnaire and observation instruments were used, and the data were analyzed with SPSS 15.0 for Windows. The findings show a significant difference between the pre-test and post-test for the experimental group after using the constructivist approach in the learning process: the mean score of the post-test (76.39) is higher than that of the pre-test (60.28). This suggests that the constructivist approach is more efficient and suitable for teaching and learning the simplest-fraction topic in the classroom than traditional approaches. The findings also showed interest in and positive perceptions of this approach.
Measured noise reductions resulting from modified approach procedures for business jet aircraft
NASA Technical Reports Server (NTRS)
Burcham, F. W., Jr.; Putnam, T. W.; Lasagna, P. L.; Parish, O. O.
1975-01-01
Five business jet airplanes were flown to determine the noise reductions that result from the use of modified approach procedures. The airplanes tested were a Gulfstream 2, JetStar, Hawker Siddeley 125-400, Sabreliner-60 and LearJet-24. Noise measurements were made 3, 5, and 7 nautical miles from the touchdown point. In addition to a standard 3 deg glide slope approach, a 4 deg glide slope approach, a 3 deg glide slope approach in a low-drag configuration, and a two-segment approach were flown. It was found that the 4 deg approach was about 4 EPNdB quieter than the standard 3 deg approach. Noise reductions for the low-drag 3 deg approach varied widely among the airplanes tested, with an average of 8.5 EPNdB on a fleet-weighted basis. The two-segment approach resulted in noise reductions of 7 to 8 EPNdB at 3 and 5 nautical miles from touchdown, but only 3 EPNdB at 7 nautical miles from touchdown when the airplanes were still in level flight prior to glide slope intercept. Pilot ratings showed progressively increasing workload for the 4 deg, low-drag 3 deg, and two-segment approaches.
Effects of a blended learning approach on student outcomes in a graduate-level public health course
2014-01-01
Background Blended learning approaches, in which in-person and online course components are combined in a single course, are rapidly increasing in health sciences education. Evidence for the relative effectiveness of blended learning versus more traditional course approaches is mixed. Method The impact of a blended learning approach on student learning in a graduate-level public health course was examined using a quasi-experimental, non-equivalent control group design. Exam scores and course point total data from a baseline, “traditional” approach semester (n = 28) were compared to those from a semester utilizing a blended learning approach (n = 38). In addition, student evaluations of the blended learning approach were evaluated. Results There was a statistically significant increase in student performance under the blended learning approach (final course point total d = 0.57; a medium effect size), even after accounting for previous academic performance. Moreover, student evaluations of the blended approach were very positive and the majority of students (83%) preferred the blended learning approach. Conclusions Blended learning approaches may be an effective means of optimizing student learning and improving student performance in health sciences courses. PMID:24612923
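The reported d = 0.57 is a Cohen's d effect size. A minimal sketch of the pooled-standard-deviation formula behind it, using invented scores rather than the study's data:

```python
# Cohen's d with a pooled standard deviation (hedged sketch; the score
# lists are invented for illustration, not the study's data).
import statistics

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group2) - statistics.mean(group1)) / pooled_sd

traditional = [70, 75, 80, 72, 78]   # hypothetical baseline-semester scores
blended = [78, 82, 85, 80, 88]       # hypothetical blended-semester scores

print(round(cohens_d(traditional, blended), 2))
```

A d around 0.5, as in the study, means the group means differ by about half a pooled standard deviation, the conventional threshold for a "medium" effect.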
A discussion of approaches to transforming care: contemporary strategies to improve patient safety.
Burston, Sarah; Chaboyer, Wendy; Wallis, Marianne; Stanfield, Jane
2011-11-01
This article presents a discussion of three contemporary approaches to transforming care: Transforming Care at the Bedside, Releasing Time to Care: the Productive Ward and the work of the Studer Group(®). International studies of adverse events in hospitals have highlighted the need to focus on patient safety. The case for transformational change was identified and recently several approaches have been developed to effect this change. Despite limited evaluation, these approaches have spread and have been adopted outside their country of origin and contextual settings. Medline and CINAHL databases were searched for the years 1999-2009. Search terms included derivatives of 'transformation' combined with 'care', 'nursing', 'patient safety', 'Transforming Care at the Bedside', 'the Productive Ward' and 'Studer Group'. A comparison of the three approaches revealed similarities including: the foci of the approaches; interventions employed; and the outcomes measured. Key differences identified are the implementation models used, spread strategies and sustainability of the approaches. The approaches appear to be complementary and a hybrid of the approaches such as a blend of a top-down and bottom-up leadership strategy may offer more sustainable behavioural change. These approaches transform the way nurses do their work, how they work with others and how they view the care they provide to promote patient safety. All the approaches involve the implementation of multiple interventions occurring simultaneously to affect improvements in patient safety. The approaches are complementary and a hybrid approach may offer more sustainable outcomes. © 2011 Blackwell Publishing Ltd.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative, stochastic optimization approach. The stochastic optimization takes the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level as measured by the boreholes spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
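The core of the stochastic approach (averaging profits over conditional simulations rather than taking the profit of the average grade) can be sketched for a single block. The price, cost, and grade figures below are invented for illustration, not the paper's model:

```python
# Why expected profit differs from profit of the expected grade:
# block profit is nonlinear in grade (waste is capped at mining cost),
# so averaging over conditional simulations changes the answer.
# All numbers are illustrative, not from the paper.

PRICE, MINING_COST, TREATMENT_COST = 100.0, 10.0, 25.0

def block_profit(grade):
    # Process the block only if ore value covers treatment, else waste it.
    ore_value = PRICE * grade - TREATMENT_COST
    return max(ore_value, 0.0) - MINING_COST

# Conditional simulations of one block's grade (e.g. from geostatistics).
simulated_grades = [0.1, 0.2, 0.3, 0.6]

expected_grade = sum(simulated_grades) / len(simulated_grades)
classical = block_profit(expected_grade)        # profit of the mean grade
stochastic = sum(block_profit(g)                # mean of simulated profits
                 for g in simulated_grades) / len(simulated_grades)

print(round(classical, 6), round(stochastic, 6))
```

Here the stochastic expected profit exceeds the classical value for the same block, which is the mechanism behind the larger pits and the conditional unbiasedness described in the abstract.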
Liu, James K; Husain, Qasim; Kanumuri, Vivek; Khan, Mohemmed N; Mendelson, Zachary S; Eloy, Jean Anderson
2016-05-01
OBJECT Juvenile nasopharyngeal angiofibromas (JNAs) are formidable tumors because of their hypervascularity and difficult location in the skull base. Traditional transfacial procedures do not always afford optimal visualization and illumination, resulting in significant morbidity and poor cosmesis. The advent of endoscopic procedures has allowed for resection of JNAs with greater surgical freedom and decreased incidence of facial deformity and scarring. METHODS This report describes a graduated multiangle, multicorridor, endoscopic approach to JNAs that is illustrated in 4 patients, each with a different tumor location and extent. Four different surgical corridors in varying combinations were used to resect JNAs, based on tumor size and location, including an ipsilateral endonasal approach (uninostril); a contralateral, transseptal approach (binostril); a sublabial, transmaxillary Caldwell-Luc approach; and an orbitozygomatic, extradural, transcavernous, infratemporal fossa approach (transcranial). One patient underwent resection via an ipsilateral endonasal uninostril approach (Corridor 1) only. One patient underwent a binostril approach that included an additional contralateral transseptal approach (Corridors 1 and 2). One patient underwent a binostril approach with an additional sublabial Caldwell-Luc approach for lateral extension in the infratemporal fossa (Corridors 1-3). One patient underwent a combined transcranial and endoscopic endonasal/sublabial Caldwell-Luc approach (Corridors 1-4) for an extensive JNA involving both the lateral infratemporal fossa and cavernous sinus. RESULTS A graduated multiangle, multicorridor approach was used in a stepwise fashion to allow for maximal surgical exposure and maneuverability for resection of JNAs. Gross-total resection was achieved in all 4 patients. One patient had a postoperative CSF leak that was successfully repaired endoscopically. 
One patient had a delayed local recurrence that was successfully resected endoscopically. There were no vascular complications. CONCLUSIONS An individualized, multiangle, multicorridor approach allows for safe and effective surgical customization of access for resection of JNAs depending on the size and exact location of the tumor. Combining the endoscopic endonasal approach with a transcranial approach via an orbitozygomatic, extradural, transcavernous approach may be considered in giant extensive JNAs that have intracranial extension and intimate involvement of the cavernous sinus.
Neurosurgical management of anterior meningo-encephaloceles about 60 cases
Rifi, Loubna; Barkat, Amina; El Khamlichi, Abdeslam; Boulaadas, Malek; El Ouahabi, Abdessamad
2015-01-01
Anterior meningo-encephaloceles (AME) are congenital malformations characterized by herniation of brain tissue and meninges through a defect in the cranium, in the frontal, orbital, nasal and ethmoidal regions. The management of this complex congenital malformation is controversial as to whether an intracranial, extracranial or combined approach should be used. This is the largest series published in Africa to date; we present our experience in the operative management of AME and share our recommendations on technical considerations for the surgical approach, with a review of the literature. All patients underwent neuro-radiological investigations including plain X-rays, spiral three-dimensional CT scans and MRI. Ophthalmologic and maxillofacial evaluations were done in all cases. AME are surgically approached in various ways, mainly on the basis of location and type: by a cranio-facial approach in one step; in two stages by an intracranial approach followed by a facial approach; by a cranial approach only; or by a facial approach only. The surgical results were evaluated at follow-up on the basis of disappearance of the cranio-facial tumefaction with correction of hypertelorism. Sixty children with AME were treated in our department between January 1992 and December 2012. The mean age at the time of surgery was 14 months (20 days to 18 years), with a slight male predominance (28 females/32 males). A cranio-facial team operated on 21 patients, 16 were operated in two stages by an intracranial approach followed by a facial approach, 20 cases underwent the neurosurgical approach alone and three the facial approach alone. Some postoperative complications were observed: 2 cases of postoperative hydrocephalus requiring a shunt; CSF fistulas in three cases, cured by spinal drainage; one death due to peroperative hypothermia; and 3 cases of recurrence that needed a second surgery.
After a mean follow-up of 80 months (1 year to 19 years), these techniques gave good cosmetic results in 42 cases, average results in 8 cases, poor results in 5 cases and worse results in 4 cases. AME are rare conditions; we first used a multiple-stage approach (intracranial approach followed by facial approach), but after 1998 we used one-step correction by a combined approach, a cranial approach alone when sufficient, or facial correction. PMID:26448810
Quantifying Overdiagnosis in Cancer Screening: A Systematic Review to Evaluate the Methodology.
Ripping, Theodora M; Ten Haaf, Kevin; Verbeek, André L M; van Ravesteyn, Nicolien T; Broeders, Mireille J M
2017-10-01
Overdiagnosis is the main harm of cancer screening programs but is difficult to quantify. This review aims to evaluate existing approaches to estimate the magnitude of overdiagnosis in cancer screening in order to gain insight into the strengths and limitations of these approaches and to provide researchers with guidance to obtain reliable estimates of overdiagnosis in cancer screening. A systematic review was done of primary research studies in PubMed that were published before January 1, 2016, and quantified overdiagnosis in breast cancer screening. The studies meeting inclusion criteria were then categorized by their methods to adjust for lead time and to obtain an unscreened reference population. For each approach, we provide an overview of the data required, assumptions made, limitations, and strengths. A total of 442 studies were identified in the initial search. Forty studies met the inclusion criteria for the qualitative review. We grouped the approaches to adjust for lead time in two main categories: the lead time approach and the excess incidence approach. The lead time approach was further subdivided into the mean lead time approach, lead time distribution approach, and natural history modeling. The excess incidence approach was subdivided into the cumulative incidence approach and early vs late-stage cancer approach. The approaches used to obtain an unscreened reference population were grouped into the following categories: control group of a randomized controlled trial, nonattenders, control region, extrapolation of a prescreening trend, uninvited groups, adjustment for the effect of screening, and natural history modeling. Each approach to adjust for lead time and obtain an unscreened reference population has its own strengths and limitations, which should be taken into consideration when estimating overdiagnosis. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
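As an illustration of the excess incidence approach described in this record, overdiagnosis is often expressed as the excess cumulative incidence in a screened population relative to an unscreened reference. This is a generic sketch of one common formulation (the choice of denominator varies across the literature), not the review's own computation; the function name is illustrative:

```python
def excess_incidence_overdiagnosis(cum_inc_screened, cum_inc_unscreened):
    """Overdiagnosis estimate via the excess incidence approach.

    Takes cumulative incidence (cases per population, over the same
    follow-up window) in a screened group and an unscreened reference,
    and returns excess diagnoses as a fraction of cancers diagnosed in
    the screened group -- one of several denominators in use.
    """
    excess = cum_inc_screened - cum_inc_unscreened
    return excess / cum_inc_screened
```

For example, 125 cancers per 100,000 in the screened group against 100 per 100,000 in the reference gives an overdiagnosis fraction of 0.2. Note that, as the review stresses, this is only valid after adequate adjustment for lead time.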
Sun, Xiang-Yao; Zhang, Xi-Nuo; Hai, Yong
2017-05-01
This study evaluated differences in outcome variables between percutaneous, traditional, and paraspinal posterior open approaches for traumatic thoracolumbar fractures without neurologic deficit. A systematic review of PubMed, Cochrane, and Embase was performed using the search terms "thoracolumbar fractures", "lumbar fractures", "percutaneous", "minimally invasive", "open", "traditional", "posterior", "conventional", "pedicle screw", "sextant", and "clinical trial". The analysis was performed on individual patient data from all the studies that met the selection criteria. Clinical outcomes were expressed as risk differences for dichotomous outcomes and mean differences for continuous outcomes, with 95% confidence intervals. Heterogeneity was assessed using the χ² test and the I² statistic. There were 4 randomized controlled trials and 14 observational articles included in this analysis. The percutaneous approach was associated with a better ODI score, less Cobb angle correction, less Cobb angle correction loss, less postoperative VBA correction, and a lower infection rate compared with the open approach. The percutaneous approach was also associated with shorter operative duration, longer intraoperative fluoroscopy, lower postoperative VAS, and better postoperative VBH% in comparison with the traditional open approach. No significant difference was found in Cobb angle correction, postoperative VBA, VBA correction loss, postoperative VBH%, VBH correction loss, or pedicle screw misplacement between the percutaneous and open approaches. There was no significant difference in operative duration, intraoperative fluoroscopy, postoperative VAS, or postoperative VBH% between the percutaneous and paraspinal approaches. The functional and radiological outcomes of the percutaneous approach would be better than those of the open approach in the long term. 
Although the trans-muscular spatium approach belongs to the open fixation methods, it is strictly defined as a less invasive approach, which causes less injury to the paraspinal muscles and provides a better reposition effect.
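The pooled effects and the I² heterogeneity statistic used in meta-analyses like the one above can be sketched in a few lines. This is the generic inverse-variance fixed-effect computation with Cochran's Q, not the authors' actual analysis code, and the function name is illustrative:

```python
def cochran_q_and_i2(effects, variances):
    """Fixed-effect pooled estimate, Cochran's Q, and the I² statistic.

    effects   : per-study effect estimates (e.g. mean differences)
    variances : per-study sampling variances (squared standard errors)
    """
    weights = [1.0 / v for v in variances]
    # Inverse-variance weighted pooled effect
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I²: proportion of total variation attributable to between-study
    # heterogeneity rather than chance (floored at zero)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, q, i2
```

Two studies with effects 0.0 and 1.0 and equal variances 0.1 pool to 0.5 with Q = 5 on 1 degree of freedom, giving I² = 0.8, i.e. substantial heterogeneity.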
Neural substrates of approach-avoidance conflict decision-making.
Aupperle, Robin L; Melrose, Andrew J; Francisco, Alex; Paulus, Martin P; Stein, Murray B
2015-02-01
Animal approach-avoidance conflict paradigms have been used extensively to operationalize anxiety, quantify the effects of anxiolytic agents, and probe the neural basis of fear and anxiety. Results from human neuroimaging studies support that a frontal-striatal-amygdala neural circuitry is important for approach-avoidance learning. However, the neural basis of decision-making is much less clear in this context. Thus, we combined a recently developed human approach-avoidance paradigm with functional magnetic resonance imaging (fMRI) to identify neural substrates underlying approach-avoidance conflict decision-making. Fifteen healthy adults completed the approach-avoidance conflict (AAC) paradigm during fMRI. Analyses of variance were used to compare conflict to nonconflict (avoid-threat and approach-reward) conditions and to compare level of reward points offered during the decision phase. Trial-by-trial amplitude modulation analyses were used to delineate brain areas underlying decision-making in the context of approach/avoidance behavior. Conflict trials as compared to the nonconflict trials elicited greater activation within bilateral anterior cingulate cortex, anterior insula, and caudate, as well as right dorsolateral prefrontal cortex (PFC). Right caudate and lateral PFC activation was modulated by level of reward offered. Individuals who showed greater caudate activation exhibited less approach behavior. On a trial-by-trial basis, greater right lateral PFC activation related to less approach behavior. Taken together, results suggest that the degree of activation within prefrontal-striatal-insula circuitry determines the degree of approach versus avoidance decision-making. Moreover, the degree of caudate and lateral PFC activation related to individual differences in approach-avoidance decision-making. Therefore, the approach-avoidance conflict paradigm is ideally suited to probe anxiety-related processing differences during approach-avoidance decision-making. 
© 2014 Wiley Periodicals, Inc.
Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions
2014-12-05
…test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with… the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates… assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions
Meckel's cave access: anatomic study comparing the endoscopic transantral and endonasal approaches.
Van Rompaey, Jason; Suruliraj, Anand; Carrau, Ricardo; Panizza, Benedict; Solares, C Arturo
2014-04-01
Recent advances in endonasal endoscopy have facilitated surgical access to the lateral skull base, including areas such as Meckel's cave. This approach has been well documented; however, few studies have outlined transantral-specific access to Meckel's cave. A transantral approach provides a direct pathway to this region, obviating the need for extensive endonasal and transsphenoidal resection. Our aim in this study is to compare the anatomical perspectives obtained in endonasal and transantral approaches. We prepared 14 cadaveric specimens with intravascular injections of colored latex. Eight cadavers underwent endoscopic endonasal transpterygoid approaches to Meckel's cave. Six additional specimens underwent an endoscopic transantral approach to the same region. Photographic evidence was obtained for review. Thirty CT scans were analyzed to measure comparative distances to Meckel's cave for both approaches. The endoscopic approaches provided direct access to the anterior and inferior portions of Meckel's cave. However, the transantral approach required shorter instrumentation and did not require clearing of the endonasal corridor. This approach gave an anterior view of Meckel's cave, making posterior dissection more difficult. A transantral approach to Meckel's cave provides access similar to the endonasal approach with minimal invasiveness. Some of the morbidity associated with extensive endonasal resection could possibly be avoided. Better understanding of the complex skull base anatomy, from different perspectives, helps to improve current endoscopic skull base surgery and to develop new alternatives, consequently leading to improvements in safety and efficacy.
Sun, Qiang; Yin, Jia; Yin, Xiao; Zou, Guanyang; Liang, Mingli; Zhong, Jieming; Walley, John; Wei, Xiaolin
2013-06-01
Moving clinical services from the tuberculosis (TB) dispensary to the integrated county hospital (the integrated approach) has been practiced in China; however, the quality of TB care under the integrated approach compared with the dispensary approach is unknown. A total of 202 new TB patients were investigated using structured questionnaires in three counties implementing the integrated approach and one county implementing the dispensary approach. The quality of TB care was measured by treatment success rate, medical expenditure, health system delay and second-line drug use. The integrated approach showed a high treatment success rate. Medical expenditure in the integrated approach was USD 432, significantly lower than in the dispensary approach (Z = -5.771, P < 0.001). The integrated approach had a shorter health system delay (5 days) than the dispensary approach (32 days). Twenty-six percent of patients in integrated hospitals were prescribed second-line TB drugs, significantly lower than in the TB dispensary (47%, χ² = 7.452, P = 0.006). However, medical expenditure and the use of second-line anti-TB drugs and liver-protection drugs varied greatly across the three integrated hospitals. The integrated approach showed better quality of TB care, but the performance of the integrated hospitals varied greatly. A method to standardize TB treatment and management in this approach is urgently needed.
ERIC Educational Resources Information Center
Maciejewski, Wes; Merchant, Sandra
2016-01-01
Students approach learning in different ways, depending on the experienced learning situation. A deep approach is geared toward long-term retention and conceptual change while a surface approach focuses on quickly acquiring knowledge for immediate use. These approaches ultimately affect the students' academic outcomes. This study takes a…
Language Management Theory as One Approach in Language Policy and Planning
ERIC Educational Resources Information Center
Nekvapil, Jirí
2016-01-01
Language Policy and Planning is currently a significantly diversified research area and thus it is not easy to find common denominators that help to define basic approaches within it. Richard B. Baldauf attempted to do so by differentiating between four basic approaches: (1) the classical approach, (2) the language management approach (Language…
Transferring Codified Knowledge: Socio-Technical versus Top-Down Approaches
ERIC Educational Resources Information Center
Guzman, Gustavo; Trivelato, Luiz F.
2008-01-01
Purpose: This paper aims to analyse and evaluate the transfer process of codified knowledge (CK) performed under two different approaches: the "socio-technical" and the "top-down". It is argued that the socio-technical approach supports the transfer of CK better than the top-down approach. Design/methodology/approach: Case study methodology was…
Toward a New Approach to the Study of Personality in Culture
ERIC Educational Resources Information Center
Cheung, Fanny M.; van de Vijver, Fons J. R.; Leong, Frederick T. L.
2011-01-01
We review recent developments in the study of culture and personality measurement. Three approaches are described: an etic approach that focuses on establishing measurement equivalence in imported measures of personality, an emic (indigenous) approach that studies personality in specific cultures, and a combined emic-etic approach to personality.…
EMR implementation: big bang or a phased approach?
Owens, Kathleen
2008-01-01
There are two major strategies to implementing an EMR: the big-bang approach and the phased, or incremental, approach. Each strategy has pros and cons that must be considered. This article discusses these approaches and the risks and benefits of each as well as some training strategies that can be used with either approach.
Inventing Adulthoods: A Biographical Approach to Youth Transitions
ERIC Educational Resources Information Center
Henderson, Sheila J.; Holland, Janet; McGrellis, Sheena; Sharpe, Sue; Thomson, Rachel
2006-01-01
"Inventing Adulthoods: A Biographical Approach to Youth Transitions" is a ground-breaking book that offers a new approach to understanding young people's lives and their transitions to adulthood. Contrary to policy and research approaches that often see young people's lives in a fragmented way, the book argues that a biographical approach to youth…
ERIC Educational Resources Information Center
Gijbels, David; Coertjens, Liesje; Vanthournout, Gert; Struyf, Elke; Van Petegem, Peter
2009-01-01
Inciting a deep approach to learning in students is difficult. The present research poses two questions: can a constructivist learning-assessment environment change students' approaches towards a more deep approach? What effect does additional feedback have on the changes in learning approaches? Two cohorts of students completed questionnaires…
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
A preliminary study of mechanistic approach in pavement design to accommodate climate change effects
NASA Astrophysics Data System (ADS)
Harnaeni, S. R.; Pramesti, F. P.; Budiarto, A.; Setyawan, A.
2018-03-01
Road damage is caused by several factors, including climate change, overloading, and inappropriate procedures for materials and the construction process. Meanwhile, climate change is a phenomenon that cannot be avoided. Its observed effects include rising air temperatures, sea level rise, rainfall changes, and more intense extreme weather events. Previous studies have shown the impacts of climate change on road damage. Therefore, several measures to anticipate the damage should be considered during planning and construction in order to reduce the cost of road maintenance. There are three approaches generally applied in the design of flexible pavement thickness: the mechanistic approach, the mechanistic-empirical (ME) approach, and the empirical approach. The advantages of applying the mechanistic or mechanistic-empirical (ME) approach are efficiency and reliability in the design of flexible pavement thickness, as well as the capacity to accommodate climate change, compared to the empirical approach. However, the design of flexible pavement thickness in Indonesia generally still applies the empirical approach. This preliminary study aims to emphasize the importance of shifting towards a mechanistic approach in the design of flexible pavement thickness.
Bias modification training can alter approach bias and chocolate consumption.
Schumacher, Sophie E; Kemps, Eva; Tiggemann, Marika
2016-01-01
Recent evidence has demonstrated that bias modification training has potential to reduce cognitive biases for attractive targets and affect health behaviours. The present study investigated whether cognitive bias modification training could be applied to reduce approach bias for chocolate and affect subsequent chocolate consumption. A sample of 120 women (18-27 years) were randomly assigned to an approach-chocolate condition or avoid-chocolate condition, in which they were trained to approach or avoid pictorial chocolate stimuli, respectively. Training had the predicted effect on approach bias, such that participants trained to approach chocolate demonstrated an increased approach bias to chocolate stimuli whereas participants trained to avoid such stimuli showed a reduced bias. Further, participants trained to avoid chocolate ate significantly less of a chocolate muffin in a subsequent taste test than participants trained to approach chocolate. Theoretically, results provide support for the dual process model's conceptualisation of consumption as being driven by implicit processes such as approach bias. In practice, approach bias modification may be a useful component of interventions designed to curb the consumption of unhealthy foods. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wallis, Graham B.
1989-01-01
Some features of two recent approaches of two-phase potential flow are presented. The first approach is based on a set of progressive examples that can be analyzed using common techniques, such as conservation laws, and taken together appear to lead in the direction of a general theory. The second approach is based on variational methods, a classical approach to conservative mechanical systems that has a respectable history of application to single phase flows. This latter approach, exemplified by several recent papers by Geurst, appears generally to be consistent with the former approach, at least in those cases for which it is possible to obtain comparable results. Each approach has a justifiable theoretical base and is self-consistent. Moreover, both approaches appear to give the right prediction for several well-defined situations.
Transperitoneal versus extraperitoneal robotic-assisted radical prostatectomy: which one?
Atug, F; Thomas, R
2007-06-01
As robotic surgery has proliferated, both in availability and in popularity, several matters remain unresolved in the burgeoning field of robotic radical prostatectomy. Matters commonly discussed at forums on robotic prostatectomy include training, proctoring, overcoming the learning curve, positive surgical margins, quality-of-life issues, etc. Among the approaches available for robotic radical prostatectomy are the transperitoneal (TP) and the extraperitoneal (EP) approaches. Although use of the TP approach vastly outnumbers the EP approach, one must not discount the need to learn the EP approach, especially in patients who could greatly benefit from it. The obese, those who have had intraperitoneal procedures in the past, and those with ostomies (colostomy, ileostomy) should be considered candidates for the EP approach. For the beginner, familiarizing oneself with the TP approach may be the quickest way to become proficient with the robot and to get over the learning curve, which varies from surgeon to surgeon. Once comfortable with the TP approach, one should consider the application of EP access when indicated. One distinct disadvantage of the EP approach is the limited space available for robotic movements, which is why one would prefer gaining experience with the TP approach before forging into the EP approach. Certainly, adequate balloon dissection of the retroperitoneal space above the bladder is critical, as is additional dissection with the camera in place. Another criticism of the EP approach is that one may not have enough space or ability to perform a complete pelvic lymph node dissection. However, in experienced hands, one is able to do a very comparable job. 
Though the TP approach would continue to be the premium approach for robotic and laparoscopic radical prostatectomy, one should familiarize oneself with the EP approach since this can clearly be applied to the patient with the correct indication.
Role of Soft Computing Approaches in HealthCare Domain: A Mini Review.
Gambhir, Shalini; Malik, Sanjay Kumar; Kumar, Yugal
2016-12-01
In the present era, soft computing approaches play a vital role in solving different kinds of problems and provide promising solutions. Due to their popularity, these approaches have also been applied to healthcare data for effectively diagnosing diseases and obtaining better results than traditional approaches. Soft computing approaches have the ability to adapt themselves to the problem domain. Another aspect is a good balance between exploration and exploitation processes. These aspects make soft computing approaches powerful, reliable and efficient, and thus suitable and competent for healthcare data. The first objective of this review paper is to identify the various soft computing approaches used for diagnosing and predicting diseases. The second objective is to identify the various diseases to which these approaches are applied. The third objective is to categorize the soft computing approaches for clinical support systems. In the literature, a large number of soft computing approaches have been applied to effectively diagnose and predict diseases from healthcare data, including particle swarm optimization, genetic algorithms, artificial neural networks, and support vector machines. A detailed discussion of these approaches is presented in the literature section. This work summarizes the soft computing approaches used in the healthcare domain in the last decade. These approaches are categorized into five categories based on methodology: classification-model-based systems, expert systems, fuzzy and neuro-fuzzy systems, rule-based systems, and case-based systems. Many techniques are discussed in the above categories, and all discussed techniques are also summarized in tables. 
This work also reports the accuracy rates of soft computing techniques, with tabular information provided for each category including author details, technique, disease, and utility/accuracy.
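As a toy illustration of the artificial-neural-network family of classifiers that the review surveys (not any specific clinical system from the paper; the function name and data are invented for the sketch), a single-layer perceptron can be trained in pure Python:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a minimal single-layer perceptron with a step activation.

    samples : list of numeric feature tuples
    labels  : list of 0/1 class labels
    Returns learned weights and bias after simple error-driven updates.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Step activation: fire if the weighted sum exceeds zero
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # Perceptron learning rule: nudge weights toward the target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

On a linearly separable toy problem (e.g. the logical OR of two binary features) this converges in a handful of epochs; real diagnostic classifiers in the surveyed literature are of course far larger and trained on clinical datasets.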
Havelin, Leif I; Furnes, Ove; Baste, Valborg; Nordsletten, Lars; Hovik, Oystein; Dimmen, Sigbjorn
2014-01-01
Background: The surgical approach in total hip arthroplasty (THA) is often based on surgeon preference and local traditions. The anterior muscle-sparing approach has recently gained popularity in Europe. We tested the hypothesis that patient satisfaction, pain, function, and health-related quality of life (HRQoL) after THA is not related to the surgical approach. Patients: 1,476 patients identified through the Norwegian Arthroplasty Register were sent questionnaires 1–3 years after undergoing THA in the period from January 2008 to June 2010. Patient-reported outcome measures (PROMs) included the hip disability osteoarthritis outcome score (HOOS), the Western Ontario and McMaster Universities osteoarthritis index (WOMAC), health-related quality of life (EQ-5D-3L), visual analog scales (VAS) addressing pain and satisfaction, and questions about complications. 1,273 patients completed the questionnaires and were included in the analysis. Results: Adjusted HOOS scores for pain, other symptoms, activities of daily living (ADL), sport/recreation, and quality of life were significantly worse (p < 0.001 to p = 0.03) for the lateral approach than for the anterior approach and the posterolateral approach (mean differences: 3.2–5.0). These results were related to more patient-reported limping with the lateral approach than with the anterior and posterolateral approaches (25% vs. 12% and 13%, respectively; p < 0.001). Interpretation: Patients operated with the lateral approach reported worse outcomes 1–3 years after THA surgery. Self-reported limping occurred twice as often in patients who underwent THA with a lateral approach than in those who underwent THA with an anterior or posterolateral approach. There were no significant differences in patient-reported outcomes after THA between those who underwent THA with a posterolateral approach and those who underwent THA with an anterior approach. PMID:24954494
Amlie, Einar; Havelin, Leif I; Furnes, Ove; Baste, Valborg; Nordsletten, Lars; Hovik, Oystein; Dimmen, Sigbjorn
2014-09-01
The surgical approach in total hip arthroplasty (THA) is often based on surgeon preference and local traditions. The anterior muscle-sparing approach has recently gained popularity in Europe. We tested the hypothesis that patient satisfaction, pain, function, and health-related quality of life (HRQoL) after THA is not related to the surgical approach. 1,476 patients identified through the Norwegian Arthroplasty Register were sent questionnaires 1-3 years after undergoing THA in the period from January 2008 to June 2010. Patient-reported outcome measures (PROMs) included the hip disability osteoarthritis outcome score (HOOS), the Western Ontario and McMaster Universities osteoarthritis index (WOMAC), health-related quality of life (EQ-5D-3L), visual analog scales (VAS) addressing pain and satisfaction, and questions about complications. 1,273 patients completed the questionnaires and were included in the analysis. Adjusted HOOS scores for pain, other symptoms, activities of daily living (ADL), sport/recreation, and quality of life were significantly worse (p < 0.001 to p = 0.03) for the lateral approach than for the anterior approach and the posterolateral approach (mean differences: 3.2-5.0). These results were related to more patient-reported limping with the lateral approach than with the anterior and posterolateral approaches (25% vs. 12% and 13%, respectively; p < 0.001). Patients operated with the lateral approach reported worse outcomes 1-3 years after THA surgery. Self-reported limping occurred twice as often in patients who underwent THA with a lateral approach than in those who underwent THA with an anterior or posterolateral approach. There were no significant differences in patient-reported outcomes after THA between those who underwent THA with a posterolateral approach and those who underwent THA with an anterior approach.
Dyslexia, authorial identity, and approaches to learning and writing: a mixed methods study.
Kinder, Julianne; Elander, James
2012-06-01
Dyslexia may lead to difficulties with academic writing as well as reading. The authorial identity approach aims to help students improve their academic writing and avoid unintentional plagiarism, and could help to understand dyslexic students' approaches to writing. (1) To compare dyslexic and non-dyslexic students' authorial identity and approaches to learning and writing; (2) to compare correlations between approaches to writing and approaches to learning among dyslexic and non-dyslexic students; (3) to explore dyslexic students' understandings of authorship and beliefs about dyslexia, writing and plagiarism. Dyslexic (n= 31) and non-dyslexic (n= 31) university students. Questionnaire measures of self-rated confidence in writing, understanding of authorship, knowledge to avoid plagiarism, and top-down, bottom-up and pragmatic approaches to writing (Student Authorship Questionnaire; SAQ), and deep, surface and strategic approaches to learning (Approaches and Study Skills Inventory for Students; ASSIST), plus qualitative interviews with dyslexic students with high and low SAQ scores. Dyslexic students scored lower for confidence in writing, understanding authorship, and strategic approaches to learning, and higher for surface approaches to learning. Correlations among SAQ and ASSIST scores were larger and more frequently significant among non-dyslexic students. Self-rated knowledge to avoid plagiarism was associated with a top-down approach to writing among dyslexic students and with a bottom-up approach to writing among non-dyslexic students. All the dyslexic students interviewed described how dyslexia made writing more difficult and reduced their confidence in academic writing, but they had varying views about whether dyslexia increased the risk of plagiarism. Dyslexic students have less strong authorial identities, and less congruent approaches to learning and writing. 
Knowledge to avoid plagiarism may be more salient for dyslexic students, who may benefit from specific interventions to increase confidence in writing and understanding of authorship. Further research could investigate how dyslexic students develop approaches to academic writing, and how that could be affected by perceived knowledge to avoid plagiarism. ©2011 The British Psychological Society.
Pollock, Alex; Baer, Gillian; Langhorne, Peter; Pomeroy, Valerie
2007-05-01
To determine whether there is a difference in global dependency and functional independence in patients with stroke associated with different approaches to physiotherapy treatment. We searched the Cochrane Stroke Group Trials Register (last searched May 2005), Cochrane Central Register of Controlled Trials (CENTRAL) (Cochrane Library Issue 2, 2005), MEDLINE (1966 to May 2005), EMBASE (1980 to May 2005) and CINAHL (1982 to May 2005). We contacted experts and researchers with an interest in stroke rehabilitation. Inclusion criteria were: (a) randomized or quasi-randomized controlled trials; (b) adults with a clinical diagnosis of stroke; (c) physiotherapy treatment approaches aimed at promoting postural control and lower limb function; (d) measures of disability, motor impairment or participation. Two independent reviewers categorized identified trials according to the inclusion/exclusion criteria, documented the methodological quality and extracted the data. Twenty trials (1087 patients) were included in the review. Comparisons included: neurophysiological approach versus other approach; motor learning approach versus other approach; mixed approach versus other approach for the outcomes of global dependency and functional independence. A mixed approach was significantly more effective than no treatment control at improving functional independence (standardized mean difference (SMD) 0.94, 95% confidence interval (CI) 0.08 to 1.80). There were no significant differences found for any other comparisons. Physiotherapy intervention, using a 'mix' of components from different 'approaches' is more effective than no treatment control in attaining functional independence following stroke. There is insufficient evidence to conclude that any one physiotherapy 'approach' is more effective in promoting recovery of disability than any other approach.
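The standardized mean differences (SMD) with 95% confidence intervals reported in reviews like the one above can be computed with the generic Cohen's d formula and a large-sample variance approximation. This is a hedged sketch of the standard textbook computation, not the Cochrane review's exact procedure, and the function name is illustrative:

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) with an approximate 95% CI.

    (m1, sd1, n1) and (m2, sd2, n2) are the mean, standard deviation,
    and sample size of the two groups being compared.
    """
    # Pool the two group standard deviations
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Large-sample variance approximation for d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return d, (d - 1.96 * se, d + 1.96 * se)
```

For example, two groups of 50 whose means differ by one pooled standard deviation give d = 1.0; an interval like the review's SMD 0.94 (95% CI 0.08 to 1.80) is wide because the contributing trials were small.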
Wiers, Corinde E; Stelzel, Christine; Park, Soyoung Q; Gawron, Christiane K; Ludwig, Vera U; Gutwinski, Stefan; Heinz, Andreas; Lindenmeyer, Johannes; Wiers, Reinout W; Walter, Henrik; Bermpohl, Felix
2014-02-01
Behavioral studies have shown an alcohol-approach bias in alcohol-dependent patients: the automatic tendency to faster approach than avoid alcohol compared with neutral cues, which has been associated with craving and relapse. Although this is a well-studied psychological phenomenon, little is known about the brain processes underlying automatic action tendencies in addiction. We examined 20 alcohol-dependent patients and 17 healthy controls with functional magnetic resonance imaging (fMRI), while performing an implicit approach-avoidance task. Participants pushed and pulled pictorial cues of alcohol and soft-drink beverages, according to a content-irrelevant feature of the cue (landscape/portrait). The critical fMRI contrast regarding the alcohol-approach bias was defined as (approach alcohol>avoid alcohol)>(approach soft drink>avoid soft drink). This was reversed for the avoid-alcohol contrast: (avoid alcohol>approach alcohol)>(avoid soft drink>approach soft drink). In comparison with healthy controls, alcohol-dependent patients had stronger behavioral approach tendencies for alcohol cues than for soft-drink cues. In the approach-alcohol fMRI contrast, patients showed larger blood-oxygen-level-dependent responses in the nucleus accumbens and medial prefrontal cortex, regions involved in reward and motivational processing. In alcohol-dependent patients, alcohol-craving scores were positively correlated with activity in the amygdala for the approach-alcohol contrast. The dorsolateral prefrontal cortex was not activated in the avoid-alcohol contrast in patients vs controls. Our data suggest that brain regions that have a key role in reward and motivation are associated with the automatic alcohol-approach bias in alcohol-dependent patients.
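The double-difference contrast defined in this abstract can be written as (approach alcohol - avoid alcohol) - (approach soft drink - avoid soft drink). A minimal numeric sketch, with invented BOLD values purely for illustration:

```python
# Double-difference (interaction) contrast for the alcohol-approach bias.
# The response values below are hypothetical, not data from the study.
def interaction_contrast(app_alc, av_alc, app_soft, av_soft):
    """(approach alcohol > avoid alcohol) > (approach soft drink > avoid soft drink)"""
    return (app_alc - av_alc) - (app_soft - av_soft)

bias = interaction_contrast(app_alc=1.2, av_alc=0.4, app_soft=0.9, av_soft=0.7)
# a positive value indicates alcohol-specific approach-related activity
```

The subtraction of the soft-drink difference removes generic approach/avoid effects, leaving only activity specific to alcohol cues.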
Pittig, Andre; Hengen, Kristina; Bublatzky, Florian; Alpers, Georg W
2018-04-22
The reduction of avoidance behavior is a central target in the treatment of anxiety disorders, but how approach of fear-relevant stimuli may be initiated has rarely been studied. In two studies, the impact of hypothetical monetary and symbolic social incentives on approach-avoidance behavior was examined. In Study 1, individuals high or low on fear of spiders (N = 84) could choose to approach a fear-relevant versus a neutral stimulus, which were equally rewarded. In a subsequent micro-intervention, approaching the fear-relevant stimulus was differentially rewarded either by monetary or social incentives. In Study 2 (N = 76), initial incentives for approach were discontinued to investigate the stability of approach. Hypothetical monetary and symbolic social incentives reduced or eliminated initial avoidance, even in highly fearful individuals. Approach resulted in a decrease of self-reported aversiveness towards the fear-relevant stimulus. However, even after successful approach, fearful individuals showed significant avoidance behavior when incentives for approach were discontinued. Future research should investigate the long-term effects of prolonged approach incentives on multiple levels of fear (e.g., self-report, behavioral, physiological). It should also be tested whether such an intervention actually improves compliance with exposure-based interventions. The present findings highlight that incentives are useful to initiate approach towards a feared stimulus. Although incentive-based approach may neither fully eliminate avoidance nor negative feelings towards the feared stimulus, such operant interventions may set the stage for more extensive extinction training. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cousijn, Janna; Snoek, Robin W M; Wiers, Reinout W
2013-09-01
Experimental laboratory studies suggest that the approach bias (relatively fast approach responses) toward substance-related materials plays an important role in problematic substance use. How this bias is moderated by intention to use versus recent use remains unknown. Moreover, the relationship between approach bias and other motivational processes (satiation and craving) and executive functioning remains unclear. The aim of this study was to investigate the cannabis approach bias before and after cannabis use in a real-life setting (Amsterdam coffee shops) and to assess the relationship between approach bias, craving, satiation, cannabis use, and response inhibition. Cannabis, tobacco, and neutral approach and avoidance action tendencies were measured with the Approach Avoidance Task and compared between 42 heavy cannabis users with the intention to use and 45 heavy cannabis users shortly after cannabis use. The classical Stroop was used to measure response inhibition. Multiple regression analyses were conducted to investigate relationships between approach bias, satiation, craving, cannabis use, and response inhibition. In contrast to the hypotheses, heavy cannabis users with the intention to use did not show a cannabis approach bias, whereas intoxicated cannabis users did show an approach bias regardless of image category. This could be attributed to a general slowing of avoidance action tendencies. Moreover, craving was negatively associated with the approach bias, and no relationships were observed between the cannabis approach bias, satiation, prior cannabis use, and response inhibition. Cannabis intoxication in a real-life setting inhibited general avoidance. Expression of the cannabis approach bias appeared not to be modulated by satiation or response inhibition.
Commercial dissemination approaches for solar home systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrado, E.
1997-12-01
The author discusses the issue of providing solar home systems to primarily rural areas from the perspective of how to commercialize the process. He considers two different approaches: an open market approach and an exclusive market approach. He describes examples of the exclusive market approach now in process in Argentina and Brazil. Drawing on his banking background, he discusses the business aspects in detail and points out the strengths and weaknesses of both approaches to developing such systems.
Valuing a long-term care facility.
Mellen, C M
1992-10-01
The business valuation industry generally uses at least one of three basic approaches to value a long-term care facility: the cost approach, sales comparison approach, or income approach. The approach that is chosen and the resulting weight that is applied to it depend largely on the circumstances involved. Because a long-term care facility is a business enterprise, more weight usually is given to the income approach which factors into the estimate of value both the tangible and intangible assets of the facility.
Colling, Lincoln J; Williamson, Kellie
2014-01-01
Joint actions, such as music and dance, rely crucially on the ability of two, or more, agents to align their actions with great temporal precision. Within the literature that seeks to explain how this action alignment is possible, two broad approaches have appeared. The first, what we term the entrainment approach, has sought to explain these alignment phenomena in terms of the behavioral dynamics of the system of two agents. The second, what we term the emulator approach, has sought to explain these alignment phenomena in terms of mechanisms, such as forward and inverse models, that are implemented in the brain. They have often been pitched as alternative explanations of the same phenomena; however, we argue that this view is mistaken, because, as we show, these two approaches are engaged in distinct, and not mutually exclusive, explanatory tasks. While the entrainment approach seeks to uncover the general laws that govern behavior, the emulator approach seeks to uncover mechanisms. We argue that it is possible to do both and that the entrainment approach must pay greater attention to the mechanisms that support the behavioral dynamics of interest. In short, the entrainment approach must be transformed into a neuroentrainment approach by adopting a mechanistic view of explanation and by seeking mechanisms that are implemented in the brain.
Markovian master equations for quantum thermal machines: local versus global approach
NASA Astrophysics Data System (ADS)
Hofer, Patrick P.; Perarnau-Llobet, Martí; Miranda, L. David M.; Haack, Géraldine; Silva, Ralph; Bohr Brask, Jonatan; Brunner, Nicolas
2017-12-01
The study of quantum thermal machines, and more generally of open quantum systems, often relies on master equations. Two approaches are mainly followed. On the one hand, there is the widely used, but often criticized, local approach, where machine sub-systems locally couple to thermal baths. On the other hand, in the more established global approach, thermal baths couple to global degrees of freedom of the machine. There has been debate as to which of these two conceptually different approaches should be used in situations out of thermal equilibrium. Here we compare the local and global approaches against an exact solution for a particular class of thermal machines. We consider thermodynamically relevant observables, such as heat currents, as well as the quantum state of the machine. Our results show that the use of a local master equation is generally well justified. In particular, for weak inter-system coupling, the local approach agrees with the exact solution, whereas the global approach fails for non-equilibrium situations. For intermediate coupling, the local and the global approach both agree with the exact solution and for strong coupling, the global approach is preferable. These results are backed by detailed derivations of the regimes of validity for the respective approaches.
Flu Diagnosis System Using Jaccard Index and Rough Set Approaches
NASA Astrophysics Data System (ADS)
Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip
2018-04-01
Jaccard index and rough set approaches have frequently been implemented in decision support systems across a variety of application domains. Both approaches are appropriate for categorical data analysis. This paper presents applications of set operations to flu diagnosis systems based on two different approaches, the Jaccard index and rough sets. Both approaches are built on set-operation concepts, namely intersection and subset. A step-by-step procedure is demonstrated for each approach in diagnosing flu. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, rough sets are used to build decision support rules; these rules are established through redundant-data analysis and the elimination of unclassified elements. A number of data sets are used to exercise the step-by-step procedure of each approach. The results show that rough sets can support the Jaccard approach in establishing decision support rules. The Jaccard index is the better approach for identifying patients in the worst condition, while patients who definitely or possibly have (or do not have) flu can be determined using the rough set approach. The resulting rules may improve the performance of medical diagnosis systems, making preliminary flu diagnosis easier for inexperienced doctors and for patients.
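A minimal sketch of the two set-operation measures described above, using hypothetical symptom sets rather than the paper's data:

```python
# Jaccard similarity plus a subset test over categorical symptom sets.
# The symptom sets below are invented for illustration only.
def jaccard_index(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

patient = {"fever", "cough", "headache"}
flu_profile = {"fever", "cough", "sore throat", "headache"}

sim = jaccard_index(patient, flu_profile)   # 3 shared symptoms out of 4 total
dissim = 1 - sim                            # dissimilarity index
covered = patient <= flu_profile            # subset test: all symptoms match the profile
```

The Jaccard index gives a graded similarity between a patient's symptoms and a disease profile, while the subset test loosely mirrors the rough-set style check for cases that "definitely" fall inside a decision class.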
The Effects of Approach-Avoidance Modification on Social Anxiety Disorder: A Pilot Study
Asnaani, Anu; Rinck, Mike; Becker, Eni; Hofmann, Stefan G.
2014-01-01
Cognitive bias modification has recently been discussed as a possible intervention for mental disorders. A specific form of this novel treatment approach is approach-avoidance modification. In order to examine the efficacy of approach-avoidance modification for positive stimuli associated with social anxiety, we recruited 43 individuals with social anxiety disorder and randomly assigned them to a training (implicit training to approach smiling faces) or a control (equal approach and avoidance of smiling faces) condition in three sessions over the course of a one-week period. Dependent measures included clinician ratings, self-report measures of social anxiety, and overt behavior during behavioral approach tasks. No group differences in any of the outcome measures were observed after training. In addition, while individuals in the training group showed increased approach tendency in one of the sessions, this effect was inconsistent across the three sessions and did not result in long-term changes in implicit approach tendencies between the groups over the course of the entire study. These results suggest that approach-avoidance modification might result in short-lasting effects on implicit approach tendencies towards feared positive stimuli, but this modification may not result in meaningful behavioral change or symptom reduction in individuals with social anxiety disorder. PMID:24659832
The approaches to the didactics of physics in the Czech Republic - Historical development
NASA Astrophysics Data System (ADS)
Žák, Vojtěch
2017-01-01
The aim of this paper is to describe the approaches to the didactics of physics that have appeared in the Czech Republic over the course of its development and to discuss, above all, their relationships with other fields. This is potentially beneficial to understanding the current situation of the Czech didactics of physics and to forecasting its future development. The main part of the article describes the particular approaches of the Czech didactics of physics, namely the methodological, application, integration and communication approaches, in chronological order. Special attention is paid to the relationships between the didactics of physics and physics itself, pedagogy and other fields. It is evident that the methodological approach is closely connected to physics, while the application approach stems essentially from pedagogy. The integration approach seeks to utilize other scientific fields to develop the didactics of physics. The communication approach proved to be the most elaborate; it belongs to the concepts that have strongly influenced current didactical thinking in the Czech Republic, in other fields as well (including the didactics of socio-humanistic fields). Despite the importance of the communication approach, it should be acknowledged that the other approaches are, to a certain extent, employed as well and co-exist.
Peralta, Louisa R; Dudley, Dean A; Cotton, Wayne G
2016-05-01
School-based programs represent an ideal setting to enhance healthy eating, as most children attend school regularly and consume at least one meal and a number of snacks at school each day. However, current research reports that elementary school teachers often display low levels of nutritional knowledge, self-efficacy, and skills to effectively deliver nutrition education. The purpose of this review was to understand the availability and quality of resources that are accessible for elementary school teachers to use to support curriculum delivery or nutrition education programs. The review included 32 resources from 4 countries in the final analysis from 1989 to 2014. The 32 resources exhibited 8 dominant teaching strategies: curriculum approaches; cross-curricular approaches; parental involvement; experiential learning approaches; contingent reinforcement approaches; literary abstraction approaches; games-based approaches; and web-based approaches. The resources were accessible to elementary school teachers, with all the resources embedding curriculum approaches, and most of the resources embedding parental involvement strategies. Resources were less likely to embed cross-curricular and experiential learning approaches, as well as contingent reinforcement approaches, despite recent research suggesting that the most effective evidence-based strategies for improving healthy eating in elementary school children are cross-curricular and experiential learning approaches. © 2016, American School Health Association.
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Jonnson, Jon E.; Barry, John S.
1996-01-01
Adequately presenting all necessary information on an approach chart represents a challenge for cartographers. Since many tasks associated with using approach charts are cognitive (e.g., planning the approach and monitoring its progress), and since the characteristic of a successful interface is one that conforms to the users' mental models, understanding pilots' underlying models of approach chart information would greatly assist cartographers. To provide such information, a new methodology was developed for this study that enhances traditional information requirements analyses by combining psychometric scaling techniques with a simulation task to provide quantifiable links between pilots' cognitive representations of approach information and their use of approach information. Results of this study should augment previous information requirements analyses by identifying what information is acquired, when it is acquired, and what presentation concepts might facilitate its efficient use by better matching the pilots' cognitive model of the information. The primary finding in this study indicated that pilots mentally organize approach chart information into ten primary categories: communications, geography, validation, obstructions, navigation, missed approach, final items, other runways, visibility requirement, and navigation aids. These similarity categories were found to underlie the pilots' information acquisitions, other mental models, and higher level cognitive processes that are used to accomplish their approach and landing tasks.
Knowledge brokering for healthy aging: a scoping review of potential approaches.
Van Eerd, Dwayne; Newman, Kristine; DeForge, Ryan; Urquhart, Robin; Cornelissen, Evelyn; Dainty, Katie N
2016-10-19
Developing a healthcare delivery system that is more responsive to the future challenges of an aging population is a priority in Canada. The World Health Organization acknowledges the need for knowledge translation frameworks in aging and health. Knowledge brokering (KB) is a specific knowledge translation approach that includes making connections between people to facilitate the use of evidence. Knowledge gaps exist about KB roles, approaches, and guiding frameworks. The objective of the scoping review is to identify and describe KB approaches and the underlying conceptual frameworks (models, theories) used to guide the approaches that could support healthy aging. Literature searches were done in PubMed, EMBASE, PsycINFO, EBM reviews (Cochrane Database of systematic reviews), CINAHL, and SCOPUS, as well as Google and Google Scholar using terms related to knowledge brokering. Titles, abstracts, and full reports were reviewed independently by two reviewers who came to consensus on all screening criteria. Documents were included if they described a KB approach and details about the underlying conceptual basis. Data about KB approach, target stakeholders, KB outcomes, and context were extracted independently by two reviewers. Searches identified 248 unique references. Screening for inclusion revealed 19 documents that described 15 accounts of knowledge brokering and details about conceptual guidance and could be applied in healthy aging contexts. Eight KB elements were detected in the approaches though not all approaches incorporated all elements. The underlying conceptual guidance for KB approaches varied. Specific KB frameworks were referenced or developed for nine KB approaches while the remaining six cited more general KT frameworks (or multiple frameworks) as guidance. The KB approaches that we found varied greatly depending on the context and stakeholders involved. Three of the approaches were explicitly employed in the context of healthy aging. 
Common elements of KB approaches that could be conducted in healthy aging contexts focussed on acquiring, adapting, and disseminating knowledge and networking (linkage). The descriptions of the guiding conceptual frameworks (theories, models) focussed on linkage and exchange but varied across approaches. Future research should gather KB practitioner and stakeholder perspectives on effective practices to develop KB approaches for healthy aging.
Jiang, Ling; Yang, Christopher C
2017-09-01
The rapid growth of online health social websites has captured a vast amount of healthcare information and made the information easy to access for health consumers. E-patients often use these social websites for informational and emotional support. However, health consumers could be easily overwhelmed by the overloaded information. Healthcare information searching can be very difficult for consumers, not to mention that most of them are not skilled information searchers. In this work, we investigate the approaches for measuring user similarity in online health social websites. By recommending similar users to consumers, we can help them to seek informational and emotional support in a more efficient way. We propose to represent the healthcare social media data as a heterogeneous healthcare information network and introduce the local and global structural approaches for measuring user similarity in a heterogeneous network. We compare the proposed structural approaches with the content-based approach. Experiments were conducted on a dataset collected from a popular online health social website, and the results showed that the content-based approach performed better for inactive users, while structural approaches performed better for active users. Moreover, the global structural approach outperformed the local structural approach for all user groups. In addition, we conducted experiments on local and global structural approaches using different weight schemas for the edges in the network. Leverage performed the best for both local and global approaches. Finally, we integrated different approaches and demonstrated that the hybrid method yielded better performance than the individual approaches. The results indicate that content-based methods can effectively capture the similarity of inactive users who usually have focused interests, while structural methods can achieve better performance when rich structural information is available. 
Local structural approach only considers direct connections between nodes in the network, while global structural approach takes the indirect connections into account. Therefore, the global similarity approach can deal with sparse networks and capture the implicit similarity between two users. Different approaches may capture different aspects of the similarity relationship between two users. When we combine different methods together, we could achieve a better performance than using each individual method. Copyright © 2017 Elsevier B.V. All rights reserved.
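As a rough illustration of the local-versus-global distinction described above: a local score can compare only direct neighbours, while a global score can credit indirect connections. The toy network, node names, and the walk-based global score below are all hypothetical, not the paper's method:

```python
from collections import defaultdict

# Toy undirected user network; the edges are invented for illustration.
edges = [("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u3", "u4"), ("u4", "u5")]
nbrs = defaultdict(set)
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

def local_sim(u, v):
    """Local structural similarity: Jaccard overlap of direct neighbours."""
    return len(nbrs[u] & nbrs[v]) / len(nbrs[u] | nbrs[v])

def global_sim(u, v, steps=3, damping=0.8):
    """A crude global score: damped count of walks of length <= steps
    from u that reach v, so indirectly connected users score above zero."""
    score, frontier = 0.0, {u: 1.0}
    for k in range(1, steps + 1):
        nxt = defaultdict(float)
        for node, weight in frontier.items():
            for n in nbrs[node]:
                nxt[n] += weight
        score += damping ** k * nxt.get(v, 0.0)
        frontier = nxt
    return score
```

Here u1 and u5 share no direct neighbours, so their local score is zero, yet the global score is positive because short walks connect them through u3 and u4, which is the sparse-network behaviour the passage describes.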
ERIC Educational Resources Information Center
Figari, Francesco; Iacovou, Maria; Skew, Alexandra J.; Sutherland, Holly
2012-01-01
In this paper, we evaluate income distributions in four European countries (Austria, Italy, Spain and Hungary) using two complementary approaches: a standard approach based on reported incomes in survey data, and a microsimulation approach, where taxes and benefits are simulated. These two approaches may be expected to generate slightly different…
ERIC Educational Resources Information Center
Hendrix, William H.
This report focuses on the problem of how to improve leadership effectiveness in order to improve overall organization effectiveness. First, three different approaches to leadership behavior are presented: Trait Approach, Behavioral Approach, and Situational Approach. Next, reviews of the leadership literature and of eight contingency models of…
ERIC Educational Resources Information Center
O'Sullivan, Margo C.
2001-01-01
Examines Namibia's communicative approach to teaching English speaking and listening skills by exploring the extent to which this approach is appropriate to the Namibian context. Raises the issue of transfer, specifically that communicative approaches are transferable to the Namibian context if they are simplified and adequate prescriptive…
ERIC Educational Resources Information Center
Nicol, Janni; Taplin, Jill
2012-01-01
Understanding the Steiner Waldorf Approach is a much needed source of information for those wishing to extend and consolidate their understanding of the Steiner Waldorf Approach. It will enable the reader to analyse the essential elements of the Steiner Waldorf Approach to early childhood and its relationship to quality early years…
ERIC Educational Resources Information Center
Hall, Kia M. Q.
2014-01-01
In this study, a dual-level capabilities approach to development is introduced. This approach intends to improve upon individual-focused capabilities approaches developed by Amartya Sen and Martha Nussbaum. Based upon seven months of ethnographic research in the Afro-descendant, autochthonous Garifuna community of Honduras, constructivist grounded…
Labs21 Approach to Climate Neutral Campuses
Climate Neutral Research Campuses | NREL
The Labs21 approach included a whole-building approach to energy efficiency in laboratory buildings. This website takes that approach a step further in carrying out campus-wide energy- and carbon-reduction strategies.
Ways of Teaching Values: An Outline of Six Values Approaches.
ERIC Educational Resources Information Center
Kupchenko, Ian; Parsons, Jim
Six different approaches to teaching values in the classroom are reviewed in this paper. Each approach is reviewed according to: (1) the rationale of the approach; (2) the process of valuing; (3) the teaching methods used to achieve the specific purpose to the approach; (4) an instructional mode or system of procedures used by teachers to…
ERIC Educational Resources Information Center
Tüzel, Sait
2013-01-01
Two basic approaches namely "independent lesson approach" and "integration approach" appear in teaching media literacy. Media literacy is regarded as a separate lesson in the education program like mathematics and social sciences in "independent lesson approach". However, in "integration approach",…
ERIC Educational Resources Information Center
Wiltshire, Monica
2011-01-01
"Understanding the HighScope Approach" is a much needed source of information for those wishing to extend and consolidate their understanding of the HighScope Approach. It will enable the reader to analyse the essential elements of the HighScope Approach to early childhood and its relationship to quality early years practice. Exploring…
Electronic Equipment Maintainability Data
1980-01-01
[Table residue; recoverable record fields: mission criticality (high), design approach, surveillance/search, tracking, ECCM, multichannel/multifrequency.]
Uribe-Convers, Simon; Duke, Justin R.; Moore, Michael J.; Tank, David C.
2014-01-01
• Premise of the study: We present an alternative approach for molecular systematic studies that combines long PCR and next-generation sequencing. Our approach can be used to generate templates from any DNA source for next-generation sequencing. Here we test our approach by amplifying complete chloroplast genomes, and we present a set of 58 potentially universal primers for angiosperms to do so. Additionally, this approach is likely to be particularly useful for nuclear and mitochondrial regions. • Methods and Results: Chloroplast genomes of 30 species across angiosperms were amplified to test our approach. Amplification success varied depending on whether PCR conditions were optimized for a given taxon. To further test our approach, some amplicons were sequenced on an Illumina HiSeq 2000. • Conclusions: Although here we tested this approach by sequencing plastomes, long PCR amplicons could be generated using DNA from any genome, expanding the possibilities of this approach for molecular systematic studies. PMID:25202592
Passive vibration control: a structure–immittance approach
Zhang, Sara Ying; Jiang, Jason Zheng; Neild, Simon A.
2017-01-01
Linear passive vibration absorbers, such as tuned mass dampers, often contain springs, dampers and masses, although recently there has been a growing trend to employ or supplement the mass elements with inerters. When considering possible configurations with these elements broadly, two approaches are normally used: one structure-based and one immittance-based. Both approaches have their advantages and disadvantages. In this paper, a new approach is proposed: the structure–immittance approach. Using this approach, a full set of possible series–parallel networks with predetermined numbers of each element type can be represented by structural immittances, obtained via a proposed general formulation process. Using the structural immittances, both the ability to investigate a class of absorber possibilities together (advantage of the immittance-based approach), and the ability to control the complexity, topology and element values in resulting absorber configurations (advantages of the structure-based approach) are provided at the same time. The advantages of the proposed approach are demonstrated through two case studies on building vibration suppression and automotive suspension design, respectively. PMID:28588407
The mnemonic mover: nostalgia regulates avoidance and approach motivation.
Stephan, Elena; Wildschut, Tim; Sedikides, Constantine; Zhou, Xinyue; He, Wuming; Routledge, Clay; Cheung, Wing-Yee; Vingerhoets, Ad J J M
2014-06-01
In light of its role in maintaining psychological equanimity, we proposed that nostalgia--a self-relevant, social, and predominantly positive emotion--regulates avoidance and approach motivation. We advanced a model in which (a) avoidance motivation triggers nostalgia and (b) nostalgia, in turn, increases approach motivation. As a result, nostalgia counteracts the negative impact of avoidance motivation on approach motivation. Five methodologically diverse studies supported this regulatory model. Study 1 used a cross-sectional design and showed that avoidance motivation was positively associated with nostalgia. Nostalgia, in turn, was positively associated with approach motivation. In Study 2, an experimental induction of avoidance motivation increased nostalgia. Nostalgia then predicted increased approach motivation. Studies 3-5 tested the causal effect of nostalgia on approach motivation and behavior. These studies demonstrated that experimental nostalgia inductions strengthened approach motivation (Study 3) and approach behavior as manifested in reduced seating distance (Study 4) and increased helping (Study 5). The findings shed light on nostalgia's role in regulating the human motivation system.
Dave, Vivek S; Gupta, Deepak; Yu, Monica; Nguyen, Phuong; Varghese Gupta, Sheeba
2017-02-01
The Biopharmaceutics Classification System (BCS) classifies pharmaceutical compounds based on their aqueous solubility and intestinal permeability. The BCS Class III compounds are hydrophilic molecules (high aqueous solubility) with low permeability across biological membranes. While these compounds are pharmacologically effective, poor absorption due to low permeability becomes the rate-limiting step in achieving adequate bioavailability. Several approaches have been explored and utilized for improving the permeability profiles of these compounds. These include traditional methods such as prodrugs, permeation enhancers, and ion-pairing, as well as relatively modern approaches such as nanoencapsulation and nanosizing. The most recent approaches combine or hybridize one or more traditional approaches to improve drug permeability. While some of these approaches have been extremely successful (i.e., drug products utilizing them have progressed through USFDA approval for marketing), others require further investigation to be applicable. This article discusses the commonly studied approaches for improving the permeability of BCS Class III compounds.
Phelps, Kevin D; Harmer, Luke S; Crickard, Colin V; Hamid, Nady; Sample, Katherine M; Andrews, Erica B; Seymour, Rachel B; Hsu, Joseph R
2018-06-01
Extensile approaches to the humerus are often needed when treating complex proximal or distal fractures that have extension into the humeral shaft or in those fractures that occur around implants. The 2 most commonly used approaches for more complex fractures include the modified lateral paratricipital approach and the deltopectoral approach with distal anterior extension. Although the former is well described and quantified, the latter is often associated with variable nomenclature with technical descriptions that can be confusing. Furthermore, a method to expose the entire humerus through an anterior extensile approach has not been described. Here, we illustrate and quantify a technique for connecting anterior humeral approaches in a stepwise fashion to form an aggregate anterior approach (AAA). We also describe a method for further distal extension to expose 100% of the length of the humerus and compare this approach with both the AAA and the lateral paratricipital in terms of access to critical bony landmarks, as well as the length and area of bone exposed.
Conceptualizing and Assessing Self-Enhancement Bias: A Componential Approach
Kwan, Virginia S. Y.; Kuang, Lu Lu; John, Oliver P.; Robins, Richard W.
2014-01-01
Four studies implemented a componential approach to assessing self-enhancement and contrasted this approach with 2 earlier ones: social comparison (comparing self-ratings with ratings of others) and self-insight (comparing self-ratings with ratings by others). In Study 1, the authors varied the traits being rated to identify conditions that lead to more or less similarity between approaches. In Study 2, the authors examined the effects of acquaintance on the conditions identified in Study 1. In Study 3, the authors showed that using rankings renders the self-insight approach equivalent to the component-based approach but also has limitations in assessing self-enhancement. In Study 4, the authors compared the social-comparison and the component-based approaches in terms of their psychological implications; the relation between self-enhancement and adjustment depended on the self-enhancement approach used, and the positive-adjustment correlates of the social-comparison approach disappeared when the confounding influence of the target effect was controlled. PMID:18505318
Cost approach of health care entity intangible asset valuation.
Reilly, Robert F
2012-01-01
In the valuation synthesis and conclusion process, the analyst should consider the following questions: Do the selected valuation approach(es) and method(s) accomplish the analyst's assignment? Does the selected valuation approach and method actually quantify the desired objective of the intangible asset analysis? The analyst should also consider whether the selected valuation approach and method analyzes the appropriate bundle of legal rights, and whether sufficient empirical data were available to perform the selected valuation approach and method. The valuation synthesis should consider whether sufficient data were available to make the analyst comfortable with the value conclusion, and whether the selected approach and method will be understandable to the intended audience. In the valuation synthesis and conclusion, the analyst should also consider which approaches and methods deserve the greatest consideration with respect to the intangible asset's remaining useful life (RUL). The intangible asset RUL is a consideration in each valuation approach. In the income approach, the RUL may affect the projection period for the intangible asset income subject to either yield capitalization or direct capitalization. In the cost approach, the RUL may affect the total amount of obsolescence, if any, from the estimated cost measure (that is, the intangible reproduction cost new or replacement cost new). In the market approach, the RUL may affect the selection, rejection, and/or adjustment of the comparable or guideline intangible asset sale and license transactional data.
The experienced valuation analyst will use professional judgment to weight the various value indications to conclude a final intangible asset value, based on: The analyst's confidence in the quantity and quality of available data; The analyst's level of due diligence performed on that data; The relevance of the valuation method to the intangible asset life cycle stage and degree of marketability; and The degree of variation in the range of value indications. Valuation analysts value health care intangible assets for a number of reasons. In addition to regulatory compliance reasons, these reasons include various transaction, taxation, financing, litigation, accounting, bankruptcy, and planning purposes. The valuation analyst should consider all generally accepted intangible asset valuation approaches, methods, and procedures. Many valuation analysts are more familiar with market approach and income approach valuation methods. However, there are numerous instances when cost approach valuation methods are also applicable to the health care intangible asset valuation. This discussion summarized the analyst's procedures and considerations with regard to the cost approach. The cost approach is often applicable to the valuation of intangible assets in the health care industry. However, the cost approach is only applicable if the valuation analyst (1) appropriately considers all of the cost components and (2) appropriately identifies and quantifies all obsolescence allowances. Regardless of the health care intangible asset or the reason for the valuation, the analyst should be familiar with all generally accepted valuation approaches and methods. And, the valuation analyst should have a clear, convincing, and cogent rationale for (1) accepting each approach and method applied and (2) rejecting each approach and method not applied. That way, the valuation analyst will best achieve the purpose and objective of the health care intangible asset valuation.
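The weighting of value indications described above can be sketched in a few lines; the approach names, dollar values, and weights below are hypothetical, since the article prescribes professional judgment rather than a formula:

```python
# Minimal sketch of reconciling value indications from the cost, market,
# and income approaches via judgment-assigned weights (illustrative only).
def reconcile(indications, weights):
    # indications, weights: dicts keyed by approach name; weights sum to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(indications[k] * weights[k] for k in indications)

value = reconcile(
    {"cost": 1_200_000, "market": 1_500_000, "income": 1_400_000},
    {"cost": 0.2, "market": 0.5, "income": 0.3},
)
```

The weights stand in for the analyst's confidence in each approach's data quality, relevance to the asset's life-cycle stage, and the spread of the indications.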
Pollock, A; Baer, G; Pomeroy, V; Langhorne, P
2007-01-24
There are a number of different approaches to physiotherapy treatment following stroke that, broadly speaking, are based on neurophysiological, motor learning and orthopaedic principles. Some physiotherapists base their treatment on a single approach, while others use a mixture of components from a number of different approaches. To determine if there is a difference in the recovery of postural control and lower limb function in patients with stroke if physiotherapy treatment is based on orthopaedic or neurophysiological or motor learning principles, or on a mixture of these treatment principles. We searched the Cochrane Stroke Group Trials Register (last searched May 2005), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 2, 2005), MEDLINE (1966 to May 2005), EMBASE (1980 to May 2005) and CINAHL (1982 to May 2005). We contacted experts and researchers with an interest in stroke rehabilitation. Randomised or quasi-randomised controlled trials of physiotherapy treatment approaches aimed at promoting the recovery of postural control and lower limb function in adult participants with a clinical diagnosis of stroke. Outcomes included measures of disability, motor impairment or participation. Two review authors independently categorised the identified trials according to the inclusion and exclusion criteria, documented their methodological quality, and extracted the data. Twenty-one trials were included in the review, five of which were included in two comparisons. Eight trials compared a neurophysiological approach with another approach; eight compared a motor learning approach with another approach; and eight compared a mixed approach with another approach. A mixed approach was significantly more effective than no treatment or placebo control for improving functional independence (standardised mean difference (SMD) 0.94, 95% confidence intervals (CI) 0.08 to 1.80). 
There was no significant evidence that any single approach had a better outcome than any other single approach or no treatment control. There is evidence that physiotherapy intervention, using a mix of components from different approaches, is significantly more effective than no treatment or placebo control in the recovery of functional independence following stroke. There is insufficient evidence to conclude that any one physiotherapy approach is more effective in promoting recovery of lower limb function or postural control following stroke than any other approach. We recommend that future research should concentrate on investigating the effectiveness of clearly described individual techniques and task-specific treatments, regardless of their historical or philosophical origin.
Cancer and Complementary Health Approaches
Information from the National Cancer Institute on its research activities on complementary health approaches and on complementary health approaches in cancer treatment. Toll-free in the U.S.: 1-800-4-CANCER (1-800-422-6237).
Design and evaluation of instrument approach procedure charts
DOT National Transportation Integrated Search
1993-01-01
A new format for instrument approach procedure charts has been designed. Special attention was paid to improving the readability of communication frequencies, approach course heading and missed approach instructions. Selected components of th...
Edwards, J R; Scully, J A; Brtek, M D
2000-12-01
Research into the changing nature of work requires comprehensive models of work design. One such model is the interdisciplinary framework (M. A. Campion, 1988), which integrates 4 work-design approaches (motivational, mechanistic, biological, perceptual-motor) and links each approach to specific outcomes. Unfortunately, studies of this framework have used methods that disregard measurement error, overlook dimensions within each work-design approach, and treat each approach and outcome separately. This study reanalyzes data from M. A. Campion (1988), using structural equation models that incorporate measurement error, specify multiple dimensions for each work-design approach, and examine the work-design approaches and outcomes jointly. Results show that previous studies underestimate relationships between work-design approaches and outcomes and that dimensions within each approach exhibit relationships with outcomes that differ in magnitude and direction.
Processing approaches to cognition: the impetus from the levels-of-processing framework.
Roediger, Henry L; Gallo, David A; Geraci, Lisa
2002-01-01
Processing approaches to cognition have a long history, from act psychology to the present, but perhaps their greatest boost was given by the success and dominance of the levels-of-processing framework. We review the history of processing approaches, and explore the influence of the levels-of-processing approach, the procedural approach advocated by Paul Kolers, and the transfer-appropriate processing framework. Processing approaches emphasise the procedures of mind and the idea that memory storage can be usefully conceptualised as residing in the same neural units that originally processed information at the time of encoding. Processing approaches emphasise the unity and interrelatedness of cognitive processes and maintain that they can be dissected into separate faculties only by neglecting the richness of mental life. We end by pointing to future directions for processing approaches.
Bonadio, Marcelo B; Friedman, James M; Sennett, Mackenzie L; Mauck, Robert L; Dodge, George R; Madry, Henning
2017-12-01
This study compares a traditional parapatellar retinaculum-sacrificing arthrotomy to a retinaculum-sparing arthrotomy in a porcine stifle joint as a cartilage repair model. Surgical exposure of the femoral trochlea of the stifle joint was performed in ten Yucatan pigs using either a traditional medial parapatellar approach with retinaculum incision and luxation of the patella (n = 5) or a minimally invasive (MIS) approach which spared the patellar retinaculum (n = 5). Both classical and MIS approaches provided adequate access to the trochlea, enabling the creation of cartilage defects without difficulties. Four 4 mm circular full-thickness cartilage defects were created in each trochlea. There were no intraoperative complications observed with either surgical approach. All pigs were allowed full weight-bearing and full range of motion immediately postoperatively and were euthanized between 2 and 3 weeks. Two blinded raters performed gross evaluation of the trochlear cartilage surrounding the defects according to the modified ICRS cartilage injury classification. The traditional approach was associated with increased cartilage wear compared to the MIS approach: cartilage in the traditional approach group received significantly worse scores than in the MIS approach group from both raters (3.2 vs 0.8, p = 0.01 and 2.8 vs 0, p = 0.005, respectively). The MIS approach results in less damage to the trochlear cartilage and faster return to load-bearing activities. As an arthrotomy approach in the porcine model, MIS is superior to the traditional approach.
A Quantitative Exposure Planning Tool for Surgical Approaches to the Sacroiliac Joint.
Phelps, Kevin D; Ming, Bryan W; Fox, Wade E; Bellamy, Nelly; Sims, Stephen H; Karunakar, Madhav A; Hsu, Joseph R
2016-06-01
To aid in surgical planning by quantifying and comparing the osseous exposure between the anterior and posterior approaches to the sacroiliac joint. Anterior and posterior approaches were performed on 12 sacroiliac joints in 6 fresh-frozen torsos. Visual and palpable access to relevant surgical landmarks was recorded. Calibrated digital photographs were taken of each approach and analyzed using ImageJ. The average surface areas of exposed bone were 44 and 33 cm² for the anterior and posterior approaches, respectively. The anterior iliolumbar ligament footprint could be visualized in all anterior approaches, whereas the posterior aspect could be visualized in all but one posterior approach. The anterior approach provided visual and palpable access to the anterior superior edge of the sacroiliac joint in all specimens, the posterior superior edge in 75% of specimens, and the inferior margin in 25% and 50% of specimens, respectively. The inferior sacroiliac joint was easily visualized and palpated in all posterior approaches, although access to the anterior and posterior superior edges was more limited. The anterior S1 neuroforamen was not visualized with either approach and was more consistently palpated when going posterior (33% vs. 92%). Both anterior and posterior approaches can be used for open reduction of pure sacroiliac dislocations, each with specific areas for assessing reduction. In light of current plate dimensions, fractures more than 2.5 cm lateral to the anterior iliolumbar ligament footprint are amenable to anterior plate fixation, whereas those more medial may be better addressed through a posterior approach.
Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sager, D.
1973-01-01
Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve-flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.
Critiquing: A Different Approach to Expert Computer Advice in Medicine
Miller, Perry L.
1984-01-01
The traditional approach to computer-based advice in medicine has been to design systems which simulate a physician's decision process. This paper describes a different approach to computer advice in medicine: a critiquing approach. A critiquing system first asks how the physician is planning to manage his patient and then critiques that plan, discussing the advantages and disadvantages of the proposed approach, compared to other approaches which might be reasonable or preferred. Several critiquing systems are currently in different stages of implementation. The paper describes these systems and discusses the characteristics which make each domain suitable for critiquing. The critiquing approach may prove especially well-suited in domains where decisions involve a great deal of subjective judgement.
ERIC Educational Resources Information Center
Ullah, Raza; Richardson, John T. E.; Hafeez, Muhammad
2011-01-01
There has been a paucity of research on the experiences of students at Pakistani universities. A survey of over 900 students at two universities examined their approaches to studying and perceptions of their courses. Evidence was obtained for a deep approach, a surface approach and two aspects of a strategic approach. Their perceptions were based…
ERIC Educational Resources Information Center
Puorro, Michelle
A study examined two first-grade classrooms implementing the whole language approach and two utilizing the basal reading approach to determine the differences, if any, between the treatments. The hypothesis was that the whole language reading approach when combined with a phonics program would not result in higher test scores on a standardized…
ERIC Educational Resources Information Center
Pettersson, Kerstin; Svedin, Maria; Scheja, Max; Bälter, Olle
2018-01-01
This combined interview and survey study explored the relationship between interview data and data from an inventory describing engineering students' ratings of their approaches to studying. Using the 18-item Approaches and Study Skills Inventory for Students (ASSIST) students were asked to rate their approaches to studying in relation to…
ERIC Educational Resources Information Center
King, Diane; Coughlin, Patricia Kathleen
2016-01-01
There are two approaches for providing Tier 2 interventions within Response to Intervention (RtI): standard treatment protocol (STP) and the problem-solving approach (PSA). This article describes the multi-tiered RtI prevention model being implemented across the United States through an analysis of these two approaches in reading instruction. It…
Modifying Automatic Approach Action Tendencies in Individuals with Elevated Social Anxiety Symptoms
Taylor, Charles T.; Amir, Nader
2012-01-01
Research suggests that social anxiety is associated with a reduced approach orientation for positive social cues. In the current study we examined the effect of experimentally manipulating automatic approach action tendencies on the social behavior of individuals with elevated social anxiety symptoms. The experimental paradigm comprised a computerized Approach Avoidance Task (AAT) in which participants responded to pictures of faces conveying positive or neutral emotional expressions by pulling a joystick toward themselves (approach) or by moving it to the right (sideways control). Participants were randomly assigned to complete an AAT designed to increase approach tendencies for positive social cues by pulling these cues toward themselves on the majority of trials, or to a control condition in which there was no contingency between the arm movement direction and picture type. Following the manipulation, participants took part in a relationship-building task with a trained confederate. Results revealed that participants trained to approach positive stimuli displayed greater social approach behaviors during the social interaction and elicited more positive reactions from their partner compared to participants in the control group. These findings suggest that modifying automatic approach tendencies may facilitate engagement in the types of social approach behaviors that are important for relationship development. PMID:22728645
Zhao, Xiaohui; Oppler, Scott; Dunleavy, Dana; Kroopnick, Marc
2010-10-01
This study investigated the validity of four approaches (the average, most recent, highest-within-administration, and highest-across-administration approaches) of using repeaters' Medical College Admission Test (MCAT) scores to predict Step 1 scores. Using the differential prediction method, this study investigated the magnitude of differences in the expected Step 1 total scores between MCAT nonrepeaters and three repeater groups (two-time, three-time, and four-time test takers) for the four scoring approaches. For the average score approach, matriculants with the same MCAT average are expected to achieve similar Step 1 total scores regardless of whether the individual attempted the MCAT exam once or multiple times. For the other three approaches, repeaters are expected to achieve lower Step 1 scores than nonrepeaters; for a given MCAT score, as the number of attempts increases, the expected Step 1 score decreases. The effect was strongest for the highest-across-administration approach, followed by the highest-within-administration approach, and then the most recent approach. Using the average score is the best approach for considering repeaters' MCAT scores in medical school admission decisions.
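The four repeater-scoring approaches compared above can be sketched as simple aggregations over a candidate's administrations. The section labels and scores below are illustrative placeholders, not the study's data:

```python
# Each administration is a dict mapping section name -> score,
# listed oldest-first. Section names here are hypothetical.
def average_total(admins):
    # Average approach: mean of the total score over all attempts.
    totals = [sum(a.values()) for a in admins]
    return sum(totals) / len(totals)

def most_recent_total(admins):
    # Most-recent approach: total of the last administration only.
    return sum(admins[-1].values())

def highest_within(admins):
    # Highest-within-administration: best single-sitting total.
    return max(sum(a.values()) for a in admins)

def highest_across(admins):
    # Highest-across-administration: best score per section,
    # combined across sittings ("superscoring").
    sections = admins[0].keys()
    return sum(max(a[s] for a in admins) for s in sections)

attempts = [{"PS": 9, "VR": 8, "BS": 10}, {"PS": 11, "VR": 7, "BS": 9}]
```

Because `highest_across` can only match or exceed any single sitting, it inflates repeaters' composites most, which is consistent with the study finding its expected Step 1 penalty to be the largest.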
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as the logarithmic, Box-Cox (with and without fitting the transformation parameter), log-sinh and inverse transformations. The study reports (1) a theoretical comparison of heteroscedasticity approaches, (2) an empirical evaluation of heteroscedasticity approaches across multiple catchments, hydrological models and performance metrics, and (3) an interpretation of the empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide probabilistic hydrological predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
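The transformational approaches named above can be written down as simple functions. This is a minimal sketch of the standard forms; the parameter values are arbitrary, and fitting them to a catchment's residuals is the actual modelling task the study evaluates:

```python
import math

def box_cox(y, lam):
    # Box-Cox transform: (y**lam - 1)/lam, reducing to log(y) as lam -> 0.
    return math.log(y) if lam == 0 else (y**lam - 1.0) / lam

def log_sinh(y, a, b):
    # log-sinh transform with fitted parameters a, b > 0.
    return math.log(math.sinh(a + b * y)) / b

def inverse(y):
    # Inverse transform.
    return 1.0 / y

# Large flows dominate the raw residual variance; a variance-stabilizing
# transform compresses high flows so residual spread is more uniform.
flows = [0.5, 2.0, 20.0, 200.0]
transformed = [box_cox(q, 0.2) for q in flows]
```

In the transformational setup, residual errors are modelled as homoscedastic in the transformed space, which implies heteroscedastic (flow-dependent) errors after back-transformation to streamflow.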
Reflecting on the challenges of choosing and using a grounded theory approach.
Markey, Kathleen; Tilki, Mary; Taylor, Georgina
2014-11-01
To explore three different approaches to grounded theory and consider some of the possible philosophical assumptions underpinning them. Grounded theory is a comprehensive yet complex methodology that offers a procedural structure that guides the researcher. However, divergent approaches to grounded theory present dilemmas for novice researchers seeking to choose a suitable research method. This is a methodology paper. This is a reflexive paper that explores some of the challenges experienced by a PhD student when choosing and operationalising a grounded theory approach. Before embarking on a study, novice grounded theory researchers should examine their research beliefs to assist them in selecting the most suitable approach. This requires an insight into the approaches' philosophical assumptions, such as those pertaining to ontology and epistemology. Researchers need to be clear about the philosophical assumptions underpinning their studies and the effects that different approaches will have on the research results. This paper presents a personal account of the journey of a novice grounded theory researcher who chose a grounded theory approach and worked within its theoretical parameters. Novice grounded theory researchers need to understand the different philosophical assumptions that influence the various grounded theory approaches, before choosing one particular approach.
Mind the gap! Three approaches to scarcity in health care.
Denier, Yvonne
2008-03-01
This paper addresses two ways in which scarcity in health care turns up and three ways in which this dual condition of scarcity can be approached. The first approach is the economic approach, which focuses on the causes of cost-increase in health care and on developing various mechanisms of rationing and priority-setting in health care. The second approach is the justice approach, which interprets scarcity as one of the Humean 'Circumstances of Justice'. Whereas these approaches interpret scarcity as a given fact, the third approach casts doubt on this interpretation. Rather, it interprets scarcity as a social, anthropological, and technologically induced construction of Modernity. This paper supports the theories of Hans Achterhuis, Ivan Illich, and Nicholas Xenos but also further elaborates their views with regard to health care by offering an approach to scarcity that interprets it as an economic translation of finitude. I argue that this approach, which entails a contemporary revaluation of the ancient Socratic attitude on human life and finitude, will be better able to deal with the pressing contemporary issues of setting limits on health care because it mitigates contemporary health care's tendency toward infinity in meeting - and creating - health care needs.
Peters, Rinne M; van Beers, Loes W A H; van Steenbergen, Liza N; Wolkenfelt, Julius; Ettema, Harmen B; Ten Have, Bas L E F; Rijk, Paul C; Stevens, Martin; Bulstra, Sjoerd K; Poolman, Rudolf W; Zijlstra, Wierd P
2018-06-01
Patient-reported outcome measures (PROMs) are used to evaluate the outcome of total hip arthroplasty (THA). We determined the effect of surgical approach on PROMs after primary THA. All primary THAs, with registered preoperative and 3 months postoperative PROMs were selected from the Dutch Arthroplasty Register. Based on surgical approach, 4 groups were discerned: (direct) anterior, anterolateral, direct lateral, and posterolateral approaches. The following PROMs were recorded: Hip disability and Osteoarthritis Outcome Score Physical function Short form (HOOS-PS); Oxford Hip Score; EQ-5D index score; EQ-5D thermometer; and Numeric Rating Scale measuring pain, both active and in rest. The difference between preoperative and postoperative scores was calculated (delta-PROM) and used as primary outcome measure. Multivariable linear regression analysis was performed for comparisons. Cohen's d was calculated as measure of effect size. All examined 4 approaches resulted in a significant increase of PROMs after primary THA in the Netherlands (n = 12,274). The anterior and posterolateral approaches were associated with significantly more improvement in HOOS-PS scores compared with the anterolateral and direct lateral approaches. Furthermore, the posterolateral and anterior approaches showed greater improvement on Numeric Rating Scale pain scores compared with the anterolateral approach. No relevant differences in delta-PROM were seen between the anterior and posterolateral surgical approaches. Anterior and posterolateral surgical approaches showed more improvement in self-reported physical functioning (HOOS-PS) compared with anterolateral and direct lateral approaches in patients receiving a primary THA. However, clinical differences were only small. Copyright © 2018 Elsevier Inc. All rights reserved.
Neural substrates of approach-avoidance conflict decision-making
Aupperle, Robin L.; Melrose, Andrew J.; Francisco, Alex; Paulus, Martin P.; Stein, Murray B.
2014-01-01
Animal approach-avoidance conflict paradigms have been used extensively to operationalize anxiety, quantify the effects of anxiolytic agents, and probe the neural basis of fear and anxiety. Results from human neuroimaging studies support that a frontal-striatal-amygdala neural circuitry is important for approach-avoidance learning. However, the neural basis of decision-making is much less clear in this context. Thus, we combined a recently developed human approach-avoidance paradigm with functional magnetic resonance imaging (fMRI) to identify neural substrates underlying approach-avoidance conflict decision-making. Fifteen healthy adults completed the approach-avoidance conflict (AAC) paradigm during fMRI. Analyses of variance were used to compare conflict to non-conflict (avoid-threat and approach-reward) conditions and to compare level of reward points offered during the decision phase. Trial-by-trial amplitude modulation analyses were used to delineate brain areas underlying decision-making in the context of approach/avoidance behavior. Conflict trials as compared to the non-conflict trials elicited greater activation within bilateral anterior cingulate cortex (ACC), anterior insula, and caudate, as well as right dorsolateral prefrontal cortex. Right caudate and lateral PFC activation was modulated by level of reward offered. Individuals who showed greater caudate activation exhibited less approach behavior. On a trial-by-trial basis, greater right lateral PFC activation related to less approach behavior. Taken together, results suggest that the degree of activation within prefrontal-striatal-insula circuitry determines the degree of approach versus avoidance decision-making. Moreover, the degree of caudate and lateral PFC activation is related to individual differences in approach-avoidance decision-making. Therefore, the AAC paradigm is ideally suited to probe anxiety-related processing differences during approach-avoidance decision-making. PMID:25224633
Cousijn, Janna; Goudriaan, Anna E; Wiers, Reinout W
2011-01-01
Aims Repeated drug exposure can lead to an approach-bias, i.e. the relatively automatically triggered tendency to approach rather than avoid drug-related stimuli. Our main aim was to study this approach-bias in heavy cannabis users with the newly developed cannabis Approach Avoidance Task (cannabis-AAT) and to investigate the predictive relationship between an approach-bias for cannabis-related materials and levels of cannabis use, craving, and the course of cannabis use. Design, settings and participants Cross-sectional assessment and six-month follow-up in 32 heavy cannabis users and 39 non-using controls. Measurements Approach and avoidance action-tendencies towards cannabis and neutral images were assessed with the cannabis AAT. During the AAT, participants pulled or pushed a joystick in response to image orientation. To generate an additional sense of approach or avoidance, pulling the joystick increased picture size while pushing decreased it. Craving was measured pre- and post-test with the multi-factorial Marijuana Craving Questionnaire (MCQ). Cannabis use frequencies and levels of dependence were measured at baseline and after a six-month follow-up. Findings Heavy cannabis users demonstrated an approach-bias for cannabis images, as compared to controls. The approach-bias predicted changes in cannabis use at six-month follow-up. The pre-test MCQ emotionality and expectancy factors were associated negatively with the approach-bias. No effects were found on levels of cannabis dependence. Conclusions Heavy cannabis users with a strong approach-bias for cannabis are more likely to increase their cannabis use. This approach-bias could be used as a predictor of the course of cannabis use to identify individuals at risk from increasing cannabis use. PMID:21518067
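The abstract does not spell out how the approach-bias score is computed; a common scoring scheme in the AAT literature is the median push-RT minus pull-RT difference per stimulus category, with neutral images serving as a baseline. A minimal sketch under that assumption (all reaction times hypothetical):

```python
from statistics import median

def aat_bias(pull_rts_ms, push_rts_ms):
    """Median push RT minus median pull RT (ms).
    Positive values = faster approach (pull) than avoidance (push)."""
    return median(push_rts_ms) - median(pull_rts_ms)

def cannabis_approach_bias(cannabis_pull, cannabis_push, neutral_pull, neutral_push):
    """Bias for cannabis images corrected by the bias for neutral images,
    removing general response tendencies unrelated to drug cues."""
    return aat_bias(cannabis_pull, cannabis_push) - aat_bias(neutral_pull, neutral_push)
```

With this scheme, a heavy user who pulls cannabis images quickly but shows no such speed-up for neutral images would obtain a positive corrected bias score.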
The interplay between motivation, self-efficacy, and approaches to studying.
Prat-Sala, Mercè; Redford, Paul
2010-06-01
The strategies students adopt in their study are influenced by a number of social-cognitive factors and impact upon their academic performance. The present study examined the interrelationships between motivation orientation (intrinsic and extrinsic), self-efficacy (in reading academic texts and essay writing), and approaches to studying (deep, strategic, and surface). The study also examined changes in approaches to studying over time. A total of 163 first-year undergraduate students in psychology at a UK university took part in the study. Participants completed the Work Preference Inventory motivation questionnaire, self-efficacy in reading and writing questionnaires and the short version of the Revised Approaches to Study Inventory. The results showed that both intrinsic and extrinsic motivation orientations were correlated with approaches to studying. The results also showed that students classified as high in self-efficacy (reading and writing) were more likely to adopt a deep or strategic approach to studying, while students classified as low in self-efficacy (reading and writing) were more likely to adopt a surface approach. More importantly, changes in students' approaches to studying over time were related to their self-efficacy beliefs, where students with low levels of self-efficacy decreased in their deep approach and increased their surface approach across time. Students with high levels of self-efficacy (both reading and writing) demonstrated no such change in approaches to studying. Our results demonstrate the important role of self-efficacy in understanding both motivation and learning approaches in undergraduate students. Furthermore, given that reading academic text and writing essays are essential aspects of many undergraduate degrees, our results provide some indication that focusing on self-efficacy beliefs amongst students may be beneficial to improving their approaches to study.
Kasthurirathne, Suranga N; Dixon, Brian E; Gichoya, Judy; Xu, Huiping; Xia, Yuni; Mamlin, Burke; Grannis, Shaun J
2016-04-01
Increased adoption of electronic health records has resulted in increased availability of free text clinical data for secondary use. A variety of approaches to obtain actionable information from unstructured free text data exist. These approaches are resource-intensive, inherently complex and rely on structured clinical data and dictionary-based approaches. We sought to evaluate the potential to obtain actionable information from free text pathology reports using routinely available tools and approaches that do not depend on dictionary-based approaches. We obtained pathology reports from a large health information exchange and evaluated the capacity to detect cancer cases from these reports using 3 non-dictionary feature selection approaches, 4 feature subset sizes, and 5 clinical decision models: simple logistic regression, naïve Bayes, k-nearest neighbor, random forest, and J48 decision tree. The performance of each decision model was evaluated using sensitivity, specificity, accuracy, positive predictive value, and area under the receiver operating characteristics (ROC) curve. Decision models parameterized using automated, informed, and manual feature selection approaches yielded similar results. Furthermore, non-dictionary classification approaches identified cancer cases present in free text reports with evaluation measures approaching and exceeding 80-90% for most metrics. Our methods are feasible and practical approaches for extracting substantial information value from free text medical data, and the results suggest that these methods can perform on par, if not better, than existing dictionary-based approaches. Given that public health agencies are often under-resourced and lack the technical capacity for more complex methodologies, these results represent potentially significant value to the public health field. Copyright © 2016 Elsevier Inc. All rights reserved.
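The general shape of such a non-dictionary pipeline (raw token features, statistical feature selection, then one of the compared decision models) can be sketched with scikit-learn. The report snippets, labels, and feature-subset size below are invented for illustration; the paper's exact settings are not given in the abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Hypothetical pathology-report snippets and cancer labels (1 = cancer case)
reports = [
    "invasive ductal carcinoma identified in biopsy specimen",
    "malignant cells present carcinoma of the breast",
    "benign fibrous tissue no evidence of malignancy",
    "normal epithelium without atypia benign findings",
]
labels = [1, 1, 0, 0]

# Non-dictionary pipeline: token features -> chi-squared feature
# selection -> naive Bayes (one of the 5 decision models compared)
clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=10)),
    ("model", MultinomialNB()),
])
clf.fit(reports, labels)
print(clf.predict(["carcinoma cells identified"]))
```

Varying `k` in `SelectKBest` and swapping the final estimator reproduces the grid of feature-subset sizes and decision models the study evaluates.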
Treatment of Children's Fears: A Strategic Utilization Approach.
ERIC Educational Resources Information Center
Protinsky, Howard
1985-01-01
Describes briefly Milton Erickson's strategic utilization approach to therapy. Discusses the usefulness of this approach in treating children's fears. Presents two case histories in which the approach successfully eliminated the fear of the child. (BH)
Financial Management: An Organic Approach
ERIC Educational Resources Information Center
Laux, Judy
2013-01-01
Although textbooks present corporate finance using a topical approach, good financial management requires an organic approach that integrates the various assignments financial managers confront every day. Breaking the tasks into meaningful subcategories, the current article offers one approach.
Yang, Pinglin; Zang, Quanjin; Kang, Jian; Li, Haopeng; He, Xijing
2016-12-01
We aimed to provide evidence for clinical choice of surgical approach in treating spinal tuberculosis, including anterior, posterior and combined approaches (combined anterior and posterior approach). A literature search up to June 2015 was performed on PubMed, Embase, Cochrane library, CNKI, Wanfang and Weipu database. Weighted mean differences (WMDs) or risk ratios (RRs) and their 95 % confidence intervals (CI) were calculated. In total, 26 studies with 2345 spinal tuberculosis adults were analyzed. Results showed advantages of the posterior approach compared with the anterior approach in operation time (WMD = 20.91; 95 % CI: 9.05-32.76), blood loss (WMD = 72.32, 95 % CI: 13.87-130.78), correction of angle (WMD = -2.47; 95 % CI: -4.04 to -0.90) and complications (RR = 1.78; 95 % CI: 1.21-2.60), and compared with the combined approach in operation time (WMD = -82.76; 95 % CI: -94.38 to -71.14), blood loss (WMD = -263.63; 95 % CI: -336.85 to -190.41), hospital stay (WMD = -4.60; 95 % CI: -5.10 to -4.10) and complications (RR = 0.36; 95 % CI: 0.23-0.58). Meanwhile, significantly larger correction of angle (WMD = -2.25; 95 % CI: -4.35 to -0.14; P = 0.04) and less loss of correction (WMD = 3.97; 95 % CI: 2.22-5.72) were found when the combined approach was compared with the anterior approach. However, the combined approach had significantly longer operation time (WMD = -41.92; 95 % CI: -52.45 to -31.38) and more blood loss (WMD = -102.18; 95 % CI: -160.45 to -43.91) than the anterior approach. The posterior approach has better clinical outcomes than the anterior or combined approach for spinal tuberculosis. However, individual assessment of each case should be considered in the clinical application of these surgical approaches.
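Each WMD and CI above comes from per-study group summaries pooled by inverse variance. A minimal fixed-effect sketch with a normal-approximation 95% CI (the illustrative numbers are not from the paper):

```python
import math

def weighted_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Mean difference and 95% CI for one study (normal approximation)."""
    d = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return d, (d - 1.96 * se, d + 1.96 * se)

def pooled_wmd(studies):
    """Fixed-effect inverse-variance pooling across studies.
    studies: iterable of (m1, sd1, n1, m2, sd2, n2) tuples."""
    weights, weighted_diffs = [], []
    for m1, sd1, n1, m2, sd2, n2 in studies:
        var = sd1 ** 2 / n1 + sd2 ** 2 / n2
        w = 1.0 / var
        weights.append(w)
        weighted_diffs.append(w * (m1 - m2))
    d = sum(weighted_diffs) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return d, (d - 1.96 * se, d + 1.96 * se)
```

A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term to each weight; the abstract does not state which model the authors used.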
Lin, Tao; Shao, Wei; Zhang, Ke; Gao, Rui; Zhou, Xuhui
2018-03-01
To compare outcomes of anterior-only (AO), posterior-only (PO), and anteroposterior (AP) surgical approaches for treatment of dystrophic cervical kyphosis in patients with neurofibromatosis 1 (NF1). This retrospective observational study included 81 patients with dystrophic cervical kyphosis secondary to NF1. Length of kyphosis, duration of halo traction, Cobb angle, C2-7-sagittal vertical axis (SVA), T1 slope, Neck Disability Index score, and postoperative complications were evaluated before and, if possible, after each surgical approach. AP approach provided the best outcomes (average spinal Cobb angle was corrected from 61.2 ± 9.1° to 5.7 ± 3.2°, P < 0.05); there was no significant difference between AO and PO approaches (P > 0.05). With regard to cervical sagittal balance, AP approach had the most improvements of C2-7-SVA (mean C2-7-SVA was corrected from 3.2 ± 9.2 mm to 12.8 ± 2.6 mm, P < 0.05); the difference between AO and PO approaches was not significant (P > 0.05). T1 slope results were similar to C2-7-SVA. Neck Disability Index score of all patients improved significantly after surgery (P < 0.05); specifically, patients who had an AP approach constituted the largest portion of the satisfied patient group. Postoperative junctional kyphosis occurred in 11 patients (1 AP approach, 6 AO approach, 4 PO approach); these findings correlated with patients with ≤5 fused segments. AP approach surgery provided the best correction of dystrophic cervical kyphosis and sagittal balance for patients with NF1. Patients undergoing an AP approach were more satisfied with their outcomes. Junctional kyphosis can be prevented effectively using an AP approach in patients with >5 fused segments. Copyright © 2017 Elsevier Inc. All rights reserved.
Griffiths, James; Carnegie, Amadeus; Kendall, Richard; Madan, Rajeev
2017-12-01
Ultrasound-guided peripheral intravenous access may present an alternative to central or intraosseous access in patients with difficult peripheral veins. Using venepuncture of a phantom model as a proxy, we investigated whether novice ultrasound users should adopt a cross-sectional or longitudinal approach when learning to access peripheral veins under ultrasound guidance. This result would inform the development of a structured training method for this procedure. We conducted a randomised controlled trial of 30 medical students. Subjects received 35 min of training, then attempted to aspirate 1 ml of synthetic blood from a deep vein in a training model under ultrasound guidance. Subjects attempted both the cross-sectional and longitudinal approaches. Group 1 used cross-sectional first, followed by longitudinal. Group 2 used longitudinal first, then cross-sectional. We measured the time from first puncture of the model's skin to aspiration of fluid, and the number of attempts required. Subjects also reported difficulty ratings for each approach. Paired sample t-tests were used for statistical analysis. The mean number of attempts was 1.13 using the cross-sectional approach, compared with 1.30 using the longitudinal approach (p = 0.17). Mean time to aspiration of fluid was 45.1 s using the cross-sectional approach and 52.8 s using the longitudinal approach (p = 0.43). The mean difficulty score out of 10 was 3.97 for the cross-sectional approach and 3.93 for the longitudinal approach (p = 0.95). We found no significant difference in effectiveness between the cross-sectional and longitudinal approaches to ultrasound-guided venepuncture when performed on a model. We believe that both approaches should be included when teaching ultrasound-guided peripheral vascular access. To confirm which approach would be best in clinical practice, we advocate future testing of both approaches on patients.
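The paired-sample t-tests used here compare the same subjects under both approaches. A sketch with hypothetical per-student times (not the study's data):

```python
from scipy import stats

# Hypothetical paired aspiration times (s): each student attempts
# both the cross-sectional and the longitudinal approach
cross_sectional = [40.0, 45.0, 50.0, 46.0]
longitudinal = [41.0, 47.0, 53.0, 50.0]

# Paired t-test on the within-subject differences
t_stat, p_value = stats.ttest_rel(cross_sectional, longitudinal)
```

Because each subject serves as their own control, the test operates on the per-subject difference scores, which is why it can detect smaller effects than an unpaired comparison of the two groups.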
Improving organ donation rates by modifying the family approach process.
Ebadat, Aileen; Brown, Carlos V R; Ali, Sadia; Guitierrez, Tim; Elliot, Eric; Dworaczyk, Sarah; Kadric, Carie; Coopwood, Ben
2014-06-01
The purpose of this study was to identify steps during family approach for organ donation that may be modified to improve consent rates of potential organ donors. Retrospective study of our local organ procurement organization (OPO) database of potential organ donors. Modifiable variables involved in the family approach of potential organ donors were collected and included race and sex of OPO representative, individual initiating approach discussion with family (RN or MD vs. OPO), length of donation discussion, use of a translator, and time of day of approach. Of 1137 potential organ donors, 661 (58%) consented and 476 (42%) declined. Consent rates were higher with matched race of donor and OPO representative (66% vs. 52%, p < 0.001), family approach by a female OPO representative (67% vs. 56%, p = 0.002), and approach initiated by the OPO representative (69% vs. 49%, p < 0.001). Consent rate also depended on the time of day the approach occurred: 6:00 am to noon (56%), noon to 6:00 pm (67%), 6:00 pm to midnight (68%), and midnight to 6:00 am (45%), p = 0.04. Family approaches that led to consent lasted longer than those that ended in a decline (67 vs. 43 minutes, p < 0.001). Independent predictors of consent to donation included female OPO representative (odds ratio [OR], 1.7; p = 0.006), approach discussion initiated by OPO representative (OR, 1.9; p = 0.001), and longer approach discussions (OR, 1.02; p < 0.001). The independent predictor of declined donation was the use of a translator (OR, 0.39; p = 0.01). Variables such as race and sex of OPO representative and time of day should be considered before approaching a family for organ donation. Avoiding translators during the approach process may improve donation rates. Education for health care providers should reinforce the importance of allowing OPO representatives to initiate the family approach for organ donation. Epidemiologic study, level IV. Therapeutic study, level IV.
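The odds ratios above come from a multivariable logistic regression; turning a fitted coefficient and its standard error into an OR with a Wald 95% CI is a simple exponentiation. A sketch with an illustrative coefficient (the paper's standard errors are not reported in the abstract):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its
    Wald confidence bounds to get an odds ratio with a 95% CI."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# e.g. a coefficient of log(1.7) ~ 0.531 corresponds to OR = 1.7
```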
NASA Astrophysics Data System (ADS)
Hong, Yuh-Fong
With the rapid growth of online courses in higher education institutions, research on quality of learning for online courses is needed. However, there is a notable lack of research in the cited literature providing evidence that online distance education promotes the quality of independent learning to which it aspires. Previous studies focused on academic outcomes and technology applications which do not monitor students' learning processes, such as their approaches to learning. Understanding students' learning processes and factors influencing quality of learning will provide valuable information for instructors and institutions in providing quality online courses and programs. The purpose of this study was to identify and investigate college biology teachers' approaches to teaching and students' learning styles, and to examine the impact of approaches to teaching and learning styles on students' approaches to learning via online instruction. Data collection included eighty-seven participants from five online biology courses at a community college in the southern area of Texas. Data analysis showed the following results. First, there were significant differences in approaches to learning among students with different learning styles. Second, there was a significant difference in students' approaches to learning between classes using different approaches to teaching. Three, the impact of learning styles on students' approaches to learning was not influenced by instructors' approaches to teaching. Two conclusions were obtained from the results. First, individuals with the ability to perceive information abstractly might be more likely to adopt deep approaches to learning than those preferring to perceive information through concrete experience in online learning environments. Second, Teaching Approach Inventory might not be suitable to measure approaches to teaching for online biology courses due to online instructional design and technology limitations. 
Based on the findings and conclusions of this study, implications for distance education and future research are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elter, M.; Schulz-Wendtland, R.; Wittenberg, T.
2007-11-15
Mammography is the most effective method for breast cancer screening available today. However, the low positive predictive value of breast biopsy resulting from mammogram interpretation leads to approximately 70% unnecessary biopsies with benign outcomes. To reduce the high number of unnecessary breast biopsies, several computer-aided diagnosis (CAD) systems have been proposed in the last several years. These systems help physicians in their decision to perform a breast biopsy on a suspicious lesion seen in a mammogram or to perform a short term follow-up examination instead. We present two novel CAD approaches that both emphasize an intelligible decision process to predict breast biopsy outcomes from BI-RADS findings. An intelligible reasoning process is an important requirement for the acceptance of CAD systems by physicians. The first approach induces a global model based on decision-tree learning. The second approach is based on case-based reasoning and applies an entropic similarity measure. We have evaluated the performance of both CAD approaches on two large publicly available mammography reference databases using receiver operating characteristic (ROC) analysis, bootstrap sampling, and the ANOVA statistical significance test. Both approaches outperform the diagnosis decisions of the physicians. Hence, both systems have the potential to reduce the number of unnecessary breast biopsies in clinical practice. A comparison of the performance of the proposed decision tree and CBR approaches with a state-of-the-art approach based on artificial neural networks (ANN) shows that the CBR approach performs slightly better than the ANN approach, which in turn results in slightly better performance than the decision-tree approach. The differences are statistically significant (p value <0.001).
On 2100 masses extracted from the DDSM database, the CBR approach for example resulted in an area under the ROC curve of A(z) = 0.89 ± 0.01, the decision-tree approach in A(z) = 0.87 ± 0.01, and the ANN approach in A(z) = 0.88 ± 0.01.
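The A(z) ± uncertainty figures above pair an ROC area with a bootstrap estimate of its variability. A sketch of that evaluation step, with labels and scores invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, scores, n_boot=2000, seed=0):
    """Bootstrap the area under the ROC curve: resample cases with
    replacement and recompute A(z), yielding a mean and 95% CI."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample lacks one class; AUC undefined
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    aucs = np.sort(aucs)
    lo = aucs[int(0.025 * len(aucs))]
    hi = aucs[int(0.975 * len(aucs))]
    return float(np.mean(aucs)), (float(lo), float(hi))
```

Running this once per classifier on the same resamples also gives paired AUC differences, which is what a significance test between the CBR, decision-tree, and ANN approaches would operate on.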
Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F
2015-08-01
Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto finite element models, which remains a great challenge. This paper proposes a node-based assignment approach and also compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating the data file of the image intensity of a bone in a MATLAB program and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns the material properties directly to each integration point of an element. Both approaches are independent from the type of elements. A number of FE meshes are tested and both give accurate solutions; comparatively the node-based approach involves less programming effort. The node-based approach is also independent from the type of analyses; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment procedure of bone material properties. It is the simplest and most powerful approach that is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
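The essence of the node-based approach is to evaluate the CT image intensity at each mesh node and store one material property per node, rather than per element integration point. A minimal sketch; the HU-to-density calibration and the Carter-Hayes-style power law constants below are illustrative assumptions, not the paper's values:

```python
def hu_to_modulus(hu):
    """Map a CT intensity (Hounsfield units) to Young's modulus (MPa).
    Linear HU->density calibration and E = c * rho^3 power law are
    assumed for illustration; real constants come from phantom calibration."""
    rho = max(0.0, 1.0 + hu / 1000.0)  # apparent density, g/cm^3 (assumed)
    return 3790.0 * rho ** 3           # illustrative Carter-Hayes-type law

def assign_node_properties(node_coords, intensity_at):
    """Node-based assignment: one modulus per node, independent of
    element type. node_coords: {node_id: (x, y, z)};
    intensity_at: callable returning the interpolated HU at a point."""
    return {nid: hu_to_modulus(intensity_at(xyz))
            for nid, xyz in node_coords.items()}
```

In the workflow described, `intensity_at` would read the MATLAB-exported intensity data, and the resulting per-node values would be fed to ABAQUS via a user subroutine that interpolates them over each element.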
Jiang, Jingyi; Zhang, Xiangru; Zhu, Xiaohu; Li, Yu
2017-03-21
During chlorine disinfection of drinking water, chlorine may react with natural organic matter (NOM) and bromide ion in raw water to generate halogenated disinfection byproducts (DBPs). To mitigate adverse effects from DBP exposure, granular activated carbon (GAC) adsorption has been considered as one of the best available technologies for removing NOM (DBP precursor) in drinking water treatment. Recently, we have found that many aromatic halogenated DBPs form in chlorination, and they act as intermediate DBPs to decompose and form commonly known DBPs including trihalomethanes and haloacetic acids. In this work, we proposed a new approach to controlling drinking water halogenated DBPs by GAC adsorption of intermediate aromatic halogenated DBPs during chlorination, rather than by GAC adsorption of NOM prior to chlorination (i.e., traditional approach). Rapid small-scale column tests were used to simulate GAC adsorption in the new and traditional approaches. Significant reductions of aromatic halogenated DBPs were observed in the effluents with the new approach; the removals of total organic halogen, trihalomethanes, and haloacetic acids by the new approach always exceeded those by the traditional approach; and the effluents with the new approach were considerably less developmentally toxic than those with the traditional approach. Our findings indicate that the new approach is substantially more effective in controlling halogenated DBPs than the traditional approach.
Méndez-Giménez, Antonio; Cecchini-Estrada, José-Antonio; Fernández-Río, Javier; Prieto Saborit, José Antonio; Méndez-Alonso, David
2017-09-20
The main objective was to analyze relationships and predictive patterns between 3x2 classroom goal structures (CGS), and motivational regulations, dimensions of self-concept, and affectivity in the context of secondary education. A sample of 1,347 secondary school students (56.6% young men, 43.4% young women) from 10 different provinces of Spain agreed to participate (M age = 13.43, SD = 1.05). Hierarchical regression analyses indicated the self-approach CGS was the most adaptive within the spectrum of self-determination, followed by the task-approach CGS. The other-approach CGS had an ambivalent influence on motivation. Task-approach and self-approach CGS predicted academic self-concept (p < .01; p < .001, respectively; R² = .134), and both along with other-approach CGS (negatively) predicted family self-concept (p < .05; p < .001; p < .01, respectively; R² = .064). Physical self-concept was predicted by the task-approach and other-approach CGS's (p < .05; p < .001, respectively; R² = .078). Finally, positive affect was predicted by all three approach-oriented CGS's (p < .001; R² = .137), whereas negative affect was predicted by other-approach (positively) and self-approach (negatively) CGS (p < .001; p < .05, respectively; R² = .028). These results expand the 3x2 achievement goal framework to include environmental factors, and reiterate that teachers should focus on raising levels of self- and task-based goals for students in their classes.
An anatomical study comparing two surgical approaches for isolated talonavicular arthrodesis.
Higgs, Zoe; Jamal, Bilal; Fogg, Quentin A; Kumar, C Senthil
2014-10-01
Two operative approaches are commonly used for isolated talonavicular arthrodesis: the medial and the dorsal approach. It is recognized that access to the lateral aspect of the talonavicular joint can be limited when using the medial approach, and it is our experience that using the dorsal approach addresses this issue. We performed an anatomical study using cadaver specimens, to compare the amount of articular surface that can be accessed by each operative approach. Medial and dorsal approaches to the talonavicular joint were performed on each of 11 cadaveric specimens (10 fresh frozen, 1 embalmed). Distraction of the joint was performed as used intraoperatively and the accessible area of articular surfaces was marked for each of the 2 approaches using a previously reported technique. Disarticulation was performed and the marked surface area was quantified using an immersion digital microscribe, allowing a 3-dimensional virtual model of the articular surfaces to be assessed. The median percentage of total accessible talonavicular articular surface area for the medial and dorsal approaches was 71% and 92%, respectively (Wilcoxon signed-rank test, P < .001). This study provides quantifiable measurements of the articular surface accessible by the medial and dorsal approaches to the talonavicular joint. These data support the use of the dorsal approach for talonavicular arthrodesis, particularly in cases where access to the lateral half of the joint is necessary. © The Author(s) 2014.
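The Wilcoxon signed-rank test fits here because the two approaches were performed on the same specimens, giving paired percentages. A sketch with hypothetical per-specimen values (not the study's measurements):

```python
from scipy.stats import wilcoxon

# Hypothetical accessible articular surface (% of total) per specimen,
# each specimen measured under both approaches
medial = [70, 71, 72, 73, 74, 75, 76, 77]
dorsal = [80, 82, 84, 86, 88, 90, 92, 94]

# Paired non-parametric test on the within-specimen differences
stat, p = wilcoxon(medial, dorsal)
```

With every dorsal value exceeding its medial pair, the smaller signed-rank sum is zero and the exact two-sided p-value for n = 8 pairs is 2/2⁸ = 0.0078125.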
Abuhamad, Alfred; Zhao, Yili; Abuhamad, Sharon; Sinkovskaya, Elena; Rao, Rashmi; Kanaan, Camille; Platt, Lawrence
2016-01-01
This study aims to validate the feasibility and accuracy of a new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination, and compare the new approach to the regular approach performed in the scheduled obstetric ultrasound examination. A new standardized six-step approach to the performance of the focused basic obstetric ultrasound examination, to evaluate fetal presentation, fetal cardiac activity, presence of multiple pregnancy, placental localization, amniotic fluid volume evaluation, and biometric measurements, was prospectively performed on 100 pregnant women between 18(+0) and 27(+6) weeks of gestation and another 100 pregnant women between 28(+0) and 36(+6) weeks of gestation. The agreement of findings for each of the six steps of the standardized six-step approach was evaluated against the regular approach. In all ultrasound examinations performed, substantial to perfect agreement (Kappa value between 0.64 and 1.00) was observed between the new standardized six-step approach and the regular approach. The new standardized six-step approach to the focused basic obstetric ultrasound examination can be performed successfully and accurately between 18(+0) and 36(+6) weeks of gestation. This standardized approach can be of significant benefit to limited resource settings and in point of care obstetric ultrasound applications.
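The agreement statistic reported here is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal stdlib sketch (the rating sequences below are invented):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_chance = sum(counts_a[k] * counts_b[k] for k in counts_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

Kappa values of 0.64-1.00, as in the study, conventionally fall in the "substantial" to "perfect" agreement bands.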
The Asian Human Resource Approach in Global Perspective.
ERIC Educational Resources Information Center
Cummings, William K.
1995-01-01
Challenges the prevailing Western approach to education by asserting that several Asian nations have and are developing a distinctive approach to human resource development. Describes characteristics of this approach and contrasts it to the Western model. (CFR)
A new window of opportunity to reject process-based biotechnology regulation
Marchant, Gary E; Stevens, Yvonne A
2015-01-01
The question of whether biotechnology regulation should be based on the process or the product has long been debated, with different jurisdictions adopting different approaches. The European Union has adopted a process-based approach, Canada has adopted a product-based approach, and the United States has implemented a hybrid system. With the recent proliferation of new methods of genetic modification, such as gene editing, process-based regulatory systems, which are premised on a binary system of transgenic and conventional approaches, will become increasingly obsolete and unsustainable. To avoid unreasonable, unfair and arbitrary results, nations that have adopted process-based approaches will need to migrate to a product-based approach that considers the novelty and risks of the individual trait, rather than the process by which that trait was produced. This commentary suggests some approaches for the design of such a product-based approach. PMID:26930116
A social-ecological systems approach for environmental management.
Virapongse, Arika; Brooks, Samantha; Metcalf, Elizabeth Covelli; Zedalis, Morgan; Gosz, Jim; Kliskey, Andrew; Alessa, Lilian
2016-08-01
Urgent environmental issues are testing the limits of current management approaches and pushing demand for innovative approaches that integrate across traditional disciplinary boundaries. Practitioners, scholars, and policy-makers alike call for increased integration of natural and social sciences to develop new approaches that address the range of ecological and societal impacts of modern environmental issues. From a theoretical perspective, social-ecological systems (SES) science offers a compelling approach for improved environmental management through the application of transdisciplinary and resilience concepts. A framework for translating SES theory into practice, however, is lacking. In this paper, we define the key components of an SES-based environmental management approach. We offer recommendations for integrating an SES approach into existing environmental management practices. Results presented are useful for management professionals that seek to employ an SES environmental management approach and scholars aiming to advance the theoretical foundations of SES science for practical application. Published by Elsevier Ltd.
Girls' soccer performance and motivation: games vs technique approach.
Chatzopoulos, Dimitris; Drakou, Amalia; Kotzamanidou, Marina; Tsorbatzoudis, Haralambos
2006-10-01
The purpose of this study was to investigate the effects of the Technique and Games approaches on girls' soccer performance and motivation. The Technique approach focuses on technique instruction using drills, whereas the Games approach places emphasis on tactic instruction with modified games. 37 girls, 12 to 13 years old, were taught 15 soccer lessons by the Technique approach and 35 girls by the Games approach. At the beginning and at the end of the study, soccer matches were videotaped and evaluated with Oslin, Mitchell, and Griffin's Game Performance Assessment Instrument. Girls' motivation was assessed on the Intrinsic Motivation Inventory. The Games group had significantly better scores after training on tactical behaviour and intrinsic motivation than the Technique group. There were no significant differences in skill execution between groups trained under the two approaches. Considering the importance of intrinsic motivation for a lifelong, physically active lifestyle, further research could focus on teaching approaches and girls' motivation.
Hirani, Shela Akbar Ali; Richter, Solina
2017-02-21
The world is progressing in terms of communication, innovative technology and cure of various diseases through advanced pharmacological preparations. Unfortunately, populations are still struggling with ill-health, disabilities, poverty, hunger, inequality, gender disparities and conflicts. Several questions come to mind in this regard: why are prosperity, health, peace and progress not evenly distributed and what is the best approach to address the issues associated with population health? The capability approach may offer a possible model. This approach is a blend of 5 key concepts: capabilities, functioning, agency, endowment, and conversion factors. It proposes an innovative approach to examine and enhance the quality of life and wellbeing of individuals. This reflective paper provides an overview of the capability approach, critically analyses population health from the theoretical lens of the capability approach and highlights the relevance of this approach to achieving the Sustainable Developmental Goals.
Laparoscopic approach for inflammatory bowel disease surgical management.
Maggiori, Léon; Panis, Yves
2012-01-01
For IBD surgical management, the laparoscopic approach offers several theoretical advantages over the open approach. However, the frequent presence of adhesions from previous surgery and the high rate of inflammatory lesions initially called its feasibility and safety into question. In the present review article, we discuss the role of the laparoscopic approach in IBD surgical management, along with its potential benefits compared with the open approach.
ERIC Educational Resources Information Center
Varunki, Maaret; Katajavuori, Nina; Postareff, Liisa
2017-01-01
Research shows that a surface approach to learning is more common among students in the natural sciences, while students representing the "soft" sciences are more likely to apply a deep approach. However, findings conflict concerning the stability of approaches to learning in general. This study explores the variation in students'…
Building Bridges Between Structural and Program Evaluation Approaches to Evaluating Policy
Heckman, James J.
2011-01-01
This paper compares the structural approach to economic policy analysis with the program evaluation approach. It offers a third way to do policy analysis that combines the best features of both approaches. We illustrate the value of this alternative approach by making the implicit economics of LATE explicit, thereby extending the interpretability and range of policy questions that LATE can answer. PMID:21743749
A Janus-Faced Approach to Learning. A Critical Discussion of Habermas' Pragmatic Approach
ERIC Educational Resources Information Center
Italia, Salvatore
2017-01-01
A realist approach to learning is what I propose here. This is based on a non-epistemic dimension whose presence is a necessary assumption for a concept of learning of a life-world as complementary to learning within a life-world. I develop my approach in opposition to Jürgen Habermas' pragmatic approach, which seems to lack something from a…
ERIC Educational Resources Information Center
Oraif, Iman M.
2016-01-01
The aim of this paper is to describe the different approaches applied to teaching writing in the L2 context and the way these different methods have been established so far. The perspectives include a product approach, genre approach and process approach. Each has its own merits and objectives for application. Regarding the study context, it may…
Learning intervention and the approach to study of engineering undergraduates
NASA Astrophysics Data System (ADS)
Solomonides, Ian Paul
The aim of the research was to investigate the effect of a learning intervention on the Approach to Study of first-year engineering degree students. The learning intervention was a local programme of 'learning to learn' workshops designed and facilitated by the author. The primary aim of these was to develop students' Approaches to Study. Fifty-three first-year engineering undergraduates at The Nottingham Trent University participated in the workshops. Approaches to Study were quantified using data obtained from the Revised Approach to Study Inventory (RASI), which was also subjected to a validity and reliability study using local data. Quantitative outcomes were supplemented by a qualitative analysis of essays written by students during the workshops. These were analysed for detail regarding student Approach to Study. It was intended that any findings would inform the local system of Engineering Education, although more general findings also emerged, in particular in relation to the utility of the research instrument. It was concluded that the intervention did not promote the preferential Deep Approach and did not affect Approaches to Study generally as measured by the RASI. This concurred with previous attempts to change student Approaches to Study at the group level. It was also established that subsequent years of the Integrated Engineering degree course are associated with progressively deteriorating Approaches to Study. Students who were exposed to the intervention followed a similar pattern of deteriorating Approaches, suggesting that the local course context and its demands had a greater influence over the Approach of students than the intervention did. It was found that academic outcomes were unrelated to the extent to which students took a Deep Approach to the local assessment demands. There appeared therefore to be a mismatch between the Approach students adopted to pass examinations and those required for high-quality learning outcomes. 
It is suggested that more co-ordinated and coherent action for changing the local course demands is needed before an improvement in student Approaches will be observed. These conclusions were broadly supported by the results from the qualitative analysis which also indicated the dominating effects of course context over Approach. However, some students appeared to have gained from the intervention in that they reported being in a better position to evaluate their relationships with the course demands following the workshops. It therefore appeared that some students could be described as being in tension between the desire to take a Deep Approach and the adoption of less desirable Approaches as promoted and encouraged by the course context. It is suggested that questions regarding the integrity of the intervention are thereby left unresolved even though the immediate effects of it are quite clear. It is also suggested that the integrity of the research instrument is open to question in that the Strategic Approach to Study scale failed to be defined by one factor under common factor analysis. The intentional or motivational element which previously defined this scale was found to be associated with a Deep Approach factor within the local context. The Strategic Approach was found to be defined by skill rather than motivation. This indicated that some reinterpretation of the RASI and in particular the Strategic Approach to Study scale is needed.
A comparison of approaches for estimating relative impacts of nonnative fishes
Lapointe, N.W.R.; Pendleton, R. M.; Angermeier, Paul
2012-01-01
Lack of standard methods for quantifying impact has hindered risk assessments of high-impact invaders. To understand methodological strengths and weaknesses, we compared five approaches (in parentheses) for quantifying impact of nonnative fishes: reviewing documented impacts in a large-scale database (review); surveying fish biologists regarding three categories of impact (socioeconomic, ecological, abundance); and estimating frequency of occurrence from existing collection records (collection). In addition, we compared game and nongame biologists’ ratings of game and nongame species. Although mean species ratings were generally correlated among approaches, we documented important discrepancies. The review approach required little effort but often inaccurately estimated impact in our study region (Mid-Atlantic United States). Game fishes received lower ratings from the socioeconomic approach, which yielded the greatest consistency among respondents. The ecological approach exhibited lower respondent bias but was sensitive to pre-existing perceptions of high-impact invaders. The abundance approach provided the least-biased assessment of region-specific impact but did not account for differences in per-capita effects among species. The collection approach required the most effort and did not provide reliable estimates of impact. Multiple approaches to assessing a species’ impact are instructive, but impact ratings must be interpreted in the context of methodological strengths and weaknesses and key management issues. A combination of our ecological and abundance approaches may be most appropriate for assessing ecological impact, whereas our socioeconomic approach is more useful for understanding social dimensions. These approaches are readily transferrable to other regions and taxa; if refined, they can help standardize the assessment of impacts of nonnative species.
Boehm-Davis, Deborah A; Casali, John G; Kleiner, Brian M; Lancaster, Jeffrey A; Saleem, Jason J; Wochinger, Kathryn
2007-10-01
We examined the willingness and ability of general aviation pilots to execute steep approaches in low-visibility conditions into nontowered airports. Executing steep approaches in poor weather is required for a proposed Small Aircraft Transportation System (SATS) that consists of small aircraft flying direct routes to a network of regional airports. Across two experiments, 17 pilots rated for Instrument Flight Rules at George Mason University or Virginia Tech flew a Cessna 172R simulator into Blacksburg, Virginia. Pilots were familiarized with the simulator and asked to fly approaches with either a 200- or 400-foot ceiling (at approach angles of 3 degrees, 5 degrees, and 7 degrees in the first experiment, 3 degrees and 6 degrees in the second). Pilots rated subjective workload and the simulator recorded flight parameters for each set of approaches. Approaches with a 5 degree approach angle produced safe landings with minimal deviations from normal descent control configurations and were rated as having a moderate level of workload. Approaches with 6 degree and 7 degree approach angles produced safe landings but high workload ratings. Pilots reduced power to control the speed of descent and flew the aircraft slightly above the glide path to gain time to control the landing. Although the 6 degree and 7 degree approaches may not be practical for routine approaches, they may be achievable in the event of an emergency. Further work using other aircraft flying under a wider variety of conditions is needed before implementing SATS-type flights into airports intended to supplant or complement commercial operations in larger airports.
Zhang, Heng-Zhu; Li, Yu-Ping; Yan, Zheng-cun; Wang, Xing-dong; She, Lei; Wang, Xiao-dong; Dong, Lun
2014-01-01
Neuroendoscopic (NE) surgery as a minimally invasive treatment for basal ganglia hemorrhage is a promising approach. The present study aims to evaluate the efficacy and safety of the NE approach using an adjustable cannula to treat basal ganglia hemorrhage. In this study, we analysed the clinical and radiographic outcomes of an NE group (21 cases) and a craniotomy group (30 cases). The results indicated that NE surgery may be an effective and safe approach for basal ganglia hemorrhage, and it is also suggested that the NE approach may improve functional recovery. However, the NE approach suits only selected patients, and its usefulness needs to be evaluated in further randomized controlled trials (RCTs). PMID:24949476
NASA Astrophysics Data System (ADS)
Mananga, Eugene Stephane; Charpentier, Thibault
2015-04-01
In this paper we present a theoretical perturbative approach for describing the NMR spectrum of strongly dipolar-coupled spin systems under fast magic-angle spinning. Our treatment is based on two approaches: the Floquet approach and the Floquet-Magnus expansion. The Floquet approach is well known in the NMR community as a perturbative approach for obtaining analytical approximations. Numerical procedures are based on step-by-step numerical integration of the corresponding differential equations. The Floquet-Magnus expansion is a perturbative approach within Floquet theory. Furthermore, we address the "γ-encoding" effect using the Floquet-Magnus expansion approach. We show that the average over the γ angle can be performed for any Hamiltonian with γ symmetry.
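For orientation (standard textbook forms of average Hamiltonian theory, not formulas taken from this paper), the leading terms of a Magnus-type effective Hamiltonian for a time-periodic H(t) with period T read:

```latex
\bar{H}^{(1)} = \frac{1}{T}\int_0^T H(t)\,dt,
\qquad
\bar{H}^{(2)} = \frac{1}{2iT}\int_0^T \! dt_1 \int_0^{t_1} \! dt_2\,
\bigl[H(t_1),\,H(t_2)\bigr].
```

The Floquet-Magnus expansion additionally carries a periodic boundary function that tracks the dependence on the initial phase of the modulation, which is what makes effects such as γ-encoding accessible within the same framework.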
NASA Technical Reports Server (NTRS)
Babuscia, Alessandra; Cheung, Kar-Ming; Divsalar, Dariush; Lee, Charles
2014-01-01
This paper aims to address this problem by proposing cooperative communication approaches in which multiple CubeSats communicate cooperatively to improve the link performance with respect to the case of a single transmitting satellite. Three approaches are proposed: a beam-forming approach, a coding approach, and a network approach. The approaches are applied to the specific case of a proposed constellation of CubeSats at the Lunar Lagrangian point L1 which aims to perform radio astronomy at very low frequencies (30 kHz to 3 MHz). The paper describes the development of the approaches, the simulation, and a graphical user interface developed in Matlab which allows trade-offs to be performed across multiple constellation configurations.
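A quick numerical sanity check of the beam-forming rationale (a generic illustration, not the paper's simulation): N phase-aligned transmitters add field amplitudes, so received power scales as N², whereas incoherent (random-phase) transmitters only add powers (~N).

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 8, 20000

# Coherent case: all unit amplitudes in phase, power = |N|**2.
coherent = abs(np.sum(np.ones(N)))**2

# Incoherent case: random phases; average received power is ~N.
phases = rng.uniform(0.0, 2.0 * np.pi, (trials, N))
incoherent = np.mean(abs(np.sum(np.exp(1j * phases), axis=1))**2)

print(coherent, incoherent)   # ~64 vs ~8 for N = 8
```

The N²/N = N power advantage is the basic reason cooperative beam-forming can close a link that a single CubeSat cannot.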
Scientific Approaches | Office of Cancer Clinical Proteomics Research
CPTAC employs two complementary scientific approaches, a "Targeting Genome to Proteome" (Targeting G2P) approach and a "Mapping Proteome to Genome" (Mapping P2G) approach, in order to address biological questions from data generated on a sample.
He, M; Wang, H L; Yan, J Y; Xu, S W; Chen, W; Wang, J
2018-05-01
Objective: To compare the efficiency of the transhepatic hilar approach and the conventional approach for the surgical treatment of Bismuth type Ⅲ and Ⅳ hilar cholangiocarcinoma. Methods: There were 42 consecutive patients with hilar cholangiocarcinoma of Bismuth type Ⅲ and Ⅳ who underwent surgical treatment at the Department of Biliary-Pancreatic Surgery, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University from January 2008 to December 2013. The transhepatic hilar approach was used in 19 patients and the conventional approach in 23 patients. There were no differences in clinical parameters between the two groups (all P>0.05). The t-test was used to analyze the measurement data, and the χ² test was used to analyze the count data. Kaplan-Meier analysis was used to analyze survival, and multivariate Cox regression analysis was used to analyze prognostic factors. Results: Among the 19 patients who underwent the transhepatic hilar approach, 3 patients had their operative planning changed after reevaluation by exposing the hepatic hilus. Intraoperative blood loss was 300 (250-400) ml in the transhepatic hilar approach group, significantly less than in the conventional approach group, 800 (450-1,300) ml (t=4.276, P=0.001); meanwhile, the R0 resection rate was significantly higher in the transhepatic hilar approach group than in the conventional approach group (89.4% vs 52.2%; χ²=6.773, P=0.009), and the 3-year and 5-year cumulative survival rates were better in the transhepatic hilar approach group than in the conventional approach group (63.2% vs 47.8% and 26.3% vs 0; χ²=66.363 and 127.185, P<0.001). On univariate analysis, transhepatic hilar approach, intraoperative blood loss, intraoperative blood transfusion, R0 resection and lymph node metastasis were significant risk factors for patient survival (all P<0.05). 
On multivariate analysis, use of the transhepatic hilar approach, intraoperative blood loss, R0 resection and lymph node metastasis were significant independent risk factors for patient survival (all P<0.05). Conclusion: The transhepatic hilar approach is the preferred technique for the surgical treatment of hilar cholangiocarcinoma because, compared with the conventional approach, it can improve the accuracy of surgical planning, the safety of the operation, the R0 resection rate and the survival rate.
El-Anwar, Mohammad Waheed; Elsheikh, Ezzeddin; Hussein, Atef M; Tantawy, Adly A; Abdelbaki, Youssef Mansour
2017-06-01
Although some studies have addressed the differences between the subciliary and transconjunctival approaches, no previous prospective comparative study has examined displaced zygomaticomaxillary complex (ZMC) fractures repaired by three-point internal fixation that also used an upper gingivolabial incision and an upper eyelid incision, so the effect of these incisions on the comparison had not been investigated. The purpose of this study was to compare the transconjunctival and subciliary approaches for open reduction and internal rigid fixation (OR/IF) of ZMC fractures. This prospective study was carried out on 40 patients who had displaced ZMC fractures repaired by OR/IF. Patients were randomly assigned to two equal groups (20 patients each): a subciliary group subjected to the subciliary approach and a transconjunctival group subjected to the transconjunctival approach for inferior orbital rim repair. In both groups, the frontozygomatic and zygomaticomaxillary buttresses were also approached by lateral eyebrow and superior gingivolabial incisions, respectively. Primary outcome measures included accessibility (need for lateral canthotomy), exposure duration, postoperative pain, early postoperative edema, and operative complications. Secondary outcome measures included dental occlusion, average intrinsic vertical mouth opening, subciliary scar assessment, late postoperative complications, and ophthalmological assessment concerning ectropion, entropion, scleral show, and eye globe affection (enophthalmos or diplopia). The mean duration from incision to fracture exposure was 13.7 ± 2.17 min with the subciliary approach and 14.6 ± 2.31 min with the transconjunctival approach, a nonsignificant difference (p = 0.1284). Lateral canthotomy was required for proper exposure of the fracture and OR/IF with the transconjunctival approach but was not needed with the subciliary approach. Ectropion and scleral show occurred in 10% and 15% of the subciliary group, respectively, and were not encountered in the transconjunctival group. 
Although postoperative periorbital edema was significantly more severe in the transconjunctival group within the first postoperative week (p = 0.028), no persistent periorbital edema was reported. Infection, hematoma, and globe complications were not detected in any patient. All authors characterized all scars of the subciliary group as unnoticeable. The transconjunctival approach usually requires a lateral canthotomy, which was not needed with the subciliary approach. Transient postoperative edema is greater with the transconjunctival approach, whereas postoperative ectropion and scleral show were detected only with the subciliary approach. Thus, building up experience in the transconjunctival approach will be beneficial for maxillofacial surgeons, and further measures to avoid ectropion are needed with the subciliary approach.
Nichols, James D.; Hines, James E.
2002-01-01
We first consider the estimation of the finite rate of population increase or population growth rate, λ_i, using capture-recapture data from open populations. We review estimation and modelling of λ_i under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λ_i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γ_i, of Pradel (1996). We review estimation of γ_i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λ_i and γ_i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.
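As a hedged illustration of the direct-estimation idea in this abstract (the function names and the simple survivors-based seniority estimate are hypothetical, not from the paper): realized growth is the ratio of successive abundances, and seniority is the fraction of the population at time i+1 that was already present at time i.

```python
import numpy as np

def growth_rates(n_hat):
    """lambda_i = N_{i+1} / N_i from a sequence of abundance estimates."""
    n_hat = np.asarray(n_hat, float)
    return n_hat[1:] / n_hat[:-1]

def seniority(survivors, n_hat):
    """gamma_{i+1}: proportion of animals at time i+1 that were already
    present at time i; survivors[i] = animals alive at both i and i+1."""
    return np.asarray(survivors, float) / np.asarray(n_hat[1:], float)

n_hat = [100, 120, 110]            # toy abundance estimates
print(growth_rates(n_hat))         # [1.2, 0.9166...]
print(seniority([80, 95], n_hat))  # [0.666..., 0.8636...]
```

In practice λ_i and γ_i are estimated directly within the likelihood (e.g. Pradel's temporal symmetry model) rather than from point estimates of abundance; the ratio form above only conveys what the parameters mean.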
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan
Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and it is generally effective for measuring the time irreversibility of time series. However, its results may be unreliable for high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve the visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of a time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use the approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing the period of financial crisis from the plateau. In addition, the separation of Asian stock indexes from the other indexes becomes clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach in detecting time irreversibility.
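A minimal sketch of the baseline (non-improved) visibility-graph irreversibility measure that the abstract builds on, assuming the standard natural visibility criterion and a Kullback-Leibler comparison of out- versus in-degree distributions; all names are illustrative, and this omits the phase-space reconstruction step that is the paper's contribution.

```python
import numpy as np

def visibility_edges(y):
    """Natural visibility graph edges (i, j), i < j: two points are
    linked if every point between them lies strictly below the straight
    line joining them."""
    n = len(y)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

def kld_irreversibility(y, max_deg=20):
    """KL divergence between out-degree (forward-in-time) and in-degree
    distributions of the directed visibility graph; near zero for a
    statistically reversible series.  Degrees above max_deg are ignored
    in this sketch."""
    edges = visibility_edges(y)
    n = len(y)
    k_out = np.zeros(n, int)
    k_in = np.zeros(n, int)
    for i, j in edges:
        k_out[i] += 1
        k_in[j] += 1
    p = np.bincount(k_out, minlength=max_deg)[:max_deg] / n
    q = np.bincount(k_in, minlength=max_deg)[:max_deg] / n
    mask = (p > 0) & (q > 0)   # restrict to common support
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)
reversible = rng.normal(size=300)                  # white noise
irreversible = np.cumsum(rng.exponential(size=300) - 1.0) \
    + 0.01 * np.arange(300)                        # drifting walk
print(kld_irreversibility(reversible), kld_irreversibility(irreversible))
```

The improved approach in the paper replaces the scalar series with delay-embedded phase-space vectors before building the graph, which is what makes the measure usable for high-dimensional systems.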
Learning in First-Year Biology: Approaches of Distance and On-Campus Students
NASA Astrophysics Data System (ADS)
Quinn, Frances Catherine
2011-01-01
This paper aims to extend previous research into learning of tertiary biology, by exploring the learning approaches adopted by two groups of students studying the same first-year biology topic in either on-campus or off-campus "distance" modes. The research involved 302 participants, who responded to a topic-specific version of the Study Process Questionnaire, and in-depth interviews with 16 of these students. Several quantitative analytic techniques, including cluster analysis and Rasch differential item functioning analysis, showed that the younger, on-campus cohort made less use of deep approaches, and more use of surface approaches than the older, off-campus group. At a finer scale, clusters of students within these categories demonstrated different patterns of learning approach. Students' descriptions of their learning approaches at interview provided richer complementary descriptions of the approach they took to their study in the topic, showing how deep and surface approaches were manifested in the study context. These findings are critically analysed in terms of recent literature questioning the applicability of learning approaches theory in mass education, and their implications for teaching and research in undergraduate biology.
Bartlett, Jonathan W; Keogh, Ruth H
2018-06-01
Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
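As a toy sketch of regression calibration, the comparator the abstract contrasts with the Bayesian approach (assumed normal covariate and normal measurement error, with the error variance treated as known, e.g. from replicate measurements; not the authors' code): the error-prone measurement is replaced by its conditional expectation given the data, which undoes the attenuation of the naive slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0.0, 1.0, n)          # true covariate (unobserved)
w = x + rng.normal(0.0, 0.7, n)      # error-prone measurement of x
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, n)

# Naive regression of y on w: slope attenuated toward zero.
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: replace w with E[x | w].  Under normality,
# E[x | w] = mu_w + lam * (w - mu_w), where lam is the reliability
# ratio var(x) / var(w); here the error variance 0.7**2 is assumed known.
lam = (np.var(w) - 0.7**2) / np.var(w)
x_hat = np.mean(w) + lam * (w - np.mean(w))
calibrated = np.polyfit(x_hat, y, 1)[0]

print(naive, calibrated)   # calibrated slope is close to the true 1.5
```

The Bayesian approach discussed in the abstract instead places a model on (x, w, y) jointly and propagates the uncertainty in the error variance, rather than plugging in a single corrected covariate.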
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of multiple destinations. A growing number of communication services are based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that results in optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help trade off link utilization and cost. The LU approach has in fact been implemented by a long-distance carrier, resulting in a considerable efficiency improvement in its international direct services, as summarized here.
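A minimal sketch of the link-utilization idea, assuming a toy network representation (the data structures and function are hypothetical, not from the paper): each incoming multidestination call is sent to the candidate destination whose path would end up least utilized, which balances load across links.

```python
def pick_destination(candidates, path_links, load, capacity):
    """Greedy LU-style rule for one multidestination call.

    candidates: candidate destination node ids
    path_links[node]: links traversed to reach that node
    load / capacity: current load and capacity per link id
    """
    def worst_utilization(node):
        # Utilization of the busiest link on the path if the call is added.
        return max((load[l] + 1) / capacity[l] for l in path_links[node])
    return min(candidates, key=worst_utilization)

capacity = {"a": 10, "b": 10, "c": 10}
load = {"a": 8, "b": 2, "c": 5}
path_links = {1: ["a"], 2: ["b", "c"]}
print(pick_destination([1, 2], path_links, load, capacity))  # -> 2
```

The paper's models instead compute the decision parameters offline by optimization; the greedy rule above only conveys the balancing objective the LU approach targets.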
Path integral Monte Carlo ground state approach: formalism, implementation, and applications
NASA Astrophysics Data System (ADS)
Yan, Yangqian; Blume, D.
2017-11-01
Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path-integral-based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
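A toy illustration of the projection principle behind PIGS (greatly simplified and hypothetical: real PIGS factorizes exp(-τH) into short-time pieces and samples paths by Monte Carlo instead of diagonalizing H). Repeatedly applying exp(-τH) to a trial state suppresses each excited state by exp(-τ(E_n - E_0)) per step, leaving the ground state.

```python
import numpy as np

# Discretized 1D harmonic oscillator (hbar = m = omega = 1) as the toy H.
n, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
H = (np.diag(1.0 / dx**2 + 0.5 * x**2)                 # kinetic diag + V(x)
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)        # finite-difference
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))      # off-diagonals

tau = 0.05
w, V = np.linalg.eigh(H)
prop = V @ np.diag(np.exp(-tau * w)) @ V.T   # exact exp(-tau*H) for the toy

psi = np.random.default_rng(2).normal(size=n)   # random trial state
for _ in range(400):                            # total imaginary time = 20
    psi = prop @ psi
    psi /= np.linalg.norm(psi)

energy = psi @ (H @ psi)                        # Rayleigh quotient
print(energy)   # close to the exact ground-state energy 0.5
```

In actual PIGS the propagated state is never stored; expectation values are accumulated from sampled imaginary-time paths, which is what makes the method scale to many-body systems.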
Alternative approaches to forestry research evaluation: an assessment.
Pamela J. Jakes; Earl C. Leatherberry
1986-01-01
Reviews research evaluation techniques in a variety of fields and assesses the usefulness of various approaches or combinations of approaches for forestry research evaluation. Presents an evaluation framework that will help users develop an approach suitable for their specific problem.
Investigation of approach slab and its settlement for roads and bridges.
DOT National Transportation Integrated Search
2014-01-01
Approach slabs serve as a transitional system between an approach road and a bridge. Settlement of bridge approach slabs and their supporting backfill has been experienced by more than ten Departments of Transportation throughout the United States....
A Functional Model for Counseling Parents of Gifted Students.
ERIC Educational Resources Information Center
Dettmann, David F.; Colangelo, Nicholas
1980-01-01
The authors present a model of parent-school involvement in furthering the educational development of gifted students. The disadvantages and advantages of three counseling approaches are pointed out: the parent-centered approach, the school-centered approach, and the partnership approach. (SBH)
APPLICATION OF THE SURFACE COMPLEXATION CONCEPT TO COMPLEX MINERAL ASSEMBLAGES
Two types of modeling approaches are illustrated for describing inorganic contaminant adsorption in aqueous environments: (a) the component additivity approach and (b) the generalized composite approach. Each approach is applied to simulate Zn2+ adsorption by a well-characterize...
49 CFR 236.760 - Locking, approach.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR 236.760, Other Regulations Relating to Transportation (Continued), Federal Railroad Administration. Locking, approach: electric locking effective while a train is approaching, within a specified distance, a...
On Tradeoffs between Trust and Survivability using a Game Theoretic Approach
2016-04-13
Jin-Hee Cho and Ananthram Swami, U.S. Army Research Laboratory. This work introduces a game theoretic approach, namely Aoyagi's game theory, based on positive collusion of players. This approach improves group trust by... communication and networking field [17]. We employ a game theoretic approach, namely Aoyagi's game theory [2], to introduce the concept of positive...
ERIC Educational Resources Information Center
Moustaki, Irini; Joreskog, Karl G.; Mavridis, Dimitris
2004-01-01
We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables…
A Supervised Approach to Windowing Detection on Dynamic Networks
2017-07-01
Benjamin Fish, University of Illinois at Chicago, 1200 W. Harrison St., Chicago... Using this framework, we introduce windowing algorithms that take a supervised approach: they leverage ground truth on training data to find a good... windowing of the test data. We compare the supervised approach to previous approaches and several baselines on real data.
Behavioural therapies versus other psychological therapies for depression
Churchill, Rachel; Caldwell, Deborah; Moore, Theresa HM; Davies, Philippa; Jones, Hannah; Lewis, Glyn; Hunot, Vivien
2014-01-01
This is the protocol for a review and there is no abstract. The objectives are as follows: To examine the effectiveness and acceptability of all BT approaches compared with all other psychological therapy approaches for acute depression. To examine the effectiveness and acceptability of different BT approaches (behavioural therapy, behavioural activation, social skills training and relaxation training) compared with all other psychological therapy approaches for acute depression. To examine the effectiveness and acceptability of all BT approaches compared with different psychological therapy approaches (psychodynamic, humanistic, integrative, cognitive-behavioural and third wave CBT) for acute depression. PMID:25067905
Demonstrating and Evaluating an Action Learning Approach to Building Project Management Competence
NASA Technical Reports Server (NTRS)
Kotnour, Tim; Starr, Stan; Steinrock, T. (Technical Monitor)
2001-01-01
This paper contributes a description of an action-learning approach to building project management competence. This approach was designed, implemented, and evaluated for use with the Dynacs Engineering Development Contract at the Kennedy Space Center. The aim of the approach was to improve three levels of competence within the organization: individual project management skills, project team performance, and organizational capabilities such as the project management process and tools. The overall steps of the approach, evaluation results, and lessons learned are presented. Managers can use this paper to design a specific action-learning approach for their organization.
Moving Aerospace Structural Design Practice to a Load and Resistance Factor Approach
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.; Raju, Ivatury S.
2016-01-01
Aerospace structures are traditionally designed using the factor of safety (FOS) approach. The limit load on the structure is determined and the structure is then designed for FOS times the limit load - the ultimate load. Probabilistic approaches utilize distributions for loads and strengths. Failures are predicted to occur in the region of intersection of the two distributions. The load and resistance factor design (LRFD) approach judiciously combines these two approaches by intensive calibration studies on loads and strength to result in structures that are efficient and reliable. This paper discusses these three approaches.
Differentiating between rights-based and relational ethical approaches.
Trobec, Irena; Herbst, Majda; Zvanut, Bostjan
2009-05-01
When forced treatment in mental health care is under consideration, two approaches guide clinicians in their actions: the dominant rights-based approach and the relational ethical approach. We hypothesized that nurses with bachelor's degrees differentiate better between the two approaches than nurses without a degree. To test this hypothesis a survey was performed in major Slovenian health institutions. We found that nurses emphasize the importance of ethics and personal values, but 55.4% of all the nurse participants confused the two approaches. The results confirmed our hypothesis and indicate the importance of nurses' formal education, especially when caring for patients with mental illness.
Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach
NASA Astrophysics Data System (ADS)
Setthawong, Pisal; Vannija, Vajirasak
This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.
Research & market strategy: how choice of drug discovery approach can affect market position.
Sams-Dodd, Frank
2007-04-01
In principle, drug discovery approaches can be grouped into target-based and function-based, with the respective aims of developing either a target-selective drug or a drug that produces a specific biological effect irrespective of its mode of action. Most analyses of drug discovery approaches focus on productivity, whereas the strategic implications of the choice of drug discovery approach on market position and the ability to maintain market exclusivity are rarely considered. However, a comparison of approaches from the perspective of market position indicates that the functional approach is superior for the development of novel, innovative treatments.
Archdeacon, Michael T
2015-02-01
The ilioinguinal and anterior intrapelvic approaches to the acetabulum often involve different strategies for the treatment of acetabular fractures. The ilioinguinal approach allows access to the entire internal iliac fossa and pelvic brim, including indirect access to the quadrilateral surface. In contrast, the anterior intrapelvic approach allows access to the anterior elements from inside the pelvis with the surgeon standing opposite the fracture pathology. Therefore, the goal of this article is to clarify the advantages and disadvantages for each approach with respect to exposure, reduction, and fixation.
Embedding a Palliative Approach in Nursing Care Delivery
Porterfield, Pat; Roberts, Della; Lee, Joyce; Liang, Leah; Reimer-Kirkham, Sheryl; Pesut, Barb; Schalkwyk, Tilly; Stajduhar, Kelli; Tayler, Carolyn; Baumbusch, Jennifer; Thorne, Sally
2017-01-01
A palliative approach involves adapting and integrating principles and values from palliative care into the care of persons who have life-limiting conditions throughout their illness trajectories. The aim of this research was to determine what approaches to nursing care delivery support the integration of a palliative approach in hospital, residential, and home care settings. The findings substantiate the importance of embedding the values and tenets of a palliative approach into nursing care delivery, the roles that nurses have in working with interdisciplinary teams to integrate a palliative approach, and the need for practice supports to facilitate that embedding and integration. PMID:27930401
An Approach for Integrating the Prioritization of Functional and Nonfunctional Requirements
Dabbagh, Mohammad; Lee, Sai Peck
2014-01-01
Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of requirements which need to be considered first during the software development process. To achieve a high quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach is presented to consider both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. The outcome of applying the proposed approach is two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment aimed at comparing the approach with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption while preserving the quality of the results at a high level of agreement with the results produced by the other two approaches. PMID:24982987
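To make the AHP baseline used in the comparison above concrete, the sketch below derives priority weights from a pairwise comparison matrix using the standard row geometric-mean approximation of the principal eigenvector. The three requirements and the 1-9 scale judgments are invented for illustration; they are not taken from the paper.

```python
from math import prod

def ahp_weights(matrix):
    """Approximate the AHP priority vector by normalized row geometric means."""
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Invented judgments for three requirements R1..R3 on Saaty's 1-9 scale:
# R1 is moderately more important than R2 (3) and strongly more than R3 (5).
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(comparisons)  # highest weight goes to R1
```

The weights sum to one and induce the prioritized ordering; a full AHP treatment would also check the consistency ratio of the judgments.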
Mishra, A; Mishra, S C; Verma, V; Singh, H P; Kumar, S; Tripathi, A M; Patel, B; Singh, V
2016-05-01
Juvenile nasopharyngeal angiofibroma often presents with lateral extensions. In countries with limited resources, selection of a cost-effective and least morbid surgical approach for complete excision is challenging. Sixty-three patients with juvenile nasopharyngeal angiofibroma, with lateral extensions, underwent transpalatal, transpalatal-circumaxillary (transpterygopalatine) or transpalatal-circumaxillary-sublabial approaches for resection. Clinico-radiological characteristics, tumour volume and intra-operative bleeding were recorded. The transpalatal approach was suitable for extensions involving the medial part of the pterygopalatine fossa; transpalatal-circumaxillary for extensions involving the complete pterygopalatine fossa, with or without partial infratemporal fossa; and transpalatal-circumaxillary-sublabial for extensions involving the complete infratemporal fossa, even cheek or temporal fossa up to the zygomatic arch. Haemorrhage was greatest with the transpalatal-circumaxillary-sublabial approach, followed by the transpalatal approach and the transpalatal-circumaxillary approach (1212, 950 and 777 ml, respectively). Tumour size (volume) was greatest with the transpalatal-circumaxillary approach, followed by the transpalatal-circumaxillary-sublabial approach and the transpalatal approach (40, 34 and 29 mm³, respectively). There was recurrence in three cases and residual disease in two cases. Long-term morbidity included small palatal perforation (n = 1), trismus (n = 1) and atrophic rhinitis (n = 2). These modified techniques, performed with endoscopic assistance under hypotensive anaesthesia, without embolisation, offer a superior option over other open procedures with regard to morbidity and recurrences.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
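A minimal sketch of the "selection based on ranking" class of methods mentioned above, under a budget restriction: candidate maintenance actions are ranked by a benefit/cost score and funded greedily until the budget is exhausted. The section names and figures below are hypothetical, not from the study.

```python
def rank_and_select(actions, budget):
    """actions: (name, benefit, cost) tuples; greedy pick by benefit/cost ratio."""
    ranked = sorted(actions, key=lambda a: a[1] / a[2], reverse=True)
    chosen, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:  # fund the action only if it fits the budget
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical road sections: (section, condition benefit, cost in M$)
actions = [("S1", 10.0, 5.0), ("S2", 9.0, 3.0), ("S3", 4.0, 4.0)]
chosen, spent = rank_and_select(actions, budget=8.0)  # funds S2 then S1
```

As the abstract notes, such ranking heuristics are simple but can land far from the optimum that a mathematical optimization over the whole network would find.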
Straus, David; Byrne, Richard W; Sani, Sepehr; Serici, Anthony; Moftakhar, Roham
2013-01-01
Various vascular, neoplastic, and epileptogenic pathologies occur in the mediobasal temporal region. A transsylvian translimen insula (TTI) approach can be used as an alternative to the temporal transcortical approach to the mediobasal temporal region. The aim of this study was to demonstrate the surgical anatomy of the TTI approach, including the gyral, sulcal, and vascular anatomy in and around the limen insula. The use of this approach is illustrated in the resection of a complex arteriovenous malformation. The TTI approach to the mediobasal temporal region was performed on three silicone-injected cadaveric heads. The gyral, sulcal, and arterial anatomy of the limen insula was studied in six formalin-fixed injected hemispheres. The TTI approach provided access to the anterior and middle segments of the mediobasal temporal lobe region as well as to the temporal horn of the lateral ventricle. Using this approach, we were able to successfully resect an arteriovenous malformation of the dominant medial temporal lobe. The TTI approach provides a viable surgical route to the mediobasal temporal lobe region. This approach offers an advantage over the temporal transcortical route in that there is less risk of damage to the optic radiations and speech area in the dominant hemisphere.
Approaches to learning among occupational therapy undergraduate students: A cross-cultural study.
Brown, Ted; Fong, Kenneth N K; Bonsaksen, Tore; Lan, Tan Hwei; Murdolo, Yuki; Gonzalez, Pablo Cruz; Beng, Lim Hua
2017-07-01
Students may adopt various approaches to academic learning. Occupational therapy students' approaches to study and the impact of cultural context have not been formally investigated to date. To examine the approaches to study adopted by undergraduate occupational therapy students from four different cultural settings. 712 undergraduate occupational therapy students (n = 376 from Australia, n = 109 from Hong Kong, n = 160 from Norway and n = 67 from Singapore) completed the Approaches and Study Skills Inventory for Students (ASSIST). A one-way analysis of variance (ANOVA) was conducted to compare the ASSIST subscales for the students from the four countries. Post-hoc comparisons using the Tukey HSD test indicated that the mean scores for the strategic approach were significantly different between Australia and the other three countries. The mean scores for the surface approach were significantly different between Australia and Hong Kong, and Hong Kong and Norway. There were no significant differences between the deep approach to studying between Australia, Norway, Singapore and Hong Kong. Culture and educational context do appear to impact the approaches to study adopted by undergraduate occupational therapy students. Academic and practice educators need to be cognizant of what approaches to studying the students they work with adopt.
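The one-way ANOVA step described above can be sketched as follows: a from-scratch computation of the F statistic (between-group mean square over within-group mean square). The subscale scores below are made-up toy data, not the study's ASSIST results.

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Sum of squares between groups, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Sum of squared deviations from each group's own mean
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy subscale scores for three hypothetical country groups
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

A large F relative to the F(k-1, n-k) distribution motivates post-hoc pairwise comparisons such as the Tukey HSD test used in the study.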
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
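To make the Cluster-Based neighbourhood concrete, here is a toy sketch (not Hu and Raidl's implementation): the cluster visiting order stays fixed, a move re-selects the node that represents one cluster, and local search applies improving moves until none remains. The 5-node instance and its distances are invented for illustration.

```python
def tour_length(choice, dist):
    """Length of the closed tour through the chosen representative nodes."""
    n = len(choice)
    return sum(dist[choice[i]][choice[(i + 1) % n]] for i in range(n))

def cluster_based_local_search(clusters, dist):
    choice = [c[0] for c in clusters]  # start with each cluster's first node
    improved = True
    while improved:
        improved = False
        for i, cluster in enumerate(clusters):
            for node in cluster:       # try every alternative node for cluster i
                cand = choice[:i] + [node] + choice[i + 1:]
                if tour_length(cand, dist) < tour_length(choice, dist):
                    choice, improved = cand, True
    return choice, tour_length(choice, dist)

# Toy instance: 5 nodes in 3 clusters; nodes 1-3-4 form the cheap tour.
D = [[10] * 5 for _ in range(5)]
for i in range(5):
    D[i][i] = 0
for a, b in [(1, 3), (3, 4), (4, 1)]:
    D[a][b] = D[b][a] = 1
best_choice, best_len = cluster_based_local_search([[0, 1], [2, 3], [4]], D)
```

The complementary Node-Based neighbourhood analysed in the paper instead fixes the node selection and searches over cluster orderings, which is why the two approaches can mutually outperform each other.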
Mazzoni, A; Zanoletti, E; Faccioli, C; Martini, A
2017-05-01
Intracochlear schwannomas can occur either as an extension of a larger tumor from the internal auditory canal, or as a solitary labyrinthine tumor. They are currently removed via a translabyrinthine approach extended to the basal turn, adding a transotic approach for tumors lying beyond the basal turn. Facial bridge cochleostomy may be associated with the translabyrinthine approach to enable the whole cochlea to be approached without sacrificing the external auditory canal and tympanum. We describe seven cases, five of which underwent cochlear schwannoma resection with facial bridge cochleostomy, one case with the same procedure for a suspect tumor and one, previously subjected to radical tympanomastoidectomy, who underwent schwannoma resection via a transotic approach. Facial bridge cochleostomy involved removing the bone between the labyrinthine and tympanic portions of the fallopian canal, and exposing the cochlea from the basal to the apical turn. Patients' recovery was uneventful, and long-term magnetic resonance imaging showed no residual tumor. Facial bridge cochleostomy can be a flexible extension of the translabyrinthine approach for tumors extending from the internal auditory canal to the cochlea. The transcanal approach is suitable for the primary exclusive intralabyrinthine tumor. The indications for the different approaches are discussed.
Husereau, Don; Henshall, Chris; Jivraj, Jamil
2014-07-01
Adaptive approaches to the introduction of drugs and medical devices involve the use of an evolving evidence base rather than conventional single-point-in-time evaluations as a proposed means to promote patient access to innovation, reduce clinical uncertainty, ensure effectiveness, and improve the health technology development process. This report summarizes a Health Technology Assessment International (HTAi) Policy Forum discussion, drawing on presentations from invited experts, discussions among attendees about real-world case examples, and a background paper. For adaptive approaches to be understood, accepted, and implemented, the Forum identified several key issues that must be addressed. These include the need to define the goals of and to set priorities for adaptive approaches; to examine evidence collection approaches; to clarify the roles and responsibilities of stakeholders; to understand the implications of adaptive approaches on current legal and ethical standards; to determine costs of such approaches and how they will be met; and to identify differences in applying adaptive approaches to drugs versus medical devices. The Forum also explored the different implications of adaptive approaches for various stakeholders, including patients, regulators, HTA/coverage bodies, health systems, clinicians, and industry. A key outcome of the meeting was a clearer understanding of the opportunities and challenges adaptive approaches present. Furthermore, the Forum brought to light the critical importance of recognizing and including a full range of stakeholders as contributors to a shared decision-making model implicit in adaptive pathways in future discussions on, and implementation of, adaptive approaches.
Mason, Eric; Van Rompaey, Jason; Carrau, Ricardo; Panizza, Benedict; Solares, C Arturo
2014-03-01
Advances in the field of skull base surgery aim to maximize anatomical exposure while minimizing patient morbidity. The petroclival region of the skull base presents numerous challenges for surgical access due to the complex anatomy. The transcochlear approach to the region provides adequate access; however, the resection involved sacrifices hearing and results in at least a grade 3 facial palsy. An endoscopic endonasal approach could potentially avoid negative patient outcomes while providing a desirable surgical window in a select patient population. Cadaveric study. Endoscopic access to the petroclival region was achieved through an endonasal approach. For comparison, a transcochlear approach to the clivus was performed. Different facets of the dissections, such as bone removal volume and exposed surface area, were computed using computed tomography analysis. The endoscopic endonasal approach provided a sufficient corridor to the petroclival region with significantly less bone removal and nearly equivalent exposure of the surgical target, thus facilitating the identification of the relevant anatomy. The lateral approach allowed for better exposure from a posterolateral direction until the inferior petrosal sinus; however, the endonasal approach avoided labyrinthine/cochlear destruction and facial nerve manipulation while providing an anteromedial viewpoint. The endonasal approach also avoided external incisions and cosmetic deficits. The endonasal approach required significant sinonasal resection. Endoscopic access to the petroclival region is a feasible approach. It potentially avoids hearing loss, facial nerve manipulation, and cosmetic damage. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
Sung, Eui Suk; Ji, Yong Bae; Song, Chang Myeon; Yun, Bo Ram; Chung, Won Sang; Tae, Kyung
2016-06-01
Robotic thyroidectomy using remote access approaches has gained popularity with patients seeking to avoid neck scarring and enhanced cosmetic satisfaction. The aim of this study was to compare the efficacy and advantages of a postauricular facelift approach vs a gasless unilateral axillary (GUA) approach in robotic thyroidectomy. Case series with chart review. University tertiary care hospital. We retrospectively analyzed the data of 65 patients who underwent robotic thyroidectomy with or without central neck dissection using a GUA approach (45 patients) or a postauricular facelift approach (20 patients) between September 2013 and December 2014. We excluded patients who underwent simultaneous lateral neck dissection or completion thyroidectomy. Robotic procedures were completed without being converted to an open procedure in all patients. There were no significant differences in terms of patient and tumor characteristics, extent of thyroidectomy and central neck dissection, operative time, complications, and postoperative pain between the 2 approaches, except the higher female ratio in the GUA approach group (female ratio, 95.6% vs 75%, P = .042). Cosmetic satisfaction evaluated by a questionnaire was not significantly different between the 2 groups, and most patients of both groups (85.7%) were satisfied with postoperative cosmesis. Both GUA and postauricular facelift approaches are feasible, with no significant adverse events in patients, and result in excellent cosmesis. However, a GUA approach seems to be superior when performing total thyroidectomy using a unilateral incision based on the preliminary result. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.
Approaches to Foster Transfer of Formal Principles: Which Route to Take?
Schalk, Lennart; Saalbach, Henrik; Stern, Elsbeth
2016-01-01
Enabling learners to transfer knowledge about formal principles to new problems is a major aim of science and mathematics education, which, however, is notoriously difficult to reach. Previous research advocates different approaches to introducing principles so as to foster the transfer of knowledge about them. One approach suggests teaching a generic formalism of the principles. Another approach suggests presenting (at least) two concrete cases instantiating the principle. A third approach suggests presenting a generic formalism accompanied by a case. As yet, though, empirical results regarding the transfer potential of these approaches are mixed and difficult to integrate, as the three approaches have rarely been tested competitively. Furthermore, the approaches have been evaluated in relation to different control conditions, and they have been assessed using varying transfer measures. In the present experiment, we introduced undergraduates to the formal principles of propositional logic with the aim of systematically comparing the transfer potential of the different approaches in relation to each other and to a common control condition, using various learning and transfer tasks. Results indicate that all approaches supported successful learning and transfer of the principles, but also caused systematic differences in the magnitude of transfer. The combination of a generic formalism with a case was surprisingly unsuccessful, while learners who compared two cases outperformed the control condition. We discuss how the simultaneous assessment of the different approaches makes it possible to capture the underlying learning mechanisms more precisely and to advance theory on how these mechanisms contribute to transfer performance.
Thakur, Anil; Kaur, Kiranpreet; Lamba, Aditya; Taxak, Susheela; Dureja, Jagdish; Singhal, Suresh; Bhardwaj, Mamta
2014-01-01
Background and Aims: The infraclavicular (IC) approach to subclavian vein (SCV) catheterisation is widely used compared with the supraclavicular (SC) approach. The aim of the study was to compare the ease of catheterisation of the SCV using the SC versus the IC approach and to record the incidence of complications related to either approach, if any. Methods: Sixty patients enrolled in the study were randomly divided into two groups of 30 patients each. In group SC, right SCV catheterisation was performed using the SC approach, and in group IC, using the IC approach. Access time, success rate of cannulation, number of attempts to cannulate the vein, ease of guidewire and catheter insertion, length of catheter inserted and any associated complications were recorded. Results: The mean access time in group SC for SCV catheterisation was 4.30 ± 1.02 min compared to 6.07 ± 2.14 min in group IC. The overall success rate in catheterisation of the right SCV using the SC approach (29 out of 30) was better than that in group IC (27 out of 30) using the IC approach. First-attempt success in the SC group was 75.6% compared with 59.25% in the IC group. All successful subclavian vein catheterisations in the SC and IC groups were associated with smooth insertion of the guidewire following subclavian venipuncture. Conclusion: The SC approach to SCV catheterisation is comparable to the IC approach in terms of landmark accessibility, success rate and rate of complications. PMID:24963180
XV-15 Tiltrotor Low Noise Approach Operations
NASA Technical Reports Server (NTRS)
Conner, David A.; Marcolini, Michael A.; Decker, William A.; Cline, John H.; Edwards, Bryan D.; Nicks, Colby O.; Klein, Peter D.
1999-01-01
Acoustic data have been acquired for the XV-15 tiltrotor aircraft performing approach operations for a variety of different approach profile configurations. This flight test program was conducted jointly by NASA, the U.S. Army, and Bell Helicopter Textron, Inc. (BHTI) in June 1997. The XV-15 was flown over a large area microphone array, which was deployed to directly measure the noise footprint produced during actual approach operations. The XV-15 flew realistic approach profiles that culminated in an in-ground-effect (IGE) hover over a landing pad. Aircraft tracking and pilot guidance were provided by a Differential Global Positioning System (DGPS) and a flight director system developed at BHTI. Approach profile designs emphasized noise reduction while maintaining handling qualities sufficient for tiltrotor commercial passenger ride comfort and flight safety under Instrument Flight Rules (IFR) conditions. A discussion of the approach profile design philosophy is provided. Five different approach profiles are discussed in detail -- 3 deg., 6 deg., and 9 deg. approaches, and two very different 3 deg. to 9 deg. segmented approaches. The approach profile characteristics are discussed in detail, followed by the noise footprints and handling qualities. Sound exposure levels are also presented on an averaged basis and as a function of the sideline distance for a number of up-range distances from the landing point. A comparison of the noise contour areas is also provided. The results document the variation in tiltrotor noise due to changes in operating condition, and indicate the potential for significant noise reduction using the unique tiltrotor capability of nacelle tilt.
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years owing to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is weak, possibly leading to local maxima during the optimization. In whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio during the optimization is improved. However, because more randomness is introduced into the process, these optimizations take more time to converge than the single-element-based approaches. Building on the advantages of both single-element-based and whole-element optimization approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising in practical applications such as deep tissue imaging.
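A hedged toy sketch of the element-based stepwise sequential algorithm discussed above: each modulator element's phase is stepped through trial values while the others stay fixed, keeping the value that maximizes the measured focal intensity. The "medium" here is an idealized list of fixed phase delays summed as unit phasors, not a real scattering layer, and the delay values are invented.

```python
import cmath
import math

def focal_intensity(phases, medium):
    """Idealized focus: coherent sum of unit phasors through fixed medium delays."""
    field = sum(cmath.exp(1j * (p + m)) for p, m in zip(phases, medium))
    return abs(field) ** 2

def stepwise_sequential(medium, n_steps=16):
    phases = [0.0] * len(medium)
    for i in range(len(phases)):             # optimize one element at a time
        best, best_intensity = phases[i], -1.0
        for k in range(n_steps):
            phases[i] = 2 * math.pi * k / n_steps
            intensity = focal_intensity(phases, medium)
            if intensity > best_intensity:   # keep the phase giving the brightest focus
                best, best_intensity = phases[i], intensity
        phases[i] = best
    return phases

medium = [0.3, 1.2, 2.5]                     # assumed phase delays (radians)
before = focal_intensity([0.0] * 3, medium)
after = focal_intensity(stepwise_sequential(medium), medium)
```

With all delays compensated, the intensity approaches the coherent limit of N² for N elements; in a real experiment each `focal_intensity` call is a camera or detector measurement, which is exactly where the weak per-element signal-to-noise ratio noted above bites.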
Massa, K; Olsen, A; Sheshe, A; Ntakamulenga, R; Ndawi, B; Magnussen, P
2009-11-01
Control programmes generally use a school-based strategy of mass drug administration to reduce morbidity of schistosomiasis and soil-transmitted helminthiasis (STH) in school-aged populations. The success of school-based programmes depends on treatment coverage. The community-directed treatment (ComDT) approach has been implemented in the control of onchocerciasis and lymphatic filariasis in Africa and improves treatment coverage. This study compared the treatment coverage between the ComDT approach and the school-based treatment approach, where non-enrolled school-aged children were invited for treatment, in the control of schistosomiasis and STH among enrolled and non-enrolled school-aged children. Coverage during the first treatment round among enrolled children was similar for the two approaches (ComDT: 80.3% versus school: 82.1%, P=0.072). However, for the non-enrolled children the ComDT approach achieved a significantly higher coverage than the school-based approach (80.0 versus 59.2%, P<0.001). Similar treatment coverage levels were attained at the second treatment round. Again, equal levels of treatment coverage were found between the two approaches for the enrolled school-aged children, while the ComDT approach achieved a significantly higher coverage in the non-enrolled children. The results of this study showed that the ComDT approach can obtain significantly higher treatment coverage among the non-enrolled school-aged children compared to the school-based treatment approach for the control of schistosomiasis and STH.
Yamamoto, Michiro; Malay, Sunitha; Fujihara, Yuki; Zhong, Lin; Chung, Kevin C.
2016-01-01
Background Outcomes after implant arthroplasty for primary degenerative and posttraumatic osteoarthritis (OA) of the proximal interphalangeal (PIP) joint differ according to implant design and surgical approach. The purpose of this systematic review was to evaluate outcomes of various types of implant arthroplasty for PIP joint OA with emphasis on the different surgical approaches. Methods The authors searched all available literature in the PubMed and EMBASE databases for articles reporting on outcomes of implant arthroplasty for PIP joint OA. Data collection included active arc of motion (AOM), extension lag, and complications. We combined the data for the various types of surface replacement arthroplasty into one group to compare with silicone arthroplasty. Results A total of 849 articles were screened, yielding 40 studies for final review. The mean postoperative AOM and mean gain in AOM for the silicone implant with the volar approach were 58° and 17°, respectively, greater than those for the surface replacement implant with the dorsal approach (51° and 8°, respectively). The mean postoperative extension lag for the silicone implant with the volar approach and the surface replacement with the dorsal approach was 5° and 14°, respectively. The revision rate for the silicone implant with the volar approach and the surface replacement with the dorsal approach was 6% and 18% at a mean follow-up of 41.2 and 51 months, respectively. Conclusions The silicone implant with the volar approach showed the best AOM, with less extension lag and fewer complications after surgery, among all the implant designs and surgical approaches. PMID:28445369
Sharma, Mayur; Ambekar, Sudheer; Guthikonda, Bharat; Nanda, Anil
2014-01-01
Background The aim of our study was to compare the area of exposure at the ventral brainstem and petroclival region offered by the Kawase, retrosigmoid transtentorial (RTT), and retrosigmoid intradural suprameatal (RISA) approaches in cadaveric models. Methods We performed 15 approaches (five each of the Kawase, RISA, and RTT approaches) on silicone-injected adult cadaver heads. Ventral brainstem and petroclival areas of exposure were measured and compared. Results The mean ventral brainstem area exposed by the Kawase approach was 55.00 ± 24.1 mm², significantly less than that exposed by RTT (441 ± 63.3 mm²) and RISA (311 ± 61 mm²) (p < 0.05). The area of ventral brainstem exposure was significantly greater via RTT than through RISA (p = 0.01). The mean petroclival area of exposure through the Kawase approach was significantly smaller than that obtained through the RTT and RISA approaches (101.7 ± 545.01 mm², 696 ± 57.7 mm², and 716.7 ± 51.4 mm², respectively). Conclusion Retrosigmoid approaches provide a greater exposure of the brainstem and petroclival areas. The Kawase approach is ideally suited for lesions around the Meckel cave with an extension into the middle fossa. These approaches can be used in conjunction with one another to access petroclival tumors. PMID:24967151
Security and privacy preserving approaches in the eHealth clouds with disaster recovery plan.
Sahi, Aqeel; Lai, David; Li, Yan
2016-11-01
Cloud computing was introduced as an alternative storage and computing model in the health sector as well as other sectors to handle large amounts of data. Many healthcare companies have moved their electronic data to the cloud in order to reduce in-house storage, IT development and maintenance costs. However, storing the healthcare records in a third-party server may cause serious storage, security and privacy issues. Therefore, many approaches have been proposed to preserve security as well as privacy in cloud computing projects. Cryptographic-based approaches were presented as one of the best ways to ensure the security and privacy of healthcare data in the cloud. Nevertheless, the cryptographic-based approaches which are used to transfer health records safely remain vulnerable regarding security, privacy, or the lack of any disaster recovery strategy. In this paper, we review the related work on security and privacy preserving as well as disaster recovery in the eHealth cloud domain. Then we propose two approaches, the Security-Preserving approach and the Privacy-Preserving approach, and a disaster recovery plan. The Security-Preserving approach is a robust means of ensuring the security and integrity of Electronic Health Records, and the Privacy-Preserving approach is an efficient authentication approach which protects the privacy of Personal Health Records. Finally, we discuss how the integrated approaches and the disaster recovery plan can ensure the reliability and security of cloud projects. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reznik, Samantha J; Nusslock, Robin; Pornpattananangkul, Narun; Abramson, Lyn Y; Coan, James A; Harmon-Jones, Eddie
2017-08-01
Research suggests that midline posterior versus frontal electroencephalographic (EEG) theta activity (PFTA) may reflect a novel neurophysiological index of approach motivation. Elevated PFTA has been associated with approach-related tendencies both at rest and during laboratory tasks designed to enhance approach motivation. PFTA is sensitive to changes in dopamine signaling within the fronto-striatal neural circuit, which is centrally involved in approach motivation, reward processing, and goal-directed behavior. To date, however, no studies have examined PFTA during a laboratory task designed to reduce approach motivation or goal-directed behavior. Considerable animal and human research supports the hypothesis put forth by the learned helplessness theory that exposure to uncontrollable aversive stimuli decreases approach motivation by inducing a state of perceived uncontrollability. Accordingly, the present study examined the effect of perceived uncontrollability (i.e., learned helplessness) on PFTA. EEG data were collected from 74 participants (mean age = 19.21 years; 40 females) exposed to either Controllable (n = 26) or Uncontrollable (n = 25) aversive noise bursts, or a No-Noise Condition (n = 23). In line with prediction, individuals exposed to uncontrollable aversive noise bursts displayed a significant decrease in PFTA, reflecting reduced approach motivation, relative to both individuals exposed to controllable noise bursts or the No-Noise Condition. There was no relationship between perceived uncontrollability and frontal EEG alpha asymmetry, another commonly used neurophysiological index of approach motivation. Results have implications for understanding the neurophysiology of approach motivation and establishing PFTA as a neurophysiological index of approach-related tendencies.
A common distributed language approach to software integration
NASA Technical Reports Server (NTRS)
Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.
1989-01-01
An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.
Determination of interaction between bridge concrete approach slab and embankment settlement.
DOT National Transportation Integrated Search
2005-07-01
The main objective of this research is to correlate the deformation and internal force of the approach slab with the approach embankment settlements and the approach slab parameters such as length and thickness. Finite element analysis was carried ou...
Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies
Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...
Rethinking the Organizational Culture Approach.
ERIC Educational Resources Information Center
Sotirin, Patty
Arguing for a feminist appropriation of the organizational culture approach to the study of complex formal organizations, this paper contends that, far from being an alternative approach that facilitates asking radically different questions about organizational life, the organizational culture approach's radical intentions are undermined by the…
Edith Kaplan and the Boston Process Approach.
Libon, David J; Swenson, Rodney; Ashendorf, Lee; Bauer, Russell M; Bowers, Dawn
2013-01-01
The history including some of the intellectual origins of the Boston Process Approach and some misconceptions about the Boston Process Approach are reviewed. The influence of Gestalt psychology and Edith Kaplan's principal collaborators regarding the development of the Boston Process Approach is discussed.
Does Surgical Approach Affect Patient-reported Function After Primary THA?
Graves, Sara C; Dropkin, Benjamin M; Keeney, Benjamin J; Lurie, Jon D; Tomek, Ivan M
2016-04-01
Total hip arthroplasty (THA) relieves pain and improves physical function in patients with hip osteoarthritis, but requires a year or more for full postoperative recovery. Proponents of intermuscular surgical approaches believe that the direct-anterior approach may restore physical function more quickly than transgluteal approaches, perhaps because of diminished muscle trauma. To evaluate this, we compared patient-reported physical function and other outcome metrics during the first year after surgery between groups of patients who underwent primary THA either through the direct-anterior approach or posterior approach. We asked: (1) Is a primary THA using a direct-anterior approach associated with better patient-reported physical function at early postoperative times (1 and 3 months) compared with a THA performed through the posterior approach? (2) Is the direct-anterior approach THA associated with shorter operative times and higher rates of noninstitutional discharge than a posterior approach THA? Between October 2008 and February 2010, an arthroplasty fellowship-trained surgeon performed 135 THAs. All 135 were performed using the posterior approach. During that period, we used this approach when patients had any moderate to severe degenerative joint disease of the hip attributable to any type of arthritis refractory to nonoperative treatment measures. Of the patients who were treated with this approach, 21 (17%; 23 hips) were lost to followup, whereas 109 (83%; 112 hips) were available for followup at 1 year. Between February and September 2011, the same surgeon performed 86 THAs. All 86 were performed using the direct-anterior approach. During that period, we used this approach when patients with all types of moderate to severe degenerative joint disease had nonoperative treatment measures fail. Of the patients who were treated with this approach, 35 (41%; 35 hips) were lost to followup, whereas 51 (59%; 51 hips) were available for followup at 1 year. 
THAs during the surgeon's direct-anterior approach learning period (February 2010 through January 2011) were excluded because both approaches were being used selectively depending on patient characteristics. Clinical outcomes included operative blood loss; allogeneic transfusion; adverse events; patient-reported Veterans RAND-12 Physical (PCS) and Mental Component Summary (MCS) scores, and University of California Los Angeles (UCLA) activity scores at 1 month, 3 months, and 1 year after surgery. Resource utilization outcomes included operative time, length of stay, and discharge disposition (home versus institution). Outcomes were compared using logistic and linear regression techniques. After controlling for relevant confounding variables including age, sex, and BMI, the direct-anterior approach was associated with worse adjusted MCS changes 1 and 3 months after surgery (1-month score change, -9; 95% CI, -13 to -5; standard error, 2), compared with the posterior approach (3-month score change, -9; 95% CI, -14 to -3; standard error, 3) (both p < 0.001), while the direct-anterior approach was associated with greater PCS improvement at 3 months compared with the posterior approach (score change, 6; 95% CI, 2-10; standard error, 2; p = 0.008). There were no differences in adjusted PCS at either 1 month or 12 months, and no clinically important differences in UCLA scores. Although the PCS score differences are greater than the minimum clinically important difference of 5 points for this endpoint, the clinical importance of such a small effect is questionable. At 1 year after THA, there were no intergroup differences in self-reported physical function, although both groups had significant loss-to-followup at that time. Operative time (skin incision to skin closure) between the two groups did not differ (81 versus 79 minutes; p = 0.411). 
Mean surgical blood loss (403 versus 293 mL; p < 0.001; adjusted, 119 more mL; 95% CI, 79-160; p < 0.001) and in-hospital transfusion rates (direct-anterior approach, 20% [17/86] versus posterior approach, 10% [14/135], p = 0.050; adjusted odds ratio, 3.6; 95% CI, 1.3-10.1; p = 0.016) were higher in the direct-anterior approach group. With the numbers available, there was no difference in the frequency of adverse events between groups when comparing intraoperative complications, perioperative Technical Expert Panel complications, and other non-Technical Expert Panel complications within 1 year of surgery, although this study was not adequately powered to detect differences in rare adverse events. With suitable experience, the direct-anterior approach can be performed with expected results similar to those of the posterior approach. There may be transient and small benefits to the direct-anterior approach, including improved physical function at 3 months after surgery. However, the greater operative blood loss and greater likelihood of blood transfusions, even when the surgeon is experienced, may be a disadvantage. Given the biases present in this study, including loss to followup, our conclusions should be considered preliminary, but it appears that any benefits that accrue to patients who had the direct-anterior approach would be transient and modest. Prospective randomized studies are needed to address the differences between surgical approaches more definitively. Level III, therapeutic study.
Comtois, Jonathan; Paris, Yvon; Poder, Thomas G; Chaussé, Sylvain
2013-01-01
The purpose of this study was to calculate the cost savings associated with using the kaizen approach in our hospital. Originally developed in Japan, the kaizen approach, based on the idea of continuous improvement, has considerable support in North America, including in the Quebec health care system. This study assessed the first fifteen kaizen projects at the CHUS. Based on an economic evaluation, we showed that using the kaizen approach can result in substantial cost savings. The success of the kaizen approach requires compliance with specific prerequisites. The future of the approach will depend on our ability to comply with these prerequisites. More specifically, such compliance will determine whether the approach is merely a passing fad or a strategy for improving our management style to promote greater efficiency.
Factors affecting the surgical approach and timing of bilateral adrenalectomy.
Lan, Billy Y; Taskin, Halit E; Aksoy, Erol; Birsen, Onur; Dural, Cem; Mitchell, Jamie; Siperstein, Allan; Berber, Eren
2015-07-01
Laparoscopic adrenalectomy has gained widespread acceptance. However, the optimal surgical approach to laparoscopic bilateral adrenalectomy has not been clearly defined. The aim of this study is to analyze the patient and intraoperative factors affecting the feasibility and outcome of different surgical approaches to define an algorithm for bilateral adrenalectomy. Between 2000 and 2013, all patients who underwent bilateral adrenalectomy at a single institution were selected for retrospective analysis. Patient factors, surgical approach, operative outcomes, and complications were analyzed. From 2000 to 2013, 28 patients underwent bilateral adrenalectomy. Patient diagnoses included Cushing's disease (n = 19), pheochromocytoma (n = 7), and adrenal metastasis (n = 2). Of these 28 patients, successful laparoscopic adrenalectomy was performed in all but 2 patients. Twenty-three out of the 26 adrenalectomies were completed in a single stage, while three were performed as a staged approach due to deterioration in intraoperative respiratory status in two patients and patient body habitus in one. Of the adrenalectomies completed using the minimally invasive approach, a posterior retroperitoneal (PR) approach was performed in 17 patients and lateral transabdominal (LT) approach in 9 patients. Patients who underwent a LT approach had higher BMI, larger tumor size, and other concomitant intraabdominal pathology. Hospital stay for laparoscopic adrenalectomy was 3.5 days compared to 5 and 12 days for the two open cases. There were no 30-day hospital mortality and 5 patients had minor complications for the entire cohort. A minimally invasive operation is feasible in 93% of patients undergoing bilateral adrenalectomy with 65% of adrenalectomies performed using the PR approach. Indications for the LT approach include morbid obesity, tumor size >6 cm, and other concomitant intraabdominal pathology. 
Single-stage adrenalectomies are feasible in most patients, with prolonged operative time causing respiratory instability being the main indication for a staged approach.
Lester, Kathryn J.; Lisk, Stephen C.; Mikita, Nina; Mitchell, Sophie; Huijding, Jorg; Rinck, Mike; Field, Andy P.
2015-01-01
Background and objectives This study examined the effects of verbal information and approach-avoidance training on fear-related cognitive and behavioural responses about novel animals. Methods One hundred and sixty children (7–11 years) were randomly allocated to receive: a) positive verbal information about one novel animal and threat information about a second novel animal (verbal information condition); b) approach-avoidance training in which they repeatedly pushed away (avoid) or pulled closer (approach) pictures of the animals (approach-avoidance training), c) a combined condition in which verbal information was given prior to approach-avoidance training (verbal information + approach-avoidance training) and d) a combined condition in which approach-avoidance training was given prior to verbal information (approach-avoidance training + verbal information). Results Threat and positive information significantly increased and decreased fear beliefs and avoidance behaviour respectively. Approach-avoidance training was successful in training the desired behavioural responses but had limited effects on fear-related responses. Verbal information and both combined conditions resulted in significantly larger effects than approach-avoidance training. We found no evidence for an additive effect of these pathways. Limitations This study used a non-clinical sample and focused on novel animals rather than animals about which children already had experience or established fears. The study also compared positive information/approach with threat information/avoid training, limiting specific conclusions regarding the independent effects of these conditions. Conclusions The present study finds little evidence in support of a possible causal role for behavioural response training in the aetiology of childhood fear. However, the provision of verbal information appears to be an important pathway involved in the aetiology of childhood fear. PMID:25698069
NASA Astrophysics Data System (ADS)
Ilyas, Muhammad; Salwah
2017-02-01
This research was experimental. The purpose of this study was to determine the difference in, and the quality of, students' learning achievement between students taught through the Realistic Mathematics Education (RME) approach and students taught through a problem-solving approach. This study was quasi-experimental research with a non-equivalent experiment group design. The population was all grade VII students in a junior high school in Palopo in the second semester of the 2015/2016 academic year. Two classes were selected purposively as the research sample: class VII-5 (28 students) as experiment group I and class VII-6 (23 students) as experiment group II. The treatment in experiment group I was learning through the RME approach, and in experiment group II through the problem-solving approach. Data were collected by giving students a pretest and a posttest. The analysis used descriptive statistics and inferential statistics with the t-test. Based on the descriptive statistics, the average mathematics score of students taught using the problem-solving approach was similar to that of students taught using the RME approach, both at the high category. In addition, it can be concluded that: (1) there was no difference in mathematics learning outcomes between students taught using the RME approach and students taught using the problem-solving approach, and (2) the quality of learning achievement of students who received the RME approach and of those who received the problem-solving approach was the same, at the high category.
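The inferential step the abstract describes (an independent two-sample t-test comparing the two groups' scores) can be sketched as follows. The score data here are hypothetical, invented for illustration, since the abstract reports only the conclusion of no significant difference.

```python
import numpy as np

# Hypothetical posttest scores for the two treatment groups
# (illustrative only; the study's actual scores are not given).
rme = np.array([82, 85, 78, 90, 88, 84, 79, 86, 83, 87], dtype=float)
ps = np.array([80, 84, 81, 88, 85, 83, 82, 86, 84, 85], dtype=float)

def independent_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

t_stat = independent_t(rme, ps)
# With df = na + nb - 2 = 18, the two-sided critical value at alpha = 0.05
# is about 2.10; |t| below that is consistent with "no difference".
```

In practice `scipy.stats.ttest_ind` performs the same computation and also returns the p-value; the manual form above just makes the pooled-variance arithmetic explicit.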
Tayebi Meybodi, Ali; Benet, Arnau; Rodriguez Rubio, Roberto; Yousef, Sonia; Lawton, Michael T
2018-03-03
The orbitozygomatic approach is generally advocated over the pterional approach for basilar apex aneurysms. However, the impact of the extensions of the pterional approach on the maneuverability obtained over multiple vascular targets (relevant to basilar apex surgery) has not been studied before. Our objective was to analyze the patterns of change in surgical freedom across the basilar bifurcation between the pterional, orbitopterional, and orbitozygomatic approaches. Surgical freedom was assessed for 3 vascular targets important in basilar apex aneurysm surgery (the ipsilateral and contralateral P1-P2 junctions, and the basilar apex) and compared between the pterional, orbitopterional, and orbitozygomatic approaches in 10 cadaveric specimens. Transitioning from the pterional to the orbitopterional approach, surgical freedom increased significantly at all 3 targets (P < .05). However, the gain in surgical freedom declined progressively from the most superficial target (60% for the ipsilateral P1-P2 junction) to the deepest target (35% for the contralateral P1-P2 junction). Conversely, transitioning from the orbitopterional to the orbitozygomatic approach, the gain in surgical freedom was minimal for the ipsilateral P1-P2 junction and basilar apex (<4%), but increased dramatically to 19% at the contralateral P1-P2 junction. The orbitopterional approach provides a remarkable increase in surgical maneuverability compared to the pterional approach for the basilar apex target and the relevant adjacent arterial targets. However, compared to the orbitopterional approach, the orbitozygomatic approach adds little maneuverability except for the deepest target (ie, the contralateral P1-P2 junction). Therefore, the orbitozygomatic approach may be most efficacious for larger basilar apex aneurysms that limit control over the contralateral P1 PCA.
Cousijn, Janna; Goudriaan, Anna E; Ridderinkhof, K Richard; van den Brink, Wim; Veltman, Dick J; Wiers, Reinout W
2012-01-01
A potentially powerful predictor for the course of drug (ab)use is the approach-bias, that is, the pre-reflective tendency to approach rather than avoid drug-related stimuli. Here we investigated the neural underpinnings of cannabis approach and avoidance tendencies. By elucidating the predictive power of neural approach-bias activations for future cannabis use and problem severity, we aimed at identifying new intervention targets. Using functional Magnetic Resonance Imaging (fMRI), neural approach-bias activations were measured with a Stimulus Response Compatibility task (SRC) and compared between 33 heavy cannabis users and 36 matched controls. In addition, associations were examined between approach-bias activations and cannabis use and problem severity at baseline and at six-month follow-up. Approach-bias activations did not differ between heavy cannabis users and controls. However, within the group of heavy cannabis users, a positive relation was observed between total lifetime cannabis use and approach-bias activations in various fronto-limbic areas. Moreover, approach-bias activations in the dorsolateral prefrontal cortex (DLPFC) and anterior cingulate cortex (ACC) independently predicted cannabis problem severity after six months over and beyond session-induced subjective measures of craving. Higher DLPFC/ACC activity during cannabis approach trials, but lower activity during cannabis avoidance trials were associated with decreases in cannabis problem severity. These findings suggest that cannabis users with deficient control over cannabis action tendencies are more likely to develop cannabis related problems. Moreover, the balance between cannabis approach and avoidance responses in the DLPFC and ACC may help identify individuals at-risk for cannabis use disorders and may be new targets for prevention and treatment.
Laparoscopic surgery for esophageal achalasia: Multiport vs single-incision approach.
Fukuda, Shuichi; Nakajima, Kiyokazu; Miyazaki, Yasuhiro; Takahashi, Tsuyoshi; Makino, Tomoki; Kurokawa, Yukinori; Yamasaki, Makoto; Miyata, Hiroshi; Takiguchi, Shuji; Mori, Masaki; Doki, Yuichiro
2016-02-01
Single-incision laparoscopic surgery (SILS) can potentially improve aesthetic outcomes without adversely affecting treatment outcomes, but these outcomes are uncertain in laparoscopic Heller-Dor surgery. We determined whether patient satisfaction with aesthetic outcomes improved, with equivalent treatment outcomes, after the introduction of a single-incision approach to laparoscopic Heller-Dor surgery. We retrospectively reviewed 20 consecutive esophageal achalasia patients (multiport approach, n = 10; single-incision approach, n = 10) and assessed the treatment outcomes and patient satisfaction with the aesthetic outcomes. In the single-incision approach, thin supportive devices were routinely used to gain exposure to the esophageal hiatus. No statistically significant differences in the operating time (210.2 ± 28.8 vs 223.5 ± 46.3 min; P = 0.4503) or blood loss (14.0 ± 31.7 vs 16.0 ± 17.8 mL; P = 0.8637) were detected between the multiport and single-incision approaches. We experienced no intraoperative complications. Mild dysphagia, which resolved spontaneously, was noted postoperatively in one patient treated with the multiport approach. The reduction rate of the maximum lower esophageal sphincter pressure was 25.1 ± 34.4% for the multiport approach and 21.8 ± 19.2% for the single-incision approach (P = 0.8266). Patient satisfaction with aesthetic outcomes was greater for the single-incision approach than for the multiport approach. When single-incision laparoscopic Heller-Dor surgery was performed adequately and combined with the use of thin supportive devices, patient satisfaction with the aesthetic outcomes was higher and treatment outcomes were equivalent to those of the multiport approach. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and John Wiley & Sons Australia, Ltd.
NASA Astrophysics Data System (ADS)
Coatanoan, C.; Goyet, C.; Gruber, N.; Sabine, C. L.; Warner, M.
2001-03-01
This study compares two recent estimates of anthropogenic CO2 in the northern Indian Ocean along the World Ocean Circulation Experiment cruise II [Goyet et al., 1999; Sabine et al., 1999]. These two studies employed two different approaches to separate the anthropogenic CO2 signal from the large natural background variability. Sabine et al. [1999] used the ΔC* approach first described by Gruber et al. [1996], whereas Goyet et al. [1999] used an optimum multiparameter mixing analysis referred to as the MIX approach. Both approaches make use of similar assumptions in order to remove variations due to remineralization of organic matter and the dissolution of calcium carbonates (biological pumps). However, the two approaches use very different hypotheses in order to account for variations due to physical processes, including mixing and the CO2 solubility pump. Consequently, substantial differences exist in the upper thermocline, approximately between 200 and 600 m. Anthropogenic CO2 concentrations estimated using the ΔC* approach average 12 ± 4 μmol kg⁻¹ higher in this depth range than concentrations estimated using the MIX approach. Below ~800 m, the MIX approach estimates slightly higher anthropogenic CO2 concentrations and a deeper vertical penetration. Despite this compensatory effect, water column inventories estimated in the 0-3000 m depth range by the ΔC* approach are generally ~20% higher than those estimated by the MIX approach, with this difference being statistically significant beyond the 0.001 level. We examine possible causes for these differences and identify a number of critical additional measurements that will make it possible to discriminate better between the two approaches.
2012-01-01
Background: Dieting has historically been the main behavioural treatment paradigm for overweight/obesity, although a non-dieting paradigm has more recently emerged based on the criticisms of the original dieting approach. There is a dearth of research contrasting why these approaches are adopted. To address this, we conducted a qualitative investigation into the determinants of dieting and non-dieting approaches based on the perspectives and experiences of overweight/obese Australian adults. Methods: Grounded theory was used inductively to generate a model of themes contrasting the determinants of dieting and non-dieting approaches based on the perspectives of 21 overweight/obese adults. Data were collected using semi-structured interviews to elicit in-depth individual experiences and perspectives. Results: Several categories emerged which distinguished between the adoption of a dieting or non-dieting approach. These categories included the focus of each approach (weight/image or lifestyle/health behaviours); internal or external attributions about dieting failure; attitudes towards established diets; and personal autonomy. Personal autonomy was also influenced by another category: the perceived knowledge and self-efficacy about each approach, with adults more likely to choose an approach they knew more about and were confident in implementing. The time perspective of change (short or long-term) and the perceived identity of the person (fat/dieter or healthy person) also emerged as determinants of dieting or non-dieting approaches, respectively. Conclusions: The model of determinants elicited from this study assists in understanding why dieting and non-dieting approaches are adopted, from the perspectives and experiences of overweight/obese adults.
Understanding this decision-making process can assist clinicians and public health researchers to design and tailor dieting and non-dieting interventions to population subgroups that have preferences and characteristics suitable for each approach. PMID:23249115
Effect of inlet modelling on surface drainage in coupled urban flood simulation
NASA Astrophysics Data System (ADS)
Jang, Jiun-Huei; Chang, Tien-Hao; Chen, Wei-Bo
2018-07-01
For a highly developed urban area with complete drainage systems, flood simulation is necessary for describing the flow dynamics from rainfall to surface runoff and sewer flow. In this study, a coupled flood model based on diffusion wave equations was proposed to simulate one-dimensional sewer flow and two-dimensional overland flow simultaneously. The overland flow model provides details on the rainfall-runoff process to estimate the excess runoff that enters the sewer system through street inlets for sewer flow routing. Three types of inlet modelling are considered in this study, including the manhole-based approach that ignores the street inlets by draining surface water directly into manholes, the inlet-manhole approach that drains surface water into manholes that are each connected to multiple inlets, and the inlet-node approach that drains surface water into sewer nodes that are connected to individual inlets. The simulation results were validated against a high-intensity rainstorm event that occurred in 2015 in Taipei City. In the verification of the maximum flood extent, the two approaches that considered street inlets performed considerably better than the approach without street inlets. When considering the aforementioned models in terms of temporal flood variation, using manholes as receivers leads to an overall inefficient draining of the surface water, either by the manhole-based approach or by the inlet-manhole approach. Using the inlet-node approach is more reasonable than using the inlet-manhole approach because the inlet-node approach greatly reduces the fluctuation of the sewer water level. The inlet-node approach is more efficient in draining surface water, reducing flood volume by 13% compared with the inlet-manhole approach and by 41% compared with the manhole-based approach. The results show that inlet modelling has a strong influence on drainage efficiency in coupled flood simulation.
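The three inlet treatments described above differ only in where, and through what capacity limit, surface water enters the sewer network. As a minimal sketch of that difference (not the authors' diffusion-wave model), with invented runoff rates, inlet capacities, and a two-manhole layout:

```python
# Sketch of the three inlet treatments (not the authors' coupled model).
# Runoff rates, inlet capacities, and the inlet->manhole layout are invented.

surface_runoff = {"inlet_A": 5.0, "inlet_B": 3.0, "inlet_C": 4.0}   # m^3/s arriving at each street inlet
inlet_capacity = {"inlet_A": 2.0, "inlet_B": 2.0, "inlet_C": 2.0}   # m^3/s each inlet can swallow
inlet_to_manhole = {"inlet_A": "MH1", "inlet_B": "MH1", "inlet_C": "MH2"}

def drain(approach):
    """Return (flow entering each sewer receiver, total water left ponding)."""
    entered, ponded = {}, 0.0
    for inlet, q in surface_runoff.items():
        if approach == "manhole-based":
            taken, receiver = q, inlet_to_manhole[inlet]   # inlets ignored, uncapped
        elif approach == "inlet-manhole":
            taken = min(q, inlet_capacity[inlet])          # capped at the inlet...
            receiver = inlet_to_manhole[inlet]             # ...but pooled at a shared manhole
        elif approach == "inlet-node":
            taken = min(q, inlet_capacity[inlet])          # capped at the inlet...
            receiver = inlet                               # ...and routed to its own sewer node
        else:
            raise ValueError(approach)
        entered[receiver] = entered.get(receiver, 0.0) + taken
        ponded += q - taken
    return entered, ponded
```

On these toy numbers the manhole-based variant drains everything instantly with no ponding, while both capacity-limited variants leave the same water on the street but deliver it to different receivers, which is the distinction the abstract ties to sewer-water-level fluctuation.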
Icing detection from geostationary satellite data using machine learning approaches
NASA Astrophysics Data System (ADS)
Lee, J.; Ha, S.; Sim, S.; Im, J.
2015-12-01
Icing can cause significant structural damage to aircraft during flight, resulting in various aviation accidents. Icing studies have typically been performed using two approaches: one is a numerical model-based approach and the other is a remote sensing-based approach. The model-based approach diagnoses aircraft icing using numerical atmospheric parameters such as temperature, relative humidity, and vertical thermodynamic structure. According to the literature, this approach tends to overestimate icing. The remote sensing-based approach typically uses meteorological satellite/ground sensor data such as Geostationary Operational Environmental Satellite (GOES) and Dual-Polarization radar data. This approach detects icing areas by applying thresholds to parameters such as liquid water path and cloud optical thickness derived from remote sensing data. In this study, we propose an aircraft icing detection approach which optimizes thresholds for L1B bands and/or Cloud Optical Thickness (COT) from the Communication, Ocean and Meteorological Satellite-Meteorological Imager (COMS MI) and the newly launched Himawari-8 Advanced Himawari Imager (AHI) over East Asia. The proposed approach uses machine learning algorithms including decision trees (DT) and random forest (RF) for optimizing thresholds of L1B data and/or COT. Pilot Reports (PIREPs) from South Korea and Japan were used as icing reference data. Results show that RF produced a lower false alarm rate (1.5%) and a higher overall accuracy (98.8%) than DT (8.5% and 75.3%), respectively. The RF-based approach was also compared with the existing COMS MI and GOES-R icing mask algorithms. The agreements of the proposed approach with the existing two algorithms were 89.2% and 45.5%, respectively. The lower agreement with the GOES-R algorithm was possibly due to the high uncertainty of the cloud phase product from COMS MI.
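The skill scores quoted above (false alarm rate and overall accuracy) can be reproduced for any binary icing classifier. This sketch uses invented labels in place of PIREP reports and one common definition of false alarm rate (false positives over all positive predictions); the study's exact scoring convention may differ:

```python
# Toy confusion-matrix metrics of the kind quoted above. "1" = icing.
# One common convention: false alarm rate = FP / (FP + TP), the fraction
# of icing calls that were wrong. The labels below are invented.

def false_alarm_rate(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    return fp / (fp + tp) if (fp + tp) else 0.0

def overall_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # stand-in PIREP truth
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]   # stand-in classifier output
```

These are the same two numbers that separate RF (1.5% false alarms, 98.8% accuracy) from DT (8.5%, 75.3%) in the study.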
Leblanc, Fabien; Delaney, Conor P; Ellis, Clyde N; Neary, Paul C; Champagne, Bradley J; Senagore, Anthony J
2010-12-01
We hypothesized that simulator-generated metrics and intraoperative errors may be able to differentiate the technical differences between hand-assisted laparoscopic (HAL) and straight laparoscopic (SL) approaches. Thirty-eight trainees performed two laparoscopic sigmoid colectomies on an augmented reality simulator, randomly starting by a SL (n = 19) or HAL (n = 19) approach. Both approaches were compared according to simulator-generated metrics, and intraoperative errors were collected by faculty. Sixty-four percent of surgeons were experienced (>50 procedures) with open colon surgery. Fifty-five percent and 69% of surgeons were inexperienced (<10 procedures) with SL and HAL colon surgery, respectively. Time (P < 0.001), path length (P < 0.001), and smoothness (P < 0.001) were lower with the HAL approach. Operative times for sigmoid and splenic flexure mobilization and for the colorectal anastomosis were significantly shorter with the HAL approach. Time to control the vascular pedicle was similar between both approaches. Error rates were similar between both approaches. Operative time, path length, and smoothness correlated directly with the error rate for the HAL approach. In contrast, error rate inversely correlated with the operative time for the SL approach. A HAL approach for sigmoid colectomy accelerated colonic mobilization and anastomosis. The difference in correlation between both laparoscopic approaches and error rates suggests the need for different skills to perform the HAL and the SL sigmoid colectomy. These findings may explain the preference of some surgeons for a HAL approach early in the learning of laparoscopic colorectal surgery.
Cousijn, Janna; Goudriaan, Anna E.; Ridderinkhof, K. Richard; van den Brink, Wim; Veltman, Dick J.; Wiers, Reinout W.
2012-01-01
A potentially powerful predictor for the course of drug (ab)use is the approach-bias, that is, the pre-reflective tendency to approach rather than avoid drug-related stimuli. Here we investigated the neural underpinnings of cannabis approach and avoidance tendencies. By elucidating the predictive power of neural approach-bias activations for future cannabis use and problem severity, we aimed at identifying new intervention targets. Using functional Magnetic Resonance Imaging (fMRI), neural approach-bias activations were measured with a Stimulus Response Compatibility task (SRC) and compared between 33 heavy cannabis users and 36 matched controls. In addition, associations were examined between approach-bias activations and cannabis use and problem severity at baseline and at six-month follow-up. Approach-bias activations did not differ between heavy cannabis users and controls. However, within the group of heavy cannabis users, a positive relation was observed between total lifetime cannabis use and approach-bias activations in various fronto-limbic areas. Moreover, approach-bias activations in the dorsolateral prefrontal cortex (DLPFC) and anterior cingulate cortex (ACC) independently predicted cannabis problem severity after six months over and beyond session-induced subjective measures of craving. Higher DLPFC/ACC activity during cannabis approach trials, but lower activity during cannabis avoidance trials were associated with decreases in cannabis problem severity. These findings suggest that cannabis users with deficient control over cannabis action tendencies are more likely to develop cannabis related problems. Moreover, the balance between cannabis approach and avoidance responses in the DLPFC and ACC may help identify individuals at-risk for cannabis use disorders and may be new targets for prevention and treatment. PMID:22957019
ERIC Educational Resources Information Center
Asikainen, Henna; Gijbels, David
2017-01-01
The focus of the present paper is on the contribution of the research in the student approaches to learning tradition. Several studies in this field have started from the assumption that students' approaches to learning develop towards more deep approaches to learning in higher education. This paper reports on a systematic review of longitudinal…
ERIC Educational Resources Information Center
California State Univ., Fresno. Dept. of Home Economics.
This competency-based preservice home economics teacher education module on consumer approach to textiles and clothing is the first in a set of four modules on consumer education related to textiles and clothing. (This set is part of a larger series of sixty-seven modules on the Management Approach to Teaching Consumer and Homemaking Education…
Review article: Surgical approaches for correction of post-tubercular kyphosis.
Panchmatia, Jaykar R; Lenke, Lawrence G; Molloy, Sean; Cheung, Kenneth M C; Kebaish, Khaled M
2015-12-01
This study reviewed the literature regarding the pros and cons of various surgical approaches (anterior, anterolateral, combined, and posterior) for correction of post-tubercular kyphosis. The anterior and anterolateral approaches are effective in improving neurological deficit but not in correcting kyphosis. The combined anterior and posterior approach and the posterior approach combined with 3-column osteotomy achieve good neurological improvement and kyphosis correction. The latter is superior when expertise and facilities are available.
2008-06-01
joint classification; b. hot-spot-stress approach; c. notch-stress approach; d. mesh-insensitive approach; 2. fracture mechanics (used for crack... classification approach, which is an adaptation of the nominal stress approach just discussed, with the welded joint fatigue curves as given in Table... used. More detail is provided on the joint classifications, and graphic representations are also included. It is explained that the stress
A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction
2010-08-09
more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of... conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit, if one... converges, one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several
A dynamical-systems approach for computing ice-affected streamflow
Holtschlag, David J.
1996-01-01
A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.
Ionospheric very low frequency transmitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, Spencer P.
2015-02-15
The theme of this paper is to establish a reliable ionospheric very low frequency (VLF) transmitter, which is also broad band. Two approaches are studied that generate VLF waves in the ionosphere. The first, classic approach employs a ground-based HF heater to directly modulate the high latitude ionospheric, or auroral electrojet. In the classic approach, the intensity-modulated HF heater induces an alternating current in the electrojet, which serves as a virtual antenna to transmit VLF waves. The spatial and temporal variations of the electrojet impact the reliability of the classic approach. The second, beat-wave approach also employs a ground-based HF heater; however, in this approach, the heater operates in a continuous wave mode at two HF frequencies separated by the desired VLF frequency. Theories for both approaches are formulated, calculations performed with numerical model simulations, and the calculations are compared to experimental results. Theory for the classic approach shows that an HF heater wave, intensity-modulated at VLF, modulates the electron temperature dependent electrical conductivity of the ionospheric electrojet, which, in turn, induces an ac electrojet current. Thus, the electrojet becomes a virtual VLF antenna. The numerical results show that the radiation intensity of the modulated electrojet decreases with an increase in VLF radiation frequency. Theory for the beat wave approach shows that the VLF radiation intensity depends upon the HF heater intensity rather than the electrojet strength, and yet this approach can also modulate the electrojet when present. HF heater experiments were conducted for both the intensity modulated and beat wave approaches. VLF radiations were generated and the experimental results confirm the numerical simulations. Theory and experimental results both show that in the absence of the electrojet, VLF radiation from the F-region is generated via the beat wave approach.
Additionally, the beat wave approach generates VLF radiation over a larger frequency band than the modulated electrojet.
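The beat-wave idea rests on a standard trigonometric fact: two continuous tones separated by Δf produce an intensity, and hence a heating rate, that oscillates at Δf. A small numpy sketch with kHz-range stand-in frequencies (real heaters run at MHz, but the arithmetic is identical):

```python
import numpy as np

# Toy demonstration of the beat-wave principle with kHz stand-in frequencies.
n, fs = 100_000, 100_000.0          # one second sampled at 100 kHz
t = np.arange(n) / fs
f1, f2 = 10_000.0, 10_100.0         # two continuous-wave "HF" tones
vlf = f2 - f1                       # desired beat frequency, 100 Hz

field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
intensity = field ** 2              # heating follows wave intensity, not the field

spec = np.abs(np.fft.rfft(intensity))
freqs = np.fft.rfftfreq(n, 1 / fs)
low = (freqs > 0) & (freqs < 1_000)            # search below 1 kHz, excluding DC
beat_peak = freqs[low][np.argmax(spec[low])]   # strongest low-frequency line
```

The intensity spectrum contains DC, the 100 Hz difference line, and lines near 2f1, f1+f2, and 2f2; only the difference line falls at VLF, which is the component the modulated electron temperature can follow.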
A potential approach for low flow selection in water resource supply and management
NASA Astrophysics Data System (ADS)
Ouyang, Ying
2012-08-01
Low flow selections are essential to water resource management, water supply planning, and watershed ecosystem restoration. In this study, a new approach, namely the frequent-low (FL) approach (or frequent-low index), was developed based on the minimum frequent-low flow or level used in the minimum flows and/or levels program in northeast Florida, USA. This FL approach was then compared to the conventional 7Q10 approach for low flow selections prior to its applications, using USGS flow data from a freshwater environment (Big Sunflower River, Mississippi) as well as from an estuarine environment (St. Johns River, Florida). Unlike the FL approach, which is associated with biological and ecological impacts, the 7Q10 approach can lead to the selection of extremely low flows (e.g., near-zero flows), which may hinder its use for establishing criteria to prevent streams from significant harm to biological and ecological communities. Additionally, the 7Q10 approach cannot, by definition, be used when the period of data records is less than 10 years, whereas this is not the case for the FL approach. Results from both approaches showed that the low flows of the Big Sunflower River and the St. Johns River decreased as time elapsed, demonstrating that these two rivers have become drier during the last several decades, with a potential for saltwater intrusion into the St. Johns River. Results from the FL approach further revealed that the recurrence probability of low flow increased while the recurrence interval of low flow decreased as time elapsed in both rivers, indicating that low flows occurred more frequently in these rivers over time. This report suggests that the FL approach, developed in this study, is a useful alternative for low flow selections in addition to the 7Q10 approach.
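For contrast with the FL index, the 7Q10 statistic referenced above can be sketched directly: the lowest 7-day mean flow in each year of record, then the value with roughly a 10-year recurrence interval. The flow series below is synthetic, and the quantile step uses a simple Weibull plotting-position rank rather than a fitted distribution:

```python
# Sketch of the 7Q10 statistic (lowest 7-day mean flow, 10-year recurrence).
# The 19 "years" of daily flows are synthetic; real practice fits a
# low-flow frequency distribution instead of picking a rank directly.

def seven_day_minima(daily_flows_by_year):
    """Lowest 7-day moving-average flow in each year of record."""
    minima = []
    for flows in daily_flows_by_year:
        means = [sum(flows[i:i + 7]) / 7 for i in range(len(flows) - 6)]
        minima.append(min(means))
    return minima

def q10(annual_minima):
    """Annual 7-day minimum with ~10% non-exceedance (10-year recurrence)."""
    xs = sorted(annual_minima)
    rank = max(1, round(0.1 * (len(xs) + 1)))   # Weibull plotting position i/(n+1)
    return xs[rank - 1]

years = [[float(k)] * 30 for k in range(1, 20)]   # year k has constant flow k
```

The rank step makes the abstract's point concrete: the 10-year recurrence level is only meaningful once the record spans at least 10 annual minima.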
Chou, Eva; Liu, Jun; Seaworth, Cathleen; Furst, Meredith; Amato, Malena M; Blaydon, Sean M; Durairaj, Vikram D; Nakra, Tanuj; Shore, John W
To compare revision rates for ptosis surgery between posterior-approach and anterior-approach ptosis repair techniques. This is a retrospective, consecutive cohort study of all patients undergoing ptosis surgery at a high-volume oculofacial plastic surgery practice over a 4-year period. A retrospective chart review was conducted of all patients undergoing posterior-approach and anterior-approach ptosis surgery for all etiologies of ptosis between 2011 and 2014. Etiology of ptosis, concurrent oculofacial surgeries, revision, and complications were analyzed. The main outcome measure was the ptosis revision rate. A total of 1519 patients were included in this study. The mean age was 63 ± 15.4 years. A total of 1056 patients (70%) were female, 1451 (95%) had involutional ptosis, and 1129 (74.3%) had concurrent upper blepharoplasty. Five hundred thirteen (33.8%) underwent posterior-approach ptosis repair, and 1006 (66.2%) underwent anterior-approach ptosis repair. The degree of ptosis was greater in the anterior-approach ptosis repair group. The overall revision rate for all patients was 8.7%. Of the posterior group, 6.8% required ptosis revision; of the anterior group, 9.5% required revision surgery. The main reason for ptosis revision surgery was undercorrection of one or both eyelids. Concurrent brow lifting was associated with a decreased, but not statistically significant, rate of revision surgery. Patients who underwent unilateral ptosis surgery had a 5.1% rate of Hering's phenomenon requiring ptosis repair in the contralateral eyelid. Multivariable logistic regression for predictive factors showed that, when adjusted for gender and concurrent blepharoplasty, the revision rate in anterior-approach ptosis surgery is higher than posterior-approach ptosis surgery (odds ratio = 2.08; p = 0.002). The overall revision rate in patients undergoing ptosis repair via posterior-approach or anterior-approach techniques is 8.7%.
There is a statistically higher rate of revision with anterior-approach ptosis repair.
Zijlstra, Wierd P; De Hartog, Bas; Van Steenbergen, Liza N; Scheurs, B Willem; Nelissen, Rob G H H
2017-01-01
Background and purpose: Recurrent dislocation is the commonest cause of early revision of a total hip arthroplasty (THA). We examined the effect of femoral head size and surgical approach on revision rate for dislocation, and for other reasons, after THA. Patients and methods: We analyzed data on 166,231 primary THAs and 3,754 subsequent revision THAs performed between 2007 and 2015, registered in the Dutch Arthroplasty Register (LROI). Revision rates for dislocation, and for all other causes, were calculated by competing-risk analysis at 6-year follow-up. Multivariable Cox proportional hazard regression ratios (HRs) were used for comparisons. Results: The posterolateral approach was associated with higher dislocation revision risk (HR = 1) than straight lateral, anterolateral, and anterior approaches (HR = 0.5–0.6). However, the risk of revision for all other reasons (especially stem loosening) was higher with anterior and anterolateral approaches (HR = 1.2) and lowest with the posterolateral approach (HR = 1). For all approaches, 32-mm heads reduced the risk of revision for dislocation compared to 22- to 28-mm heads (HR = 1 and 1.6, respectively), while the risk of revision for other causes remained unchanged. 36-mm heads further reduced the risk of revision for dislocation, but only with the posterolateral approach (HR = 0.6), while the risk of revision for other reasons was unchanged. With the anterior approach, 36-mm heads increased the risk of revision for other reasons (HR = 1.5). Interpretation: Compared to the posterolateral approach, direct anterior and anterolateral approaches reduce the risk of revision for dislocation, but at the cost of more stem revisions and other revisions. For all approaches, there is benefit in using 32-mm heads instead of 22- to 28-mm heads. For the posterolateral approach, 36-mm heads can safely further reduce the risk of revision for dislocation. PMID:28440704
Lädermann, Alexandre; Denard, Patrick Joel; Tirefort, Jérome; Collin, Philippe; Nowak, Alexandra; Schwitzguebel, Adrien Jean-Pierre
2017-07-14
With the growth of reverse shoulder arthroplasty (RSA), it is becoming increasingly necessary to establish the most cost-effective methods for the procedure. The surgical approach is one factor that may influence the cost and outcome of RSA. The purpose of this study was to compare the clinical results of a subscapularis- and deltoid-sparing (SSCS) approach to a traditional deltopectoral (TDP) approach for RSA. The hypothesis was that the SSCS approach would be associated with decreased length of stay (LOS), equal complication rate, and better short-term outcomes compared to the TDP approach. A prospective evaluation was performed on patients undergoing RSA over a 2-year period. A deltopectoral incision was used followed by either an SSCS approach or a traditional tenotomy of the subscapularis (TDP). LOS, adverse events, physical therapy utilization, and patient satisfaction were collected in the 12 months following RSA. LOS was shorter with the SSCS approach compared to the TDP approach (8.2 ± 6.4 days vs 15.2 ± 11.9 days; P = 0.04). At 3 months postoperative, the single assessment numeric evaluation score (80 ± 11% vs 70 ± 6%; P = 0.04) and active elevation (130 ± 22° vs 109 ± 24°; P = 0.01) were higher in the SSCS group. The SSCS approach resulted in a net cost savings of $5900 per patient. Postoperative physical therapy, pain levels, and patient satisfaction were comparable in both groups. No immediate intraoperative complications were noted. Using a SSCS approach is an option for patients requiring RSA. Overall LOS is minimized compared to a TDP approach with subscapularis tenotomy. The SSCS approach may provide substantial healthcare cost savings, without increasing complication rate or decreasing patient satisfaction.
Koh, Vicky Y; Buhari, Shaik A; Tan, Poh Wee; Tan, Yun Inn; Leong, Yuh Fun; Earnest, Arul; Tang, Johann I
2014-06-01
Currently, there are two described methods of catheter insertion for women undergoing multicatheter interstitial accelerated partial breast irradiation (APBI). These are a volume based template approach (template) and a non-template ultrasound guidance freehand approach (non-template). We aim to compare dosimetric endpoints between the template and non-template approach. Twenty patients, who received adjuvant multicatheter interstitial APBI between August 2008 and March 2010, formed the study cohort. Dosimetric planning was based on the RTOG 04-13 protocol. For standardization, the planning target volume evaluation (PTV-Eval) and organs at risk were contoured with the assistance of the attending surgeon. Dosimetric endpoints include D90 of the PTV-Eval, Dose Homogeneity Index (DHI), V200, maximum skin dose (MSD), and maximum chest wall dose (MCD). A median of 18 catheters was used per patient. The dose prescribed was 34 Gy in 10 fractions BID over 5 days. The average breast volume was 846 cm³ (526-1384) for the entire cohort and there was no difference between the two groups (p = 0.6). Insertion time was significantly longer for the non-template approach (mean 150 minutes) compared to the template approach (mean: 90 minutes) (p = 0.02). The planning time was also significantly longer for the non-template approach (mean: 240 minutes) compared to the template approach (mean: 150 minutes) (p < 0.01). The template approach yielded a higher D90 (mean: 95%) compared to the non-template approach (mean: 92%) (p < 0.01). There were no differences in DHI (p = 0.14), V200 (p = 0.21), MSD (p = 0.7), and MCD (p = 0.8). Compared to the non-template approach, the template approach offered significantly shorter insertion and planning times with significantly improved dosimetric PTV-Eval coverage without significantly compromising organs at risk dosimetrically.
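The dosimetric endpoints compared above (D90, V-style volumes, DHI) are all simple reductions of the per-voxel dose distribution. A sketch with invented voxel doses, using one common set of definitions; the RTOG 04-13 scoring used in the study may differ:

```python
# Toy computation of the dosimetric endpoints named above. Definitions follow
# one common convention (and a naive percentile); per-voxel doses are invented.

def d90(doses, prescription):
    """Dose covering 90% of the target, as % of prescription."""
    xs = sorted(doses, reverse=True)
    return 100.0 * xs[int(0.9 * len(xs)) - 1] / prescription

def v_pct(doses, prescription, pct):
    """Fraction of the target receiving at least pct% of the prescription."""
    threshold = prescription * pct / 100.0
    return sum(d >= threshold for d in doses) / len(doses)

def dhi(doses, prescription):
    """Dose homogeneity index: (V100 - V150) / V100."""
    v100 = v_pct(doses, prescription, 100)
    return (v100 - v_pct(doses, prescription, 150)) / v100

doses = [30.0, 32.0, 34.0, 35.0, 36.0, 38.0, 40.0, 52.0, 60.0, 70.0]  # Gy, toy PTV-Eval voxels
prescription = 34.0   # Gy, the 34 Gy / 10 fractions prescription above
```

A higher D90 means better target coverage (the template approach's advantage here), while DHI penalizes the hot spots that interstitial implants produce near the catheters.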
[New anterolateral approach of distal femur for treatment of distal femoral fractures].
Zhang, Bin; Dai, Min; Zou, Fan; Luo, Song; Li, Binhua; Qiu, Ping; Nie, Tao
2013-11-01
To assess the effectiveness of the new anterolateral approach of the distal femur for the treatment of distal femoral fractures. Between July 2007 and December 2009, 58 patients with distal femoral fractures were treated, 28 by the new anterolateral approach of the distal femur (new approach group) and 30 by the conventional approach (conventional approach group). There was no significant difference in gender, age, cause of injury, affected side, type of fracture, disease duration, complication, or preoperative intervention (P > 0.05). The operation time, intraoperative blood loss, intraoperative fluoroscopy frequency, hospitalization days, and Hospital for Special Surgery (HSS) score of the knee were recorded. Operation was successfully completed in all patients of both groups, and healing of the incision by first intention was obtained; no vascular or nerve injuries occurred. The operation time and intraoperative fluoroscopy frequency of the new approach group were significantly less than those of the conventional approach group (P < 0.05), but the intraoperative blood loss and the hospitalization days showed no significant difference between the 2 groups (P > 0.05). All patients were followed up 12-36 months (mean, 19.8 months). Bone union was shown on X-ray films; the fracture healing time was 12.62 ± 2.34 weeks in the new approach group and 13.78 ± 1.94 weeks in the conventional approach group, showing no significant difference (t=2.78, P=0.10). The knee HSS score at last follow-up was 94.4 ± 4.2 in the new approach group and 89.2 ± 6.0 in the conventional approach group, showing a significant difference between the 2 groups (t=3.85, P=0.00). The new anterolateral approach of the distal femur for distal femoral fractures has the advantages of full exposure, minimal tissue trauma, and early functional rehabilitation training, enhancing functional recovery of the knee joint.
The dynamical systems approach to numerical integration
NASA Astrophysics Data System (ADS)
Wisdom, Jack
2018-03-01
The dynamical systems approach to numerical integration is reviewed and extended. The new method is compared to some alternative methods based on the Lie series approach. The test problem is the motion of the outer planets. The algorithms developed using the dynamical systems approach perform well.
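The abstract does not spell the algorithms out, so as a generic illustration of why symplectic, dynamical-systems-style integrators suit long planetary runs, here is a leapfrog (kick-drift-kick) step on a harmonic oscillator; its energy error stays bounded instead of drifting:

```python
# Leapfrog (kick-drift-kick) on a unit harmonic oscillator: a generic
# symplectic integrator, standing in for the paper's (unspecified) mappings.

def leapfrog(q, p, dt, n_steps, force=lambda q: -q):
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # full drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, dt=0.01, n_steps=100_000)   # ~159 oscillation periods
energy_drift = abs(0.5 * (q * q + p * p) - 0.5 * (q0 * q0 + p0 * p0))
```

A non-symplectic scheme of comparable cost would typically show secular energy drift over runs this long; the bounded error here is the property exploited in long solar-system integrations.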
A Simplified Decision Support Approach for Evaluating Wetlands Ecosystem Services
We will be presenting a simplified approach to evaluating ecosystem services provided by freshwater wetlands restoration. Our approach is based on an existing functional assessment approach developed by Golet and Miller for the State of Rhode Island, and modified by Miller for ap...
78 FR 48932 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... subject to the advanced approaches risk-based capital rules (advanced approaches banking organizations... approaches banking organizations that are savings and loan holding companies and that are subject to the... advanced approaches banking organization is required to file quarterly regulatory capital data. The...
ERIC Educational Resources Information Center
Fahnestock, Jeanne; Secor, Marie
A genre approach to teaching the argumentative essay in composition classes has been developed. The need for this approach emanated from problems associated with the other methods of teaching persuasive discourse, such as the logical/analytic, content/problem solving, and rhetorical/generative approaches. The genre approach depends on the…
The CTD2 Center at University of California San Francisco (UCSF-2) used an integrative genomics approach to reveal unidentified mRNA splicing patterns in neuroblastoma.
The Professional Approach to Moral Education.
ERIC Educational Resources Information Center
Wright, Derek
1982-01-01
Defines the professional approach to moral education and contrasts it with the commonsense approach. The professional approach means deliberately planning school life to develop pupils as moral persons. The commonsense approach treats students as members of the moral community, with teachers exercising power and control over them. (RM)
Teaching Writing through Communicative Approach in Military English
ERIC Educational Resources Information Center
Likaj, Manjola
2015-01-01
The paper speaks about teaching writing through communicative approach in English for Specific Purposes, especially in Military English. There are presented three different approaches regarding writing in ESP: product, process and social-constructionist approach. The recent developments in ESP writing consider the social-constructionist approach…
Innovative Approaches to Teaching Technical Communication
ERIC Educational Resources Information Center
Bridgeford, Tracy, Ed.; Kitalong, Karla Saari, Ed.; Selfe, Dickie, Ed.
2004-01-01
"Innovative Approaches to Teaching Technical Communication" offers a variety of activities, projects, and approaches to energize pedagogy in technical communication and to provide a constructive critique of current practice. A practical collection, the approaches recommended here are readily adaptable to a range of technological and institutional…
A comparison of surgical exposures for posterolateral osteochondral lesions of the talar dome.
Mayne, Alistair I W; Lawton, Robert; Reidy, Michael J; Harrold, Fraser; Chami, George
2018-04-01
Perpendicular access to the posterolateral talar dome for the management of osteochondral defects is difficult. We examined the exposure available from each of four surgical approaches. Four surgical approaches were performed on 9 Thiel-embalmed cadavers: anterolateral approach with arthrotomy; anterolateral approach with anterior talo-fibular ligament (ATFL) release; anterolateral approach with anterolateral tibial osteotomy; and anterolateral approach with lateral malleolus osteotomy. The furthest distance posteriorly allowing perpendicular access with a 2 mm K-wire was measured. An anterolateral approach with arthrotomy provided a mean exposure of the anterior third of the lateral talar dome. A lateral malleolus osteotomy provided superior exposure (81.5% vs 58.8%) compared to an anterolateral tibial osteotomy. Only the anterior half of the lateral border of the talar dome could be accessed with an anterolateral approach without osteotomy. A fibular osteotomy provided the best exposure to the posterolateral aspect of the talar dome. Copyright © 2016 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
Evaluation of two-test serodiagnostic method for early Lyme disease in clinical practice.
Trevejo, R T; Krause, P J; Sikand, V K; Schriefer, M E; Ryan, R; Lepore, T; Porter, W; Dennis, D T
1999-04-01
The Centers for Disease Control and Prevention (CDC) recommend a two-test approach for the serodiagnosis of Lyme disease (LD), with EIA testing followed by Western immunoblotting (WB) of EIA-equivocal and -positive specimens. This approach was compared with a simplified two-test approach (WB of EIA equivocals only) and WB alone for early LD. Case-patients with erythema migrans (EM) rash ≥5 cm were recruited from three primary-care practices in LD-endemic areas to provide acute- (S1) and convalescent-phase serum specimens (S2). The simplified approach had the highest sensitivity when either S1 or S2 samples were tested, nearly doubling when S2 were tested, while decreasing slightly for the other two approaches. Accordingly, the simplified approach had the lowest negative likelihood ratio for either S1 or S2. For early LD with EM, the simplified approach performed well and was less costly than the other testing approaches since less WB is required.
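The comparison above turns on sensitivity and the negative likelihood ratio, both of which follow from a standard 2×2 table of test result versus true disease status. A minimal sketch of the arithmetic (the counts here are invented for illustration; they are not the study's data):

```python
# Sensitivity, specificity, and negative likelihood ratio (LR-) from a
# 2x2 table. A lower LR- means a negative result more strongly rules
# out disease. Counts are invented, NOT taken from the study.

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                   # P(test+ | disease)
    specificity = tn / (tn + fp)                   # P(test- | no disease)
    lr_negative = (1 - sensitivity) / specificity  # lower is better
    return sensitivity, specificity, lr_negative

sens, spec, lr_neg = diagnostic_metrics(tp=45, fn=15, tn=90, fp=10)
print(round(sens, 2), round(spec, 2), round(lr_neg, 2))  # 0.75 0.9 0.28
```

This shows why a testing scheme with higher sensitivity (larger `tp` relative to `fn`) yields a lower LR−, as reported for the simplified approach.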
NASA Astrophysics Data System (ADS)
Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao
2012-02-01
Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools in estimating design rainfall/flood for areas with little data available. The purpose of this paper is to investigate the performance of two regional methods, namely the Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in Hanjiang River Basin, one of the largest tributaries of Yangtze River, China. Validation is performed by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that compared to the provincial handbook approach, the Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach most of the time underestimated them. Overall, the Hosking's approach produced more accurate results than the cokriging approach.
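Hosking's regional approach is built on L-moments, which are estimated from probability-weighted moments of the ordered sample. A minimal sketch of the first two sample L-moments and the L-CV (the rainfall depths are invented, not the Hanjiang series):

```python
# First two sample L-moments (l1, l2) and the L-moment coefficient of
# variation (L-CV), the building blocks of Hosking's regional frequency
# analysis. Data values below are invented for illustration.

def sample_l_moments(data):
    x = sorted(data)                # order statistics, ascending
    n = len(x)
    b0 = sum(x) / n                 # probability-weighted moment beta_0
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n  # beta_1
    l1 = b0                         # L-location (the mean)
    l2 = 2 * b1 - b0                # L-scale
    return l1, l2, l2 / l1          # l1, l2, L-CV (tau_2)

l1, l2, lcv = sample_l_moments([42.0, 55.5, 61.2, 73.8, 90.1])
```

In the regional method, at-site L-moment ratios like the L-CV are averaged over a homogeneous region to fit a regional growth curve; that pooling step is not shown here.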
An Overview of Data Routing Approaches for Wireless Sensor Networks
Anisi, Mohammad Hossein; Abdullah, Abdul Hanan; Razak, Shukor Abd; Ngadi, Md. Asri
2012-01-01
Recent years have witnessed a growing interest in deploying large populations of microsensors that collaborate in a distributed manner to gather and process sensory data and deliver them to a sink node through wireless communications systems. Currently, there is a lot of interest in data routing for Wireless Sensor Networks (WSNs) due to their unique challenges compared to conventional routing in wired networks. In WSNs, each data routing approach follows one or more specific goals according to the application. Although the general goal of every data routing approach in WSNs is to extend the network lifetime, and every approach should be aware of the energy level of the nodes, data routing approaches may focus on one or more specific goals depending on the application. Thus, existing approaches can be categorized according to their routing goals. In this paper, the main goals of data routing approaches in sensor networks are described. Then, the best known and most recent data routing approaches in WSNs are classified and studied according to their specific goals. PMID:23443040
Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein
2016-06-01
This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter approach using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features which improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The selected features from the last phase are evaluated using ROC curves, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
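The filtering phase scores each gene by the Fisher criterion, the squared difference of class means divided by the sum of class variances, and keeps the top-ranked genes. A simplified sketch of that phase only (the CLA/ACO wrapper is not reproduced; the expression values and gene names are invented):

```python
# Fisher-criterion filter: score each gene by (mu1 - mu0)^2 / (s1^2 + s0^2)
# between two classes and keep the top-k. Sketch of the filtering phase
# only; gene names and values are invented for illustration.

def fisher_score(values, labels):
    g1 = [v for v, y in zip(values, labels) if y == 1]
    g0 = [v for v, y in zip(values, labels) if y == 0]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / len(xs)
    m1, m0 = mean(g1), mean(g0)
    return (m1 - m0) ** 2 / (var(g1, m1) + var(g0, g0 and m0) + 1e-12)

def top_k_genes(expression, labels, k):
    # expression: dict gene_name -> list of per-sample values
    ranked = sorted(expression,
                    key=lambda g: fisher_score(expression[g], labels),
                    reverse=True)
    return ranked[:k]

labels = [0, 0, 0, 1, 1, 1]
expression = {
    "gene_a": [1.0, 1.1, 0.9, 5.0, 5.2, 4.8],  # well separated classes
    "gene_b": [2.0, 3.0, 2.5, 2.4, 2.9, 2.1],  # overlapping classes
}
print(top_k_genes(expression, labels, 1))  # ['gene_a']
```

The small constant in the denominator guards against zero variance; the paper's pipeline would pass the surviving genes to the CLA/ACO wrapper for subset search.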
Boari, Nicola; Gagliardi, Filippo; Roberti, Fabio; Barzaghi, Lina Raffaella; Caputy, Anthony J; Mortini, Pietro
2013-05-01
Several surgical approaches have been previously reported for the treatment of olfactory groove meningiomas (OGMs). The trans-frontal-sinus subcranial approach (TFSSA) for the removal of large OGMs is described, comparing it with other reported approaches in terms of advantages and drawbacks. The TFSSA was performed on cadaveric specimens to illustrate the surgical technique. The surgical steps of the TFSSA and the related anatomical pictures are reported. The approach was adopted in a clinical setting; a case illustration is reported to demonstrate the feasibility of the described approach and to provide intraoperative pictures. The TFSSA represents a possible route to treat large OGMs. The subcranial approach provides early devascularization of the tumor, direct tumor access from the base without traction on the frontal lobes, good overview of dissection of the optic nerves and anterior cerebral arteries, and dural reconstruction with pedicled pericranial flap. Georg Thieme Verlag KG Stuttgart · New York.
Lichstein, Paul M; Kleimeyer, John P; Githens, Michael; Vorhies, John S; Gardner, Michael J; Bellino, Michael; Bishop, Julius
2018-07-01
A well-reduced femoral neck fracture is more likely to heal than a poorly reduced one, and increasing the quality of the surgical exposure makes it easier to achieve anatomic fracture reduction. Two open approaches are in common use for femoral neck fractures, the modified Smith-Petersen and Watson-Jones; however, to our knowledge, the quality of exposure of the femoral neck provided by each approach has not been investigated. (1) What is the respective area of exposed femoral neck afforded by the Watson-Jones and modified Smith-Petersen approaches? (2) Is there a difference in the ability to visualize and/or palpate important anatomic landmarks provided by the Watson-Jones and modified Smith-Petersen approaches? Ten fresh-frozen human pelves underwent both modified Smith-Petersen (utilizing the caudal extent of the standard Smith-Petersen interval distal to the anterosuperior iliac spine and parallel to the palpable interval between the tensor fascia lata and the sartorius) and Watson-Jones approaches. Dissections were performed by three fellowship-trained orthopaedic traumatologists with extensive experience in both approaches. Exposure (in cm) was quantified with calibrated digital photographs and specialized software. Modified Smith-Petersen approaches were analyzed before and after rectus femoris tenotomy. The ability to visualize and palpate seven clinically relevant anatomic structures (the labrum, femoral head, subcapital femoral neck, basicervical femoral neck, greater trochanter, lesser trochanter, and medial femoral neck) was also recorded. The quantified area of the exposed proximal femur was utilized to compare which approach afforded the largest field of view of the femoral neck and articular surface for assessment of femoral neck fracture and associated femoral head injury.
The ability to visualize and palpate surrounding structures was assessed so that we could better understand which approach afforded the ability to assess structures that are relevant to femoral neck fracture reduction and fixation. After controlling for age, body mass index, height, and sex, we found the modified Smith-Petersen approach provided a mean of 2.36 cm (95% confidence interval [CI], 0.45-4.28 cm; p = 0.015) additional exposure without rectus femoris tenotomy (p = 0.015) and 3.33 cm (95% CI, 1.42-5.24 cm; p = 0.001) additional exposure with a tenotomy compared with the Watson-Jones approach. The labrum, femoral head, subcapital femoral neck, basicervical femoral neck, and greater trochanter were reliably visible and palpable in both approaches. The lesser trochanter was palpable in all of the modified Smith-Petersen and none of the Watson-Jones approaches (p < 0.001). All modified Smith-Petersen approaches (10 of 10) provided visualization and palpation of the medial femoral neck, whereas visualization of the medial femoral neck was only possible in one of 10 Watson-Jones approaches (p < 0.001) and palpation was possible in eight of 10 Watson-Jones versus all 10 modified Smith-Petersen approaches (p = 0.470). In the hands of surgeons experienced with both surgical approaches to the femoral neck, the modified Smith-Petersen approach, with or without rectus femoris tenotomy, provides superior exposure of the femoral neck and articular surface as well as visualization and palpation of clinically relevant proximal femoral anatomic landmarks compared with the Watson-Jones approach. Open reduction and internal fixation of a femoral neck fracture is typically performed in a young patient (< 60 years old) with the objective of obtaining anatomic reduction that would not be possible by closed manipulation, thus enhancing healing potential. 
In the hands of surgeons experienced in both approaches, the modified Smith-Petersen approach offers improved direct access for reduction and fixation. Higher quality reductions and fixation are expected to translate to improved healing potential and outcomes. Although our experimental results are promising, further clinical studies are needed to verify if this larger exposure area imparts increased quality of reduction, healing, and improved outcomes compared with other approaches. The learning curve for the exposure is unclear, but the approach has broad applications and is frequently used in other subspecialties such as for direct anterior THA and pediatric septic hip drainage. Surgeons treating femoral neck fractures with open reduction and fixation should familiarize themselves with the modified Smith-Petersen approach.
Kurokawa, Satoshi; Umemoto, Yukihiro; Mizuno, Kentaro; Okada, Atsushi; Nakane, Akihiro; Nishio, Hidenori; Hamamoto, Shuzo; Ando, Ryosuke; Kawai, Noriyasu; Tozawa, Keiichi; Hayashi, Yutaro; Yasui, Takahiro
2017-11-21
Robot-assisted radical prostatectomy (RARP) is commonly performed using the transperitoneal (TP) approach with six trocars over an 8-cm distance in the steep Trendelenburg position. In this study, we investigated the feasibility and the benefit of using the extraperitoneal (EP) approach with six trocars over a 4-cm distance in a flat or 5° Trendelenburg position. We also introduced four new steps to the surgical procedure and compared the surgical results and complications between the EP and TP approach using propensity score matching. Between August 2012 and August 2016, 200 consecutive patients without any physical restrictions underwent RARP with the EP approach in a less than 5° Trendelenburg position, and 428 consecutive patients underwent RARP with the TP approach in a steep Trendelenburg position. Four new steps to RARP using the EP approach were developed: 1) arranging six trocars; 2) creating the EP space using laparoscopic forceps; 3) holding the separated prostate in the EP space outside the robotic view; and 4) preventing a postoperative inguinal hernia. Clinicopathological results and complications were compared between the EP and TP approaches using propensity score matching. Propensity scores were calculated for each patient using multivariate logistic regression based on the preoperative covariates. All 200 patients safely underwent RARP using the EP approach. The mean volume of estimated blood loss and duration of indwelling urethral catheter use were significantly lower with the EP approach than the TP approach (139.9 vs 184.9 mL, p = 0.03 and 5.6 vs 7.7 days, p < 0.01, respectively). No significant differences in the positive surgical margin were observed. None of the patients developed an inguinal hernia postoperatively after we introduced this technique. The EP approach to RARP was safely performed regardless of patient physique or contraindications to a steep Trendelenburg position. 
Our method, which involved using the EP approach to perform RARP, can decrease the amount of perioperative blood loss, the duration of indwelling urethral catheter use, and the incidence of postoperative inguinal hernia development.
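The EP-versus-TP comparison relies on propensity score matching: each EP patient is paired with the TP patient whose estimated probability of treatment (from a logistic regression on preoperative covariates) is closest, within a caliper. A minimal sketch of the greedy 1:1 matching step on precomputed scores (the scores, IDs, and caliper value are invented for illustration):

```python
# Greedy 1:1 nearest-neighbour propensity score matching with a caliper.
# Scores would normally come from a logistic regression on preoperative
# covariates; the scores and caliper below are invented for illustration.

def greedy_match(treated, control, caliper=0.05):
    # treated/control: dicts of subject id -> propensity score
    matches = {}
    available = dict(control)
    # process treated subjects in descending score order
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            matches[t_id] = c_id    # accept pair and retire the control
            del available[c_id]
    return matches

treated = {"t1": 0.81, "t2": 0.42, "t3": 0.10}
control = {"c1": 0.80, "c2": 0.45, "c3": 0.60}
print(greedy_match(treated, control))  # {'t1': 'c1', 't2': 'c2'}
```

Subjects with no control inside the caliper (here `t3`) are left unmatched, which is how matching trims non-overlapping patients before outcomes are compared.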
Machulska, Alla; Zlomuzica, Armin; Adolph, Dirk; Rinck, Mike; Margraf, Jürgen
2015-01-01
Smoking leads to the development of automatic tendencies that promote approach behavior toward smoking-related stimuli which in turn may maintain addictive behavior. The present study examined whether automatic approach tendencies toward smoking-related stimuli can be measured by using an adapted version of the Approach-Avoidance Task (AAT). Given that progression of addictive behavior has been associated with a decreased reactivity of the brain reward system for stimuli signaling natural rewards, we also used the AAT to measure approach behavior toward natural rewarding stimuli in smokers. During the AAT, 92 smokers and 51 non-smokers viewed smoking-related vs. non-smoking-related pictures and pictures of natural rewards (i.e. highly palatable food) vs. neutral pictures. They were instructed to ignore image content and to respond to picture orientation by either pulling or pushing a joystick. Within-group comparisons revealed that smokers showed an automatic approach bias exclusively for smoking-related pictures. Contrary to our expectations, there was no difference in smokers' and non-smokers' approach bias for nicotine-related stimuli, indicating that non-smokers also showed approach tendencies for this picture category. Yet, in contrast to non-smokers, smokers did not show an approach bias for food-related pictures. Moreover, self-reported smoking attitude could not predict approach-avoidance behavior toward nicotine-related pictures in smokers or non-smokers. Our findings indicate that the AAT is suited for measuring smoking-related approach tendencies in smokers. Furthermore, we provide evidence for a diminished approach tendency toward food-related stimuli in smokers, suggesting a decreased sensitivity to natural rewards in the course of nicotine addiction. 
Our results indicate that in contrast to similar studies conducted in alcohol, cannabis and heroin users, the AAT might only be partially suited for measuring smoking-related approach tendencies in smokers. Nevertheless, our findings are of special importance for current etiological models and smoking cessation programs aimed at modifying nicotine-related approach tendencies in the context of a nicotine addiction.
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * 
Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on 
Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic
Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. 
FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Implementing corporate wellness programs: a business approach to program planning.
Helmer, D C; Dunn, L M; Eaton, K; Macedonio, C; Lubritz, L
1995-11-01
1. Support of key decision makers is critical to the successful implementation of a corporate wellness program. Therefore, the program implementation plan must be communicated in a format and language readily understood by business people. 2. A business approach to corporate wellness program planning provides a standardized way to communicate the implementation plan. 3. A business approach incorporates the program planning components in a format that ranges from general to specific. This approach allows for flexibility and responsiveness to changes in program planning. 4. Components of the business approach are the executive summary, purpose, background, ground rules, approach, requirements, scope of work, schedule, and financials.
Arts-based and creative approaches to dementia care.
McGreevy, Jessica
2016-02-01
This article presents a review of arts-based and creative approaches to dementia care as an alternative to antipsychotic medications. While use of antipsychotics may be appropriate for some people, the literature highlights the success of creative approaches and the benefit that they lack the negative side effects associated with antipsychotics. The focus is the use of biographical approaches, music, dance and movement to improve wellbeing, enhance social networks, support inclusive practice and enable participation. Staff must be trained to use these approaches. A case study is presented to demonstrate how creative approaches can be implemented in practice and the outcomes that can be expected when used appropriately.
Russell, Jonathon; Anuwong, Angkoon; Dionigi, Gianlorenzo; Inabnet, William B; Kim, Hoon Yub; Randolph, Gregory W; Richmon, Jeremy D; Tufano, Ralph P
2018-05-23
Transoral endoscopic thyroidectomy vestibular approach (TOETVA) is a new approach to the central neck that avoids an anterior cervical incision. This approach can be performed with endoscopic or robotic assistance and offers access to the bilateral central neck. It has been completed safely in both North American and, even more extensively, international populations. With any new technology or approach, complications during the learning curve, expense, instrument limitations, and overall safety may affect its ultimate adoption and utility. To ensure patient safety, it is imperative to define steps that should be considered by any surgeon or group before adoption of this new approach.
Comparison of a rational vs. high throughput approach for rapid salt screening and selection.
Collman, Benjamin M; Miller, Jonathan M; Seadeek, Christopher; Stambek, Julie A; Blackburn, Anthony C
2013-01-01
In recent years, high throughput (HT) screening has become the most widely used approach for early phase salt screening and selection in a drug discovery/development setting. The purpose of this study was to compare a rational approach for salt screening and selection to those results previously generated using a HT approach. The rational approach involved a much smaller number of initial trials (one salt synthesis attempt per counterion) that were selected based on a few strategic solubility determinations of the free form combined with a theoretical analysis of the ideal solvent solubility conditions for salt formation. Salt screening results for sertraline, tamoxifen, and trazodone using the rational approach were compared to those previously generated by HT screening. The rational approach produced similar results to HT screening, including identification of the commercially chosen salt forms, but with a fraction of the crystallization attempts. Moreover, the rational approach provided enough solid from the very initial crystallization of a salt for more thorough and reliable solid-state characterization and thus rapid decision-making. The crystallization techniques used in the rational approach mimic larger-scale process crystallization, allowing smoother technical transfer of the selected salt to the process chemist.
Using chemical benchmarking to determine the persistence of chemicals in a Swedish lake.
Zou, Hongyan; Radke, Michael; Kierkegaard, Amelie; MacLeod, Matthew; McLachlan, Michael S
2015-02-03
It is challenging to measure the persistence of chemicals under field conditions. In this work, two approaches for measuring persistence in the field were compared: the chemical mass balance approach, and a novel chemical benchmarking approach. Ten pharmaceuticals, an X-ray contrast agent, and an artificial sweetener were studied in a Swedish lake. Acesulfame K was selected as a benchmark to quantify persistence using the chemical benchmarking approach. The 95% confidence intervals of the half-life for transformation in the lake system ranged from 780-5700 days for carbamazepine to <1-2 days for ketoprofen. The persistence estimates obtained using the benchmarking approach agreed well with those from the mass balance approach (1-21% difference), indicating that chemical benchmarking can be a valid and useful method to measure the persistence of chemicals under field conditions. Compared to the mass balance approach, the benchmarking approach partially or completely eliminates the need to quantify mass flow of chemicals, so it is particularly advantageous when the quantification of mass flow of chemicals is difficult. Furthermore, the benchmarking approach allows for ready comparison and ranking of the persistence of different chemicals.
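The benchmarking idea above can be sketched numerically: attenuation shared by the test chemical and a persistent benchmark (here, acesulfame) reflects dilution or transport, so only the test chemical's excess attenuation is attributed to transformation. The following is a simplified first-order illustration under stated assumptions, not the authors' exact formulation; the function name, concentration values, and residence time are hypothetical.

```python
import math

def benchmarked_half_life(c_in, c_out, b_in, b_out, residence_days):
    """Estimate a first-order transformation half-life for a test chemical,
    using a persistent benchmark to cancel dilution and transport losses.
    Concentration units cancel; only the in/out ratios matter."""
    # Attenuation of the test chemical beyond what the benchmark shows
    # is attributed to transformation (first-order kinetics assumed).
    k = (math.log(c_in / c_out) - math.log(b_in / b_out)) / residence_days
    if k <= 0:
        return float("inf")  # no detectable loss relative to the benchmark
    return math.log(2) / k

# Hypothetical numbers: test chemical attenuates 4x while the benchmark
# attenuates 2x (dilution only), over a 30-day residence time.
print(benchmarked_half_life(4.0, 1.0, 2.0, 1.0, 30))  # -> 30.0
```

Because the benchmark cancels the mass-flow terms, only concentration ratios are needed, which is the practical advantage the abstract highlights over a full mass balance.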
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, C.T.
1994-03-01
This paper presents a comparison of several qualitatively different approaches to Total Quality Management (TQM). The continuum ranges from management approaches that are primarily standards -- with specific guidelines, but few theoretical concepts -- to approaches that are primarily philosophical, with few specific guidelines. The approaches to TQM discussed in this paper include the International Organization for Standardization (ISO) 9000 Standard, the Malcolm Baldrige National Quality Award, Senge's Learning Organization, Watkins and Marsick's approach to organizational learning, Covey's Seven Habits of Highly Effective People, and Deming's Fourteen Points for Management. Some of these approaches (Deming and ISO 9000) are then compared to the DOE's official position on quality management and conduct of operations (DOE Orders 5700.6C and 5480.19). Using a tabular format, it is shown that while 5700.6C (Quality Assurance) maps well to many of the current approaches to TQM, DOE's principal guide to management, Order 5480.19 (Conduct of Operations), has many significant conflicts with some of the modern approaches to continuous quality improvement.
A counterfactual p-value approach for benefit-risk assessment in clinical trials.
Zeng, Donglin; Chen, Ming-Hui; Ibrahim, Joseph G; Wei, Rachel; Ding, Beiying; Ke, Chunlei; Jiang, Qi
2015-01-01
Clinical trials generally allow various efficacy and safety outcomes to be collected for health interventions. Benefit-risk assessment is an important issue when evaluating a new drug. Currently, there is a lack of standardized and validated benefit-risk assessment approaches in drug development due to various challenges. To quantify benefits and risks, we propose a counterfactual p-value (CP) approach. Our approach considers a spectrum of weights for weighting benefit-risk values and computes the extreme probabilities of observing the weighted benefit-risk value in one treatment group as if patients were treated in the other treatment group. The proposed approach is applicable to a single benefit and a single risk outcome, as well as to assessments with multiple benefit and risk outcomes. In addition, prior information on the relative importance of outcomes can be incorporated through the weighting schemes. The proposed CP plot is intuitive, with a visualized weight pattern. The average area under the CP curve and the preferred probability over time are used for overall treatment comparison, and a bootstrap approach is applied for statistical inference. We assess the proposed approach using simulated data with multiple efficacy and safety endpoints and compare its performance with a stochastic multi-criteria acceptability analysis approach.
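The weighting-plus-bootstrap machinery can be sketched at a single weight: score each patient as benefit minus a weighted risk, then bootstrap the between-group difference in mean score. This is a simplified illustration of the general weighted benefit-risk idea, not the authors' exact counterfactual p-value computation; all data and names below are hypothetical.

```python
import random

def weighted_br(benefit, risk, w):
    """Per-patient weighted benefit-risk score: benefit minus w * risk."""
    return [b - w * r for b, r in zip(benefit, risk)]

def bootstrap_diff_ci(trt_b, trt_r, ctl_b, ctl_r, w,
                      n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference in mean weighted
    benefit-risk score (treatment minus control) at one weight w."""
    rng = random.Random(seed)
    t_scores = weighted_br(trt_b, trt_r, w)
    c_scores = weighted_br(ctl_b, ctl_r, w)
    diffs = []
    for _ in range(n_boot):
        # Resample each arm with replacement and record the mean difference.
        t = [rng.choice(t_scores) for _ in t_scores]
        c = [rng.choice(c_scores) for _ in c_scores]
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical endpoint data for two small arms, at weight w = 1.
lo, hi = bootstrap_diff_ci([3, 4, 5, 4, 6], [1, 0, 1, 1, 0],
                           [2, 2, 3, 1, 2], [1, 2, 1, 1, 2], w=1.0)
print(lo, hi)
```

In the full approach this would be repeated over the whole spectrum of weights and summarized (e.g., by the average area under the CP curve); one fixed weight is shown here only to make the per-patient scoring and the bootstrap step concrete.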
Impulsivity moderates the effect of approach bias modification on healthy food consumption.
Kakoschke, Naomi; Kemps, Eva; Tiggemann, Marika
2017-10-01
The study aimed to modify approach bias for healthy and unhealthy food and to determine its effect on subsequent food consumption. In addition, we investigated the potential moderating role of impulsivity in the effect of approach bias re-training on food consumption. Participants were 200 undergraduate women (17-26 years) who were randomly allocated to one of five conditions of an approach-avoidance task varying in the training of an approach bias for healthy food, unhealthy food, and non-food cues in a single session of 10 min. Outcome variables were approach bias for healthy and unhealthy food and the proportion of healthy relative to unhealthy snack food consumed. As predicted, approach bias for healthy food significantly increased in the 'avoid unhealthy food/approach healthy food' condition. Importantly, the effect of training on snack consumption was moderated by trait impulsivity. Participants high in impulsivity consumed a greater proportion of healthy snack food following the 'avoid unhealthy food/approach healthy food' training. This finding supports the suggestion that automatic processing of appetitive cues has a greater influence on consumption behaviour in individuals with poor self-regulatory control.
Kao, Fu-Cheng; Tsai, Tsung-Ting; Niu, Chi-Chien; Lai, Po-Liang; Chen, Lih-Huei; Chen, Wen-Jer
2017-10-01
Treating thoracic infective spondylodiscitis with anterior surgical approaches carries a relatively high risk of perioperative and postoperative complications. Posterior approaches have been reported to result in lower complication rates than anterior procedures, but more evidence is needed to demonstrate the safety and efficacy of 1-stage posterior approaches for treating infectious thoracic spondylodiscitis. Preoperative and postoperative clinical data of 18 patients who underwent 2 types of 1-stage posterior procedures (costotransversectomy, and transforaminal thoracic interbody debridement and fusion) and 7 patients who underwent anterior debridement and reconstruction with posterior instrumentation were retrospectively assessed. The clinical outcomes of patients treated with 1-stage posterior approaches were generally good, with good infection control, back pain relief, kyphotic angle correction, and either partial or solid union for fusion status. Furthermore, these patients achieved shorter surgical times, fewer postoperative complications, and shorter hospital stays than the patients who underwent anterior debridement with posterior instrumentation. The results suggest that treating thoracic spondylodiscitis with a single-stage posterior approach might prevent postoperative complications and avoid the respiratory problems associated with anterior approaches. Single-stage posterior approaches are therefore recommended for thoracic spine infection, especially in patients with medical comorbidities.
Business process architectures: overview, comparison and framework
NASA Astrophysics Data System (ADS)
Dijkman, Remco; Vanderfeesten, Irene; Reijers, Hajo A.
2016-02-01
With the uptake of business process modelling in practice, the demand grows for guidelines that lead to consistent and integrated collections of process models. The notion of a business process architecture has been explicitly proposed to address this. This paper provides an overview of the prevailing approaches to design a business process architecture. Furthermore, it includes evaluations of the usability and use of the identified approaches. Finally, it presents a framework for business process architecture design that can be used to develop a concrete architecture. The use and usability were evaluated in two ways. First, a survey was conducted among 39 practitioners, in which the opinion of the practitioners on the use and usefulness of the approaches was evaluated. Second, four case studies were conducted, in which process architectures from practice were analysed to determine the approaches or elements of approaches that were used in their design. Both evaluations showed that practitioners have a preference for using approaches that are based on reference models and approaches that are based on the identification of business functions or business objects. At the same time, the evaluations showed that practitioners use these approaches in combination, rather than selecting a single approach.
14 CFR 97.3 - Symbols and terms used in procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...
14 CFR 97.3 - Symbols and terms used in procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... established on the intermediate course or final approach course. (2) Initial approach altitude is the altitude (or altitudes, in high altitude procedure) prescribed for the initial approach segment of an...: Speed 166 knots or more. Approach procedure segments for which altitudes (minimum altitudes, unless...
Deterrence and Engagement: A Blended Strategic Approach to a Resurgent Russia
2016-04-15
increasing the alliances' hard power projection to contain and deter further aggression. This strategic approach represents an extreme pendulum swing that is a polar opposite of the U.S. administration's 2009 approach to 'Reset' relations
Proof Construction: Adolescent Development from Inductive to Deductive Problem-Solving Strategies.
ERIC Educational Resources Information Center
Foltz, Carol; And Others
1995-01-01
Studied 100 adolescents' approaches to problem-solving proofs and reasoning competence tasks. Found that a formal level of reasoning competence is associated with a deductive approach. Results support the notion of a cognitive development progression from an inductive approach to a deductive approach. (ETB)
A Theoretical and Empirical Comparison of Three Approaches to Achievement Testing.
ERIC Educational Resources Information Center
Haladyna, Tom; Roid, Gale
Three approaches to the construction of achievement tests are compared: construct, operational, and empirical. The construct approach is based upon classical test theory and measures an abstract representation of the instructional objectives. The operational approach specifies instructional intent through instructional objectives, facet design,…
Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies
ERIC Educational Resources Information Center
Chen, Jianshen; Kaplan, David
2015-01-01
Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…
Student Approaches to Achieving Understanding--Approaches to Learning Revisited
ERIC Educational Resources Information Center
Fyrenius, Anna; Wirell, Staffan; Silen, Charlotte
2007-01-01
This article presents a phenomenographic study that investigates students' approaches to achieving understanding. The results are based on interviews, addressing physiological phenomena, with 16 medical students in a problem-based curriculum. Four approaches--sifting, building, holding and moving--are outlined. The holding and moving approaches…
Another Look: The Process Approach to Composition Instruction.
ERIC Educational Resources Information Center
Pollard, Rita H.
1991-01-01
Responds to Thomas Devine's indictment of the process approach to writing instruction, arguing that teaching practices reflecting misapplication of research are often wrongly labeled the process approach and a more precise definition of the process approach should inform debates over its value. Questions Devine's conclusions. (DMM)
Use of Traffic Displays for General Aviation Approach Spacing: A Human Factors Study
2007-12-01
engine rated pilots participated. Eight flew approaches in a twin-engine Piper Aztec originating in Sanford, ME, and eight flew approaches in the same aircraft originating in Atlantic City... The plane was equipped with a Horizontal Situation Indicator (HSI). The Garmin International MX-20™ multifunction traffic display or “Basic