Cross-Paradigm Simulation Modeling: Challenges and Successes
2011-12-01
… is also highlighted. 2.1 Discrete-Event Simulation. Discrete-event simulation (DES) is a modeling method for stochastic, dynamic models where … which almost anything can be coded; models can be incredibly detailed. Most commercial DES software has a graphical interface which allows the user to … results. Although the above definition is the commonly accepted definition of DES, there are two different worldviews that dominate DES modeling today: a …
Detached-Eddy Simulation Based on the V2-F Model
NASA Technical Reports Server (NTRS)
Jee, Sol Keun; Shariff, Karim R.
2012-01-01
Detached-eddy simulation (DES) based on the v2-f Reynolds-averaged Navier-Stokes (RANS) model is developed and tested. The v2-f model incorporates the anisotropy of near-wall turbulence, which is absent in other RANS models commonly used in the DES community. The v2-f RANS model is modified so that the proposed v2-f-based DES formulation reduces to a transport equation for the subgrid-scale kinetic energy in isotropic turbulence. First, three coefficients in the elliptic relaxation equation are modified and tested in channel flows with friction Reynolds numbers up to 2000. Then, the proposed v2-f DES formulation is derived. The constant C_DES required in the DES formulation was calibrated by simulating both decaying and statistically steady isotropic turbulence. After C_DES was calibrated, the v2-f DES formulation was tested for flow around a circular cylinder at a Reynolds number of 3900, a case in which turbulence develops after separation. Simulations indicate that this model represents the turbulent wake nearly as accurately as the dynamic Smagorinsky model. Spalart-Allmaras-based DES is also included in the cylinder flow simulation for comparison.
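For readers new to DES in the CFD sense, the generic idea behind this class of formulations can be written compactly; the expression below is an illustrative sketch of the standard length-scale limiter, not the authors' exact v2-f-based modification.

\[ \ell_{\mathrm{DES}} = \min\!\left(\ell_{\mathrm{RANS}},\, C_{\mathrm{DES}}\,\Delta\right), \qquad \varepsilon \sim \frac{k^{3/2}}{\ell_{\mathrm{DES}}} \]

On grids fine enough that C_DES Δ is the smaller scale, the dissipation is set by the grid and the k-equation behaves as a transport equation for subgrid-scale kinetic energy, which is the limiting behavior the abstract refers to; C_DES is the constant calibrated against isotropic turbulence.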
Detached-Eddy Simulation Based on the v2-f Model
NASA Technical Reports Server (NTRS)
Jee, Sol Keun; Shariff, Karim
2012-01-01
Detached eddy simulation (DES) based on the v2-f RANS model is proposed. This RANS model incorporates the anisotropy of near-wall turbulence which is absent in other RANS models commonly used in the DES community. In LES mode, the proposed DES formulation reduces to a transport equation for the subgrid-scale kinetic energy. The constant, CDES, required by this model was calibrated by simulating isotropic turbulence. In the final paper, DES simulations of canonical separated flows will be presented.
The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations
2011-12-01
… modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat … (DES) models, often referred to as “next-event” (Law and Kelton 2000), or discrete-time simulation (DTS), commonly referred to as “time-step.” DTS … Many combat models use DTS as their simulation time advance mechanism …
2013-09-01
… which utilizes FTA and then loads it into a DES engine to generate simulation results (Figure 21: this simulation architecture is …). While Discrete Event Simulation (DES) can provide accurate time estimation and fast simulation speed, models utilizing it often suffer … C4ISR progress in MDW is developed in this research to demonstrate the feasibility of AEMF-DES and explore its potential. The simulation (MDSIM …
Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.
Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A
2016-04-01
The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remain underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices.
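To make the idea of a "basic radiology model" concrete, here is a minimal sketch in Python using the open-source SimPy library; the single-scanner layout, arrival rate, and scan time are assumptions for illustration and are not taken from the article.

```python
# Minimal DES sketch of a radiology workflow (illustrative parameters, not from the article).
# Requires: pip install simpy
import random
import simpy

RANDOM_SEED = 42
ARRIVAL_MEAN = 12.0   # minutes between patient arrivals (assumed)
SCAN_MEAN = 10.0      # minutes per CT scan (assumed)
N_SCANNERS = 1        # scanners available (assumed)
SIM_TIME = 8 * 60     # one 8-hour shift, in minutes

waits = []

def patient(env, scanner):
    arrive = env.now
    with scanner.request() as req:            # queue for the scanner
        yield req
        waits.append(env.now - arrive)        # record waiting time
        yield env.timeout(random.expovariate(1.0 / SCAN_MEAN))  # scan duration

def arrivals(env, scanner):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(patient(env, scanner))

random.seed(RANDOM_SEED)
env = simpy.Environment()
scanner = simpy.Resource(env, capacity=N_SCANNERS)
env.process(arrivals(env, scanner))
env.run(until=SIM_TIME)

print(f"patients scanned: {len(waits)}")
print(f"mean wait (min): {sum(waits) / len(waits):.1f}")
```

Swapping the resource capacity or the distributions is how such a model is used to compare workflow scenarios before changing the real department.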
NASA Astrophysics Data System (ADS)
Goyette, Stephane
1995-11-01
The subject of this thesis is regional climate modeling. The main objective is to develop a regional climate model capable of simulating phenomena at the spatial mesoscale. The study area is the North American West Coast, chosen because of the complexity of its relief and the control the relief exerts on the climate. The motivations for this study are twofold: on the one hand, the coarse spatial resolution of atmospheric general circulation models (GCMs) cannot, in practice, be increased without excessive computational cost; on the other hand, environmental management increasingly requires regional climate data at finer spatial resolution. Until now, GCMs have been the most highly regarded models for simulating the climate and global climate change. However, fine-scale climate phenomena still escape GCMs because of their coarse resolution, and the socio-economic repercussions of possible climate change are closely tied to phenomena that current GCMs cannot perceive. To circumvent some of these resolution problems, a practical approach is to take a limited spatial domain of a GCM and nest within it another numerical model with a high-resolution grid. This nesting process implies a new numerical simulation. This "retro-simulation" is guided within the limited domain by pieces of information supplied by the GCM and forced by mechanisms handled only by the nested model. Thus, to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR, which provides regional climate information valid at fine spatial scale. This new class of nested "intelligent" interpolator models belongs to the family of so-called "driven" models. The guiding hypothesis of our study is that fine-scale climate is often governed by forcings originating at the surface rather than by large-scale atmospheric transport. The proposed technique therefore guides FIZR with the sampled dynamics of a GCM and forces it with the physics of the GCM as well as with a mesoscale orographic forcing at each node of the fine computational grid. To validate the robustness and accuracy of our regional climate model, we chose the West Coast region of the North American continent, which is notably characterized by geographic distributions of precipitation and temperature strongly influenced by the underlying relief. The results of a January simulation with FIZR show that we can simulate precipitation and screen-level temperature fields much closer to climate observations than those simulated by a GCM. This performance is clearly attributable to the mesoscale orographic forcing as well as to the surface characteristics determined at fine scale.
A model similar to FIZR can, in principle, be implemented on any GCM, so any research organization involved in global large-scale numerical modeling could equip itself with such a regionalization tool.
Estimating ICU bed capacity using discrete event simulation.
Zhu, Zhecheng; Hen, Bee Hoon; Teow, Kiok Liang
2012-01-01
The intensive care unit (ICU) in a hospital caters for critically ill patients. The number of ICU beds has a direct impact on many aspects of hospital performance. A lack of ICU beds may cause ambulance diversion and surgery cancellation, while an excess of ICU beds may cause a waste of resources. This paper aims to develop a discrete event simulation (DES) model to help healthcare service providers determine the proper ICU bed capacity that strikes a balance between service level and cost effectiveness. The DES model is developed to reflect the complex patient flow of the ICU system. Actual operational data, including emergency arrivals, elective arrivals and length of stay, are fed directly into the DES model to capture the variations in the system. The DES model is validated by open box and black box tests. The validated model is used to test two what-if scenarios of interest to the healthcare service providers: the number of ICU beds in service needed to meet a target rejection rate, and the extra ICU beds in service needed to meet demand growth. A 12-month period of actual operational data was collected from an ICU department with 13 ICU beds in service. Comparison between the simulation results and the actual situation shows that the DES model accurately captures the variations in the system and is flexible enough to simulate various what-if scenarios. DES helps healthcare service providers describe the current situation and simulate what-if scenarios for future planning.
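A stripped-down sketch of the bed-capacity question described above: treating the ICU as a loss system in which a patient arriving to a full unit is rejected (diverted), it estimates the rejection rate for a range of bed counts. The arrival rate and length-of-stay distribution are placeholder assumptions, not the hospital's data.

```python
# Toy ICU bed-capacity DES: estimate rejection rate for a given bed count.
# Arrival rate and length-of-stay parameters are assumptions for illustration only.
import heapq
import random

def rejection_rate(n_beds, arrivals_per_day=3.2, mean_los_days=3.5,
                   horizon_days=365.0, seed=1):
    random.seed(seed)
    discharges = []          # min-heap of scheduled discharge times (occupied beds)
    t = 0.0
    arrived = rejected = 0
    while True:
        t += random.expovariate(arrivals_per_day)          # next arrival
        if t > horizon_days:
            break
        while discharges and discharges[0] <= t:           # free beds whose stay ended
            heapq.heappop(discharges)
        arrived += 1
        if len(discharges) >= n_beds:                      # all beds occupied -> divert
            rejected += 1
        else:
            los = random.expovariate(1.0 / mean_los_days)  # sample length of stay
            heapq.heappush(discharges, t + los)
    return rejected / arrived if arrived else 0.0

for beds in range(10, 17):
    print(beds, "beds ->", f"{rejection_rate(beds):.1%}", "rejected")
```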
Markov modeling and discrete event simulation in health care: a systematic comparison.
Standfield, Lachlan; Comans, Tracy; Scuffham, Paul
2014-04-01
The aim of this study was to assess whether the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES relative to MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events, an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
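For contrast with DES, the cohort Markov approach being compared can be written in a few lines; the three-state structure, transition probabilities, costs, and utilities below are invented purely for illustration.

```python
# Minimal three-state Markov cohort model (well -> sick -> dead); all numbers illustrative.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # yearly transition probabilities from "well"
              [0.00, 0.85, 0.15],    # from "sick"
              [0.00, 0.00, 1.00]])   # "dead" is absorbing
utility = np.array([0.95, 0.70, 0.0])     # QALY weight per state per cycle
cost    = np.array([500.0, 4000.0, 0.0])  # cost per state per cycle

state = np.array([1.0, 0.0, 0.0])    # whole cohort starts in "well"
total_qalys = total_cost = 0.0
for _ in range(40):                  # 40 one-year cycles
    state = state @ P                # cohort distribution after this cycle
    total_qalys += float(state @ utility)
    total_cost  += float(state @ cost)

print(f"expected QALYs per patient: {total_qalys:.2f}")
print(f"expected cost per patient:  {total_cost:,.0f}")
```

In a DES of the same problem, individual patients would instead be sampled through event sequences, which is what makes queuing and history-dependent risks representable, at the cost of longer development and run times.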
A conceptual modeling framework for discrete event simulation using hierarchical control structures.
Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D
2015-08-01
Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a models' system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.
Mean-Value and Working-Cycle-Synchronous Simulation of Diesel Engines (Mittelwert- und arbeitstaktsynchrone Simulation von Dieselmotoren)
NASA Astrophysics Data System (ADS)
Zahn, Sebastian
Driven by ever more restrictive requirements on the emissions and fuel-consumption behavior of modern combustion engines, the complexity of engine management systems increases with every model generation. This brings not only growth in the software content of electronic control units but also a marked increase in calibration, measurement, and testing effort. To improve the efficiency of the software and function development process, several model- and simulation-based methods have therefore become established in the automotive industry and in research institutes, such as model-in-the-loop (MiL) simulation, software-in-the-loop (SiL) simulation, rapid control prototyping (RCP), and hardware-in-the-loop (HiL) simulation.
Modeling and Simulation at NASA
NASA Technical Reports Server (NTRS)
Steele, Martin J.
2009-01-01
This slide presentation covers two topics. The first reviews the use of modeling and simulation (M&S), particularly as it relates to the Constellation program and discrete event simulation (DES). DES is defined as process and system analysis, through time-based and resource-constrained probabilistic simulation models, that provides insight into operational system performance. The DES shows that the cycle for a launch, from manufacturing and assembly through launch and recovery, is about 45 days and that approximately 4 launches per year are practicable. The second topic reviews a NASA Standard for Modeling and Simulation. The Columbia Accident Investigation Board made some recommendations related to models and simulations. Some of the ideas inherent in the new standard are the documentation of M&S activities, an assessment of credibility, and reporting to decision makers, which should include the analysis of the results, a statement as to the uncertainty in the results, and the credibility of the results. The presentation also discusses verification and validation (V&V) of models and the different types of models and simulations.
NASA Astrophysics Data System (ADS)
Minakov, A.; Platonov, D.; Sentyabov, A.; Gavrilov, A.
2017-01-01
We performed numerical simulation of the flow in a laboratory model of a Francis hydroturbine at three regimes, using two eddy-viscosity (EVM) and one Reynolds-stress (RSM) RANS models (realizable k-ɛ, k-ω SST, LRR), detached-eddy simulation (DES), and large-eddy simulation (LES). Calculation results were compared with the experimental data. Unlike the linear EVMs, the RSM, DES, and LES reproduced well the mean velocity components and the pressure pulsations in the draft tube diffuser. Despite relatively coarse meshes and insufficient resolution of the near-wall region, LES and DES also reproduced well the intrinsic flow unsteadiness, the dominant flow structures, and the associated pressure pulsations in the draft tube.
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results of a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better establish the benefits of using DES to implement more complex models, and the circumstances in which such benefits are most likely.
2012-07-01
… of the modeling and simulation community and provide it with implementation guidance; and provide … definition; relationship to standards; specification of a conceptual model (CM) management procedure; specification of CM artifacts. Important considerations … using the present guideline as a reference. • VV&A (verification, validation and acceptance) of CMs must form an integral part of the …
Standfield, L B; Comans, T A; Scuffham, P A
2017-01-01
To empirically compare Markov cohort modeling (MM) and discrete event simulation (DES) with and without dynamic queuing (DQ) for cost-effectiveness (CE) analysis of a novel method of health services delivery where capacity constraints predominate. A common data-set comparing usual orthopedic care (UC) to an orthopedic physiotherapy screening clinic and multidisciplinary treatment service (OPSC) was used to develop a MM and a DES without (DES-no-DQ) and with DQ (DES-DQ). Model results were then compared in detail. The MM predicted an incremental CE ratio (ICER) of $495 per additional quality-adjusted life-year (QALY) for OPSC over UC. The DES-no-DQ showed OPSC dominating UC; the DES-DQ generated an ICER of $2342 per QALY. The MM and DES-no-DQ ICER estimates differed due to the MM having implicit delays built into its structure as a result of having fixed cycle lengths, which are not a feature of DES. The non-DQ models assume that queues are at a steady state. Conversely, queues in the DES-DQ develop flexibly with supply and demand for resources, in this case, leading to different estimates of resource use and CE. The choice of MM or DES (with or without DQ) would not alter the reimbursement of OPSC as it was highly cost-effective compared to UC in all analyses. However, the modeling method may influence decisions where ICERs are closer to the CE acceptability threshold, or where capacity constraints and DQ are important features of the system. In these cases, DES-DQ would be the preferred modeling technique to avoid incorrect resource allocation decisions.
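All of the figures quoted above are instances of the standard incremental cost-effectiveness ratio; what differs between the MM, DES-no-DQ, and DES-DQ structures is the expected costs and QALYs that feed into it.

\[ \mathrm{ICER} = \frac{C_{\mathrm{OPSC}} - C_{\mathrm{UC}}}{\mathrm{QALY}_{\mathrm{OPSC}} - \mathrm{QALY}_{\mathrm{UC}}} \]

The "dominating" result from the DES-no-DQ model corresponds to a negative numerator with a positive denominator, i.e. OPSC being both cheaper and more effective, so no ratio needs to be reported for that comparison.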
Hybrid LES/RANS simulation of a turbulent boundary layer over a rectangular cavity
NASA Astrophysics Data System (ADS)
Zhang, Qi; Haering, Sigfried; Oliver, Todd; Moser, Robert
2016-11-01
We report numerical investigations of a turbulent boundary layer over a rectangular cavity using a new hybrid RANS/LES model and traditional Detached Eddy Simulation (DES). Our new hybrid method aims to address many of the shortcomings of traditional DES. In the new method, RANS/LES blending is controlled by a parameter that measures the ratio of the modeled subgrid kinetic energy to an estimate of the subgrid energy based on the resolved scales. The result is a hybrid method that automatically resolves as much turbulence as the grid can support and transitions appropriately from RANS to LES without the ad hoc delaying functions that are often required for DES. Further, the new model is designed to improve upon DES by accounting for the effects of grid anisotropy and inhomogeneity in the LES region. We present comparisons of the flow features inside the cavity and of the pressure time history and spectra as computed using the new hybrid model and DES.
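The abstract describes the blending parameter only in words; one way to write the ratio it refers to is sketched below, as an illustration of the idea rather than the authors' exact definition.

\[ r = \frac{k^{\mathrm{model}}_{\mathrm{sgs}}}{k^{\mathrm{est}}_{\mathrm{sgs}}(\text{resolved field},\ \Delta)} \]

Evaluated locally, such a ratio indicates how much of the turbulent energy must remain modeled versus resolved on the given grid, which is what removes the need for the ad hoc delaying functions mentioned above.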
Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.
2010-01-01
NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed to reuse models and model elements in other, less detailed models. The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.
Modeling the Transfer Function for the Dark Energy Survey
Chang, C.
2015-03-04
We present a forward-modeling simulation framework designed to model the data products from the Dark Energy Survey (DES). This forward-model process can be thought of as a transfer function—a mapping from cosmological/astronomical signals to the final data products used by the scientists. Using output from the cosmological simulations (the Blind Cosmology Challenge), we generate simulated images (the Ultra Fast Image Simulator) and catalogs representative of the DES data. In this work we demonstrate the framework by simulating the 244 deg^2 coadd images and catalogs in five bands for the DES Science Verification data. The simulation output is compared with the corresponding data to show that major characteristics of the images and catalogs can be captured. We also point out several directions of future improvements. Two practical examples—star-galaxy classification and proximity effects on object detection—are then used to illustrate how one can use the simulations to address systematics issues in data analysis. With clear understanding of the simplifications in our model, we show that one can use the simulations side-by-side with data products to interpret the measurements. This forward modeling approach is generally applicable for other upcoming and future surveys. It provides a powerful tool for systematics studies that is sufficiently realistic and highly controllable.
Validating Human Performance Models of the Future Orion Crew Exploration Vehicle
NASA Technical Reports Server (NTRS)
Wong, Douglas T.; Walters, Brett; Fairey, Lisa
2010-01-01
NASA's Orion Crew Exploration Vehicle (CEV) will provide transportation for crew and cargo to and from destinations in support of the Constellation Architecture Design Reference Missions. Discrete Event Simulation (DES) is one of the methods NASA employs to model crew performance for the CEV. During the early development of the CEV, NASA and its prime Orion contractor, Lockheed Martin (LM), strove to find an effective, low-cost method for developing and validating human performance DES models. This paper focuses on the method developed while creating a DES model for the CEV Rendezvous, Proximity Operations, and Docking (RPOD) task to the International Space Station. Our approach to validation was to attack the problem from several fronts. First, we began the development of the model early in the CEV design stage. Second, we adhered strictly to M&S development standards. Third, we involved the stakeholders, NASA astronauts, subject matter experts, and NASA's modeling and simulation development community throughout. Fourth, we applied standard and easy-to-conduct methods to ensure the model's accuracy. Lastly, we reviewed the data from an earlier human-in-the-loop RPOD simulation that had different objectives, which provided us an additional means to estimate the model's confidence level. The results revealed that a majority of the DES model was a reasonable representation of the current CEV design.
van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G
2010-01-01
Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of the design and the plausibility of the input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity), and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. There is added value in DES models for complex treatment strategies such as those in glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
Karnon, Jonathan; Stahl, James; Brennan, Alan; Caro, J Jaime; Mar, Javier; Möller, Jörgen
2012-01-01
Discrete event simulation (DES) is a form of computer-based modeling that provides an intuitive and flexible approach to representing complex systems. It has been used in a wide range of health care applications. Most early applications involved analyses of systems with constrained resources, where the general aim was to improve the organization of delivered services. More recently, DES has increasingly been applied to evaluate specific technologies in the context of health technology assessment. The aim of this article was to provide consensus-based guidelines on the application of DES in a health care setting, covering the range of issues to which DES can be applied. The article works through the different stages of the modeling process: structural development, parameter estimation, model implementation, model analysis, and representation and reporting. For each stage, a brief description is provided, followed by consideration of issues that are of particular relevance to the application of DES in a health care setting. Each section contains a number of best practice recommendations that were iterated among the authors, as well as among the wider modeling task force.
NASA Astrophysics Data System (ADS)
Bel Hadj Kacem, Mohamed Salah
All hydrological processes are affected by the spatial variability of the physical parameters of the watershed, and also by human intervention on the landscape. The water outflow from a watershed strictly depends on the spatial and temporal variability of its physical parameters. It is now apparent that the integration of mathematical models into GISs can benefit both GIS and three-dimensional environmental models: a true modeling capability can help the modeling community bridge the gap between planners, scientists, decision-makers and end-users. The main goal of this research is to design a practical tool to simulate surface runoff using Geographic Information Systems and to simulate the hydrological behavior by the Finite Element Method.
Mission Assignment Model and Simulation Tool for Different Types of Unmanned Aerial Vehicles
2008-09-01
Table of abbreviations and acronyms: AAA (Anti-Aircraft Artillery), ATO (Air Tasking Order), BDA (Battle Damage Assessment), DES (Discrete Event Simulation) … the clock is advanced in small, fixed time steps. Since the value of simulated time is important in DES, an internal variable, called the simulation clock, … VEHICLES. Yücel Alver, Captain, Turkish Air Force, B.S., Turkish Air Force Academy, 2000; Murat Özdoğan, 1st Lieutenant, Turkish Air Force, B.S., Turkish Air Force …
Evaluation of a Sahelian pastoral ecosystem: the contribution of geomatics (Oursi, Burkina Faso)
NASA Astrophysics Data System (ADS)
Kabore, Seraphine Sawadogo
The main objective of this research is the development of an architecture for integrating socio-bio-geographic data and satellite data within a Geographic Information System (GIS) to support decision making in a semi-arid environment in northern Burkina Faso. It addresses the fundamental question of interpreting the effects of climatic and socio-economic factors on the pastoral environment. The research rests on several working hypotheses: the possibility of using a simulation model, a multi-criteria approach, and remote-sensing data within a GIS framework. The spatio-temporal evolution of the productivity parameters of the environment was evaluated with a dynamic approach following the model of Wu et al. (1996), which models the interactions between climate, the physical environment, plants, and animals in order to better quantify primary biomass. Four parameters were integrated into this model through a fuzzy, multi-criteria approach to take into account the socio-economic dimension of pastoral productivity (the major contribution of this research): health, education, agriculture, and water. Remote sensing (SPOT imagery) was used to define the primary production from which simulations were run over 10 years. The results show a good correlation between in situ primary biomass and that calculated by the two models, with the modified model being markedly more effective (4 times more) in highly productive zones where agricultural overexploitation is high. Because of the spatial variability of in situ primary production, the errors in the simulation results (8 to 11%) are acceptable and demonstrate the relevance of the approach, thanks to the use of GIS for the spatialization and integration of the various model parameters. The recommended types of secondary production (milk production for 7 months or meat production for 6 months) are based on the needs of the tropical livestock unit (UBT) and the available forage, which is of poor quality in the dry season. In both cases, a forage deficit is observed. Two types of transhumance are proposed to ensure sustainable production under two scenarios: rational exploitation of pastoral units following an annual rotation plan, and medium-term protection of degraded zones to allow regeneration. Potential transhumance zones were determined according to the acceptable limits of the sustainable-exploitation criteria for Sahelian environments defined by Kessler (1994), i.e. 0.2 UBT per hectare.
Static and Dynamic Disorder in Bacterial Light-Harvesting Complex LH2: A 2DES Simulation Study.
Rancova, Olga; Abramavicius, Darius
2014-07-10
Two-dimensional coherent electronic spectroscopy (2DES) is a powerful technique in distinguishing homogeneous and inhomogeneous broadening contributions to the spectral line shapes of molecular transitions induced by environment fluctuations. Using an excitonic model of a double-ring LH2 aggregate, we perform simulations of its 2DES spectra and find that the model of a harmonic environment cannot provide a consistent set of parameters for two temperatures: 77 K and room temperature. This indicates the highly anharmonic nature of protein fluctuations for the pigments of the B850 ring. However, the fluctuations of B800 ring pigments can be assumed as harmonic in this temperature range.
2003-03-01
… nations, a very thorough examination of current practices. Introduction: The Applied Vehicle Technology Panel (AVT) of the Research and Technology … the introduction of new information generated by computer codes required it to be timely and presented in an appropriate fashion so that it could … military competition between the NATO allies and the Soviet Union. The second was the introduction of commercial, high-capacity transonic aircraft and …
NASA Astrophysics Data System (ADS)
Rebaine, Ali
1997-08-01
This work concerns the numerical simulation of two-dimensional laminar and turbulent compressible internal flows, with particular interest in flows in supersonic ejectors. The Navier-Stokes equations are written in conservative form and use as independent variables the so-called enthalpic variables: static pressure, momentum, and specific total enthalpy. A stable variational formulation of the Navier-Stokes equations is used, based on the SUPG (Streamline Upwinding Petrov-Galerkin) method together with a capture operator for strong gradients. A turbulence model for the simulation of ejector flows is developed. It separates two distinct regions: a region close to the solid wall, where the Baldwin-Lomax model is used, and a region far from the wall, where a new formulation based on Schlichting's model for jets is proposed. A technique for computing the turbulent viscosity on an unstructured mesh is implemented. The spatial discretization of the variational form is carried out with the finite element method using a mixed approximation: quadratic for the momentum and velocity components and linear for the remaining variables. The temporal discretization uses a finite-difference method with the implicit Euler scheme. The matrix system resulting from the space-time discretization is solved with the GMRES algorithm using a diagonal preconditioner. Numerical validations were carried out on several types of nozzles and ejectors; the main validation is the simulation of the flow in the ejector tested at the NASA Lewis research center. The results obtained compare very well with those of previous work and are clearly superior for turbulent flows in ejectors.
Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.
Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco
2015-01-01
Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPIs), such as average waiting times by acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average patient cycle time compared to the original model, mainly by modifying the patient care model.
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-01-01
Background: Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. Objectives: In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods: We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results: The major drawback to Markov models is that they may not be suitable for tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states, which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempt, etc.). Conclusion: DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes. PMID:17147790
Can discrete event simulation be of use in modelling major depression?
Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard
2006-12-05
Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. The major drawback to Markov models is that they may not be suitable for tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states, which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempt, etc.). DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes.
2001-07-01
… hardware-in-the-loop (HWL) simulation is also developed … [diagram labels: Firings/Engine Tests, Structure Test, Hardware-in-the-Loop Simulation, Subsystem Test, Lab Tests, Seeker, Actuators, Sensors, Electronics, Propulsion Model, Aero Model]
Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W
2015-03-01
Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from the literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters allows the heterogeneity in the data to be described and shows the capability of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. The heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control.
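The E-DES equations themselves are not given in the abstract; the sketch below is a generic minimal glucose-insulin ODE pair (with invented parameters) that illustrates the kind of nonlinear coupled system, and the kind of per-individual parameters such as insulin sensitivity and beta-cell responsiveness, on which such a simulator rests.

```python
# Generic minimal glucose-insulin ODE sketch (illustrative only; NOT the E-DES equations).
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; "insulin_sensitivity" and "beta_response" stand in for the kind
# of insulin-resistance / beta-cell parameters that would be adjusted per individual.
G_BASAL, I_BASAL = 5.0, 10.0      # basal glucose (mmol/L) and insulin (mU/L), assumed
insulin_sensitivity = 0.01        # assumed
beta_response = 0.05              # assumed
k_G, k_I = 0.02, 0.1              # insulin-independent uptake / insulin clearance, assumed

def meal_input(t):
    """Glucose appearance from a meal starting at t=0 (arbitrary decaying shape)."""
    return 0.5 * np.exp(-t / 40.0)

def rhs(t, y):
    G, I = y
    dG = meal_input(t) - k_G * (G - G_BASAL) - insulin_sensitivity * (I - I_BASAL) * G
    dI = beta_response * max(G - G_BASAL, 0.0) - k_I * (I - I_BASAL)
    return [dG, dI]

sol = solve_ivp(rhs, (0.0, 240.0), [G_BASAL, I_BASAL], max_step=1.0)
print("peak glucose (mmol/L):", round(float(sol.y[0].max()), 2))
```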
Thermal-hydraulic study of the moderator flow in the CANDU-6 reactor
NASA Astrophysics Data System (ADS)
Mehdi Zadeh, Foad
Given the size (6.0 m x 7.6 m) and the multiply connected domain that characterize the calandria vessel of CANDU-6 reactors (380 channels in the vessel), the physics governing the behavior of the moderator fluid is still poorly understood today. Sampling data in an operating reactor would require modifying the configuration of the reactor vessel to insert probes, and the presence of an intense radiation zone prevents the use of ordinary sensors. Consequently, the moderator flow must be studied with an experimental model or a numerical model. As for the experimental approach, building and operating such facilities is very expensive, and the scaling parameters required to build a reduced-scale experimental model are contradictory. Numerical modeling therefore remains an important alternative. Currently, the nuclear industry uses a numerical approach, known as the porous-medium approach, which approximates the domain by a continuous medium in which the tube bank is replaced by distributed hydraulic resistances. This model can describe the macroscopic phenomena of the flow, but does not account for local effects that have an impact on the global flow, such as the temperature and velocity distributions near the tubes and hydrodynamic instabilities. In the context of nuclear safety, the local effects around the calandria tubes are of interest. Indeed, simulations performed with this approach predict that the flow can take several hydrodynamic configurations, for some of which the flow exhibits asymmetric behavior within the vessel. This can cause boiling of the moderator on the channel walls. Under such conditions, the reactivity coefficient can vary significantly, translating into an increase in reactor power, which can have major consequences for nuclear safety. A detailed CFD (Computational Fluid Dynamics) model accounting for local effects is therefore necessary. The goal of this research is to model the complex behavior of the moderator flow within the vessel of a CANDU-6 nuclear reactor, in particular near the calandria tubes. These simulations serve to identify the possible flow configurations in the calandria. The study thus consists in formulating theoretical bases for the macroscopic instabilities of the moderator, i.e. the asymmetric motions that can cause moderator boiling. The challenge of the project is to determine the impact of these flow configurations on the reactivity of the CANDU-6 reactor.
Incorporating discrete event simulation into quality improvement efforts in health care systems.
Rutberg, Matthew Harris; Wenczel, Sharon; Devaney, John; Goldlust, Eric Jonathan; Day, Theodore Eugene
2015-01-01
Quality improvement (QI) efforts are an indispensable aspect of health care delivery, particularly in an environment of increasing financial and regulatory pressures. The ability to test predictions of proposed changes to flow, policy, staffing, and other process-level changes using discrete event simulation (DES) has shown significant promise and is well reported in the literature. This article describes how to incorporate DES into QI departments and programs in order to support QI efforts, develop high-fidelity simulation models, conduct experiments, make recommendations, and support adoption of results. The authors describe how DES-enabled QI teams can partner with clinical services and administration to plan, conduct, and sustain QI investigations.
Discrete Event Simulation of Optical Switch Matrix Performance in Computer Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Poole, Stephen W
2013-01-01
In this paper, we present the application of a Discrete Event Simulator (DES) for performance modeling of optical switching devices in computer networks. Network simulators are valuable tools in situations where one cannot investigate the system directly. This situation may arise if the system under study does not exist yet or the cost of studying the system directly is prohibitive. Most available network simulators are based on the paradigm of discrete-event-based simulation. As computer networks become increasingly larger and more complex, sophisticated DES tool chains have become available for both commercial and academic research. Some well-known simulators are NS2, NS3, OPNET, and OMNEST. For this research, we have applied OMNEST for the purpose of simulating multi-wavelength performance of optical switch matrices in computer interconnection networks. Our results suggest that the application of DES to computer interconnection networks provides valuable insight into device performance and aids in topology and system optimization.
Investigations of Flow Over a Hemisphere Using Numerical Simulations (Postprint)
2015-06-22
… ranging from missile defense, remote sensing, and imaging. An important aspect of these applications is determining the effective beam-on-target … Stokes (URANS), detached eddy simulation (DES), and hybrid RANS/LES. The numerical results were compared with the experiment conducted at Auburn … turret. Using the DES and hybrid RANS/LES turbulence models, Loci-Chem was able to capture the unsteady flow structures, such as the shear layer …
Integrating Occupational Characteristics into Human Performance Models: IPME Versus ISMAT Approach
2009-08-01
… generic human performance modeling environment called the Integrated Performance Modelling Environment (IPME). This project explored the use of the … occupational groups into human performance models: the IPME approach and the ISMAT approach. By Christy Lorenzen; RDDC RC 2009-059; R & D … a commercially available discrete-event simulation application used to develop models that simulate human performance and …
Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes
NASA Technical Reports Server (NTRS)
Huang, P. G.
2004-01-01
During the period from December 23, 1997 to August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes
NASA Technical Reports Server (NTRS)
Ashpis, David (Technical Monitor); Huang, P. G.
2004-01-01
During the period from December 23, 1997 to August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
1999-08-01
… immediately, reducing venous return artifacts during the first beat of the simulation. [Figure 4 residue: characteristics diagram with waves W+ on c+ and W- on c- between nodes x_{i-1} and x_{i+1}] … Figure 5: The effect of network complexity. The aortic pressure is shown in Figure 5 during the fifth beat for the networks with one and three … Mechanical Engineering Department, University of Victoria. [19] Huyghe J.M., 1986, "Nonlinear Finite Element Models of The Beating Left …
A generic discrete-event simulation model for outpatient clinics in a large public hospital.
Weerawat, Waressara; Pichitlamken, Juta; Subsombat, Peerapong
2013-01-01
The orthopedic outpatient department (OPD) ward in a large Thai public hospital is modeled using Discrete-Event Stochastic (DES) simulation. Key Performance Indicators (KPIs) are used to measure effects across various clinical operations during different shifts throughout the day. By considering various KPIs such as wait times to see doctors, percentage of patients who can see a doctor within a target time frame, and the time that the last patient completes their doctor consultation, bottlenecks are identified and resource-critical clinics can be prioritized. The simulation model quantifies the chronic, high patient congestion that is prevalent amongst Thai public hospitals with very high patient-to-doctor ratios. Our model can be applied across five different OPD wards by modifying the model parameters. Throughout this work, we show how DES models can be used as decision-support tools for hospital management.
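As an illustration of how KPIs like those above are extracted once a DES run has produced an event log, here is a small sketch; the log records and the 30-minute wait target are fabricated placeholders.

```python
# Summarizing typical outpatient-clinic KPIs from a DES event log (fabricated sample data).
from statistics import mean

# Each record: (arrival_minute, consult_start_minute, consult_end_minute)
event_log = [(0, 12, 25), (5, 30, 41), (9, 44, 60), (15, 62, 70), (22, 75, 95)]
TARGET_WAIT = 30  # minutes; target used for the "seen within target" KPI (assumed)

waits = [start - arrive for arrive, start, _ in event_log]
kpis = {
    "mean wait (min)": mean(waits),
    "% seen within target": 100.0 * sum(w <= TARGET_WAIT for w in waits) / len(waits),
    "last consultation ends (min)": max(end for *_, end in event_log),
}
for name, value in kpis.items():
    print(f"{name}: {value:.1f}")
```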
Detached Eddy Simulation of Film Cooling over a GE Flat Plate
NASA Technical Reports Server (NTRS)
Roy, Subrata
2005-01-01
The detached eddy simulation of film cooling has been applied to a proprietary GE plate-pipe configuration. The blowing ratio was 2.02, the velocity ratio was 1.26, and the temperature ratio was 1.61. Results indicate that the mixing processes downstream of the hole are highly anisotropic. The DES solution shows its ability to depict the dynamic nature of the flow and capture the asymmetry present in the temperature and velocity distributions. Further, the comparison between experimental and DES time-averaged effectiveness is satisfactory. Numerical values of span-averaged effectiveness show better prediction of the experimental values at downstream locations than a steady-state Glenn-HT solution. While the DES method shows obvious promise, there are several issues that need further investigation. Despite an accurate prediction in the vicinity of the hole, the simulation still falls short in the region x = 10d to 100d; this should be investigated. Also, the model used a flat plate; an actual turbine blade should be modeled in the future if additional funding is available.
2011-10-01
the 2012 Games in London, the 2015 Commonwealth Games in Toronto, and the management of cross-border emergency cases...such as the Olympic Games. Managing security at events such as Vancouver 2010 and the G8 and G20 summits is a challenge... emergency-measures management and business-continuity plans, a permanent structure has been set up
Pan, Feng; Reifsnider, Odette; Zheng, Ying; Proskorovsky, Irina; Li, Tracy; He, Jianming; Sorensen, Sonja V
2018-04-01
Treatment landscape in prostate cancer has changed dramatically with the emergence of new medicines in the past few years. The traditional survival partition model (SPM) cannot accurately predict long-term clinical outcomes because it is limited by its ability to capture the key consequences associated with this changing treatment paradigm. The objective of this study was to introduce and validate a discrete-event simulation (DES) model for prostate cancer. A DES model was developed to simulate overall survival (OS) and other clinical outcomes based on patient characteristics, treatment received, and disease progression history. We tested and validated this model with clinical trial data from the abiraterone acetate phase III trial (COU-AA-302). The model was constructed with interim data (55% death) and validated with the final data (96% death). Predicted OS values were also compared with those from the SPM. The DES model's predicted time to chemotherapy and OS are highly consistent with the final observed data. The model accurately predicts the OS hazard ratio from the final data cut (predicted: 0.74; 95% confidence interval [CI] 0.64-0.85 and final actual: 0.74; 95% CI 0.6-0.88). The log-rank test to compare the observed and predicted OS curves indicated no statistically significant difference between observed and predicted curves. However, the predictions from the SPM based on interim data deviated significantly from the final data. Our study showed that a DES model with properly developed risk equations presents considerable improvements to the more traditional SPM in flexibility and predictive accuracy of long-term outcomes. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
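As a purely hypothetical sketch of the patient-level approach summarised above (not the published risk equations), the snippet below draws times to progression and post-progression death from Weibull distributions and takes overall survival as the sum of the two phases; all shapes, scales, and the treatment effect are invented.

```python
# Illustrative patient-level DES of survival outcomes; parameters are placeholders.
import math
import random

def weibull_time(rng, scale, shape):
    """Inverse-CDF draw of a Weibull-distributed event time (months)."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def simulate_patient(rng, on_new_treatment):
    # Hypothetical "risk equations": treatment delays progression via a larger scale.
    ttp_scale = 30.0 if on_new_treatment else 18.0          # time to progression
    ttp = weibull_time(rng, scale=ttp_scale, shape=1.3)
    post_progression = weibull_time(rng, scale=14.0, shape=1.1)
    return ttp, ttp + post_progression                       # (proxy for time to chemo, OS)

def mean_os(on_new_treatment, n=20000, seed=1):
    rng = random.Random(seed)
    return sum(simulate_patient(rng, on_new_treatment)[1] for _ in range(n)) / n

print("mean OS, control  :", round(mean_os(False), 1), "months")
print("mean OS, treatment:", round(mean_os(True), 1), "months")
```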
Requirements analysis for a hardware, discrete-event, simulation engine accelerator
NASA Astrophysics Data System (ADS)
Taylor, Paul J., Jr.
1991-12-01
An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description Language (VHDL), of a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
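For readers unfamiliar with conservative synchronization, the following is a much-simplified, sequentialised sketch of the rule (in the spirit of Chandy-Misra-Bryant null messages), not the thesis's VHDL design: a logical process only executes events up to the minimum time promised on its input channels, and null messages advance those promises by a fixed lookahead. The two-process topology and all numbers are invented.

```python
# Conservative (null-message) synchronization, sequentialised for illustration.
import heapq

LOOKAHEAD = 1.0

class LogicalProcess:
    def __init__(self, name, neighbours):
        self.name = name
        self.clock = 0.0
        self.pending = []                                  # local future-event heap
        self.channel_time = {n: 0.0 for n in neighbours}   # promises from input channels

    def safe_time(self):
        # No straggler message can arrive earlier than the smallest channel promise.
        return min(self.channel_time.values())

    def receive(self, sender, timestamp, payload=None):
        self.channel_time[sender] = max(self.channel_time[sender], timestamp)
        if payload is not None:
            heapq.heappush(self.pending, (timestamp, payload))

    def step(self):
        # Execute every event that is provably safe, then emit a null-message promise.
        while self.pending and self.pending[0][0] <= self.safe_time():
            self.clock, _ = heapq.heappop(self.pending)
        return self.safe_time() + LOOKAHEAD        # timestamp carried by the null message

# Two LPs exchanging only null messages until both can safely pass their local events.
a = LogicalProcess("A", ["B"])
b = LogicalProcess("B", ["A"])
heapq.heappush(a.pending, (2.0, "event on A"))
heapq.heappush(b.pending, (3.5, "event on B"))
for _ in range(6):
    b.receive("A", a.step())        # A's null message raises B's channel promise
    a.receive("B", b.step())
print(a.clock, b.clock)             # both LPs have advanced past their local events
```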
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
Some Recent Developments in Turbulence Closure Modeling
NASA Astrophysics Data System (ADS)
Durbin, Paul A.
2018-01-01
Turbulence closure models are central to a good deal of applied computational fluid dynamical analysis. Closure modeling endures as a productive area of research. This review covers recent developments in elliptic relaxation and elliptic blending models, unified rotation and curvature corrections, transition prediction, hybrid simulation, and data-driven methods. The focus is on closure models in which transport equations are solved for scalar variables, such as the turbulent kinetic energy, a timescale, or a measure of anisotropy. Algebraic constitutive representations are reviewed for their role in relating scalar closures to the Reynolds stress tensor. Seamless and nonzonal methods, which invoke a single closure model, are reviewed, especially detached eddy simulation (DES) and adaptive DES. Other topics surveyed include data-driven modeling and intermittency and laminar fluctuation models for transition prediction. The review concludes with an outlook.
Speciations and Extinctions in a Self-Organizing Critical Model of Tree-Like Evolution
NASA Astrophysics Data System (ADS)
Kramer, M.; Vandewalle, N.; Ausloos, M.
1996-04-01
We study analytically a simple model of a self-organized critical evolution. The model considers both extinction and speciation events leading to the growth of phylogenetic-like trees. Through a mean-field like theory, we study the evolution of the local configurations for the tree leaves. The fitness threshold, below which life activity takes place through avalanches of all sizes is calculated. The transition between speciating (evolving) and dead trees is obtained and is in agreement with numerical simulations. Moreover, this theoretical work suggests that the structure of the tree is strongly dependent on the extinction strength.
Evaluation of the Navys Sea/Shore Flow Policy
2016-06-01
CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of...a more steady manning level, but the variability remains, even if the system is optimized. In building a Discrete-Event Simulation model, we...steady-state model. In FY 2014, CNA developed a Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model
Jones, Edmund; Masconi, Katya L.; Sweeting, Michael J.; Thompson, Simon G.; Powell, Janet T.
2018-01-01
Markov models are often used to evaluate the cost-effectiveness of new healthcare interventions but they are sometimes not flexible enough to allow accurate modeling or investigation of alternative scenarios and policies. A Markov model previously demonstrated that a one-off invitation to screening for abdominal aortic aneurysm (AAA) for men aged 65 y in the UK and subsequent follow-up of identified AAAs was likely to be highly cost-effective at thresholds commonly adopted in the UK (£20,000 to £30,000 per quality adjusted life-year). However, new evidence has emerged and the decision problem has evolved to include exploration of the circumstances under which AAA screening may be cost-effective, which the Markov model is not easily able to address. A new model to handle this more complex decision problem was needed, and the case of AAA screening thus provides an illustration of the relative merits of Markov models and discrete event simulation (DES) models. An individual-level DES model was built using the R programming language to reflect possible events and pathways of individuals invited to screening v. those not invited. The model was validated against key events and cost-effectiveness, as observed in a large, randomized trial. Different screening protocol scenarios were investigated to demonstrate the flexibility of the DES. The case of AAA screening highlights the benefits of DES, particularly in the context of screening studies.
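To illustrate the individual-level DES idea contrasted with the Markov approach above, here is a hypothetical sketch in which each man is simulated as a sequence of sampled event times rather than cycle-by-cycle transitions; every prevalence, probability, and time below is a placeholder, not an input of the published model.

```python
# Hypothetical individual-level screening DES; all parameters are illustrative only.
import random

def simulate_man(rng, invited, horizon=30.0):
    has_aaa = rng.random() < 0.05                  # AAA prevalence at invitation (placeholder)
    if not has_aaa:
        return horizon                             # survives the horizon (other-cause
                                                   # mortality omitted for brevity)
    t_rupture = rng.expovariate(1 / 12.0)          # years until rupture if untreated
    detected = invited and rng.random() < 0.75     # attends screening and AAA is found
    if detected:
        # Elective repair pre-empts rupture, with a small perioperative mortality.
        return horizon if rng.random() > 0.03 else rng.uniform(0.0, 0.1)
    if t_rupture < horizon and rng.random() < 0.8: # undetected rupture is usually fatal
        return t_rupture
    return horizon

def mean_life_years(invited, n=100000, seed=2):
    rng = random.Random(seed)
    return sum(simulate_man(rng, invited) for _ in range(n)) / n

print("not invited:", round(mean_life_years(False), 2), "life-years")
print("invited    :", round(mean_life_years(True), 2), "life-years")
```

Because event times are sampled directly, alternative screening protocols (re-invitation, different surveillance thresholds) can be explored by editing the event logic rather than redefining a Markov state space.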
ERIC Educational Resources Information Center
Meublat, Guy
This document forms part of a research project initiated by the Ministry of Education in Quebec and designed to forecast teacher demand over the next 15 years. It analyzes the problem of identifying potential teacher dropouts by means of a statistical model which provides simulations of various hypotheses and which can be easily revised by the…
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model for the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their application in healthcare are introduced, and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, E.; et al.
We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc $h^{-1}$) and galaxy-galaxy lensing (12 Mpc $h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.
Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations
2016-06-01
By Ali E. Opcin, June 2016, Master's thesis (Thesis Advisor: Arnold H. Buss). ...In this study, a discrete event simulation (DES) was built by modeling ships, and their sensors and weapons, to simulate convoy operations under
NASA Astrophysics Data System (ADS)
LeBlanc, Luc R.
Composite materials are increasingly used in fields such as aerospace, high-performance cars, and sporting goods, to name a few. Studies have shown that exposure to moisture degrades the strength of composites by promoting the initiation and propagation of delamination. Of these studies, very few address the effect of moisture on delamination initiation under mixed-mode I/II loading, and none addresses the effect of moisture on the mixed-mode I/II delamination growth rate in a composite. The first part of this thesis consists of determining the effects of moisture on delamination growth under mixed-mode I/II loading. Specimens of a unidirectional carbon/epoxy composite (G40-800/5276-1) were immersed in a distilled-water bath at 70°C until saturation. Quasi-static experimental tests over a range of mode I/II mixities (0%, 25%, 50%, 75% and 100%) were carried out to determine the effects of moisture on the delamination resistance of the composite. Fatigue tests were performed, with the same range of mode I/II mixities, to determine the effect of moisture on delamination initiation and on the delamination growth rate. The quasi-static test results showed that moisture reduces the delamination resistance of a carbon/epoxy composite over the whole range of mode I/II mixities, except in mode I, where the delamination resistance increases after exposure to moisture. Under fatigue loading, moisture accelerates delamination initiation and increases the growth rate for all mode I/II mixities. The experimental data collected were used to determine which of the static delamination criteria and mixed-mode I/II fatigue delamination growth-rate models proposed in the literature best represent delamination in the composite studied. A regression curve was used to determine the best fit between the experimental data and the static delamination criteria considered. A regression surface was used to determine the best fit between the experimental data and the fatigue growth-rate models considered. Based on these fits, the best static delamination criterion is the B-K criterion and the best fatigue growth-rate model is the Kenane-Benzeggagh model. To predict delamination during the design of complex parts, numerical models can be used. Predicting the delamination length under fatigue loading of a part is very important to ensure that an interlaminar crack will not grow excessively and cause the part to fail before the end of its design life. Following the recent trend, these models are often based on the cohesive zone approach with a finite element formulation. In the work presented in this thesis, the fatigue delamination growth model of Landry & LaPlante (2012) was improved by adding the treatment of mixed-mode I/II loading and by modifying the algorithm for computing the maximum delamination driving force. The cohesive zone parameters were calibrated from the quasi-static experimental tests in modes I and II.
Results of numerical simulations of the quasi-static mixed-mode I/II tests, with dry and moist specimens, were compared with the experimental tests. Fatigue simulations were also carried out and compared with the experimental delamination growth-rate results. The numerical results for the quasi-static and fatigue tests showed good correlation with the experimental results over the whole range of mode I/II mixities studied.
Model-Based Economic Evaluation of Treatments for Depression: A Systematic Literature Review.
Kolovos, Spyros; Bosmans, Judith E; Riper, Heleen; Chevreul, Karine; Coupé, Veerle M H; van Tulder, Maurits W
2017-09-01
An increasing number of model-based studies that evaluate the cost effectiveness of treatments for depression are being published. These studies have different characteristics and use different simulation methods. We aimed to systematically review model-based studies evaluating the cost effectiveness of treatments for depression and examine which modelling technique is most appropriate for simulating the natural course of depression. The literature search was conducted in the databases PubMed, EMBASE and PsycInfo between 1 January 2002 and 1 October 2016. Studies were eligible if they used a health economic model with quality-adjusted life-years or disability-adjusted life-years as an outcome measure. Data related to various methodological characteristics were extracted from the included studies. The available modelling techniques were evaluated based on 11 predefined criteria. This methodological review included 41 model-based studies, of which 21 used decision trees (DTs), 15 used cohort-based state-transition Markov models (CMMs), two used individual-based state-transition models (ISMs), and three used discrete-event simulation (DES) models. Just over half of the studies (54%) evaluated antidepressants compared with a control condition. The data sources, time horizons, cycle lengths, perspectives adopted and number of health states/events all varied widely between the included studies. DTs scored positively in four of the 11 criteria, CMMs in five, ISMs in six, and DES models in seven. There were substantial methodological differences between the studies. Since the individual history of each patient is important for the prognosis of depression, DES and ISM simulation methods may be more appropriate than the others for a pragmatic representation of the course of depression. However, direct comparisons between the available modelling techniques are necessary to yield firm conclusions.
Evaluation of the Navys Sea/Shore Flow Policy
2016-06-01
CNA developed an independent Discrete-Event Simulation model to evaluate and assess the effect of alternative sea/shore flow policies. In this study...remains, even if the system is optimized. In building a Discrete-Event Simulation model, we discovered key factors that should be included in the...Discrete-Event Simulation model to evaluate the impact of sea/shore flow policy (the DES-SSF model) and compared the results with the SSFM for one
NASA Astrophysics Data System (ADS)
Aboutajeddine, Ahmed
Micromechanical scale-transition models, which allow the effective properties of heterogeneous materials to be determined from the microstructure, are considered in this work. The objective is to account for the presence of an interphase between the matrix and the reinforcement in classical micromechanical models, as well as to reconsider the basic approximations of these models in order to treat multiphase materials. A new micromechanical model is therefore proposed to account for the presence of a thin elastic interphase when determining effective properties. This model was constructed using the integral equation, Hill's interfacial operators, and the Mori-Tanaka method. The expressions obtained for the overall moduli and the fields in the coating are analytical in nature. The basic approximation of this model is subsequently improved in a new model that addresses coated inclusions with a thin or thick coating. The solution relies on a double homogenization carried out at the level of the coated inclusion and of the material. This new approach makes it possible to fully grasp the implications of the modeling approximations. The results obtained are then exploited in the solution of the Hashin assemblage. Thus, several classical micromechanical models of different origins are unified and attached, in this work, to Hashin's geometric representation. Besides making it possible to fully appreciate the relevance of each model's approximation within this single view, the correct extension of these models to multiphase materials becomes possible. Several analytical and explicit models are then proposed based on solutions of different orders of the Hashin assemblage. One of the explicit models appears as a direct correction of the Mori-Tanaka model in cases where the latter fails to give good results. Finally, this corrected Mori-Tanaka model is used with Hill's operators to build a scale-transition model for materials with an elastoplastic interphase. The effective constitutive law obtained is incremental in nature and is coupled with the plasticity relation of the interphase. Simulations of mechanical tests for several properties of the plastic interphase made it possible to establish coating profiles that give the material better behavior.
NASA Astrophysics Data System (ADS)
Ait Hammou, Zouhair
This study concerns the design of a hybrid heat-exchanger/storage unit (AECH) for the simultaneous management of solar and electrical energy. A mathematical model based on the energy conservation equations is presented. It is developed to test different storage materials, including phase-change (solid/liquid) materials and sensible-heat storage materials. A computer code is implemented and then validated against analytical and numerical results from the literature. In parallel, a reduced-scale experimental prototype was built in the laboratory to validate the code. Simulations are carried out to study the effects of the design parameters and of the storage materials on the thermal behavior of the AECH and on electrical energy consumption. The simulation results over four winter months show that n-octadecane paraffin and capric acid are two desirable candidates for energy storage intended for space heating. Using these two materials in the AECH reduces electrical energy consumption by 32% and flattens the peak-demand problem, since 90% of the electrical energy is consumed during off-peak hours. Moreover, by adopting a preferential tariff, the calculation of the costs associated with electrical energy consumption shows that a consumer adopting this system benefits from a 50% reduction in the electricity bill.
NASA Astrophysics Data System (ADS)
Regev, Shaked; Farago, Oded
2018-10-01
We use a one-dimensional two layer model with a semi-permeable membrane to study the diffusion of a therapeutic drug delivered from a drug-eluting stent (DES). The rate of drug transfer from the stent coating to the arterial wall is calculated by using underdamped Langevin dynamics simulations. Our results reveal that the membrane has virtually no delay effect on the rate of delivery from the DES. The work demonstrates the great potential of underdamped Langevin dynamics simulations as an easy to implement, efficient, method for solving complicated diffusion problems in systems with a spatially-dependent diffusion coefficient.
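The following is a schematic 1-D underdamped Langevin sketch of the kind of calculation described above, not the paper's model or parameters: a particle released in the stent coating feels a layer-dependent friction (hence a spatially-dependent diffusion coefficient) and is absorbed once it crosses the outer edge of the wall, with the mean first-passage time standing in for the delivery rate. All values are illustrative.

```python
# Underdamped Langevin dynamics with a two-layer, position-dependent friction.
import math
import random

def friction(x, interface=0.5):
    # Two layers with different effective friction coefficients (illustrative values).
    return 5.0 if x < interface else 1.0

def first_passage_time(rng, wall=1.0, dt=1e-3, kT=1.0, m=1.0, t_max=1e4):
    x, v, t = 0.0, 0.0, 0.0
    while t < t_max:
        g = friction(x)
        # Euler-Maruyama step of m dv = -g v dt + sqrt(2 g kT) dW
        v += (-g * v / m) * dt + math.sqrt(2.0 * g * kT * dt) * rng.gauss(0.0, 1.0) / m
        x += v * dt
        t += dt
        if x < 0.0:            # reflecting boundary at the stent side
            x, v = -x, -v
        if x >= wall:          # absorbed at the far side of the arterial wall
            return t
    return t_max

rng = random.Random(3)
times = [first_passage_time(rng) for _ in range(200)]
print("mean first-passage time:", sum(times) / len(times))
```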
Improving Energy Efficiency for the Vehicle Assembly Industry: A Discrete Event Simulation Approach
NASA Astrophysics Data System (ADS)
Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer
2016-11-01
This paper presents a Discrete Event Simulation (DES) model for investigating and improving energy efficiency in a vehicle assembly line. The car manufacturing industry is one of the highest energy-consuming industries. Using the Rockwell Arena DES package, a detailed model was constructed for an actual vehicle assembly plant. The sources of energy considered in this research are electricity and fuel, which are the two main types of energy sources used in a typical vehicle assembly plant. The model captures process-specific energy measures for the painting, welding, and assembling processes. A sound energy efficiency model within this industry has a two-fold advantage: reducing CO2 emissions and reducing the costs associated with fuel and electricity consumption. The paper starts with an overview of challenges in energy consumption within the facilities of an automotive assembly line and highlights the parameters for energy efficiency. The results of the simulation model indicated improvements toward the energy-saving objectives and reduced costs.
NASA Astrophysics Data System (ADS)
Comot, Pierre
The aerospace industry is studying the possibility of using brazed joints structurally, with a view to reducing weight and cost. The development of a fast, reliable, and inexpensive method for evaluating the structural integrity of such joints therefore appears essential. The mechanical strength of a brazed joint depends mainly on the amount of brittle phase in its microstructure. Ultrasonic guided waves can detect this type of phase when they are coupled with a spatio-temporal measurement. Moreover, the nature of this type of wave allows the inspection of joints with complex shapes. This thesis therefore focuses on the development of a technique based on ultrasonic guided waves for the inspection of Inconel 625 lap brazed joints with BNi-2 as the filler metal. First, a finite element model of the joint was used to simulate ultrasound propagation and optimize the inspection parameters; the simulation also demonstrated the feasibility of the technique for detecting the amount of brittle phase in this type of joint. The optimized parameters are the shape of the excitation signal, its center frequency, and the excitation direction. The simulations showed that the energy of the ultrasonic wave transmitted through the joint, as well as that reflected, both extracted from the dispersion curves, were proportional to the amount of brittle phase present in the joint, and therefore this method makes it possible to identify whether or not a brittle phase is present in this type of joint. Experiments were then conducted on three typical specimens with different amounts of brittle phase in the joint; to obtain such specimens, different brazing times were used (1, 60 and 180 min). For this purpose, an automated test bench was developed to perform an analysis similar to that used in the simulations, with the experimental parameters chosen in accordance with the optimization carried out in the simulations and after an initial optimization of the experimental procedure. Finally, the experimental results confirm the simulation results and demonstrate the potential of the developed method.
Degeling, Koen; Schivo, Stefano; Mehra, Niven; Koffijberg, Hendrik; Langerak, Rom; de Bono, Johann S; IJzerman, Maarten J
2017-12-01
With the advent of personalized medicine, the field of health economic modeling is being challenged and the use of patient-level dynamic modeling techniques might be required. To illustrate the usability of two such techniques, timed automata (TA) and discrete event simulation (DES), for modeling personalized treatment decisions. An early health technology assessment on the use of circulating tumor cells, compared with prostate-specific antigen and bone scintigraphy, to inform treatment decisions in metastatic castration-resistant prostate cancer was performed. Both modeling techniques were assessed quantitatively, in terms of intermediate outcomes (e.g., overtreatment) and health economic outcomes (e.g., net monetary benefit). Qualitatively, among others, model structure, agent interactions, data management (i.e., importing and exporting data), and model transparency were assessed. Both models yielded realistic and similar intermediate and health economic outcomes. Overtreatment was reduced by 6.99 and 7.02 weeks by applying circulating tumor cell as a response marker at a net monetary benefit of -€1033 and -€1104 for the TA model and the DES model, respectively. Software-specific differences were observed regarding data management features and the support for statistical distributions, which were considered better for the DES software. Regarding method-specific differences, interactions were modeled more straightforward using TA, benefiting from its compositional model structure. Both techniques prove suitable for modeling personalized treatment decisions, although DES would be preferred given the current software-specific limitations of TA. When these limitations are resolved, TA would be an interesting modeling alternative if interactions are key or its compositional structure is useful to manage multi-agent complex problems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Reducing ambulance response times using discrete event simulation.
Wei Lam, Sean Shao; Zhang, Zhong Cheng; Oh, Hong Choon; Ng, Yih Ying; Wah, Win; Hock Ong, Marcus Eng
2014-01-01
The objectives of this study are to develop a discrete-event simulation (DES) model for the Singapore Emergency Medical Services (EMS), and to demonstrate the utility of this DES model for the evaluation of different policy alternatives to improve ambulance response times. A DES model was developed based on retrospective emergency call data over a continuous 6-month period in Singapore. The main outcome measure is the distribution of response times. The secondary outcome measure is ambulance utilization levels based on unit hour utilization (UHU) ratios. The DES model was used to evaluate different policy options in order to improve the response times, while maintaining reasonable fleet utilization. Three policy alternatives looking at the reallocation of ambulances, the addition of new ambulances, and alternative dispatch policies were evaluated. Modifications of dispatch policy combined with the reallocation of existing ambulances were able to achieve response time performance equivalent to that of adding 10 ambulances. The median (90th percentile) response time was 7.08 minutes (12.69 minutes). Overall, this combined strategy managed to narrow the gap between the ideal and existing response time distribution by 11-13%. Furthermore, the median UHU under this combined strategy was 0.324 with an interquartile range (IQR) of 0.047 versus a median utilization of 0.285 (IQR of 0.051) resulting from the introduction of additional ambulances. Response times were shown to be improved via a more effective reallocation of ambulances and dispatch policy. More importantly, the response time improvements were achieved without a reduction in the utilization levels and additional costs associated with the addition of ambulances. We demonstrated the effective use of DES as a versatile platform to model the dynamic system complexities of Singapore's national EMS systems for the evaluation of operational strategies to improve ambulance response times.
Famiglietti, Robin M; Norboge, Emily C; Boving, Valentine; Langabeer, James R; Buchholz, Thomas A; Mikhail, Osama
To meet demand for radiation oncology services and ensure patient-centered, safe care, management in an academic radiation oncology department initiated quality improvement efforts using discrete-event simulation (DES). Although the long-term goal was testing and deploying solutions, the primary aim at the outset was characterizing and validating a computer simulation model of existing operations to identify targets for improvement. The adoption and validation of a DES model of processes and procedures affecting patient flow and satisfaction, employee experience, and efficiency were undertaken in 2012-2013. Multiple sources were tapped for data, including direct observation, equipment logs, timekeeping, and electronic health records. During their treatment visits, patients averaged 50.4 minutes in the treatment center, of which 38% was spent in the treatment room. Patients with appointments between 10 AM and 2 PM experienced the longest delays before entering the treatment room, and those in the clinic in the day's first and last hours, the shortest (<5 minutes). Despite being staffed for 14.5 hours daily, the clinic registered only 20% of patients after 2:30 PM. Utilization of equipment averaged 58%, and utilization of staff, 56%. The DES modeling quantified operations, identifying evidence-based targets for next-phase remediation and providing data to justify initiatives.
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require $>10^5$ simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
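A toy numpy illustration of the expansion idea summarised above follows: if C = A + B with A known analytically (e.g. shape noise) and B estimated from a modest number of simulations, the precision matrix can be approximated by the leading terms of (A + B)^(-1) = A^(-1) - A^(-1) B A^(-1) + A^(-1) B A^(-1) B A^(-1) - ... The dimensions, the forms of A and B, and the number of mock simulations below are arbitrary choices, not the paper's setup.

```python
# Leading-order expansion of a precision matrix around an analytic covariance part.
import numpy as np

rng = np.random.default_rng(0)
dim, n_sims = 20, 100

A = np.diag(np.linspace(1.0, 2.0, dim))          # analytic, noise-free contribution
L = 0.05 * rng.standard_normal((dim, dim))
B = L @ L.T                                      # smaller contribution to be estimated

# Mock simulations containing only the B contribution ("A switched off").
samples = rng.multivariate_normal(np.zeros(dim), B, size=n_sims)
B_hat = np.cov(samples, rowvar=False)

A_inv = np.linalg.inv(A)
truth = np.linalg.inv(A + B)
first_order = A_inv - A_inv @ B_hat @ A_inv
second_order = first_order + A_inv @ B_hat @ A_inv @ B_hat @ A_inv

def rel_err(approx):
    return np.linalg.norm(approx - truth) / np.linalg.norm(truth)

print("first-order expansion :", rel_err(first_order))
print("second-order expansion:", rel_err(second_order))
```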
NASA Astrophysics Data System (ADS)
Donnadieu, P.; Dénoyer, F.
1996-11-01
A comparative X-ray and electron diffraction study has been performed on Al-Li-Cu icosahedral quasicrystal in order to investigate the diffuse scattering rings revealed by a previous work. Electron diffraction confirms the existence of rings but shows that the rings have a fine structure. The diffuse aspect on the X-ray diffraction patterns is then due to an averaging effect. Recent simulations based on the model of canonical cells related to the icosahedral packing give diffraction patterns in agreement with this fine structure effect.
Aerodynamic Study of a Turbulent Jet Impinging on a Concave Wall
NASA Astrophysics Data System (ADS)
LeBlanc, Benoit
Given the growing demand for high temperatures in the combustion chambers of aerospace propulsion systems (turboshaft engines, jet engines, etc.), interest in impinging-jet cooling has grown. Cooling turbine blades allows an increase in combustion temperature, which translates into higher combustion efficiency and therefore better fuel economy. Heat transfer in the blades is influenced by the aerodynamic aspects of jet cooling, particularly in the case of turbulent flows. A lack of understanding of the aerodynamics inside these confined spaces can lead to unexpected changes in heat transfer, which increases the risk of creep. It is therefore of interest to the aerospace industry and to academia to pursue research on turbulent jets impinging on curved walls. Jets impinging on curved surfaces have already been the subject of numerous studies. However, oscillatory conditions observed in the laboratory have proven difficult to reproduce numerically, since the structures of flows impinging on concave walls are strongly dependent on turbulence and unsteady effects. An experimental study was carried out at the PPRIME Institute at the Universite de Poitiers to observe the oscillation phenomenon in the jet. A series of tests examined laminar and turbulent flow conditions; however, the cost of the experimental tests only allowed a glimpse of the global phenomenon. A second series of tests was carried out numerically at the Universite de Moncton with the OpenFOAM tool for laminar, two-dimensional flow conditions. The goal of this study is therefore to continue the investigation of the oscillatory aerodynamics of jets impinging on curved walls, but for a transitional, turbulent, three-dimensional flow regime. The Reynolds numbers used in the numerical study, based on the diameter of the observed linear jet, are Red = 3333 and 6667, considered to be in transition toward turbulence. In this study, a numerical setup is built. The mesh, the numerical scheme, the boundary conditions and the discretization are discussed and chosen. The results are then validated against experimental turbulence data. In numerical turbulence modeling, Reynolds-Averaged Navier-Stokes (RANS) models have difficulty with unsteady flows in the transitional regime. Large Eddy Simulation (LES) provides a more accurate solution, but at a cost still out of reach for this study. The method employed for this study is Detached Eddy Simulation (DES), which is a hybrid of the two methods (RANS and LES). To analyze the flow topology, Proper Orthogonal Decomposition (POD) was also performed on the numerical results. The study first showed the relatively high computation time associated with DES runs needed to keep the Courant number low. The numerical results nevertheless succeeded in correctly reproducing the asynchronous flapping observed in the experimental tests. The observed flapping appears to be caused by transitional effects, which would explain the difficulty of RANS models in correctly reproducing the aerodynamics of the flow.
The jet flow, in turn, is three-dimensional and turbulent most of the time, except for short periods when it is stable and independent of the third dimension. The topological study of the flow also allowed the recognition of underlying principal structures that were blurred by the turbulence. Keywords: impinging jet, concave wall, turbulence, transitional, detached eddy simulation (DES), OpenFOAM.
2011-12-01
Her Majesty the Queen (in Right of Canada), as represented by the Minister of National Defence, 2011, Abstract...modelling of the communication and decision-making functions within the shared decision-making (SDM) framework of the work stream "Research through...in-vivo simulation on shared decision-making in meta-organizations". This component of the program
NASA Astrophysics Data System (ADS)
Gorecki, A.; Brambilla, A.; Moulin, V.; Gaborieau, E.; Radisson, P.; Verger, L.
2013-11-01
Multi-energy (ME) detectors are becoming a serious alternative to classical dual-energy sandwich (DE-S) detectors for X-ray applications such as medical imaging or explosive detection. They can use the full X-ray spectrum of irradiated materials, rather than having only low- and high-energy measurements, which may be mixed. In this article, we compare both simulated and real industrial detection systems, operating at a high count rate, independently of the dimensions of the measurements and independently of any signal processing methods. Simulations or prototypes of similar detectors have already been compared (see [1] for instance), but never independently of estimation methods and never with real detectors. We have simulated both an ME detector made of CdTe (based on the characteristics of the MultiX ME100) and a DE-S detector (based on the characteristics of Detection Technology's X-Card 1.5-64DE model). These detectors were compared to a perfect spectroscopic detector and an optimal DE-S detector. For comparison purposes, two approaches were investigated: the first addresses how to distinguish signals, while the second relates to identifying materials. Performance criteria were defined and comparisons were made over a range of material thicknesses and with different photon statistics. Experimental measurements in a specific configuration were acquired to check the simulations. Results showed good agreement between the ME simulation and the ME100 detector. Both criteria appear to be equivalent, and the ME detector performs 3.5 times better than the DE-S detector with the same photon statistics, based on simulations and experimental measurements. Regardless of the photon statistics, ME detectors appeared more efficient than DE-S detectors for all material thicknesses between 1 and 9 cm when measuring plastics with an attenuation signature close to that of explosive materials. This translates into an improved false detection rate (FDR): DE-S detectors have an FDR 2.87±0.03-fold higher than ME detectors for 4 cm of POM with 20,000 incident photons, when identifications are screened against a two-material base.
NASA Astrophysics Data System (ADS)
Leblois, T.; Tellier, C. R.
1992-07-01
We propose a theoretical model for the anisotropic etching of crystals, intended for application in micromachining. The originality of the model lies in the introduction of dissolution tensors to express the representative surface of the dissolution slowness. Knowledge of the equation of the slowness surface allows us to determine the trajectories of all the elements which compose the starting surface. It is then possible to construct the final etched shape by numerical simulation. Several examples are given in this paper which show that the final etched shapes are correlated to the extrema of the dissolution slowness. Since the slowness surface must be determined from experiments, emphasis is placed on difficulties encountered when correlating theory with experiments.
Detached Eddy Simulation for the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Abdol-Hamid, Khaled; Parlette, Edward B.
2015-01-01
Numerical simulations for the flow around the F-16XL configuration as a contribution to the Cranked Arrow Wing Aerodynamic Project International 2 (CAWAPI-2) have been performed. The NASA Langley Tetrahedral Unstructured Software System (TetrUSS) with its USM3D solver was used to perform the unsteady flow field simulations for the subsonic high angle-of-attack case corresponding to flight condition (FC) 25. Two approaches were utilized to capture the unsteady vortex flow over the wing of the F-16XL. The first approach was to use Unsteady Reynolds-Averaged Navier-Stokes (URANS) coupled with standard turbulence closure models. The second approach was to use Detached Eddy Simulation (DES), which creates a hybrid model that attempts to combine the most favorable elements of URANS models and Large Eddy Simulation (LES). Computed surface static pressure profiles are presented and compared with flight data. Time-averaged and instantaneous results obtained on coarse, medium and fine grids are compared with the flight data. The intent of this study is to demonstrate that the DES module within the USM3D solver can be used to provide valuable data in predicting vortex-flow physics on a complex configuration.
2003-11-01
of defence on pre-existing equipment, personnel and doctrines, but instead starts from an analysis of the threats and of the... personnel and doctrines. As will be seen later, this new defence-system engineering approach, which is intended to be proactive and not...are resolved under constraints of zero fatalities, or at least minimal losses, whose "acceptability" is essentially a function of
Managing resource capacity using hybrid simulation
NASA Astrophysics Data System (ADS)
Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat
2014-12-01
Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in the hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. A holistic model built with DES alone would be too complex, while one built with SD alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to obtain a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources for ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround time, bed occupancy rate, and ward admission rate. A rough sketch of the hybrid coupling idea is given below.
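The following is a rough, purely illustrative sketch of that hybrid coupling, not the AnyLogic model: a system-dynamics-style stock of occupied ward beds evolves through a difference equation, while a discrete patient-level loop for ED arrivals is gated by that aggregate stock, so ward occupancy feeds back on ED admissions. All rates and capacities are invented.

```python
# Hybrid coupling sketch: an SD-style stock gates a discrete patient-level loop.
import random

def hybrid_run(hours=500, beds=40, discharge_fraction=0.02, seed=5):
    rng = random.Random(seed)
    occupied = 30.0                       # SD stock: occupied ward beds
    admitted = blocked = 0
    for _ in range(hours):
        # SD part: aggregate discharge flow over a one-hour time step.
        occupied -= discharge_fraction * occupied
        # Discrete part: individual ED patients requesting ward admission this hour.
        for _ in range(rng.randint(0, 3)):
            if occupied < beds:
                occupied += 1.0
                admitted += 1
            else:
                blocked += 1              # boarding in the ED lengthens stay
    return admitted, blocked

print(hybrid_run())
```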
Leistedt, B.; Peiris, H. V.; Elsner, F.; ...
2016-10-17
Spatially-varying depth and characteristics of observing conditions, such as seeing, airmass, or sky background, are major sources of systematic uncertainties in modern galaxy survey analyses, in particular in deep multi-epoch surveys. We present a framework to extract and project these sources of systematics onto the sky, and apply it to the Dark Energy Survey (DES) to map the observing conditions of the Science Verification (SV) data. The resulting distributions and maps of sources of systematics are used in several analyses of DES SV to perform detailed null tests with the data, and also to incorporate systematics in survey simulations. We illustrate the complementarity of these two approaches by comparing the SV data with the BCC-UFig, a synthetic sky catalogue generated by forward-modelling of the DES SV images. We then analyse the BCC-UFig simulation to construct galaxy samples mimicking those used in SV galaxy clustering studies. We show that the spatially-varying survey depth imprinted in the observed galaxy densities and the redshift distributions of the SV data are successfully reproduced by the simulation and well-captured by the maps of observing conditions. The combined use of the maps, the SV data and the BCC-UFig simulation allows us to quantify the impact of spatial systematics on N(z), the redshift distributions inferred using photometric redshifts. We conclude that spatial systematics in the SV data are mainly due to seeing fluctuations and are under control in current clustering and weak lensing analyses. However, they will need to be carefully characterised in upcoming phases of DES in order to avoid biasing the inferred cosmological results. The framework presented is relevant to all multi-epoch surveys, and will be essential for exploiting future surveys such as the Large Synoptic Survey Telescope, which will require detailed null-tests and realistic end-to-end image simulations to correctly interpret the deep, high-cadence observations of the sky.
Discrete event simulation for healthcare organizations: a tool for decision making.
Hamrock, Eric; Paige, Kerrie; Parks, Jennifer; Scheulen, James; Levin, Scott
2013-01-01
Healthcare organizations face challenges in efficiently accommodating increased patient demand with limited resources and capacity. The modern reimbursement environment prioritizes the maximization of operational efficiency and the reduction of unnecessary costs (i.e., waste) while maintaining or improving quality. As healthcare organizations adapt, significant pressures are placed on leaders to make difficult operational and budgetary decisions. In lieu of hard data, decision makers often base these decisions on subjective information. Discrete event simulation (DES), a computerized method of imitating the operation of a real-world system (e.g., healthcare delivery facility) over time, can provide decision makers with an evidence-based tool to develop and objectively vet operational solutions prior to implementation. DES in healthcare commonly focuses on (1) improving patient flow, (2) managing bed capacity, (3) scheduling staff, (4) managing patient admission and scheduling procedures, and (5) using ancillary resources (e.g., labs, pharmacies). This article describes applicable scenarios, outlines DES concepts, and describes the steps required for development. An original DES model developed to examine crowding and patient flow for staffing decision making at an urban academic emergency department serves as a practical example.
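As a rough illustration of the kind of model described above (not the authors' emergency-department model), the sketch below uses the SimPy library to queue exponentially distributed patient arrivals for a small pool of providers; the arrival rate, treatment time, and staffing level are invented for illustration.

# Minimal patient-flow sketch in SimPy; all parameters are assumptions.
import random
import simpy

MEAN_INTERARRIVAL = 10.0   # minutes between arrivals (assumed)
MEAN_TREATMENT = 25.0      # minutes of provider time per patient (assumed)
NUM_PROVIDERS = 3          # staffed treatment spaces (assumed)

waits = []

def patient(env, providers):
    arrive = env.now
    with providers.request() as req:            # queue for a provider
        yield req
        waits.append(env.now - arrive)          # time spent waiting in queue
        yield env.timeout(random.expovariate(1.0 / MEAN_TREATMENT))

def arrivals(env, providers):
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL))
        env.process(patient(env, providers))

random.seed(42)
env = simpy.Environment()
providers = simpy.Resource(env, capacity=NUM_PROVIDERS)
env.process(arrivals(env, providers))
env.run(until=8 * 60)                           # one 8-hour shift
print(f"patients seen: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")

Replacing the placeholder rates with measured arrival and service distributions, and adding beds, labs, and staff schedules as further resources, is what turns such a toy into the decision-support tool the article describes.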
Developing Flexible Discrete Event Simulation Models in an Uncertain Policy Environment
NASA Technical Reports Server (NTRS)
Miranda, David J.; Fayez, Sam; Steele, Martin J.
2011-01-01
On February 1st, 2010, U.S. President Barack Obama submitted to Congress his proposed budget request for Fiscal Year 2011. This budget included significant changes to the National Aeronautics and Space Administration (NASA), including the proposed cancellation of the Constellation Program. This change proved to be controversial, and Congressional approval of the program's official cancellation would take many months to complete. During this same period an end-to-end discrete event simulation (DES) model of Constellation operations was being built through the joint efforts of Productivity Apex Inc. (PAI) and Science Applications International Corporation (SAIC) teams under the guidance of NASA. The uncertainty regarding the Constellation program presented a major challenge to the DES team: continue the development of this program-of-record simulation while at the same time remaining prepared for possible changes to the program. This required the team to rethink how it would develop its model and make it flexible enough to support possible future vehicles while remaining specific enough to support the program-of-record. The challenge was compounded by the fact that the model was being developed with the traditional DES process orientation, which lacks the flexibility of object-oriented approaches. The team met this challenge through significant pre-planning that led to the "modularization" of the model's structure: identifying what was generic, finding natural logic break points, and standardizing the inter-logic numbering system. The outcome of this work was a model that not only can easily be modified to support future rocket programs, but is also structured and organized in a way that facilitates rapid verification. This paper discusses in detail the process the team followed to build this model and the many advantages this method provides to builders of traditional process-oriented discrete event simulations.
NASA Astrophysics Data System (ADS)
Chahbani, Samia
The masses, centers of gravity, and moments of inertia are the main parameters in the three phases of aircraft design. They are of extreme importance in studies of the stability and proper functioning of the aircraft by modeling and simulation methods. Unfortunately, these data are not always available given the confidentiality of the aerospace field. A question arises naturally: how can the mass, center of gravity, and moments of inertia of an aircraft be estimated based only on its geometry? In the context of this thesis, the masses are estimated with Raymer's methods. Procedures based on mechanical engineering techniques applied to the described aircraft are used for determining the centers of gravity. The DATCOM method is applied to obtain the moments of inertia. Finally, the results obtained are validated using the flight simulator at the LARCASE corresponding to the Cessna Citation X. We conclude with an analytical model that summarizes the steps to follow for estimating the masses, centers of gravity, and moments of inertia of any commercial aircraft.
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
NASA Astrophysics Data System (ADS)
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper studies a discrete event simulation (DES) as a computer-based modelling approach that imitates a real pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, the patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements for the queuing system of this pharmacy. The waiting time for each server is analysed; Counters 3 and 4 have the highest waiting times, at 16.98 and 16.73 minutes, respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters because, even though M/M/4 and M/M/5 produce further reductions in patient waiting time, Counter 5 is rarely used.
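For context, the M/M/c waiting times discussed above can be cross-checked analytically with the Erlang C formula; the sketch below does this in Python with assumed arrival and service rates, not the rates measured at Hospital Tuanku Fauziah.

# Erlang C check of an M/M/c queue; lam and mu are illustrative assumptions.
from math import factorial

def erlang_c(lam, mu, c):
    """Probability that an arriving patient must wait (Erlang C, M/M/c)."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilisation, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: lambda >= c*mu")
    summation = sum(a**k / factorial(k) for k in range(c))
    top = a**c / factorial(c) / (1 - rho)
    return top / (summation + top)

def mean_wait(lam, mu, c):
    """Mean time in queue Wq = C(c, a) / (c*mu - lambda)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

lam = 0.5   # patient arrivals per minute (assumed)
mu = 0.3    # prescriptions filled per pharmacist per minute (assumed)
for c in (2, 3, 4, 5):
    print(c, "servers -> Wq =", round(mean_wait(lam, mu, c), 2), "min")

With these invented rates the analytical Wq drops sharply from two to three servers and only marginally beyond, which mirrors the diminishing returns the simulation study reports for M/M/4 and M/M/5.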
Scholz, Stefan; Mittendorf, Thomas
2014-12-01
Rheumatoid arthritis (RA) is a chronic, inflammatory disease with severe effects on the functional ability of patients. With a prevalence of 0.5 to 1.0 percent in western countries, new treatment options are a major concern for decision makers with regard to their budget impact. In this context, cost-effectiveness analyses are a helpful tool for evaluating new treatment options for reimbursement schemes. The aim was to analyze and compare decision-analytic modeling techniques and to explore their use in RA with regard to their advantages and shortcomings. A systematic literature review was conducted in PubMed, and 58 studies reporting health-economic decision models were analyzed with regard to the modeling technique used. Of the 58 reviewed publications, 13 reported decision-tree analyses, 25 (cohort) Markov models, 13 individual sampling methods (ISM) and seven discrete event simulations (DES). Twenty-six studies were identified as presenting independently developed models and 32 as adoptions. The modeling techniques used were found to differ in their complexity and in the number of treatment options compared. Methodological features are presented in the article and a comprehensive overview of the cost-effectiveness estimates is given in Additional files 1 and 2. Compared with the other modeling techniques, ISM and DES have advantages in covering patient heterogeneity, and DES can additionally model more complex treatment sequences and competing risks in RA patients. Nevertheless, sufficient data must be available to avoid assumptions in ISM and DES exercises that could bias the results. Due to the different settings, time frames and interventions in the reviewed publications, no direct comparison of modeling techniques was possible. Results from other indications suggest that incremental cost-effectiveness ratios (ICERs) do not differ significantly between Markov and DES models, but DES is able to report more outcome parameters. Given a sufficient data supply, DES is the modeling technique of choice when modeling cost-effectiveness in RA. Otherwise, transparency about the data inputs is crucial for valid results and for informing decision makers about possible biases. With regard to ICERs, Markov models may provide estimates similar to those of more advanced modeling techniques.
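For reference, the incremental cost-effectiveness ratio (ICER) reported by such models is simply the cost difference divided by the effect difference between a new treatment and its comparator:

$$ \mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}} $$

With purely illustrative values (a new treatment costing 45,000 against 30,000 for the comparator, and yielding 6.0 versus 5.5 QALYs), the ICER would be 15,000 / 0.5 = 30,000 per QALY gained; the review above compares how different modeling techniques arrive at such estimates, not the estimates themselves.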
DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCrann, N.; et al.
We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realizations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $$\Omega_m-\sigma_8$$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $$S_8=\sigma_8(\Omega_m/0.3)^{0.5}$$ and $$\Omega_m$$ are smaller than the DES Y1 $$1\sigma$$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $$\Omega_m$$ and $$S_8$$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
Methodes de caracterisation des proprietes thermomecaniques d'un acier martensitique =
NASA Astrophysics Data System (ADS)
Ausseil, Lucas
The aim of the study is to develop methods for measuring the thermomechanical properties of a martensitic steel during rapid heating. These data can feed existing finite-element models with experimental inputs. For this purpose, 4340 steel is used. This steel, notably used in gear wheels, has very attractive mechanical properties, which can be modified through heat treatments. The Gleeble 3800 thermomechanical simulator is used; it can in principle reproduce all the conditions present in manufacturing processes. Dilatometry tests performed in this project yield the exact austenitic and martensitic phase-change temperatures. Tensile tests also allow the yield strength of the material to be deduced in the austenitic domain from 850 °C to 1100 °C. The effect of deformation on the transformation start temperature is shown qualitatively. A numerical simulation is also carried out to understand the phenomena occurring during the tests.
Data encryption standard ASIC design and development report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Perry J.; Pierson, Lyndon George; Witzke, Edward L.
2003-10-01
This document describes the design, fabrication, and testing of the SNL Data Encryption Standard (DES) ASIC. This device was fabricated in Sandia's Microelectronics Development Laboratory using 0.6 µm CMOS technology. The SNL DES ASIC was modeled using VHDL, then simulated, and synthesized using Synopsys, Inc. software, and finally IC layout was performed using Compass Design Automation's CAE tools. IC testing was performed by Sandia's Microelectronic Validation Department using a HP 82000 computer aided test system. The device is a single integrated circuit, pipelined realization of DES encryption and decryption capable of throughputs greater than 6.5 Gb/s. Several enhancements accommodate ATM or IP network operation and performance scaling. This design is the latest step in the evolution of DES modules.
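As a rough sanity check on the quoted throughput, assume that the fully pipelined core accepts one 64-bit DES block per clock cycle; the clock frequency below is an assumption implied by that reading, not a figure stated in the report:

$$ \text{throughput} = f_{\text{clk}} \times 64~\text{bits per clock} \quad\Rightarrow\quad 6.5~\text{Gb/s} \approx 102~\text{MHz} \times 64~\text{bits}. $$

Under this assumption, a clock of roughly 100 MHz in 0.6 µm CMOS is sufficient to reach the reported rate, which is consistent with a design that pipelines all 16 DES rounds rather than iterating them.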
Hydrodynamic model of cells for designing systems of urban groundwater drainage
NASA Astrophysics Data System (ADS)
Zimmermann, Eric; Riccardi, Gerardo
2000-08-01
An improved mathematical hydrodynamic quasi-two-dimensional model of cells, CELSUB3, is presented for simulating drainage systems that consist of pumping well fields or subsurface drains. The CELSUB3 model is composed of an assemblage of algorithms that have been developed and tested previously and that simulate saturated flow in porous media, closed conduit flow, and flow through pumping stations. A new type of link between aquifer cells and drainage conduits is proposed. This link is verified in simple problems with well known analytical solutions. The correlation between results from analytical and mathematical solutions was considered satisfactory in all cases. To simulate more complex situations, the new proposed version, CELSUB3, was applied in a project designed to control the water-table level within a sewer system in Chañar Ladeado Town, Santa Fe Province, Argentina. Alternative drainage designs, which were evaluated under conditions of dynamic recharge caused by rainfall in a critical year (wettest year for the period of record) and a typical year, are briefly described. After analyzing ten alternative designs, the best technical-economic solution is a subsurface drainage system of closed conduits with pumping stations and evacuation channels.
NASA Astrophysics Data System (ADS)
Moron, Vincent; Navarra, Antonio
2000-05-01
This study presents the skill in simulating seasonal rainfall over tropical America from an ensemble of three 34-year general circulation model (ECHAM4) simulations forced with observed sea surface temperatures between 1961 and 1994. The skill gives a first idea of the amount of potential predictability if the sea surface temperatures are perfectly known some time in advance. We use statistical post-processing based on the leading modes (extracted from a Singular Value Decomposition of the covariance matrix between observed and simulated rainfall fields) to improve the raw skill obtained by simple comparison between observations and simulations. It is shown that 36-55% of the observed seasonal variability is explained by the simulations on a regional basis. Skill is greatest for the Brazilian Nordeste (March-May), but is also high, for example, for northern South America and the Caribbean basin in June-September and for northern Amazonia in September-November.
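A minimal sketch of the SVD-based post-processing described above, run here on synthetic observed and simulated rainfall anomaly fields; the field sizes, the random data, and the number of retained modes are arbitrary choices rather than those of the study.

# Schematic maximum-covariance analysis between observed and simulated fields.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_obs_pts, n_sim_pts = 34, 200, 200
obs = rng.standard_normal((n_years, n_obs_pts))   # observed seasonal anomalies (synthetic)
sim = rng.standard_normal((n_years, n_sim_pts))   # simulated seasonal anomalies (synthetic)

# remove the time mean so we work with anomalies
obs -= obs.mean(axis=0)
sim -= sim.mean(axis=0)

# cross-covariance matrix between observed and simulated fields
C = obs.T @ sim / (n_years - 1)

# leading coupled modes from the SVD of the covariance matrix
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 3                                             # number of retained modes (assumed)
obs_pcs = obs @ U[:, :k]                          # expansion coefficients, observations
sim_pcs = sim @ Vt[:k].T                          # expansion coefficients, simulation

# skill proxy: correlation between coupled expansion coefficients per mode
for m in range(k):
    r = np.corrcoef(obs_pcs[:, m], sim_pcs[:, m])[0, 1]
    print(f"mode {m + 1}: squared covariance fraction {s[m]**2 / np.sum(s**2):.2f}, r = {r:.2f}")

With real data the correlations of the leading coupled modes, rather than the random values produced here, are what quantify the recoverable, SST-forced part of the seasonal rainfall signal.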
Terentiev, Alexander A; Moldogazieva, Nurbubu T; Levtsova, Olga V; Maximenko, Dmitry M; Borozdenko, Denis A; Shaitan, Konstantin V
2012-04-01
It has long been demonstrated experimentally that human alpha-fetoprotein (HAFP) can bind immobilized estrogens, most efficiently the synthetic estrogen analog diethylstilbestrol (DES). However, the question remains why HAFP, unlike rodent AFP, cannot bind free estrogens. Moreover, despite the fact that AFP was first discovered more than 50 years ago and is presently recognized as a "gold standard" among onco-biomarkers, its three-dimensional (3D) structure has not yet been solved experimentally. In this work, using the MODELLER program, we generated a 3D model of HAFP based on homology with human serum albumin (HSA) and vitamin D-binding protein (VTDB), followed by molecular docking of DES into the model structure and a molecular dynamics (MD) simulation study of the resulting complex. The constructed model has a U-shaped structure in which a cavity can be distinguished; the putative estrogen-binding site is localized in this cavity. Validation by RMSD calculation and with the PROCHECK program showed good quality of the model and stability of the extended region of four alpha-helical structures that contains the putative hormone-binding residues. Data extracted from the MD simulation trajectory suggest two types of interactions between HAFP amino acid residues and the DES molecule: (1) hydrogen bonding involving residues S445, R452, and E551; and (2) hydrophobic interactions involving residues L138, M448, and M548. We suggest that immobilization of the hormone on a long spacer delivers the estrogen molecule to the binding site and thereby facilitates the interaction between HAFP and the hormone.
Turnaround Time Modeling for Conceptual Rocket Engines
NASA Technical Reports Server (NTRS)
Nix, Michael; Staton, Eric J.
2004-01-01
Recent years have brought about a paradigm shift within NASA and the Space Launch Community regarding the performance of conceptual design. Reliability, maintainability, supportability, and operability are no longer effects of design; they have moved to the forefront and are affecting design. A primary focus of this shift has been a planned decrease in vehicle turnaround time. Options for achieving this decrease include addressing the issues of removing, refurbishing, and replacing the engines after each flight. Regardless, it is important to understand the operational effects of an engine on turnaround time, ground support personnel and equipment. One tool for visualizing this relationship involves the creation of a Discrete Event Simulation (DES). A DES model can be used to run a series of trade studies to determine if the engine is meeting its requirements, and, if not, what can be altered to bring it into compliance. Using DES, it is possible to look at the ways in which labor requirements, parallel maintenance versus serial maintenance, and maintenance scheduling affect the overall turnaround time. A detailed DES model of the Space Shuttle Main Engines (SSME) has been developed. Trades may be performed using the SSME Processing Model to see where maintenance bottlenecks occur and what the benefits (if any) are of increasing the number of personnel or the number and location of facilities, in addition to the trades previously mentioned, all with the goal of optimizing the operational turnaround time and minimizing operational cost. The SSME Processing Model was developed in such a way that it can easily be used as a foundation for developing DES models of other operational or developmental reusable engines. Performing a DES on a developmental engine during the conceptual phase makes it easier to affect the design and make changes that bring about a decrease in turnaround time and costs.
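A hedged sketch of the kind of serial-versus-parallel maintenance trade such a turnaround model supports (this is not the SSME Processing Model itself): a SimPy comparison with a limited number of crews, using invented task durations and engine count.

# Toy turnaround trade: serial vs. parallel engine maintenance in SimPy.
import simpy

REMOVE, REFURBISH, REPLACE = 8.0, 40.0, 10.0   # hours per engine (assumed)
NUM_ENGINES = 3                                # engines per vehicle (assumed)

def process_engine(env, crews, done_times):
    with crews.request() as req:               # wait for an available crew
        yield req
        yield env.timeout(REMOVE + REFURBISH + REPLACE)
        done_times.append(env.now)

def turnaround(num_crews):
    env = simpy.Environment()
    crews = simpy.Resource(env, capacity=num_crews)
    done = []
    for _ in range(NUM_ENGINES):
        env.process(process_engine(env, crews, done))
    env.run()
    return max(done)                           # time when the last engine is back

print("serial (1 crew):   ", turnaround(1), "h")
print("parallel (3 crews):", turnaround(3), "h")

In a full model the deterministic durations would become distributions and the crews would compete with facilities and other vehicle processing tasks, which is where the bottleneck analysis described above becomes informative.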
Design, Performance, and Operation of Efficient Ramjet/Scramjet Combined Cycle Hypersonic Propulsion
2009-10-16
simulations, the blending of the RANS and LES portions is handled by the standard DES equations, now referred to as DES97. The one-equation Spalart...think that RANS can capture these dynamics. • Much remains to be learned about how to model chemistry-turbulence interactions in scramjet flows...BILLIG, F. S., R. BAURLE, AND C. TAM 1999 Design and Analysis of Streamline Traced Hypersonic Inlets. AIAA Paper 1999-4974. BILLIG, F.S., AND
Multiplicative Process in Turbulent Velocity Statistics: A Simplified Analysis
NASA Astrophysics Data System (ADS)
Chillà, F.; Peinke, J.; Castaing, B.
1996-04-01
Many turbulence models link the energy cascade process and intermittency, which is characterized by the evolving shape of the probability density functions (pdfs) of longitudinal velocity increments. Using recent models and experimental results, we show that the flatness factor of these pdfs gives a simple and direct estimate of what is called the deepness of the cascade. We analyse in this way the published data of a Direct Numerical Simulation and show that the deepness of the cascade presents the same Reynolds number dependence as in laboratory experiments.
Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Ahmad, Jasim U.
2012-01-01
Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.
Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon
2015-06-01
Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system in the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiment (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system in the SNEC with promising improvement reported.
Investigation of Transonic Wake Dynamics for Mechanically Deployable Entry Systems
NASA Technical Reports Server (NTRS)
Stern, Eric; Barnhardt, Michael; Venkatapathy, Ethiraj; Candler, Graham; Prabhu, Dinesh
2012-01-01
A numerical investigation of transonic flow around a mechanically deployable entry system being considered for a robotic mission to Venus has been performed, and preliminary results are reported. The flow around a conceptual representation of the vehicle geometry was simulated at discrete points along a ballistic trajectory using Detached Eddy Simulation (DES). The trajectory points selected span the low supersonic to transonic regimes, with freestream Mach numbers from 1.5 to 0.8 and freestream Reynolds numbers (based on diameter) between 2.09 x 10^6 and 2.93 x 10^6. Additionally, the Mach 0.8 case was simulated at angles of attack between 0 and 5 degrees. Static aerodynamic coefficients obtained from the data show qualitative agreement with data from 70-degree sphere-cone wind tunnel tests performed for the Viking program. Finally, the effect of choices of models and numerical algorithms is addressed by comparing the DES results to those using a Reynolds Averaged Navier-Stokes (RANS) model, as well as to results using a more dissipative numerical scheme.
NASA Technical Reports Server (NTRS)
Leonard, Daniel; Parsons, Jeremy W.; Cates, Grant
2014-01-01
In May 2013, NASA's GSDO Program requested a study to develop a discrete event simulation (DES) model that analyzes the launch campaign process of the Space Launch System (SLS) from an integrated commodities perspective. The scope of the study includes launch countdown and scrub turnaround and focuses on four core launch commodities: hydrogen, oxygen, nitrogen, and helium. Previously, the commodities were only analyzed individually and deterministically for their launch support capability, but this study was the first to integrate them to examine the impact of their interactions on a launch campaign as well as the effects of process variability on commodity availability. The study produced a validated DES model with Rockwell Arena that showed that Kennedy Space Center's ground systems were capable of supporting a 48-hour scrub turnaround for the SLS. The model will be maintained and updated to provide commodity consumption analysis of future ground system and SLS configurations.
Simulation-based decision support framework for dynamic ambulance redeployment in Singapore.
Lam, Sean Shao Wei; Ng, Clarence Boon Liang; Nguyen, Francis Ngoc Hoang Long; Ng, Yih Yng; Ong, Marcus Eng Hock
2017-10-01
Dynamic ambulance redeployment policies introduce much greater flexibility in improving ambulance resource allocation by capitalizing on the pronounced geospatial-temporal variations in ambulance demand patterns over time-of-day and day-of-week effects. A novel modelling framework based on the Approximate Dynamic Programming (ADP) approach, leveraging a Discrete Event Simulation (DES) model for dynamic ambulance redeployment in Singapore, is proposed in this paper. The study was based on Singapore's national Emergency Medical Services (EMS) system. Based on a dataset comprising 216,973 valid incidents over a continuous two-year study period from 1 January 2011 to 31 December 2012, a DES model for the EMS system was developed. An ADP model based on linear value function approximations was then evaluated using the DES model via the temporal difference (TD) learning family of algorithms. The objective of the ADP model is to derive approximately optimal dynamic redeployment policies based on the primary outcome of ambulance coverage. Considering an 8-minute response-time threshold, an estimated 5% reduction in the proportion of calls that cannot be reached within the threshold (equivalent to approximately 8000 dispatches) was observed in the computational experiments. The study also revealed that redeployment policies restricted to the same operational division could potentially result in more promising response-time performance. Furthermore, the best policy combined redeploying ambulances whenever they are released from service with relocating ambulances that are idle at bases. This study demonstrated the successful application of an approximate modelling framework based on ADP that leverages a detailed DES model of Singapore's EMS system to generate approximately optimal dynamic redeployment plans. Various policies and scenarios relevant to the Singapore EMS system were evaluated.
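A schematic of the TD(0) update with a linear value-function approximation that underlies the ADP approach described above; the feature map, reward, and transition function below are placeholders rather than elements of the paper's model.

# TD(0) with linear value-function approximation; all quantities are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_features = 8
w = np.zeros(n_features)          # weights of the linear value function
alpha, gamma = 0.01, 0.95         # step size and discount factor (assumed)

def features(state):
    """Placeholder feature map phi(s), e.g. ambulance counts per base."""
    return rng.random(n_features)

def simulate_step(state, action):
    """Placeholder for one DES transition: returns (reward, next_state)."""
    reward = rng.random()          # e.g. fraction of calls covered within 8 min
    return reward, state + 1

state = 0
for _ in range(1000):
    phi = features(state)
    reward, next_state = simulate_step(state, action=None)
    phi_next = features(next_state)
    td_error = reward + gamma * w @ phi_next - w @ phi
    w += alpha * td_error * phi    # TD(0) weight update
    state = next_state

In the framework described above, simulate_step would be a call into the detailed DES of the EMS system and the learned weights would then score candidate redeployment actions whenever an ambulance is released or idle at a base.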
Adsorption de gaz sur les materiaux microporeux modelisation, thermodynamique et applications
NASA Astrophysics Data System (ADS)
Richard, Marc-Andre
2009-12-01
Our work on gas adsorption in microporous materials is part of research aimed at increasing the efficiency of on-board hydrogen storage for vehicles. Our objective was to study the possibility of using adsorption to improve the efficiency of hydrogen liquefaction in small-scale systems. We also evaluated the performance of a cryogenic hydrogen storage system based on physisorption. Since we are dealing with particularly wide temperature ranges and high pressures in the supercritical region of the gas, we first had to work on the modeling and thermodynamics of adsorption. Representing the adsorbed gas quantity as a function of temperature and pressure with a semi-empirical model is a useful tool for determining the mass of gas adsorbed in a system, but also for calculating the thermal effects associated with adsorption. We adapted the Dubinin-Astakhov (D-A) model to describe adsorption isotherms of hydrogen, nitrogen, and methane on activated carbon at high pressure and over a wide range of supercritical temperatures, assuming an invariant adsorption volume. With five regression parameters (including the adsorption volume Va), the model we developed represents very well the experimental adsorption isotherms of hydrogen (30 to 293 K, up to 6 MPa), nitrogen (93 to 298 K, up to 6 MPa), and methane (243 to 333 K, up to 9 MPa) on activated carbon. We calculated the internal energy of the adsorbed phase from the model using solution thermodynamics without neglecting the adsorption volume. We then presented the mass and energy conservation equations for an adsorption system and validated our approach by comparing simulations with adsorption and desorption tests. In addition to the internal energy, we evaluated the entropy, the differential energy of adsorption, and the isosteric heat of adsorption. We studied the performance of an adsorption-based hydrogen storage system for vehicles. The hydrogen storage capacity and the thermal performance of a 150 L tank containing Maxsorb MSC-30(TM) activated carbon (specific surface area ~3000 m2/g) were studied over a temperature range of 60 to 298 K and at pressures up to 35 MPa. The system was considered globally, without focusing on a particular design. It is possible to store 5 kg of hydrogen at pressures of 7.8, 15.2 and 29 MPa for temperatures of 80, 114 and 172 K, respectively, when the residual hydrogen is recovered at 2.5 bar by heating. Simulating the thermal phenomena allowed us to analyze the cooling required during filling, the heating during discharge, and the dormancy time. We developed a hydrogen liquefaction cycle based on adsorption with mechanical compression (ACM) and evaluated its feasibility. The objective was to substantially increase the efficiency of small-scale hydrogen liquefaction systems (less than one tonne per day) without increasing their capital cost. We adapted the ACM refrigeration cycle so that it could later be added to a hydrogen liquefaction cycle. We then simulated idealized ACM refrigeration cycles.
Even under these ideal conditions, the specific refrigeration is low. Moreover, the maximum theoretical efficiency of these refrigeration cycles is about 20 to 30% of the ideal. We experimentally implemented an ACM refrigeration cycle with the nitrogen/activated carbon pair. (Abstract shortened by UMI.)
Li, Jiajia; Deng, Baoqing; Zhang, Bing; Shen, Xiuzhong; Kim, Chang Nyung
2015-01-01
A simulation of an unbaffled stirred tank reactor driven by a magnetic stirring rod was carried out in a moving reference frame. The free surface of the unbaffled stirred tank was captured by an Euler-Euler model coupled with the volume of fluid (VOF) method. The re-normalization group (RNG) k-ɛ model, the large eddy simulation (LES) model and the detached eddy simulation (DES) model were evaluated for simulating the flow field in the stirred tank. All three turbulence models reproduce the tangential velocity in the unbaffled stirred tank at rotational speeds of 150 rpm, 250 rpm and 400 rpm. Radial velocity is underpredicted by the three models. The LES model and the RNG k-ɛ model give the best predictions of tangential velocity and axial velocity, respectively. The RNG k-ɛ model is recommended for simulating the flow in an unbaffled stirred tank with a magnetic rod because of its lower computational effort.
2008-09-01
Front-matter excerpt (translated from French): list of figures at various temperatures for (a) pure HTPB, (b) HTPB-DOA (polymer and plasticizer), (c) pure GAP, (d) pure Gpl, (e) GAP-Gpl; list of tables covering the composition of the constructed amorphous cells, the properties of the polymers and plasticizers used, and comparisons between experimentally obtained Tg values, Tg values published in the scientific literature, and those predicted from the Tg of the pure compounds.
Airlift Operation Modeling Using Discrete Event Simulation (DES)
2009-12-01
Front-matter excerpt: acronym list (JRE Java Runtime Environment, JVM Java Virtual Machine, lbs Pounds, LAM Load Allocation Mode, LRM Landing Spot Reassignment Mode, LEGO Listener Event...) and software development environment; the models were constructed in Java using Simkit.
DynEarthSol3D: numerical studies of basal crevasses and calving blocks
NASA Astrophysics Data System (ADS)
Logan, E.; Lavier, L. L.; Choi, E.; Tan, E.; Catania, G. A.
2014-12-01
DynEarthSol3D (DES) is a thermomechanical model for the simulation of dynamic ice flow. We present the application of DES to two case studies, basal crevasses and calving blocks, to illustrate the potential of the model to aid in understanding calving processes. Among the advantages of using DES are its unstructured meshes, which adaptively resolve zones of high interest; its use of multiple rheologies to simulate different types of dynamic behavior; and its explicit and parallel numerical core, which makes the implementation of different boundary conditions easy and the model highly scalable. We examine the initiation and development of both basal crevasses and calving blocks through time using a visco-elasto-plastic rheology. Employing a brittle-to-ductile transition zone (BDTZ) based on local strain rate shows that the style and development of brittle features like crevasses depend markedly on the rheological parameters. Brittle and ductile behavior are captured by Mohr-Coulomb elastoplasticity and Maxwell viscoelasticity, respectively. We explore the parameter spaces that define these rheologies (including temperature) as well as the BDTZ threshold (given in the literature as 10^-7 Pa s), using time-to-failure as a metric for accuracy within the model. As the time it takes for a block of ice to fail can determine an iceberg's size, this work has implications for calving laws.
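For orientation, the two rheologies named above are commonly written in the following textbook scalar forms; this is a schematic only, and the tensorial forms actually implemented in DynEarthSol3D may differ in detail:

$$ \dot{\gamma} = \frac{\dot{\tau}}{G} + \frac{\tau}{\eta} \quad\text{(Maxwell viscoelasticity, ductile regime)}, \qquad \tau_f = c + \sigma_n \tan\phi \quad\text{(Mohr-Coulomb failure, brittle regime)}. $$

Here G is the shear modulus, eta the viscosity, c the cohesion, phi the internal friction angle, and sigma_n the normal stress; the brittle-to-ductile transition zone decides, cell by cell, which of the two responses governs the local deformation.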
Detached-Eddy Simulations of Separated Flow Around Wings With Ice Accretions: Year One Report
NASA Technical Reports Server (NTRS)
Choo, Yung K. (Technical Monitor); Thompson, David; Mogili, Prasad
2004-01-01
A computational investigation was performed to assess the effectiveness of Detached-Eddy Simulation (DES) as a tool for predicting icing effects. The AVUS code, a public domain flow solver, was employed to compute solutions for an iced wing configuration using DES and steady Reynolds Averaged Navier-Stokes (RANS) equation methodologies. The configuration was an extruded GLC305/944-ice shape section with a rectangular planform. The model was mounted between two walls so no tip effects were considered. The numerical results were validated by comparison with experimental data for the same configuration. The time-averaged DES computations showed some improvement in lift and drag results near stall when compared to steady RANS results. However, comparisons of the flow field details did not show the level of agreement suggested by the integrated quantities. Based on our results, we believe that DES may prove useful in a limited sense to provide analysis of iced wing configurations when there is significant flow separation, e.g., near stall, where steady RANS computations are demonstrably ineffective. However, more validation is needed to determine what role DES can play as part of an overall icing effects prediction strategy. We conclude the report with an assessment of existing computational tools for application to the iced wing problem and a discussion of issues that merit further study.
Experiences with the MANA simulation tool
2006-08-01
Snippet: agents in the ALERT and FAVS simulations needed to conduct careful formation fighting while following established CF doctrine; the French summary notes that agents in these simulations should have been able to move in formation, following established Canadian Forces doctrine, but that certain behaviours... Annex C presents an alternative approach to inter-squad coordination of retreat, followed by a list of acronyms.
Analysis of the pump-turbine S characteristics using the detached eddy simulation method
NASA Astrophysics Data System (ADS)
Sun, Hui; Xiao, Ruofu; Wang, Fujun; Xiao, Yexiang; Liu, Weichao
2015-01-01
Current research on pump-turbine units is focused on the unstable operation at off-design conditions, with the characteristic curves in generating mode being S-shaped. Unlike in traditional water turbines, pump-turbine operation along the S-shaped curve can lead to difficulties during load rejection, with unusual increases in the water pressure that lead to machine vibrations. This paper describes both model tests and numerical simulations. A reduced-scale model of a low specific speed pump-turbine was used for the performance tests, with comparisons to computational fluid dynamics (CFD) results. Predictions using the detached eddy simulation (DES) turbulence model, which is a combined Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) model, are compared with the two-equation turbulence model results. The external characteristics as well as the internal flow are analyzed for various guide vane openings to understand the unsteady flow along the so-called S characteristics of a pump-turbine. Comparison of the experimental data with the CFD results for various conditions and times shows that the DES model gives better agreement with the experimental data than the two-equation turbulence model. For low flow conditions, the centrifugal forces and the large incidence angle create large vortices between the guide vanes and the runner inlet in the runner passage, which is the main factor leading to the S-shaped characteristics. The turbulence model used here gives more accurate simulations of the internal flow characteristics of the pump-turbine and a more detailed force analysis, which shows the mechanisms controlling the S characteristics.
What We Did Last Summer: Depicting DES Data to Enhance Simulation Utility and Use
NASA Technical Reports Server (NTRS)
Elfrey, Priscilla; Conroy, Mike; Lagares, Jose G.; Mann, David; Fahmi, Mona
2009-01-01
At Kennedy Space Center (KSC), an important use of Discrete Event Simulation (DES) addresses ground operations of missions to space. DES allows managers, scientists and engineers to assess the number of missions KSC can complete on a given schedule within different facilities and the effects of various configurations of resources, and to detect possible problems or unwanted situations. For fifteen years, DES has supported KSC efficiency, cost savings and improved safety and performance. The dense and abstract DES data, however, prove difficult to comprehend and, NASA managers realized, are subject to misinterpretation, misunderstanding and even misuse. In summer 2008, KSC developed and implemented a NASA Exploration Systems Mission Directorate (ESMD) project based on the premise that visualization could enhance NASA's understanding and use of DES.
NASA Astrophysics Data System (ADS)
El Mansouri, Souleimane
In the linear viscoelastic (LVE, small-strain) domain, the thermomechanical behavior of bitumen and bituminous mastic (a uniform mixture of bitumen and fillers) was characterized at the Laboratoire des Chaussées et Matériaux Bitumineux (LCMB) of the École de technologie supérieure (ÉTS) with the support of our external partners: the Société des Alcools du Québec (SAQ) and Éco Entreprises Québec (ÉEQ). The rheological properties of the bitumens and mastics were measured with a new investigation tool called the Annular Shear Rheometer (RCA) under different loading conditions. This apparatus makes it possible not only to load specimens that are large compared with those used in conventional tests, but also to carry out tests under quasi-homogeneous conditions, which gives access to the constitutive law of the materials. The tests are carried out over a wide range of temperatures and frequencies (from -15 °C to 45 °C and from 0.03 Hz to 10 Hz). This study was conducted mainly to compare the behavior of a bitumen with that of a bituminous mastic in the small-strain domain. In a second perspective, the influence of post-consumer glass fillers on the behavior of a mastic at low strain levels is examined by comparing the evolution of the complex shear moduli (G*) of a mastic with glass fillers and a mastic with conventional (limestone) fillers. Finally, the 2S2P1D analogical model is used to simulate the linear viscoelastic behavior of the bitumens and bituminous mastics tested during the experimental campaign.
Young-Person's Guide to Detached-Eddy Simulation Grids
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.; Streett, Craig (Technical Monitor)
2001-01-01
We give the "philosophy", fairly complete instructions, a sketch and examples of creating Detached-Eddy Simulation (DES) grids from simple to elaborate, with a priority on external flows. Although DES is not a zonal method, flow regions with widely different gridding requirements emerge, and should be accommodated as far as possible if a good use of grid points is to be made. This is not unique to DES. We brush on the time-step choice, on simple pitfalls, and on tools to estimate whether a simulation is well resolved.
NASA Astrophysics Data System (ADS)
Grosdidier, Y.; Garcia-Segura, G.; Acker, A.; Moffat, A. F. J.
Taking the formation and evolution of planetary nebulae as a guide, we describe how the history of the stellar winds from a given hot star (massive or not) determines the morphology of the ejected nebulae. We then briefly present the structure and dynamics of radiatively accelerated winds in massive stars (O, Wolf-Rayet) and in [WC]-type central stars of planetary nebulae. Finally, we attempt to illustrate why radiative phenomena must be taken into account in any hydrodynamic simulation intended to reproduce observations in both contexts, i.e., the hot stellar winds themselves, the persistence of overdensities within them, and the resulting ejected nebulae.
NASA Astrophysics Data System (ADS)
Allen, D. M.; Mackie, D. C.; Wei, M.
The Grand Forks aquifer, located in south-central British Columbia, Canada was used as a case study area for modeling the sensitivity of an aquifer to changes in recharge and river stage consistent with projected climate-change scenarios for the region. Results suggest that variations in recharge to the aquifer under the different climate-change scenarios, modeled under steady-state conditions, have a much smaller impact on the groundwater system than changes in river-stage elevation of the Kettle and Granby Rivers, which flow through the valley. All simulations showed relatively small changes in the overall configuration of the water table and general direction of groundwater flow. High-recharge and low-recharge simulations resulted in approximately a +0.05 m increase and a -0.025 m decrease, respectively, in water-table elevations throughout the aquifer. Simulated changes in river-stage elevation, to reflect higher-than-peak-flow levels (by 20 and 50%), resulted in average changes in the water-table elevation of 2.72 and 3.45 m, respectively. Simulated changes in river-stage elevation, to reflect lower-than-baseflow levels (by 20 and 50%), resulted in average changes in the water-table elevation of -0.48 and -2.10 m, respectively. Current observed water-table elevations in the valley are consistent with an average river-stage elevation (between current baseflow and peak-flow stages).
Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.
2017-01-01
Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer becoming three-dimensional without preferred orientation downstream.
NASA Astrophysics Data System (ADS)
Alvarez, Laura V.; Schmeeckle, Mark W.; Grams, Paul E.
2017-01-01
Lateral flow separation occurs in rivers where banks exhibit strong curvature. In canyon-bound rivers, lateral recirculation zones are the principal storage of fine-sediment deposits. A parallelized, three-dimensional, turbulence-resolving model was developed to study the flow structures along lateral separation zones located in two pools along the Colorado River in Marble Canyon. The model employs the detached eddy simulation (DES) technique, which resolves turbulence structures larger than the grid spacing in the interior of the flow. The DES-3D model is validated using Acoustic Doppler Current Profiler flow measurements taken during the 2008 controlled flood release from Glen Canyon Dam. A point-to-point validation using a number of skill metrics, often employed in hydrological research, is proposed here for fluvial modeling. The validation results show predictive capabilities of the DES model. The model reproduces the pattern and magnitude of the velocity in the lateral recirculation zone, including the size and position of the primary and secondary eddy cells, and return current. The lateral recirculation zone is open, having continuous import of fluid upstream of the point of reattachment and export by the recirculation return current downstream of the point of separation. Differences in magnitude and direction of near-bed and near-surface velocity vectors are found, resulting in an inward vertical spiral. Interaction between the recirculation return current and the main flow is dynamic, with large temporal changes in flow direction and magnitude. Turbulence structures with a predominately vertical axis of vorticity are observed in the shear layer becoming three-dimensional without preferred orientation downstream.
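A hedged example of the kind of point-to-point skill metrics used in such validations, computed here on synthetic velocity series rather than the ADCP data of the study; the metric choices below (RMSE, Nash-Sutcliffe efficiency, percent bias) are common in hydrological research but are not necessarily the exact set the authors used.

# Point-to-point skill metrics on synthetic "observed" and "modelled" velocities.
import numpy as np

def rmse(obs, mod):
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def nash_sutcliffe(obs, mod):
    """NSE = 1 - sum((mod-obs)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit."""
    return float(1.0 - np.sum((mod - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

def percent_bias(obs, mod):
    return float(100.0 * np.sum(mod - obs) / np.sum(obs))

rng = np.random.default_rng(2)
observed = rng.normal(0.4, 0.15, size=500)              # e.g. ADCP velocity samples (m/s)
modelled = observed + rng.normal(0.0, 0.05, size=500)   # synthetic stand-in for model output

print("RMSE  :", round(rmse(observed, modelled), 3), "m/s")
print("NSE   :", round(nash_sutcliffe(observed, modelled), 3))
print("PBIAS :", round(percent_bias(observed, modelled), 1), "%")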
Use of DES in mildly separated internal flow: dimples in a turbulent channel
NASA Astrophysics Data System (ADS)
Tay, Chien Ming Jonathan; Khoo, Boo Cheong; Chew, Yong Tian
2017-12-01
Detached eddy simulation (DES) is investigated as a means to study an array of shallow dimples with depth to diameter ratios of 1.5% and 5% in a turbulent channel. The DES captures large-scale flow features relatively well, but is unable to predict skin friction accurately due to flow modelling near the wall. The current work instead relies on the accuracy of DES to predict large-scale flow features, as well as its well-documented reliability in predicting flow separation regions to support the proposed mechanism that dimples reduce drag by introducing spanwise flow components near the wall through the addition of streamwise vorticity. Profiles of the turbulent energy budget show the stabilising effect of the dimples on the flow. The presence of flow separation however modulates the net drag reduction. Increasing the Reynolds number can reduce the size of the separated region and experiments show that this increases the overall drag reduction.
Detached Eddy Simulation of Flap Side-Edge Flow
NASA Technical Reports Server (NTRS)
Balakrishnan, Shankar K.; Shariff, Karim R.
2016-01-01
Detached Eddy Simulation (DES) of flap side-edge flow was performed with a wing and half-span flap configuration used in previous experimental and numerical studies. The focus of the study is the unsteady flow features responsible for the production of far-field noise. The simulation was performed at a Reynolds number (based on the main wing chord) of 3.7 million. Reynolds Averaged Navier-Stokes (RANS) simulations were performed as a precursor to the DES. The results of these precursor simulations match previous experimental and RANS results closely. Although the present DES simulations have not yet reached statistical stationarity, some unsteady features of the developing flap side-edge flowfield are presented. In the final paper it is expected that statistically stationary results will be presented, including comparisons of surface pressure spectra with experimental data.
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.
2016-11-01
Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.
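For orientation, the rigidity-constraint forcing mentioned above can be written schematically as an extra body force in the incompressible momentum equation; this is a generic statement of the constraint-based immersed boundary idea (single rigid body and constant properties assumed), not the authors' exact formulation:

\[ \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}_c, \qquad \mathbf{u}(\mathbf{x},t) = \mathbf{u}_B(\mathbf{x},t) \ \text{for } \mathbf{x}\in\Omega_B(t), \]

where \(\mathbf{f}_c\) is the constraint (Lagrange-multiplier-like) force, nonzero only inside the immersed body region \(\Omega_B(t)\), that enforces the rigid-body velocity \(\mathbf{u}_B\) there.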
Simulation Interoperability (Interoperabilite de la simulation)
2015-01-01
...of the NMSG to study simulation interoperability. ET-027 identified 63 issues that strongly limit simulation interoperability...the relative automation of the development, integration and implementation of distributed simulation environments. This requires a...standardization of the applications created during the development of a simulation environment, following, for example, the
NASA Astrophysics Data System (ADS)
Abidi, Dhafer
TTEthernet is a deterministic network technology that enhances Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures that are tolerant to faults. Simulation is nowadays an essential step in the critical-systems design process and represents a valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on extensions of models from the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task-scheduling tool for TTEthernet networks. To overcome this problem we propose an adaptation, for simulation purposes, of a task-scheduling approach based on a formal specification of network constraints. The use of the Yices solver allowed the formal specification to be translated into an executable program that generates the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of time-triggered frame offsets on the performance of each type of traffic in the system.
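To illustrate the flavour of such constraint-based offset scheduling, the toy sketch below states a non-overlap constraint for a few hypothetical time-triggered frames on one shared link and asks an SMT solver for feasible offsets. The cited work used the Yices solver; Z3's Python API is used here only as a widely available stand-in, and all frame parameters are invented.

```python
from math import lcm
from z3 import Ints, Solver, Or, sat

frames = {            # name: (period_us, transmission_time_us) -- hypothetical
    "tt1": (1000, 100),
    "tt2": (2000, 150),
    "tt3": (4000, 200),
}
hyper = lcm(*(p for p, _ in frames.values()))     # hyperperiod of all frames

names = list(frames)
offset = dict(zip(names, Ints(" ".join(names))))
s = Solver()

for n, (p, d) in frames.items():
    s.add(offset[n] >= 0, offset[n] + d <= p)     # each offset fits in its period

# No two frame instances may overlap on the shared link within the hyperperiod.
for i, a in enumerate(names):
    pa, da = frames[a]
    for b in names[i + 1:]:
        pb, db = frames[b]
        for k in range(hyper // pa):
            for l in range(hyper // pb):
                ta, tb = offset[a] + k * pa, offset[b] + l * pb
                s.add(Or(ta + da <= tb, tb + db <= ta))

if s.check() == sat:
    m = s.model()
    print({n: m[offset[n]].as_long() for n in names})   # one feasible schedule
```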
NASA Astrophysics Data System (ADS)
Gemitzi, Alexandra; Tolikas, Demetrios
A simulation program, which works seamlessly with GIS and simulates flows in coastal aquifers, is presented in the present paper. The model is based on the Galerkin finite element discretization scheme and it simulates both steady and transient freshwater and saltwater flow, assuming that the two fluids are separated by a sharp interface. The model has been verified in simple cases where analytical solutions exist. The simulation program works as a tool of the GIS program, which is the main database that stores and manages all the necessary data. The combined use of the simulation and the GIS program forms an integrated management tool offering a simpler way of simulating and studying saline intrusion in coastal aquifers. Application of the model to the Yermasogia aquifer illustrates the coupled use of modeling and GIS techniques for the examination of regional coastal aquifer systems.
Murphy, Elizabeth A.; Garcia, Tatiana; Jackson, P. Ryan; Duncker, James J.
2016-04-05
As part of the Great Lakes and Mississippi River Interbasin Study, the U.S. Army Corps of Engineers (USACE) is conducting an assessment of the vulnerability of the Chicago Area Waterway System and Des Plaines River to Asian carp (specifically, Hypophthalmichthys nobilis (bighead carp) and Hypophthalmichthys molitrix (silver carp)) spawning and recruitment. As part of this assessment, the USACE requested the help of the U.S. Geological Survey in predicting the fate and transport of Asian carp eggs hypothetically spawned at the electric dispersal barrier on the Chicago Sanitary and Ship Canal and downstream of the Brandon Road Lock and Dam on the Des Plaines River under dry weather flow and high water temperature conditions. The Fluvial Egg Drift Simulator (FluEgg) model predicted that approximately 80 percent of silver carp eggs spawned near the electric dispersal barrier would hatch within the Lockport and Brandon Road pools (as close as 3.6 miles downstream of the barrier) and approximately 82 percent of the silver carp eggs spawned near the Brandon Road Dam would hatch in the Des Plaines River (as close as 1.6 miles downstream from the gates of Brandon Road Lock). Extension of the FluEgg model to include the fate and transport of larvae until gas bladder inflation—the point at which the larvae begin to leave the drift—suggests that eggs spawned at the electric dispersal barrier would reach the gas bladder inflation stage primarily within the Dresden Island Pool, and those spawned at the Brandon Road Dam would reach this stage primarily within the Marseilles and Starved Rock Pools.
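As a rough illustration of the kind of Lagrangian drift calculation such a model performs, the sketch below advects egg particles downstream with a random-walk dispersion term until a temperature-dependent hatching time is reached. It is a generic sketch, not FluEgg itself, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift_until_hatch(n_eggs=10_000, u=0.1, disp=5.0, temp_c=24.0, dt=60.0):
    """Return downstream hatching locations (m) for n_eggs particles.

    u       : mean streamwise velocity (m/s), invented
    disp    : longitudinal dispersion coefficient (m^2/s), invented
    temp_c  : water temperature (deg C)
    dt      : time step (s)
    """
    # Hypothetical hatching time that shortens as temperature rises.
    t_hatch = 3600.0 * 24.0 * (22.0 / temp_c)   # seconds
    x = np.zeros(n_eggs)
    t = 0.0
    while t < t_hatch:
        # Advection plus a random-walk representation of dispersion.
        x += u * dt + rng.normal(0.0, np.sqrt(2.0 * disp * dt), n_eggs)
        t += dt
    return x

locations = drift_until_hatch()
print(f"median hatching distance: {np.median(locations) / 1000.0:.1f} km")
```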
2011-12-01
...the Queen (in Right of Canada), as represented by the Minister of National Defence, 2011. DRDC CSS CR 2011-31 ...participants. Abstract: Introduction: This report presents Task 2 of the project "In-vivo simulation research on shared decision-making...environment on in-vivo shared decision-making in emergency management operations and to collect data
Detached Eddy Simulations of Hypersonic Transition
NASA Technical Reports Server (NTRS)
Yoon, S.; Barnhardt, M.; Candler, G.
2010-01-01
This slide presentation reviews the use of Detached Eddy Simulation (DES) for hypersonic transition. The objective of the study was to investigate the feasibility of using CFD in general, and DES in particular, for prediction of roughness-induced boundary-layer transition to turbulence and the resulting increase in heat transfer.
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
The aim was to determine the optimal sampling-time design of a drug-drug interaction (DDI) study for the estimation of the apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when co-administered. Then, using the simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling-time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (i.e., nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. For every design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX) and the expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in the CL/F estimates. Since the MD required only five sampling times compared with nine for the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A companion paper compares the designs with real data. This global approach, combining PBPK simulations, population PK modelling and multiresponse optimal design, allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and the potential inhibitor when co-administered.
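A much-simplified illustration of the D-optimality idea used above is to pick the sampling times that maximise the determinant of a Fisher information matrix. The sketch below works with an individual (not population) FIM for a one-compartment oral-absorption model; the parameter values, the additive error model, and the candidate time grid are all invented, and the cited study optimised a population, multiresponse FIM with PopDes.

```python
import itertools
import numpy as np

def conc(t, cl, v, ka, dose=100.0):
    """One-compartment model with first-order absorption (assumes ka != cl/v)."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta=(10.0, 50.0, 1.2), sigma=0.5, h=1e-5):
    """Fisher information under additive normal error: J^T J / sigma^2."""
    theta = np.asarray(theta, float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, lo = theta.copy(), theta.copy()
        up[j] += h
        lo[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *lo)) / (2 * h)   # sensitivities
    return J.T @ J / sigma ** 2

candidates = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24.0])   # hours (invented)
best = max(itertools.combinations(candidates, 5),
           key=lambda ts: np.linalg.slogdet(fim(np.array(ts)))[1])
print("D-optimal 5-point design (h):", best)
```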
Simulating Mission Command for Planning and Analysis
2015-06-01
...mission plan. Subject terms: Mission Planning, CPM, PERT, Simulation, DES, Simkit, Triangle Distribution, Critical Path. ...Acronyms: ...Battalion Task Force; CO, Company; CPM, Critical Path Method; DES, Discrete Event Simulation; FA BAT, Field Artillery Battalion; FEL, Future Event List; FIST... project-management tools that can be utilized to find the critical path in military projects. These are the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT)...
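For readers unfamiliar with the critical-path calculation mentioned above, the toy sketch below performs the CPM forward pass on a small invented activity network and recovers one critical path; it is illustrative only and unrelated to the report's Simkit model.

```python
# Toy critical-path (CPM) sketch: forward pass over an invented activity-on-node
# network, then a backward walk along the predecessors that set each earliest start.
from functools import lru_cache

durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 6}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    start = max((earliest_finish(p) for p in predecessors[task]), default=0)
    return start + durations[task]

project_duration = max(earliest_finish(t) for t in durations)

# Recover one critical path by repeatedly taking the predecessor that
# determines the earliest start (i.e., the binding, zero-slack predecessor).
path, task = [], max(durations, key=earliest_finish)
while True:
    path.append(task)
    if not predecessors[task]:
        break
    task = max(predecessors[task], key=earliest_finish)

print("project duration:", project_duration)
print("critical path:", " -> ".join(reversed(path)))
```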
Bonnett, C.; Troxel, M. A.; Hartley, W.; ...
2016-08-30
Here we present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine-learning-based photometric redshift methods—annz2, bpz calibrated against BCC-Ufig simulations, skynet, and tpz—are analyzed. For training, calibration, and testing of these methods, we construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-z's. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, ... a shift in σ8 of approximately 3%. This shift is within the one sigma statistical errors on σ8 for the DES SV shear catalogue. We further study the potential impact of systematic differences on the critical surface density, Σcrit, finding levels of bias safely less than the statistical power of the DES SV data. In conclusion, we recommend a final Gaussian prior of width 0.05 for the photo-z bias in the mean of n(z) for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.
Development of Dielectric Elastomer Nanocomposites as Stretchable and Flexible Actuating Materials
NASA Astrophysics Data System (ADS)
Wang, Yu
Dielectric elastomers (DEs) are a new type of smart material showing promising functionality as energy-harvesting and actuating materials for potential applications such as artificial muscles, implanted medical devices, robotics, loudspeakers, micro-electro-mechanical systems (MEMS), tunable optics, transducers, sensors, and even generators, owing to their high electromechanical efficiency, stability, light weight, low cost, and easy processing. Despite these advantages, technical challenges must be resolved for wider application. A high electric field of at least 10-30 V/µm is required to actuate DEs, which limits practical applications, especially in biomedical fields. We tackle this problem by introducing multiwalled carbon nanotubes (MWNTs) into DEs to enhance their relative permittivity and to generate high electromechanical responses at lower applied field levels. This work presents the dielectric, mechanical and electromechanical properties of DEs filled with MWNTs. Micromechanics-based finite element models are employed to describe the dielectric and mechanical behavior of the MWNT-filled DE nanocomposites. A sufficient number of models are computed to reach an acceptable prediction of the dielectric and mechanical responses. In addition, experimental results are analyzed alongside the simulation results. Finally, a laser Doppler vibrometer is used to directly detect the enhancement of the actuation strains of DE nanocomposites filled with MWNTs. All the results demonstrate the effective improvement in the electromechanical properties of MWNT-filled DE nanocomposites under applied electric fields.
Montgomery, Stephen M; Maruszczak, Maciej J; Slater, David; Kusel, Jeanette; Nicholas, Richard; Adlard, Nicholas
2017-05-01
Two disease-modifying therapies, fingolimod and natalizumab, are licensed in the EU for rapidly-evolving severe (RES) relapsing-remitting multiple sclerosis (RRMS). Here a discrete event simulation (DES) model analyzing the cost-effectiveness of natalizumab and fingolimod in the RES population, from the perspective of the National Health Service (NHS) in the UK, is reported. A DES model was developed to track individual RES patients, based on Expanded Disability Status Scale scores. Individual patient characteristics were taken from the RES sub-groups of the pivotal trials for fingolimod. Utility data were in line with previous models. Published costs were inflated to NHS cost year 2015. Owing to the confidential patient access scheme (PAS) discount applied to fingolimod in the UK, a range of discount levels was applied to the fingolimod list price, to capture the likelihood of natalizumab being cost-effective in a real-world setting. At the lower National Institute for Health and Care Excellence (NICE) threshold of £20,000 per quality-adjusted life year (QALY), fingolimod only required a discount greater than 0.8% of list price to be cost-effective. At the upper NICE threshold of £30,000/QALY, fingolimod was cost-effective if the confidential discount was greater than 2.5%. Sensitivity analyses conducted using the fingolimod list price showed the model to be most sensitive to changes in the cost of each drug, particularly fingolimod. The DES model shows that only a modest discount to the UK fingolimod list price is required to make fingolimod a more cost-effective option than natalizumab in RES RRMS.
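As a reminder of the decision rule such models feed into, the sketch below computes an incremental cost-effectiveness ratio and the fractional drug-price discount needed to bring one therapy under a willingness-to-pay threshold (equivalently, to equalise net monetary benefit). The numbers are invented and are not outputs of the cited model.

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of A versus B (GBP per QALY gained)."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

def discount_to_meet_threshold(drug_cost, other_costs, qalys,
                               comparator_cost, comparator_qalys,
                               threshold=20_000.0):
    """Fractional cut in drug acquisition cost so A's net monetary benefit >= B's."""
    max_total = comparator_cost + threshold * (qalys - comparator_qalys)
    needed = (drug_cost + other_costs) - max_total
    return max(0.0, needed / drug_cost)

# Invented example: therapy A (drug cost 200k, other costs 150k, 9.0 QALYs)
# versus therapy B (total 330k, 8.8 QALYs), threshold GBP 20,000/QALY.
print(icer(350_000, 9.0, 330_000, 8.8))                        # GBP per QALY
print(discount_to_meet_threshold(200_000, 150_000, 9.0, 330_000, 8.8))
```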
High-Lift System Aerodynamics (L’Aerodynamique des Systems Hypersustentateurs)
1993-09-01
...incompressible flows over airfoils... the present numerical method shows that multi-element simulation, which is a... of the flow... introduces a discretization scale, of physical origin, of the same order as the boundary-layer thickness at the point of... high-lift devices. Finally, the consequences of stealth requirements on aircraft shape, that is, the creation of configurations such that
NASA Astrophysics Data System (ADS)
Benadja, Mounir
This work presents an offshore wind farm power-generation system and a transmission system using VSC-HVDC stations connected to the main onshore AC grid. Three configurations were studied, modelled and validated by simulation. In each configuration, contributions improving the technical and economic aspects are described below. The first contribution concerns a new MPPT (Maximum Power Point Tracking) algorithm used to extract the maximum power available from the wind turbines of offshore wind farms. This MPPT extraction technique improves the energy efficiency of the renewable-energy conversion chain, in particular for wind energy at small and large scale (offshore wind farms), which is a challenge for manufacturers, who must develop MPPT devices that are simple, inexpensive, robust, reliable and capable of achieving maximum energy efficiency. The second contribution concerns reducing the size, the cost and the impact of electrical faults (AC and DC) in the system built to transmit the power of an offshore wind farm (OWF) to the main onshore AC grid via two 3L-NPC VSC-HVDC stations. The developed solution uses nonlinear observers based on the extended Kalman filter (EKF). This filter estimates the rotational speed and rotor position of each generator in the offshore wind farm, and the DC bus voltage of the offshore DC-AC inverter and of the two 3L-NPC VSC-HVDC stations (offshore and onshore). In addition, this development of the extended Kalman filter reduced the impact of AC and DC faults. Two controls were used: one (an indirect control in the abc frame) with an integrated EKF to control the offshore DC-AC converter, and the other (a d-q control) with an integrated EKF to control the converters of the two AC-DC and DC-AC stations, while taking the inputs of each station into account. Integrating the nonlinear observers (EKF) into the converter control addresses measurement uncertainties, modelling uncertainties, and malfunction or failure of measurement sensors, as well as the impact of faults (AC and DC) on power quality in transmission systems. These estimates help make the overall system cheaper and less bulky, and reduce the impact of AC and DC faults on the system. The third contribution concerns reducing the size, cost and impact of electrical faults (AC and DC) in the system built to transmit the power of an offshore wind farm (OWF) to the main onshore AC grid via two VSC-HVDC stations. The developed solution uses nonlinear observers based on the extended Kalman filter (EKF). This filter estimates the rotational speed and rotor position of each wind-farm generator and the DC bus voltage of the offshore DC-AC inverter. The contribution mainly concerns the development of the two controls of the two stations. The first is a modified nonlinear control for the first converter of the offshore VSC-HVDC station, ensuring the transfer of the power generated by the wind farm to the onshore VSC-HVDC station.
The second is a modified nonlinear control integrating DC bus voltage regulation and model reference adaptive control (MRAC) to compensate overcurrents and overvoltages during AC and DC faults. During an AC fault at the PCC (Point of Common Coupling) on the onshore grid side, the depth of the fault impact on the amplitude of the main onshore AC grid currents, which was reduced to 60% in previous work (Erlich, Feltes and Shewarega, 2014), is reduced to 35% with the proposed MRAC control. When AC and DC faults occur, a reduction in the impact of the faults on the amplitude of the onshore AC grid currents and in the response time was observed, and the stability of the system was reinforced by the use of the model-reference adaptive control (MRAC). The fourth contribution concerns a new sliding-mode (SM) control applied to the VSC-HVDC station connecting the offshore wind farm (OWF) to the main AC grid. This farm consists of ten wind turbines based on variable-speed permanent-magnet synchronous generators (VSWT/PMSGs) connected in parallel, each controlled by its own DC-DC converter. A performance comparison between the SM control and the nonlinear control with PI controllers, for both conditions (with and without a DC fault), was analyzed and shows the superiority of the SM control. A reduced-scale prototype of the studied system was built and tested in the GREPCI laboratory using a dSPACE-DS1104 board for experimental validation. The analysis and simulation of the studied systems were developed in the Matlab/Simulink/Simpowersystem environment. The results obtained from the developed configurations are validated by simulation and experiment. The performance is very satisfactory in terms of dynamic response, steady-state response, system stability and power quality.
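The extended Kalman filter referred to throughout the abstract follows the standard discrete-time predict/update cycle sketched below. This is a generic EKF step with placeholder model callbacks, not the thesis implementation.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One generic discrete-time EKF predict/update cycle.

    x, P         : prior state estimate and covariance
    u, z         : control input and measurement
    f, h         : process and measurement models, x_next = f(x, u), z = h(x)
    F_jac, H_jac : Jacobians of f and h evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```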
NASA Astrophysics Data System (ADS)
Nitzsche, O.; Merkel, B.
Knowledge of the transport behavior of radionuclides in groundwater is needed for both groundwater protection and remediation of abandoned uranium mines and milling sites. Dispersion, diffusion, mixing, recharge to the aquifer, and chemical interactions, as well as radioactive decay, should be taken into account to obtain reliable predictions on transport of primordial nuclides in groundwater. This paper demonstrates the need for carrying out rehabilitation strategies before closure of the Königstein in-situ leaching uranium mine near Dresden, Germany. Column experiments on drilling cores with uranium-enriched tap water provided data about the exchange behavior of uranium. Uranium breakthrough was observed after more than 20 pore volumes. This strong retardation is due to the exchange of positively charged uranium ions. The code TReAC is a 1-D, 2-D, and 3-D reactive transport code that was modified to take into account the radioactive decay of uranium and the most important daughter nuclides, and to include double-porosity flow. TReAC satisfactorily simulated the breakthrough curves of the column experiments and provided a first approximation of exchange parameters. Groundwater flow in the region of the Königstein mine was simulated using the FLOWPATH code. Reactive transport behavior was simulated with TReAC in one dimension along a 6000-m path line. Results show that uranium migration is relatively slow, but that due to decay of uranium, the concentration of radium along the flow path increases. Results are highly sensitive to the influence of double-porosity flow.
DES Prediction of Cavitation Erosion and Its Validation for a Ship Scale Propeller
NASA Astrophysics Data System (ADS)
Ponkratov, Dmitriy, Dr
2015-12-01
Lloyd's Register Technical Investigation Department (LR TID) has developed numerical functions for predicting cavitation erosion aggressiveness within Computational Fluid Dynamics (CFD) simulations. These functions were previously validated for a model-scale hydrofoil and a ship-scale rudder [1]. For the current study the functions were applied to a cargo ship's full-scale propeller, on which severe cavitation erosion had been reported. The Detached Eddy Simulation (DES) performed required a fine computational mesh (approximately 22 million cells) together with a very small time step (2.0E-4 s). As the cavitation for this type of vessel is primarily caused by a highly non-uniform wake, the hull was also included in the simulation. The applied method under-predicted the cavitation extent and did not fully resolve the tip vortex; however, the areas of cavitation collapse were captured successfully. Consequently, the developed functions showed very good prediction of the erosion areas, as confirmed by comparison with underwater propeller inspection results.
NASA Technical Reports Server (NTRS)
Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.
2015-01-01
Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive, and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, thereby mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat-flux thermal environments with test data for a single-element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable was a comprehensive validation effort conducted by Tucker et al. [1], in which a number of different groups modeled the GO2/GH2 single-element configuration of Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, from steady and unsteady Reynolds-Averaged Navier-Stokes (RANS/URANS) calculations to large-eddy simulations (LES), detached eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancing these validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat-flux data will be made for both RANS and hybrid RANS/LES detached eddy simulations (DES). Computational costs will be reported, along with comparisons of accuracy and cost against much less expensive two-dimensional RANS simulations of the same geometry.
Exploring the Use of Computer Simulations in Unraveling Research and Development Governance Problems
NASA Technical Reports Server (NTRS)
Balaban, Mariusz A.; Hester, Patrick T.
2012-01-01
Understanding Research and Development (R&D) enterprise relationships and processes at a governance level is not a simple task, but valuable decision-making insight and evaluation capabilities can be gained from their exploration through computer simulations. This paper discusses current Modeling and Simulation (M&S) methods, addressing their applicability to R&D enterprise governance. Specifically, the authors analyze the advantages and disadvantages of the four methodologies used most often by M&S practitioners: System Dynamics (SD), Discrete Event Simulation (DES), Agent Based Modeling (ABM), and formal Analytic Methods (AM) for modeling systems at the governance level. Moreover, the paper describes nesting models using a multi-method approach. Guidance is provided to those seeking to employ modeling techniques in an R&D enterprise for the purposes of understanding enterprise governance. Further, an example is modeled and explored for potential insight. The paper concludes with recommendations regarding opportunities for concentration of future work in modeling and simulating R&D governance relationships and processes.
Ship Dynamics Identification Using Simulator and Sea Trial Data
2002-04-29
Committee "© Her Majesty the Queen as represented by the Minister of National Defence, 2002 "© Sa majest6 la reine, repr~sent~e par le ministre de la...cotes d’instructeurs. Le but du present rapport est de ddterminer la nature de la dynamique des navires virtuels et r6els de mani&re A ce que des...la dynamique du navire. Le pr6sent document pr6cise la dynamique du navire pour un navire de classe Bay des FC, ainsi que pour un navire simul6. Les
CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre
2016-01-20
Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel-value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.
Systematic Biases in Weak Lensing Cosmology with the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samuroff, Simon
This thesis sets out a practical guide to applying shear measurements as a cosmological tool. We first present one of two science-ready galaxy shape catalogues from Year 1 of the Dark Energy Survey (DES Y1), which covers 1500 square degrees in four bands $griz$, with a median redshift of $0.59$. We describe the shape measurement process implemented by the DES Y1 im3shape catalogue, which contains 21.9 million high-quality $r$-band bulge/disc fits. In Chapter 3 a new suite of image simulations, referred to as Hoopoe, is presented. The Hoopoe dataset is tailored to DES Y1 and includes realistic blending, spatial masks and variation in the point spread function. We derive shear corrections, which we show are robust to changes in calibration method, galaxy binning and variance within the simulated dataset. Sources of systematic uncertainty in the simulation-based shear calibration are discussed, leading to a final estimate of the $1\sigma$ uncertainty in the residual multiplicative bias after calibration of 0.025. Chapter 4 describes an extension of the analysis of the Hoopoe simulations into a detailed investigation of the impact of galaxy neighbours on shape measurement and shear cosmology. Four mechanisms by which neighbours can have a non-negligible influence on shear measurement are identified. These effects, if ignored, would contribute a net multiplicative bias of $m \sim 0.03$-$0.09$ in DES Y1, though the precise impact will depend on both the measurement code and the selection cuts applied. We use the cosmological inference pipeline of DES Y1 to explore the cosmological implications of neighbour bias and show that omitting blending from the calibration simulation for DES Y1 would bias the inferred clustering amplitude $S_8 \equiv \sigma_8 (\Omega_{\rm m}/0.3)^{0.5}$ by $1.5\sigma$ towards low values. Finally, we use the Hoopoe simulations to test the effect of neighbour-induced spatial correlations in the multiplicative bias. We find the cosmological impact to be subdominant to the statistical error at the current level of precision. Another major uncertainty in shear cosmology is the accuracy of our ensemble redshift distributions. Chapter 5 presents a numerical investigation into the combined constraining power of cosmic shear, galaxy clustering and their cross-correlation in DES Y1, and the potential for internal calibration of redshift errors. Introducing a moderate uniform bias into the redshift distributions used to model the weak lensing (WL) galaxies is shown to produce a $>2\sigma$ bias in $S_8$. We demonstrate that this cosmological bias can be eliminated by marginalising over redshift-error nuisance parameters. Strikingly, the cosmological constraint of the combined dataset is largely undiminished by the loss of prior information on the WL distributions. We demonstrate that this implicit self-calibration is the result of complementary degeneracy directions in the combined data. In Chapter 6 we present the preliminary results of an investigation into galaxy intrinsic alignments. Using the DES Y1 data, we show a clear dependence of alignment amplitude on galaxy type, in agreement with previous results. We subject these findings to a series of initial robustness tests. We conclude with a short overview of the work presented, and discuss prospects for the future.
Algorithmes de couplage RANS et ecoulement potentiel
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the chosen solution must satisfy numerous criteria in many fields, for example structures, aerodynamics, stability and control, performance and safety, while respecting strict schedules and minimizing costs. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimization environments are being developed by the various aerospace companies. Different methods, involving different levels of modelling, are needed for the different phases of project development. During the preliminary definition and design phases, fast methods are needed to study candidates efficiently. Developing methods that improve the accuracy of existing methods while keeping the computational cost low provides a higher level of fidelity in the early phases of a project and thus greatly reduces the associated risks. In aerodynamics, the development of viscous/inviscid coupling algorithms turns linear inviscid calculation methods into nonlinear methods that account for viscous effects. These methods make it possible to characterize the viscous flow over configurations and to predict, among other things, stall mechanisms or the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential-flow method and two-dimensional viscous sectional data. Existing methods are implemented and their limits identified. An original method is then developed and validated. Results for an elliptical wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data for configurations from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration made it possible to study the modelling of high-lift surfaces in the potential-flow method, demonstrating that camber can be accounted for solely through the viscous data.
Synthetic Infrared Scene: Improving the KARMA IRSG Module and Signature Modelling Tool SMAT
2011-03-01
...of engagements involving infrared seekers in the KARMA simulation environment. The work was carried out from November 2008 to March 2011. This contract report focuses on... Evaluating Performance Validator tool
An Investigation of Transonic Resonance in a Mach 2.2 Round Convergent-Divergent Nozzle
NASA Technical Reports Server (NTRS)
Dippold, Vance F., III; Zaman, Khairul B. M. Q.
2015-01-01
Hot-wire and acoustic measurements were taken for a round convergent nozzle and a round convergent-divergent (C-D) nozzle at a jet Mach number of 0.61. The C-D nozzle had a design Mach number of 2.2. Compared to the convergent nozzle jet flow, the Mach 2.2 nozzle jet flow produced excess broadband noise (EBBN). It also produced a transonic resonance tone at 1200 Hz. Computational simulations were performed for both nozzle flows. A steady Reynolds-Averaged Navier-Stokes (RANS) simulation was performed for the convergent nozzle jet flow. For the Mach 2.2 nozzle flow, a steady RANS simulation, an unsteady RANS (URANS) simulation, and an unsteady Detached Eddy Simulation (DES) were performed. The RANS simulation of the convergent nozzle showed good agreement with the hot-wire velocity and turbulence measurements, though the decay of the potential core was over-predicted. The RANS simulation of the Mach 2.2 nozzle showed poor agreement with the experimental data and more closely resembled an ideally expanded jet. The URANS simulation also showed qualitative agreement with the hot-wire data, but predicted a transonic resonance at 1145 Hz. The DES showed good agreement with the hot-wire velocity and turbulence data. The DES also produced a transonic tone at 1135 Hz. The DES solution showed that the destabilization of the shock-induced separation region inside the nozzle produced increased levels of turbulence intensity. This is likely the source of the EBBN.
Error discrimination of an operational hydrological forecasting system at a national scale
NASA Astrophysics Data System (ADS)
Jordan, F.; Brauchli, T.
2010-09-01
The use of operational hydrological forecasting systems is recommended for hydropower production as well as flood management. However, the forecast uncertainties can be significant and lead to poor decisions such as false alarms and inappropriate reservoir management of hydropower plants. In order to improve forecasting systems, it is important to discriminate between the different sources of uncertainty. To achieve this, reanalyses of past predictions can be carried out to provide information about the structure of the overall uncertainty. In order to discriminate between the uncertainty due to the numerical weather model and the uncertainty due to the rainfall-runoff model, simulations assuming a perfect weather forecast must be performed. This contribution presents the spatial analysis of the weather uncertainties and their influence on the river discharge prediction for several river basins where an operational forecasting system exists. The forecast is based on the RS 3.0 system [1], [2], which also runs the open Internet platform www.swissrivers.ch [3]. The uncertainty related to the hydrological model is compared to the uncertainty related to the weather prediction. A comparison between numerous weather prediction models [4] at different lead times is also presented. The results highlight an important potential for improvement in both forecasting components: the hydrological rainfall-runoff model and the numerical weather prediction models. The hydrological processes must be accurately represented during the model calibration procedure, while the weather prediction models suffer from a systematic spatial bias. REFERENCES [1] Garcia, J., Jordan, F., Dubois, J. & Boillat, J.-L. 2007. "Routing System II, Modélisation d'écoulements dans des systèmes hydrauliques", Communication LCH n° 32, Ed. Prof. A. Schleiss, Lausanne. [2] Jordan, F. 2007. Modèle de prévision et de gestion des crues - optimisation des opérations des aménagements hydroélectriques à accumulation pour la réduction des débits de crue, thèse de doctorat n° 3711, Ecole Polytechnique Fédérale, Lausanne. [3] Keller, R. 2009. "Le débit des rivières au peigne fin", Revue Technique Suisse, N° 7/8 2009, Swiss engineering RTS, UTS SA, Lausanne, p. 11. [4] Kaufmann, P., Schubiger, F. & Binder, P. 2003. Precipitation forecasting by a mesoscale numerical weather prediction (NWP) model: eight years of experience, Hydrology and Earth System
F-16XL Hybrid Reynolds-Averaged Navier-Stokes/Large Eddy Simulation on Unstructured Grids
NASA Technical Reports Server (NTRS)
Park, Michael A.; Abdol-Hamid, Khaled S.; Elmiligui, Alaa
2015-01-01
This study continues the Cranked Arrow Wing Aerodynamics Program, International (CAWAPI) investigation with the FUN3D and USM3D flow solvers. CAWAPI was established to study the F-16XL because it provides a unique opportunity to fuse flight test, wind tunnel test, and simulation to understand the aerodynamic features of swept wings. The high-lift performance of the cranked-arrow wing planform is critical for recent and past supersonic transport design concepts. Simulations of the low-speed, high-angle-of-attack Flight Condition 25 are compared: Detached Eddy Simulation (DES), Modified Delayed Detached Eddy Simulation (MDDES), and the Spalart-Allmaras (SA) RANS model. Iso-surfaces of Q criterion show the development of coherent primary and secondary vortices on the upper surface of the wing that spiral, burst, and commingle. SA produces higher pressure peaks nearer to the leading edge of the wing than flight test measurements. Mean DES and MDDES pressures better predict the flight test measurements, especially on the outer wing section. Vortices and vortex-vortex interaction impact the unsteady surface pressures. USM3D showed many sharp tones in volume-point spectra near the wing apex with low broadband noise, and FUN3D showed more broadband noise with weaker tones. Spectra of the volume points near the outer wing leading edge were primarily broadband for both codes. Without unsteady flight measurements, the flight pressure environment cannot be used to validate the simulations' tonal or broadband spectra. Mean forces and moment are very similar between FUN3D models and between USM3D models. Spectra of the unsteady forces and moment are broadband with a few sharp peaks for USM3D.
NASA Astrophysics Data System (ADS)
Sybilska, Agnieszka; Łokas, Ewa Luiza; Fouquet, Sylvain
2017-03-01
We combine high-quality IFU data with a new set of numerical simulations to study low-mass early-type galaxies (dEs) in dense environments. Our earlier study of dEs in the Virgo cluster produced the first large-scale maps of kinematic and stellar population properties of dEs in those environments (Ryś et al. 2013, 2014, 2015). A quantitative discrimination between the various (trans)formation processes proposed for these objects is, however, a complex issue, requiring a priori assumptions about the progenitors of the galaxies we observe and study today. To bridge this gap between observations and theoretical predictions, we use the expertise gained in the IFU data analysis to look "through the eye of SAURON" at our new suite of high-resolution N-body simulations of dEs in the Virgo cluster. Mimicking the observer's perspective as closely as possible, we can also identify the instrumental and viewing limitations on what we are and are not able to detect as observers.
2004-05-01
...currently contains 79 tools, and others should be added as they become known. Finally, the Task Group has recommended that the tool list be made available...approach and analysis. Conclusions and recommendations are contained in Chapter 5. ...Tools to support the development process...generation, Version 1.5 [A.3-1], was created in December 1999 and contained only minor editorial changes. ...FEDEP With this
NASA Astrophysics Data System (ADS)
Varlet, Madeleine
Le recours aux modeles et a la modelisation est mentionne dans la documentation scientifique comme un moyen de favoriser la mise en oeuvre de pratiques d'enseignement-apprentissage constructivistes pour pallier les difficultes d'apprentissage en sciences. L'etude prealable du rapport des enseignantes et des enseignants aux modeles et a la modelisation est alors pertinente pour comprendre leurs pratiques d'enseignement et identifier des elements dont la prise en compte dans les formations initiale et disciplinaire peut contribuer au developpement d'un enseignement constructiviste des sciences. Plusieurs recherches ont porte sur ces conceptions sans faire de distinction selon les matieres enseignees, telles la physique, la chimie ou la biologie, alors que les modeles ne sont pas forcement utilises ou compris de la meme maniere dans ces differentes disciplines. Notre recherche s'est interessee aux conceptions d'enseignantes et d'enseignants de biologie au secondaire au sujet des modeles scientifiques, de quelques formes de representations de ces modeles ainsi que de leurs modes d'utilisation en classe. Les resultats, que nous avons obtenus au moyen d'une serie d'entrevues semi-dirigees, indiquent que globalement leurs conceptions au sujet des modeles sont compatibles avec celle scientifiquement admise, mais varient quant aux formes de representations des modeles. L'examen de ces conceptions temoigne d'une connaissance limitee des modeles et variable selon la matiere enseignee. Le niveau d'etudes, la formation prealable, l'experience en enseignement et un possible cloisonnement des matieres pourraient expliquer les differentes conceptions identifiees. En outre, des difficultes temporelles, conceptuelles et techniques peuvent freiner leurs tentatives de modelisation avec les eleves. Toutefois, nos resultats accreditent l'hypothese que les conceptions des enseignantes et des enseignants eux-memes au sujet des modeles, de leurs formes de representation et de leur approche constructiviste en enseignement representent les plus grands obstacles a la construction des modeles en classe. Mots-cles : Modeles et modelisation, biologie, conceptions, modes d'utilisation, constructivisme, enseignement, secondaire.
2002-04-01
...configuration associated with the HSCT program was analyzed in terms of inlet unstart and the effect of the regurgitated shock wave. Inlet start is a...heavily loaded take-off or dogfight phases of flight. Less critical issues, such as thrust loss during supersonic operations, may also appear. From the
Turbomachinery Design Using CFD (La Conception des Turbomachines par l’Aerodynamique Numerique).
1994-05-01
..."Method for Flow Calculations in Turbomachines", Vrije Univ. Brussel, Dienst Stromingsmechanica, VUB-STR...; Thompkins, W.T., 1981, "A Fortran Program for Calcu..."; "...Model Equation for Simulating Flows in Multistage Turbomachinery", ASME paper 85-GT-226, Houston, March; "...mung um Profile", MBB-Bericht Nr. UFE 1352, 1977
2014-05-01
...simulation of the Naval Threat Countermeasures Simulator so that decoys and missile seekers can be included; 4) A...in the littoral; 2) The detection of small surface targets in the littoral; 3) The improvement and validation of the modelling and of the code...further improvement and validation of the modelling and of the
NASA Astrophysics Data System (ADS)
Bretin, Remy
Fatigue damage of materials is a common problem in many fields, including aeronautics. To prevent fatigue failure, the fatigue life of materials must be determined. Unfortunately, because of the many heterogeneities present, the fatigue life can vary greatly between two identical parts made of the same material that have undergone the same treatments. It is therefore necessary to account for these heterogeneities in our models in order to obtain a better estimate of material fatigue life. As a first step toward better accounting for heterogeneities in our models, a linear-elastic study of the influence of crystallographic orientations on the strain and stress fields in a polycrystal was carried out using the finite element method. Correlations could be established from the results obtained, and an analytical linear-elastic model taking into account the distributions of crystallographic orientations and neighbourhood effects was developed. This model rests on the foundations of classical homogenization models, such as the self-consistent scheme, and also borrows the neighbourhood principles of cellular automata. Taking the finite element results as the reference, the analytical model developed here proved to be twice as accurate as the self-consistent model, whatever the material studied.
NASA Astrophysics Data System (ADS)
Minotti, P.; Le Moal, P.; Buchaillot, L.; Ferreira, A.
1996-10-01
The modeling of traveling-wave piezoelectric motors involves a large variety of mechanical and physical phenomena and has therefore led to numerous approaches and models. The latter, mainly based on phenomenological and numerical (finite element) analyses, are not suited to current objectives oriented toward the development of efficient C.A.D. tools. As a result, an attempt is made here to investigate analytical approaches in order to model theoretically the mechanical energy conversion at the stator/rotor interface. This paper is the first in a series of three articles devoted to the modeling of such rotary motors. After a short description of the operating principles specific to piezomotors, the mechanical and tribological assumptions made for the driving mechanism of the rotor are briefly described. It is then shown that the kinematic and dynamic modeling of the stator, combined with a static representation of the stator/rotor interface, provides an efficient way to calculate the loading characteristics of the driving shaft. Finally, the specifications of a new software package named C.A.S.I.M.M.I.R.E., recently developed on the basis of our earlier mechanical modeling, are described. In the last of these three papers, theoretical simulations performed on Japanese SHINSEI motors will be shown to be close to the experimental data, and the results reported here will lead to the structural optimization of future traveling-wave ultrasonic motors.
Discrete Event Supervisory Control Applied to Propulsion Systems
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Shah, Neerav
2005-01-01
The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower level controllers. This structure can be expanded to control huge, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of the engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli
2013-01-01
M/G/C/C state dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in the Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that, for each network, there is a range of arrival rates where the simulation results fluctuate drastically across replications, causing the simulation results and the analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both abstract and graphical forms, together with some scientific justifications, have been documented and discussed.
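To make the state-dependent service idea concrete, here is a minimal event-driven sketch in plain Python. It is not the authors' Arena model: it assumes Poisson arrivals, a finite-capacity corridor, and a hypothetical slowdown rule in which the traversal time sampled at entry grows with occupancy, and it reports throughput and blocking probability.

```python
import heapq
import random

# Minimal M/G/C/C-style sketch (not the Arena model from the paper).
# Entities traverse a corridor of capacity C; the traversal time sampled at
# entry depends on the current occupancy (a hypothetical speed-density rule).
# Arrivals finding C entities inside are blocked and lost.

def simulate(lam=1.2, capacity=20, free_time=5.0, horizon=10_000.0, seed=1):
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival")]   # (time, kind) priority queue
    occupancy, served, blocked = 0, 0, 0

    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # Schedule the next arrival (Poisson process).
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
            if occupancy >= capacity:
                blocked += 1                        # lost arrival (blocking)
            else:
                occupancy += 1
                # Congestion slows everyone down; here the service time simply
                # scales with the occupancy seen at entry (a simplification:
                # full M/G/C/C models update speeds as occupancy changes).
                slowdown = 1.0 + 2.0 * (occupancy - 1) / (capacity - 1)
                service = rng.expovariate(1.0 / (free_time * slowdown))
                heapq.heappush(events, (t + service, "departure"))
        else:  # departure
            occupancy -= 1
            served += 1

    arrivals = served + blocked + occupancy
    return served / horizon, blocked / max(arrivals, 1)

throughput, p_block = simulate()
print(f"throughput ~ {throughput:.3f} per unit time, blocking prob ~ {p_block:.3f}")
```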
Advanced Technology for SAM Systems Analysis Synthesis and Simulation
1984-05-01
...aids of an operational, financial and technical nature. One may thus hope that the best choices will result from them. Three of the nine presentations...EAST (Royal Military College of Science, UNITED KINGDOM) deals with the structure of guidance loops and compares command-to-line-of-sight laws...with the navigation laws of homing missiles. For the line-of-sight structures, Dr EAST shows that it is possible and indispensable
Day, Theodore Eugene; Sarawgi, Sandeep; Perri, Alexis; Nicolson, Susan C
2015-04-01
This study describes the use of discrete event simulation (DES) to model and analyze a large academic pediatric cardiac center. The objective was to identify a strategy, and to predict and test the effectiveness of that strategy, to minimize the number of elective cardiac procedures that are postponed because of a lack of available cardiac intensive care unit (CICU) capacity. A DES of the cardiac center at The Children's Hospital of Philadelphia was developed and was validated by use of 1 year of deidentified administrative patient data. The model was then used to analyze strategies for reducing postponements of cases requiring CICU care through improved scheduling of multipurpose space. Each of five alternative scenarios was simulated for ten independent 1-year runs. Reductions in simulated elective procedure postponements were found when a multipurpose procedure room (the hybrid room) was used for operations on Wednesday and Thursday, compared with Friday (as was the real-world use). The reduction on Wednesday was statistically significant, with postponements dropping from 27.8 to 23.3 annually (95% confidence interval 18.8-27.8). Thus, we anticipate a relative reduction in postponements of 16.2%. Since the implementation, there have been two postponements from July 1 to November 21, 2014, compared with ten for the same time period in 2013. Simulation allows us to test planned changes in complex environments, including pediatric cardiac care. Reduction in postponements of cardiac procedures requiring CICU care is predicted through reshuffling schedules of existing multipurpose capacity, and these reductions appear to be achievable in the real world after implementation. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
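For illustration, the replication analysis described above (ten independent one-year runs summarized by a mean and a 95% confidence interval) can be reproduced with a few lines of Python; the per-replication counts below are invented placeholders, not the study's outputs.

```python
import statistics as st

# Hypothetical per-replication annual postponement counts for one scenario
# (placeholders; the study reports only summary values such as 27.8 -> 23.3).
postponements = [21, 25, 19, 27, 22, 24, 26, 20, 23, 26]

n = len(postponements)
mean = st.mean(postponements)
sem = st.stdev(postponements) / n ** 0.5      # standard error of the mean
t_crit = 2.262                                # two-sided 95% t quantile, df = 9
ci = (mean - t_crit * sem, mean + t_crit * sem)
print(f"mean = {mean:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```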
2012-04-01
...Systems Concepts and Integration; SET: Sensors and Electronics Technology; SISO: Simulation Interoperability Standards Organization; SIW: Simulation...conjunction with 2006 Fall SIW; 2006 September: SISO Standards Activity Committee approved beginning IEEE balloting; 2006 October: IEEE Project...019 published; 2008 June, Edinborough, UK: held in conjunction with 2008 Euro-SIW; 2008 September, Laurel, MD, US: work on Composite Model; 2008 December
NASA Astrophysics Data System (ADS)
Paradis, Alexandre
The principal objective of the present thesis is to elaborate a computational model describing the mechanical properties of NiTi under different loading conditions. Secondary objectives are to build an experimental database of NiTi under stress, strain and temperature in order to validate the versatility of the new model proposed herewith. The simulation model presently used at the Laboratoire sur les Alliage a Memoire et les Systemes Intelligents (LAMSI) of ETS shows good behaviour under quasi-static loading. However, dynamic loading with the same model does not allow one to include degradation. The goal of the present thesis is to build a model capable of describing such degradation in a relatively accurate manner. Some experimental testing and results are presented. In particular, new results on the behaviour of NiTi being paused during cycling are presented in chapter 2. A model is developed in chapter 3 based on Likhachev's micromechanical model. Good agreement is found with experimental data. Finally, an adaptation of the model is presented in chapter 4, allowing it to be eventually implemented into commercial finite-element software.
Evaluation of probabilistic flow in two unsaturated soils
NASA Astrophysics Data System (ADS)
Boateng, Samuel
2001-11-01
A variably saturated flow model is coupled to a first-order reliability algorithm to simulate unsaturated flow in two soils. The unsaturated soil properties are considered as uncertain variables with means, standard deviations, and marginal probability distributions. Thus, each simulation constitutes an unsaturated probabilistic flow event. Sensitivities of the uncertain variables are estimated for each event. The unsaturated hydraulic properties of a fine-textured soil and a coarse-textured soil are used. The properties are based on the van Genuchten model. The flow domain has a recharge surface, a seepage boundary along the bottom, and a no-flow boundary along the sides. The uncertain variables are saturated water content, residual water content, van Genuchten model parameters alpha (α) and n, and saturated hydraulic conductivity. The objective is to evaluate the significance of each uncertain variable to the probabilistic flow. Under wet conditions, saturated water content and residual water content are the most significant uncertain variables in the sand. For dry conditions in the sand, however, the van Genuchten model parameters α and n are the most significant. Model parameter n and saturated hydraulic conductivity are the most significant for the wet clay loam. Saturated water content is most significant for the dry clay loam.
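For reference, the van Genuchten retention curve and the Mualem-van Genuchten conductivity function that define the uncertain variables above can be coded directly. The sketch below evaluates both and draws a few random parameter realizations to mimic treating θs, θr, α, n, and Ks as uncertain inputs; the distributions and values are illustrative, not those of the soils in the study.

```python
import random

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at pressure head h (h < 0 in the unsaturated zone)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)      # effective saturation
    return theta_r + (theta_s - theta_r) * se

def mualem_k(h, alpha, n, k_s):
    """Mualem-van Genuchten unsaturated hydraulic conductivity."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)
    return k_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Treat the parameters as uncertain: draw a few illustrative realizations
# (the normal/lognormal choices and moments below are assumptions, not the study's).
rng = random.Random(0)
for i in range(3):
    theta_s = rng.gauss(0.43, 0.02)
    theta_r = rng.gauss(0.045, 0.005)
    alpha   = rng.lognormvariate(-2.0, 0.2)   # 1/cm
    n       = rng.gauss(2.5, 0.2)
    k_s     = rng.lognormvariate(2.0, 0.3)    # cm/day
    h = -100.0                                # pressure head, cm
    theta = van_genuchten_theta(h, theta_r, theta_s, alpha, n)
    k = mualem_k(h, alpha, n, k_s)
    print(f"realization {i}: theta = {theta:.3f}, K = {k:.3e} cm/day")
```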
Use of DES Modeling for Determining Launch Availability for SLS
NASA Technical Reports Server (NTRS)
Watson, Michael; Staton, Eric; Cates, Grant; Finn, Ronald; Altino, Karen M.; Burns, K. Lee
2014-01-01
(1) NASA is developing a new heavy lift launch system for human and scientific exploration beyond Earth orbit comprising the Space Launch System (SLS), Orion Multi-Purpose Crew Vehicle (MPCV), and Ground Systems Development and Operations (GSDO); (2) The goal is to ensure a high confidence of successfully launching the exploration missions, especially those that require multiple launches, have a narrow Earth departure window, and have high investment costs; and (3) This presentation discusses the process used by a Cross-Program team to develop the Exploration Systems Development (ESD) Launch Availability (LA) Technical Performance Measure (TPM) and allocate it to each of the Programs through the use of Discrete Event Simulations (DES).
NASA Astrophysics Data System (ADS)
Aurousseau, Emmanuelle
Models are widely used tools in science and technology (S&T) to represent and explain a phenomenon that is difficult to access, or even abstract. The modelling process is presented explicitly in the Quebec education program (PFEQ), notably in the second cycle of secondary school (Quebec. Ministere de l'Education du Loisir et du Sport, 2007a). It is thus one of the seven processes that students and teachers are expected to use. However, much research highlights the difficulty teachers have in structuring their teaching practices around models and the modelling process, even though these are recognized as indispensable. Indeed, models help reconcile the concrete and abstract domains between which the scientist, even a budding one, moves back and forth in order to connect the experimental reference field being manipulated and observed with the related theoretical field being constructed. The objective of this research is therefore to understand how models and the modelling process help articulate the concrete and the abstract in the teaching of science and technology (S&T) in the second cycle of secondary school. To answer this question, we worked with teachers in a collaborative perspective through focus groups and classroom observation. These arrangements made it possible to examine the teaching practices that four teachers implement when using models and modelling processes. The analysis of the teaching practices and of the adjustments the teachers envisage in their practice allows us to draw out knowledge both for research and for teachers' practice with regard to the use of models and the modelling process in secondary S&T.
Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon
2017-10-01
The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring the effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as the types and nature of constraints and the sources of data for the constraints, was identified. Thirdly, it was found that including the effects of constraints caused the CEA results to change in these articles. The review found that DES proved to be an effective technique for RM, but only a small number of studies applied it in HTA. However, these studies showed the important consequences of modelling physical constraints and point to the need for a framework to guide future applications of this approach.
Low Earth Orbit Rendezvous Strategy for Lunar Missions
NASA Technical Reports Server (NTRS)
Cates, Grant R.; Cirillo, William M.; Stromgren, Chel
2006-01-01
On January 14, 2004 President George W. Bush announced a new Vision for Space Exploration calling for NASA to return humans to the moon. In 2005 NASA decided to use a Low Earth Orbit (LEO) rendezvous strategy for the lunar missions. A Discrete Event Simulation (DES) based model of this strategy was constructed. Results of the model were then used for subsequent analysis to explore the ramifications of the LEO rendezvous strategy.
NASA Astrophysics Data System (ADS)
Mejdi, Abderrazak
Aircraft fuselages are generally made of aluminium or composite, reinforced by longitudinal stiffeners (stringers) and transverse stiffeners (frames). The stiffeners may be metallic or composite. During the different phases of flight, aircraft structures are subjected to airborne excitations (turbulent boundary layer: TBL; diffuse acoustic field: DAF) on the outer skin, and the acoustic energy produced is transmitted into the cabin. The engines, mounted on the structure, produce significant structure-borne excitation. The objectives of this project are to develop and implement strategies for modelling aircraft fuselages subjected to airborne and structure-borne excitations. First, an update of existing TBL models is presented in the second chapter in order to classify them better. The properties of the vibro-acoustic response of finite and infinite flat structures are analysed. In the third chapter, the assumptions underlying existing models of orthogonally stiffened metallic structures subjected to mechanical, DAF, and TBL excitations are first re-examined. Then, a detailed and reliable model of these structures is developed. The model is validated numerically using the finite element method (FEM) and the boundary element method (BEM). Experimental validation tests are carried out on aircraft panels supplied by aeronautical companies. In the fourth chapter, an extension to composite structures reinforced by stiffeners that are also composite and of complex shape is established. A simple analytical model is also implemented and validated numerically. In the fifth chapter, the modelling of periodic stiffened composite structures is further refined by taking into account the coupling between in-plane and transverse displacements. The size effect of finite periodic structures is also taken into account. The models developed made it possible to conduct several parametric studies on the vibro-acoustic properties of aircraft structures, thereby easing the task of designers. In the framework of this thesis, one article was published in the Journal of Sound and Vibration and three others were submitted, respectively, to the Journal of the Acoustical Society of America, the International Journal of Solid Mechanics, and the Journal of Sound and Vibration. Keywords: stiffened structures, composites, vibro-acoustics, transmission loss.
Analysis of Massively Separated Flows of Aircraft Using Detached Eddy Simulation
NASA Astrophysics Data System (ADS)
Morton, Scott
2002-08-01
An important class of turbulent flows of aerodynamic interest are those characterized by massive separation, e.g., the flow around an aircraft at high angle of attack. Numerical simulation is an important tool for analysis, though traditional models used in the solution of the Reynolds-averaged Navier-Stokes (RANS) equations appear unable to accurately account for the time-dependent and three-dimensional motions governing flows with massive separation. Large-eddy simulation (LES) is able to resolve these unsteady three-dimensional motions, yet is cost prohibitive for high Reynolds number wall-bounded flows due to the need to resolve the small scale motions in the boundary layer. Spalart et al. proposed a hybrid technique, Detached-Eddy Simulation (DES), which takes advantage of the often adequate performance of RANS turbulence models in the "thin," typically attached regions of the flow. In the separated regions of the flow the technique becomes a Large Eddy Simulation, directly resolving the time-dependent and unsteady features that dominate regions of massive separation. The current work applies DES to a 70 degree sweep delta wing at 27 degrees angle of attack, a geometrically simple yet challenging flowfield that exhibits the unsteady three-dimensional massively separated phenomena of vortex breakdown. After detailed examination of this basic flowfield, the method is demonstrated on three full aircraft of interest characterized by massive separation, the F-16 at 45 degrees angle of attack, the F-15 at 65 degrees angle of attack (with comparison to flight test), and the C-130 in a parachute drop condition at near stall speed with cargo doors open.
CFD simulation of local and global mixing time in an agitated tank
NASA Astrophysics Data System (ADS)
Li, Liangchao; Xu, Bin
2017-01-01
The issue of mixing efficiency in agitated tanks has drawn serious concern in many industrial processes. The turbulence model is critical to predicting the mixing process in agitated tanks. On the basis of the computational fluid dynamics (CFD) software package Fluent 6.2, the mixing characteristics in a tank agitated by dual six-blade Rushton turbines (6-DT) are predicted using the detached eddy simulation (DES) method. A sliding mesh (SM) approach is adopted to resolve the rotation of the impeller. The simulated flow patterns and liquid velocities in the agitated tank are verified against experimental data in the literature. The simulation results indicate that the DES method can capture more flow details than a Reynolds-averaged Navier-Stokes (RANS) model. Local and global mixing times in the agitated tank are predicted by solving a tracer concentration scalar transport equation. The simulated results show that the feeding point has a great influence on the mixing process and mixing time. Mixing efficiency is highest for a feeding point located midway between the two impellers. Two methods are used to determine the global mixing time and give close results. The dimensionless global mixing time remains unchanged with increasing impeller speed. Parallel, merging, and diverging flow patterns form in the agitated tank, respectively, by changing the impeller spacing and the clearance of the lower impeller from the bottom of the tank. The global mixing time is shortest for the merging flow, followed by the diverging flow, and longest for the parallel flow. The research presents helpful references for the design, optimization, and scale-up of agitated tanks with multiple impellers.
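Once a tracer concentration history is available at a monitoring point, the mixing time can be computed in post-processing. A common choice is the 95% criterion, the first time after which the concentration stays within ±5% of its final value; the sketch below applies it to a synthetic curve standing in for CFD probe output.

```python
import math

def mixing_time_95(times, conc, tol=0.05):
    """First time after which conc stays within +/- tol of its final value."""
    c_final = conc[-1]
    lo, hi = (1 - tol) * c_final, (1 + tol) * c_final
    t_mix = None
    for t, c in zip(times, conc):
        if not (lo <= c <= hi):
            t_mix = None          # outside the band: reset
        elif t_mix is None:
            t_mix = t             # first (re-)entry into the band
    return t_mix

# Synthetic first-order tracer response as a stand-in for CFD probe data.
times = [0.1 * i for i in range(600)]
conc = [1.0 - math.exp(-t / 8.0) for t in times]
print(f"t95 ~ {mixing_time_95(times, conc):.1f} s")
```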
NASA Astrophysics Data System (ADS)
Ahmed, Chaara El Mouez
We studied the dispersion relations and the scattering of glueballs and mesons in the compact U(1)_{2+1} model. This model has often been used as a simple model of quantum chromodynamics (QCD), because it exhibits confinement as well as glueball states, while its mathematical structure is much simpler than that of QCD. Our method consists in diagonalizing the Hamiltonian of this model in an appropriate basis of graphs on a momentum lattice, in order to generate the dispersion relations of glueballs and mesons. For the scattering, we used a time-dependent method to compute the S matrix and the scattering cross section of glueballs and mesons. The various results obtained appear to be in agreement with earlier work by Hakim, Alessandrini et al., and Irving et al., who instead used strong-coupling perturbation theory and worked on a space-time lattice.
Adaptability in Coalition Teamwork (Faculte d’adaptation au travail d’equipe en coalition)
2008-04-01
...and tools are needed for the rapid development of effective multicultural teams to ensure mission success, these being...The main findings of the 30 theoretical and research papers were as follows: • Training tools (games, simulations...among military personnel; • Feedback on the morale and performance of teams in operations is an instrument that is particularly
The use of discrete-event simulation modelling to improve radiation therapy planning processes.
Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven
2009-07-01
The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
CDPP Tools in the IMPEx infrastructure
NASA Astrophysics Data System (ADS)
Gangloff, Michel; Génot, Vincent; Bourrel, Nataliya; Hess, Sébastien; Khodachenko, Maxim; Modolo, Ronan; Kallio, Esa; Alexeev, Igor; Al-Ubaidi, Tarek; Cecconi, Baptiste; André, Nicolas; Budnik, Elena; Bouchemit, Myriam; Dufourg, Nicolas; Beigbeder, Laurent
2014-05-01
The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of plasma data products from space missions and ground observatories. Besides these activities, the CDPP has developed services like AMDA (http://amda.cdpp.eu/), which enables in-depth analysis of large amounts of data through dedicated functionalities such as visualization, conditional search, and cataloguing, and 3DView (http://3dview.cdpp.eu/), which provides immersive visualisations of planetary environments and is being further developed to include simulation and observational data. Both tools implement the IMPEx protocol (http://impexfp7.oeaw.ac.at/) to give access to outputs of simulation runs and models in planetary sciences from several providers like LATMOS, FMI, and SINP; prototypes have also been built to access some UCLA and CCMC simulations. These tools and their interaction will be presented together with the IMPEx simulation data model (http://impex.latmos.ipsl.fr/tools/DataModel.htm) used for the interface to model databases.
Fast Plasma Instrument for MMS: Simulation Results
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis, using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment method, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement, within the uncertainties of the measurements, with the results obtained by the Cluster/PEACE electron spectrometers. The data analyzed were selected because they represent a potential reconnection event, as currently published.
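The moment comparison mentioned above rests on standard velocity-space integrals of the distribution function. The sketch below illustrates the quadrature-moment idea on a gridded VDF (density and bulk velocity only); the drifting Maxwellian and grid are synthetic and unrelated to the actual DES or Cluster/PEACE data.

```python
import numpy as np

# Quadrature moments of a gridded velocity distribution function f(vx, vy, vz):
#   n = sum f dV,  u = (1/n) sum v f dV  (pressure and temperature follow similarly).

def moments(f, vx, vy, vz):
    dv = (vx[1] - vx[0]) * (vy[1] - vy[0]) * (vz[1] - vz[0])   # uniform grid cell volume
    n = f.sum() * dv
    VX, VY, VZ = np.meshgrid(vx, vy, vz, indexing="ij")
    u = np.array([(VX * f).sum(), (VY * f).sum(), (VZ * f).sum()]) * dv / n
    return n, u

# Illustrative drifting Maxwellian (arbitrary units), not instrument data.
v = np.linspace(-5.0, 5.0, 61)
VX, VY, VZ = np.meshgrid(v, v, v, indexing="ij")
n0, u0, vth = 2.0, np.array([0.5, 0.0, -0.3]), 1.0
f = n0 * (np.pi * vth**2) ** -1.5 * np.exp(
    -((VX - u0[0])**2 + (VY - u0[1])**2 + (VZ - u0[2])**2) / vth**2)

n, u = moments(f, v, v, v)
print(f"density ~ {n:.3f} (true 2.0), bulk velocity ~ {np.round(u, 3)} (true {u0})")
```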
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, Casey J.; Brigantic, Robert T.; Keating, Douglas H.
There is a need to develop and demonstrate technical approaches for verifying potential future agreements to limit and reduce total warhead stockpiles. To facilitate this aim, warhead monitoring systems employ both concepts of operations (CONOPS) and technologies. A systems evaluation approach can be used to assess the relative performance of CONOPS and technologies in their ability to achieve monitoring system objectives, which include: 1) confidence that a treaty accountable item (TAI) initialized by the monitoring system is as declared; 2) confidence that there is no undetected diversion from the monitoring system; and 3) confidence that a TAI is dismantled as declared. Although there are many quantitative methods that can be used to assess system performance for the above objectives, this paper focuses on a simulation perspective, primarily for the ability to support analysis of the probabilities that are used to define operating characteristics of CONOPS and technologies. This paper describes a discrete event simulation (DES) model comprised of three major sub-models: TAI lifecycle flow, monitoring activities, and declaration behavior. The DES model seeks to capture all processes and decision points associated with the progression of virtual TAIs, with notional characteristics, through the monitoring system from initialization through dismantlement. The simulation updates TAI progression (i.e., whether the generated test objects are accepted and rejected at the appropriate points) all the way through dismantlement. Evaluation of TAI lifecycles primarily serves to assess how the order, frequency, and combination of functions in the CONOPS affect system performance as a whole. It is important, however, to note that discrete event simulation is also capable (at a basic level) of addressing vulnerabilities in the CONOPS and interdependencies between individual functions as well. This approach is beneficial because it does not rely on complex mathematical models, but instead attempts to recreate the real world system as a decision and event driven simulation. Finally, because the simulation addresses warhead confirmation, chain of custody, and warhead dismantlement in a modular fashion, a discrete-event model could be easily adapted to multiple CONOPS for the exploration of a large number of “what if” scenarios.
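As a toy illustration of the lifecycle sub-model, the sketch below pushes virtual TAIs through a short sequence of monitoring steps with notional acceptance probabilities and tallies how many reach dismantlement. Every stage name and probability is a made-up placeholder; the actual CONOPS, stages, and operating characteristics are not reproduced here.

```python
import random

# Toy lifecycle Monte Carlo: stage names and probabilities are notional only.
STAGES = [
    ("initialization_check", 0.98),   # P(item passes the initialization check)
    ("chain_of_custody",     0.95),   # P(no unresolved custody break)
    ("dismantlement_check",  0.97),   # P(dismantlement confirmed as declared)
]

def run_tai(rng):
    """Return the stage at which a virtual TAI is rejected, or None if accepted throughout."""
    for stage, p_pass in STAGES:
        if rng.random() > p_pass:
            return stage
    return None

rng = random.Random(42)
n_items = 10_000
rejections = {}
for _ in range(n_items):
    outcome = run_tai(rng)
    if outcome is not None:
        rejections[outcome] = rejections.get(outcome, 0) + 1

accepted = n_items - sum(rejections.values())
print(f"accepted through dismantlement: {accepted / n_items:.1%}")
print("rejections by stage:", rejections)
```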
Iserson, Kenneth V
2017-09-01
Emergency medicine personnel frequently respond to major disasters. They expect to have an effective and efficient management system to elegantly allocate available resources. Despite claims to the contrary, experience demonstrates this rarely occurs. This article describes privatizing disaster assessment using a single-purposed, accountable, and well-trained organization. The goal is to achieve elegant disaster assessment, rather than repeatedly exhorting existing groups to do it. The Rapid Disaster Evaluation System (RaDES) would quickly and efficiently assess a postdisaster population's needs. It would use an accountable nongovernmental agency's teams with maximal training, mobility, and flexibility. Designed to augment the Inter-Agency Standing Committee's 2015 Emergency Response Preparedness Plan, RaDES would provide the initial information needed to avoid haphazard and overlapping disaster responses. Rapidly deployed teams would gather information from multiple sources and continually communicate those findings to their base, which would then disseminate them to disaster coordinators in a concise, coherent, and transparent way. The RaDES concept represents an elegant, minimally bureaucratic, and effective rapid response to major disasters. However, its implementation faces logistical, funding, and political obstacles. Developing and maintaining RaDES would require significant funding and political commitment to coordinate the numerous agencies that claim to be performing the same tasks. Although simulations can demonstrate efficacy and deficiencies, only field tests will demonstrate RaDES' power to improve interagency coordination and decrease the cost of major disaster response. At the least, the RaDES concept should serve as a model for discussing how to practicably improve our current chaotic disaster responses. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghaddar, A.; Sinno, N.
2005-05-01
The complexity of queueing phenomena in computer and telecommunication systems requires their simulation by Markovian models for performance measurement: measurement of waiting delays at routers for the computer model, and study of telephone call management for the telephone circuit model. Optimizing the numerical methods used to solve the equations of these two models makes it possible to identify the criteria for rapid convergence to the stationary states corresponding to these measurements.
Lim, Morgan E; Worster, Andrew; Goeree, Ron; Tarride, Jean-Éric
2013-05-22
Computer simulation studies of the emergency department (ED) are often patient driven and consider the physician as a human resource whose primary activity is interacting directly with the patient. In many EDs, physicians supervise delegates such as residents, physician assistants and nurse practitioners each with different skill sets and levels of independence. The purpose of this study is to present an alternative approach where physicians and their delegates in the ED are modeled as interacting pseudo-agents in a discrete event simulation (DES) and to compare it with the traditional approach ignoring such interactions. The new approach models a hierarchy of heterogeneous interacting pseudo-agents in a DES, where pseudo-agents are entities with embedded decision logic. The pseudo-agents represent a physician and delegate, where the physician plays a senior role to the delegate (i.e. treats high acuity patients and acts as a consult for the delegate). A simple model without the complexity of the ED is first created in order to validate the building blocks (programming) used to create the pseudo-agents and their interaction (i.e. consultation). Following validation, the new approach is implemented in an ED model using data from an Ontario hospital. Outputs from this model are compared with outputs from the ED model without the interacting pseudo-agents. They are compared based on physician and delegate utilization, patient waiting time for treatment, and average length of stay. Additionally, we conduct sensitivity analyses on key parameters in the model. In the hospital ED model, comparisons between the approach with interaction and without showed physician utilization increase from 23% to 41% and delegate utilization increase from 56% to 71%. Results show statistically significant mean time differences for low acuity patients between models. Interaction time between physician and delegate results in increased ED length of stay and longer waits for beds. This example shows the importance of accurately modeling physician relationships and the roles in which they treat patients. Neglecting these relationships could lead to inefficient resource allocation due to inaccurate estimates of physician and delegate time spent on patient related activities and length of stay.
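A minimal sketch of the pseudo-agent interaction can be written with the SimPy library: the delegate treats low-acuity patients but remains occupied while waiting for and receiving a physician consult, which is the mechanism the study identifies as lengthening stays. All rates, probabilities, and the acuity mix below are invented placeholders, not the Ontario hospital data.

```python
import random
import simpy

CONSULT_PROB = 0.4          # fraction of delegate patients needing a consult (assumed)
MEAN_IAT = 15.0             # mean inter-arrival time, minutes (assumed)
P_HIGH_ACUITY = 0.4         # fraction of high-acuity patients (assumed)

def patient(env, physician, delegate, rng, times):
    arrive = env.now
    if rng.random() < P_HIGH_ACUITY:
        with physician.request() as req:          # physician treats high acuity directly
            yield req
            yield env.timeout(rng.expovariate(1 / 25.0))
    else:
        with delegate.request() as req:           # delegate treats low acuity
            yield req
            yield env.timeout(rng.expovariate(1 / 18.0))
            if rng.random() < CONSULT_PROB:       # consult: delegate stays occupied too
                with physician.request() as creq:
                    yield creq
                    yield env.timeout(rng.expovariate(1 / 5.0))
    times.append(env.now - arrive)

def arrivals(env, physician, delegate, rng, times):
    while True:
        yield env.timeout(rng.expovariate(1 / MEAN_IAT))
        env.process(patient(env, physician, delegate, rng, times))

rng = random.Random(7)
env = simpy.Environment()
physician = simpy.Resource(env, capacity=1)
delegate = simpy.Resource(env, capacity=1)
times = []
env.process(arrivals(env, physician, delegate, rng, times))
env.run(until=8 * 60)                             # one simulated 8-hour shift
print(f"patients completed: {len(times)}, mean time in system: {sum(times) / len(times):.1f} min")
```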
Predicting Liver Transplant Capacity Using Discrete Event Simulation.
Toro-Díaz, Hector; Mayorga, Maria E; Barritt, A Sidney; Orman, Eric S; Wheeler, Stephanie B
2015-08-01
The number of liver transplants (LTs) performed in the US increased until 2006 but has since declined despite an ongoing increase in demand. This decline may be due in part to decreased donor liver quality and increasing discard of poor-quality livers. We constructed a discrete event simulation (DES) model informed by current donor characteristics to predict future LT trends through the year 2030. The data source for our model is the United Network for Organ Sharing database, which contains patient-level information on all organ transplants performed in the US. Previous analysis showed that liver discard is increasing and that discarded organs are more often from donors who are older, are obese, have diabetes, and donated after cardiac death. Given that the prevalence of these factors is increasing, the DES model quantifies the reduction in the number of LTs performed through 2030. In addition, the model estimates the total number of future donors needed to maintain the current volume of LTs and the effect of a hypothetical scenario of improved reperfusion technology. We also forecast the number of patients on the waiting list and compare this with the estimated number of LTs to illustrate the impact that decreased LTs will have on patients needing transplants. By altering assumptions about the future donor pool, this model can be used to develop policy interventions to prevent a further decline in this lifesaving therapy. To our knowledge, there are no similar predictive models of future LT use based on epidemiological trends. © The Author(s) 2014.
Finite-temperature quantum cluster methods applied to the Hubbard model
NASA Astrophysics Data System (ADS)
Plouffe, Dany
Since their discovery in the 1980s, high-critical-temperature superconductors have attracted much interest in solid-state physics. Understanding the origin of the phases observed in these materials, such as superconductivity, has been one of the great challenges of theoretical solid-state physics over the past 25 years. One of the mechanisms proposed to explain these phenomena is the strong electron-electron interaction. The Hubbard model is one of the simplest models that accounts for these interactions. Despite the apparent simplicity of this model, some of its characteristics, including its phase diagram, are still not well established, despite several theoretical advances in recent years. This study is devoted to analysing numerical methods for computing various properties of the Hubbard model as a function of temperature. We describe methods (VCA and CPT) that allow the Green function at finite temperature on an infinite system to be computed approximately from the Green function calculated on a cluster of finite size. To compute these Green functions, we use techniques that considerably reduce the numerical effort required for computing thermodynamic averages, by considerably reducing the space of states to be considered in these averages. Although this study aims primarily at developing cluster methods to solve the Hubbard model at finite temperature in a general way, as well as at studying the basic properties of this model, we apply it to conditions approaching those of high-critical-temperature superconductors. The methods presented in this study make it possible to draw a phase diagram for antiferromagnetism and superconductivity that shows several similarities with that of high-temperature superconductors. Keywords: Hubbard model, thermodynamics, antiferromagnetism, superconductivity, numerical methods, large matrices
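For context, the cluster methods named above are commonly written as follows (a textbook-style sketch, not a derivation specific to this thesis): CPT builds an approximate lattice Green function from the exact cluster Green function and the inter-cluster hopping, and finite-temperature cluster averages use Boltzmann weights over the cluster spectrum.

```latex
% CPT relation between the cluster Green function G_c and the lattice Green
% function, with V(\tilde{k}) the inter-cluster hopping matrix:
\[
  G^{\mathrm{CPT}}(\tilde{k}, \omega) \;=\;
  \bigl[\, G_c^{-1}(\omega) - V(\tilde{k}) \,\bigr]^{-1}
\]
% Finite-temperature averages on the cluster use Boltzmann weights over the
% cluster eigenstates |n> with energies E_n:
\[
  \langle A \rangle \;=\; \frac{1}{Z} \sum_n e^{-\beta E_n}\,
  \langle n | A | n \rangle ,
  \qquad Z = \sum_n e^{-\beta E_n} .
\]
```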
Bellows, Brandon K; Nelson, Richard E; Oderda, Gary M; LaFleur, Joanne
2016-01-01
Painful diabetic neuropathy (PDN) affects nearly half of patients with diabetes. The objective of this study was to compare the cost-effectiveness of starting patients with PDN on pregabalin (PRE), duloxetine (DUL), gabapentin (GABA), or desipramine (DES) over a 10-year time horizon from the perspective of third-party payers in the United States. A Markov model was used to compare the costs (2013 $US) and effectiveness (quality-adjusted life-years [QALYs]) of first-line PDN treatments in 10,000 patients using microsimulation. Costs and QALYs were discounted at 3% annually. Probabilities and utilities were derived from the published literature. Costs were average wholesale price for drugs and national estimates for office visits and hospitalizations. One-way and probabilistic (PSA) sensitivity analyses were used to examine parameter uncertainty. Starting with PRE was dominated by DUL as DUL cost less and was more effective. Starting with GABA was extendedly dominated by a combination of DES and DUL. DES and DUL cost $23,468 and $25,979, while yielding 3.05 and 3.16 QALYs, respectively. The incremental cost-effectiveness ratio for DUL compared with DES was $22,867/QALY gained. One-way sensitivity analysis showed that the model was most sensitive to the adherence threshold and utility for mild pain. PSA showed that, at a willingness-to-pay (WTP) of $50,000/QALY, DUL was the most cost-effective option in 56.3% of the simulations, DES in 29.2%, GABA in 14.4%, and PRE in 0.1%. Starting with DUL is the most cost-effective option for PDN when WTP is greater than $22,867/QALY. Decision makers may consider starting with DUL for PDN patients.
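The reported dominance and ICER logic can be checked arithmetically from the quoted 10-year costs and QALYs; because the inputs are rounded as published, the recomputed ratio differs slightly from the reported $22,867/QALY.

```python
# Incremental cost-effectiveness of duloxetine (DUL) vs desipramine (DES),
# using the rounded costs and QALYs quoted in the abstract.
cost_des, qaly_des = 23_468, 3.05
cost_dul, qaly_dul = 25_979, 3.16

icer = (cost_dul - cost_des) / (qaly_dul - qaly_des)
print(f"ICER (DUL vs DES) ~ ${icer:,.0f} per QALY gained")  # ~ $22,827 with rounded inputs

# A strategy is simply dominated if another costs less and yields more QALYs
# (pregabalin vs duloxetine in the abstract); extended dominance compares a
# strategy's ICER against a blend of two alternatives (gabapentin here).
willingness_to_pay = 50_000
print("DUL preferred at WTP $50k/QALY:", icer < willingness_to_pay)
```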
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonnett, C.; Troxel, M. A.; Hartley, W.
We present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods (annz2, bpz calibrated against BCC-UFig simulations, skynet, and tpz) are analysed. For training, calibration, and testing of these methods, we also construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-zs. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins with mean redshifts z = {0.45, 0.67, 1.00}. These bins each have systematic uncertainties δz ≲ 0.05 in the mean of the fiducial skynet photo-z n(z). We propagate the errors in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear, and find that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one-sigma statistical errors on σ8 for the DES SV shear catalogue. We also found that further study of the potential impact of systematic differences on the critical surface density, Σcrit, contained levels of bias safely less than the statistical power of DES SV data. We recommend a final Gaussian prior for the photo-z bias in the mean of n(z) of width 0.05 for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air passing around a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon. This phenomenon can only be captured well with an appropriate turbulence model. In this study, several turbulence models available in the software ANSYS Fluent 16.0 were tested for their ability to simulate the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, the von Karman vortex street phenomenon was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation. The smaller the time step size, the smoother the resulting drag coefficient curves. A smaller time step size also gives a faster computation time.
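A typical post-processing step for this kind of simulation is to extract the vortex-shedding frequency from a force-coefficient time history and express it as a Strouhal number, St = f D / U. The sketch below does this with a plain FFT; the signal, diameter, and velocity are placeholders for actual Fluent monitor output.

```python
import numpy as np

def strouhal_from_signal(t, c, diameter, velocity):
    """Dominant frequency of a force-coefficient history, expressed as a Strouhal number."""
    c = np.asarray(c) - np.mean(c)                # remove the mean (steady) component
    dt = t[1] - t[0]                              # uniform sampling assumed
    spectrum = np.abs(np.fft.rfft(c))
    freqs = np.fft.rfftfreq(len(c), dt)
    f_shed = freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin
    return f_shed * diameter / velocity

# Synthetic lift-coefficient signal standing in for a Fluent monitor file:
# shedding at 40 Hz with D = 0.1 m and U = 20 m/s corresponds to St = 0.2.
t = np.arange(0.0, 2.0, 1e-3)
cl = 0.8 * np.sin(2 * np.pi * 40.0 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(f"St ~ {strouhal_from_signal(t, cl, diameter=0.1, velocity=20.0):.3f}")
```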
Schnelle, John F; Schroyer, L Dale; Saraf, Avantika A; Simmons, Sandra F
2016-11-01
Nursing aides provide most of the labor-intensive activities of daily living (ADL) care to nursing home (NH) residents. Currently, most NHs do not determine nurse aide staffing requirements based on the time to provide ADL care for their unique resident population. The lack of an objective method to determine nurse aide staffing requirements suggests that many NHs could be understaffed in their capacity to provide consistent ADL care to all residents in need. Discrete event simulation (DES) mathematically models key work parameters (eg, time to provide an episode of care and available staff) to predict the ability of the work setting to provide care over time and offers an objective method to determine nurse aide staffing needs in NHs. This study had 2 primary objectives: (1) to describe the relationship between ADL workload and the level of nurse aide staffing reported by NHs; and, (2) to use a DES model to determine the relationship between ADL workload and nurse aide staffing necessary for consistent, timely ADL care. Minimum Data Set data related to the level of dependency on staff for ADL care for residents in over 13,500 NHs nationwide were converted into 7 workload categories that captured 98% of all residents. In addition, data related to the time to provide care for the ADLs within each workload category was used to calculate a workload score for each facility. The correlation between workload and reported nurse aide staffing levels was calculated to determine the association between staffing reported by NHs and workload. Simulations to project staffing requirements necessary to provide ADL care were then conducted for 65 different workload scenarios, which included 13 different nurse aide staffing levels (ranging from 1.6 to 4.0 total hours per resident day) and 5 different workload percentiles (ranging from the 5th to the 95th percentile). The purpose of the simulation model was to determine the staffing necessary to provide care within each workload percentile based on resident ADL care needs and compare the simulated staffing projections to the NH reported staffing levels. The percentage of scheduled care time that was omitted was estimated by the simulation model for each of the 65 workload scenarios using optimistic assumptions about staff productivity and efficiency. There was a low correlation between ADL workload and reported nurse aide staffing (Pearson = .11; P < .01), which suggests that most of the 13,500 NHs were not using ADL acuity to determine nurse aide staffing levels. Based on the DES model, the nurse aide staffing required for ADL care that would result in a rate of care omissions below 10% ranged from 2.8 hours/resident/day for NHs with a low workload (5th percentile) to 3.6 hours/resident/day for NHs with a high workload (95th percentile). In contrast, NHs reported staffing levels that ranged from an average of 2.3 to 2.5 hours/resident/day across all 5 workload percentiles. Higher workload NHs had the largest discrepancies between reported and predicted nurse aide staffing levels. The average nurse aide staffing levels reported by NHs falls below the level of staffing predicted as necessary to provide consistent ADL care to all residents in need. DES methodology can be used to determine nurse aide staffing requirements to provide ADL care and simulate management interventions to improve care efficiency and quality. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
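The workload-score construction and its comparison with reported staffing can be illustrated in a few lines. The category minutes and facility case mixes below are invented placeholders; only the calculation pattern (convert the ADL case mix to required care hours per resident-day, then correlate with reported hours per resident-day) follows the description above.

```python
import statistics as st

# Hypothetical ADL workload categories: aide minutes per resident per day.
CATEGORY_MINUTES = {1: 60, 2: 90, 3: 120, 4: 150, 5: 180, 6: 210, 7: 240}

def workload_hprd(category_counts):
    """Required aide hours per resident-day implied by a facility's ADL case mix."""
    residents = sum(category_counts.values())
    minutes = sum(CATEGORY_MINUTES[c] * n for c, n in category_counts.items())
    return minutes / residents / 60.0

# Invented facilities: (case mix by category, reported aide hours per resident-day).
facilities = [
    ({1: 20, 2: 30, 3: 25, 4: 15, 5: 5, 6: 3, 7: 2}, 2.3),
    ({1: 5, 2: 10, 3: 20, 4: 30, 5: 20, 6: 10, 7: 5}, 2.4),
    ({1: 30, 2: 30, 3: 20, 4: 10, 5: 5, 6: 3, 7: 2}, 2.5),
    ({1: 2, 2: 8, 3: 15, 4: 25, 5: 25, 6: 15, 7: 10}, 2.4),
]

required = [workload_hprd(mix) for mix, _ in facilities]
reported = [hprd for _, hprd in facilities]
for req, rep in zip(required, reported):
    print(f"required ~ {req:.2f} HPRD, reported {rep:.2f} HPRD")
print(f"Pearson r = {st.correlation(required, reported):.2f}")   # Python 3.10+
```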
NASA Astrophysics Data System (ADS)
Zaag, Mahdi
The availability of accurate aircraft models is among the key elements for ensuring their improvement. These models are used to improve flight controls and to design new aerodynamic systems for aircraft morphing wings. This project consists of designing a system for identifying certain parameters of the engine model of the American business aircraft Cessna Citation X for the cruise phase from flight tests. These tests were carried out on the flight simulator designed and manufactured by CAE Inc., which has Level D flight dynamics qualification. Level D is, in fact, the highest level of accuracy granted by the FAA, the federal civil aviation regulatory authority in the United States. A methodology based on neural networks optimized with an algorithm called the "extended great deluge" is used in the design of this identification system. Several flight tests at different altitudes and different Mach numbers were carried out to serve as databases for training the neural networks. Validation of this model was performed using simulator data. Despite the nonlinearity and complexity of the system, the engine parameters were very well predicted over a given flight envelope. This estimated model could be used for engine operating analyses and could support control of the aircraft during the cruise phase. Identification of the engine parameters could also be carried out for the climb and descent phases in order to obtain a complete engine model for the entire flight envelope of the Cessna Citation X (climb, cruise, descent). The method employed in this work could also be effective for building a model for identifying the aerodynamic coefficients of the same aircraft, again from flight tests.
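As an illustration of the kind of input-output mapping described above, the sketch below fits a small feed-forward network to synthetic flight-condition data with scikit-learn. It is only a stand-in: the thesis trains its networks with an "extended great deluge" algorithm on Level D simulator flight-test data, neither of which is reproduced here, and the engine response used below is a hypothetical smooth function.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic flight conditions (altitude, Mach, throttle) and a hypothetical
# engine parameter standing in for real simulator flight-test data.
rng = np.random.default_rng(0)
n = 2000
altitude = rng.uniform(30_000, 45_000, n)     # ft
mach = rng.uniform(0.6, 0.9, n)
throttle = rng.uniform(0.5, 1.0, n)
X = np.column_stack([altitude, mach, throttle])
y = 70 + 25 * throttle + 10 * mach - 0.0002 * (altitude - 35_000) + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```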
2005-12-01
...simulation engine of the Integrated Performance Modelling Environment is used in conjunction with the approach to demonstrate how...to be used in computer generated forces. Future work will include more testing, integration with simulator engines and...Aspects are reasoning units relevant to simulated tasks. Each Aspect schema is a 4-tuple: AspectSchema = <MA, WM, LM, CL>, where MA refers to meta
NASA Technical Reports Server (NTRS)
Collinson, Glyn A.; Dorelli, John Charles; Avanov, Leon A.; Lewis, Gethyn R.; Moore, Thomas E.; Pollock, Craig; Kataria, Dhiren O.; Bedington, Robert; Arridge, Chris S.; Chornay, Dennis J.;
2012-01-01
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the geometric factor (GF) have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
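A common way to estimate a geometric factor from particle tracing is to launch test particles uniformly over a source area, solid angle, and relative energy band and scale the detected fraction by the sampled phase-space volume. The sketch below shows only that bookkeeping with a dummy transmission test; it is a generic illustration, not the generalized expressions derived in the paper.

```python
import random

def mc_geometric_factor(n_launch, area_cm2, solid_angle_sr, de_over_e, transmitted, rng):
    """Monte Carlo estimate of an energy geometric factor [cm^2 sr eV/eV].

    Particles are assumed to be launched uniformly over the source area, solid
    angle, and relative energy band; `transmitted(sample) -> bool` stands in
    for the ray-tracing / instrument response (a placeholder here).
    """
    hits = 0
    for _ in range(n_launch):
        sample = (rng.random(), rng.random(), rng.random())  # position/angle/energy draws
        if transmitted(sample):
            hits += 1
    return area_cm2 * solid_angle_sr * de_over_e * hits / n_launch

# Dummy transmission: accept ~5% of launched particles (placeholder for tracing).
rng = random.Random(3)
gf = mc_geometric_factor(
    n_launch=200_000, area_cm2=2.0, solid_angle_sr=0.1, de_over_e=0.3,
    transmitted=lambda s: s[2] < 0.05, rng=rng)
print(f"GF ~ {gf:.2e} cm^2 sr eV/eV")
```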
Prediction of the hardness profile of laser heat-treated AISI 4340 steel
NASA Astrophysics Data System (ADS)
Maamri, Ilyes
Surface heat treatments are processes that aim to give the core and the surface of mechanical parts different properties. They improve wear and fatigue resistance by hardening critical surface zones through short, localized heat inputs. Among the processes that stand out for their surface power density, laser surface heat treatment offers fast, localized, and precise thermal cycles while limiting the risk of unwanted distortion. The mechanical properties of the hardened zone obtained by this process depend on the physicochemical properties of the material to be treated and on several process parameters. To exploit the capabilities of this process adequately, strategies must be developed to control and adjust the parameters so as to produce the desired characteristics of the hardened surface accurately, without resorting to the classic, long, and costly trial-and-error process. The objective of the project is therefore to develop models to predict the hardness profile in the case of heat treatment of AISI 4340 steel parts. To understand the behaviour of the process and evaluate the effects of the different parameters on treatment quality, a sensitivity study was conducted based on a structured experimental design combined with proven statistical analysis techniques. The results of this study allowed identification of the most relevant variables to use for modelling. Following this analysis, and in order to build a first model, two modelling techniques were considered: multiple regression and neural networks. Both techniques led to models of acceptable quality with an accuracy of about 90%. To improve the performance of the neural network models, two new approaches based on geometric characterization of the hardness profile were considered. Unlike the first models, which predict the hardness profile as a function of the process parameters, the new models combine the same parameters with geometric attributes of the hardness profile to reflect treatment quality. The resulting models show that this strategy leads to very promising results.
Equivalent circuits for multi-winding coupled circuits
NASA Astrophysics Data System (ADS)
Keradec, J. P.; Cogitore, B.; Laveuve, E.; Bensoam, M.
1994-04-01
The aim of this paper is to represent the electrical behaviour of any number of magnetically coupled windings with couplers and inductors. Two methods, both mathematically justified, are proposed; the second one introduces only positive inductances. As an example, it is applied to the representation of a three-column three-phase transformer. The resulting circuits provide the framework needed to design more complete circuits that account for the high-frequency behaviour of wound components, especially in electronic circuit simulation software.
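For reference, the starting point for such equivalent circuits is the standard inductance-matrix description of n magnetically coupled windings (general background, not the specific circuits derived in the paper):

v_k(t) = \sum_{j=1}^{n} L_{kj}\,\frac{di_j}{dt}, \qquad L_{kj} = L_{jk}, \qquad
k_{kj} = \frac{L_{kj}}{\sqrt{L_{kk}\,L_{jj}}} \le 1,

where L_{kk} are self-inductances, L_{kj} mutual inductances and k_{kj} the coupling coefficients. The equivalent-circuit problem is to realize this symmetric matrix with ideal couplers and a minimal set of (preferably positive) inductances.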
New methodologies for conducting tests in the Price-Paidoussis wind tunnel
NASA Astrophysics Data System (ADS)
Flores Salinas, Manuel
This master's thesis in automated manufacturing engineering describes the work carried out in the Price-Paidoussis wind tunnel of the LARCASE laboratory to establish the experimental methodologies and test procedures to be used with the wing models currently at the laboratory. The methodologies and procedures presented here will serve to prepare the wind-tunnel tests of the MDO-505 project (deformable architectures and technologies for improving wing performance), which take place during 2015. First, a brief history of subsonic wind tunnels is given. The different sections of the Price-Paidoussis wind tunnel are described, with emphasis on their influence on the quality of the flow in the test section. Next, an introduction to pressure, to its measurement during wind-tunnel tests and to the instruments used for wind-tunnel testing at the LARCASE laboratory is presented, in particular the XCQ-062 piezoelectric sensor. Particular attention is paid to its operating principle, its installation, the measurement and detection of frequencies, and the sources of error when using high-precision sensors such as the Kulite XCQ-062 series. Finally, the procedures and methodologies developed for tests in the Price-Paidoussis wind tunnel are applied to four different wing types. The article "New methodology for wind tunnel calibration using neural networks - EGD approach", on a new way of predicting the flow characteristics inside the Price-Paidoussis wind tunnel, is provided in Appendix 2 of this document. That article deals with the creation of a multilayer neural network and the training of its neurons; the network's results are then compared with values simulated with the Fluent software.
2003-02-01
driver, for vehicle-related sources (engine and transmission noise, equipment noise, passenger noise, …), for...mock-up of a film, stage play or advertisement. Initiated by the Virtual Reality department of CS and funded by the European Community, the project...video films, the user interacts with the simulation. The objective was to make it possible to study the contributions of Virtual Reality to the
Jiang, Minghuan; You, Joyce H S
2017-10-01
Continuation of dual antiplatelet therapy (DAPT) beyond 1 year reduces late stent thrombosis and ischemic events after drug-eluting stents (DES) but increases risk of bleeding. We hypothesized that extending DAPT from 12 months to 30 months in patients with acute coronary syndrome (ACS) after DES is cost-effective. A lifelong decision-analytic model was designed to simulate 2 antiplatelet strategies in event-free ACS patients who had completed 12-month DAPT after DES: aspirin monotherapy (75-162 mg daily) and continuation of DAPT (clopidogrel 75 mg daily plus aspirin 75-162 mg daily) for 18 months. Clinical event rates, direct medical costs, and quality-adjusted life-years (QALYs) gained were the primary outcomes from the US healthcare provider perspective. Base-case results showed DAPT continuation gained higher QALYs (8.1769 vs 8.1582 QALYs) at lower cost (USD42 982 vs USD44 063). One-way sensitivity analysis found that base-case QALYs were sensitive to odds ratio (OR) of cardiovascular death with DAPT continuation and base-case cost was sensitive to OR of nonfatal stroke with DAPT continuation. DAPT continuation remained cost-effective when the ORs of nonfatal stroke and cardiovascular death were below 1.241 and 1.188, respectively. In probabilistic sensitivity analysis, DAPT continuation was the preferred strategy in 74.75% of 10 000 Monte Carlo simulations at willingness-to-pay threshold of 50 000 USD/QALYs. Continuation of DAPT appears to be cost-effective in ACS patients who were event-free for 12-month DAPT after DES. The cost-effectiveness of DAPT for 30 months was highly subject to the OR of nonfatal stroke and OR of death with DAPT continuation. © 2017 Wiley Periodicals, Inc.
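For readers unfamiliar with this kind of probabilistic sensitivity analysis, the minimal sketch below (with hypothetical distributions, not the published model inputs) shows how a "preferred in X% of Monte Carlo simulations" figure is typically obtained from net monetary benefit:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
wtp = 50_000  # willingness-to-pay threshold, USD per QALY

# Hypothetical sampled outcomes per strategy (cost in USD, effectiveness in QALYs)
cost_dapt  = rng.normal(43_000, 1_500, n)
cost_aspir = rng.normal(44_000, 1_500, n)
qaly_dapt  = rng.normal(8.18, 0.05, n)
qaly_aspir = rng.normal(8.16, 0.05, n)

# Net monetary benefit decides the preferred strategy in each simulated draw
nmb_dapt  = wtp * qaly_dapt  - cost_dapt
nmb_aspir = wtp * qaly_aspir - cost_aspir
prob_preferred = np.mean(nmb_dapt > nmb_aspir)
print(f"DAPT continuation preferred in {prob_preferred:.1%} of simulations")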
Turk, Marvee; Gupta, Vishal; Fischell, Tim A
2010-03-01
There have been reports of serious complications related to difficulty removing the deflated Taxus stent delivery balloon after stent deployment. The purpose of this study was to determine whether the Taxus SIBS polymer was "sticky" and associated with an increase in the force required to remove the stent delivery balloon after stent deployment, using a quantitative, ex-vivo model. Balloon-polymer-stent interactions during balloon withdrawal were measured with the Taxus Liberté, Liberté bare-metal stent (BMS; no polymer = control), the Cordis Cypher drug-eluting stent (DES; PEVA/PBMA polymer) and the BX Velocity (no polymer). We quantitatively measured the force required to remove the deflated stent delivery balloon from each of these stents in simulated vessels at 37 degrees C in a water bath. Balloon withdrawal forces were measured in straight (0 degree curve), mildly curved (20 degree curve) and moderately curved (40 degree curve) simulated vessel segments. The average peak force required to remove the deflated balloon catheter from the Taxus Liberté DES, the Liberté BMS, the Cypher DES, and the Bx Velocity BMS were similar in straight segments, but were much greater for the Taxus Liberté in the moderately curved segments (1.4 lbs vs. 0.11 lbs, 0.11 lbs and 0.12 lbs, respectively; p < 0.0001). The SIBS polymer of the Taxus Liberté DES appears to be "sticky" and is associated with high forces required to withdraw the deflated balloon from the deployed stent in curved segments. This withdrawal issue may help to explain the clinical complications that have been reported with this device.
Optimal Discrete Event Supervisory Control of Aircraft Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Litt, Jonathan (Technical Monitor); Ray, Asok
2004-01-01
This report presents an application of the recently developed theory of optimal Discrete Event Supervisory (DES) control that is based on a signed real measure of regular languages. The DES control techniques are validated on an aircraft gas turbine engine simulation test bed. The test bed is implemented on a networked computer system in which two computers operate in the client-server mode. Several DES controllers have been tested for engine performance and reliability.
Developing integrated patient pathways using hybrid simulation
NASA Astrophysics Data System (ADS)
Zulkepli, Jafri; Eldabi, Tillal
2016-10-01
Integrated patient pathways span several departments: acute healthcare, which includes emergency care and the inpatient ward; intermediate care, where patients stay for a maximum of two weeks while an assessment team identifies the most suitable ongoing care; and social care. Intermediate care was introduced in Western countries to reduce the length of hospital stays, especially for elderly patients, and this type of care setting is now being considered in other countries, including Malaysia. To assess the advantages of introducing this kind of integrated healthcare setting, we propose modelling it with simulation. We argue that a single simulation technique is not sufficient to represent this type of patient pathway, and we therefore develop the model using hybrid techniques, namely System Dynamics (SD) and Discrete Event Simulation (DES). Based on the hybrid model's results, we argue that its output is viable as a reference for the decision-making process.
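As a toy illustration of the hybrid idea described above (not the authors' model, and with purely hypothetical rates), the sketch below lets a simple System Dynamics stock update the expected daily demand, which then seeds an event-driven, DES-style simulation of ward occupancy:

import heapq, random

random.seed(1)
demand = 20.0                 # SD stock: expected arrivals per day (hypothetical)
growth, capacity = 0.02, 25.0 # hypothetical logistic growth of demand
occupancy, served = 0, 0
events = []                   # (time_in_days, kind)

for day in range(30):
    # --- SD layer: simple stock-and-flow update of daily demand ---
    demand += growth * demand * (1.0 - demand / capacity)
    # --- DES layer: schedule that day's arrivals as discrete events ---
    for _ in range(int(round(demand))):
        heapq.heappush(events, (day + random.random(), "arrival"))

while events:
    t, kind = heapq.heappop(events)
    if kind == "arrival":
        occupancy += 1
        # hypothetical mean stay of 0.5 days before discharge
        heapq.heappush(events, (t + random.expovariate(1.0 / 0.5), "discharge"))
    else:
        occupancy -= 1
        served += 1

print("patients admitted and discharged over the horizon:", served)

A real pathway model would replace the logistic stock with the SD sub-model of demand and bed capacity, and the event loop with a full multi-station DES of emergency, intermediate and social care.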
Linking Six Sigma to simulation: a new roadmap to improve the quality of patient care.
Celano, Giovanni; Costa, Antonio; Fichera, Sergio; Tringali, Giuseppe
2012-01-01
Improving the quality of patient care is a challenge that calls for a multidisciplinary approach, embedding a broad spectrum of knowledge and involving healthcare professionals from diverse backgrounds. The purpose of this paper is to present an innovative approach that implements discrete-event simulation (DES) as a decision-supporting tool in the management of Six Sigma quality improvement projects. A roadmap is designed to assist quality practitioners and health care professionals in the design and successful implementation of simulation models within the define-measure-analyse-design-verify (DMADV) or define-measure-analyse-improve-control (DMAIC) Six Sigma procedures. A case regarding the reorganisation of the flow of emergency patients affected by vertigo symptoms was developed in a large town hospital as a preliminary test of the roadmap. The positive feedback from professionals carrying out the project looks promising and encourages further roadmap testing in other clinical settings. The roadmap is a structured procedure that people involved in quality improvement can implement to manage projects based on the analysis and comparison of alternative scenarios. The role of Six Sigma philosophy in improvement of the quality of healthcare services is recognised both by researchers and by quality practitioners; discrete-event simulation models are commonly used to improve the key performance measures of patient care delivery. The two approaches are seldom referenced and implemented together; however, they could be successfully integrated to carry out quality improvement programs. This paper proposes an innovative approach to bridge the gap and enrich the Six Sigma toolbox of quality improvement procedures with DES.
NASA Astrophysics Data System (ADS)
Alvarez, L. V.; Grams, P.
2017-12-01
We present a parallelized, three-dimensional, turbulence-resolving model using the Detached-Eddy Simulation (DES) technique, tested at the river-reach scale on the Colorado River. DES is a hybrid of large-eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) modeling: RANS is applied in the near-bed grid cells, where grid resolution is not sufficient to fully resolve wall turbulence, and LES is applied in the flow interior. We use the Spalart-Allmaras one-equation turbulence closure with a rough-wall extension. The model resolves large-scale turbulence using DES and simultaneously integrates the suspended-sediment advection-diffusion equation. The Smith and McLean boundary condition is used to calculate the upward (entrainment) and downward (settling) sediment fluxes in the grid cells attached to the bed. Model results compare favorably with ADCP measurements of flow taken on the Colorado River in Grand Canyon during the High Flow Experiment (HFE) of 2008. The model accurately reproduces the size and position of the major recirculation currents, and the error in velocity magnitude was found to be less than 17%, or 0.22 m/s absolute error. The mean deviation of the velocity direction with respect to the measured velocity was 20 degrees. Large-scale turbulence structures with vorticity predominantly in the vertical direction are produced at the shear layer between the main channel and the separation zone; however, these structures rapidly become three-dimensional with no preferred orientation of vorticity. Cross-stream velocities, into the main recirculation zone just upstream of the point of reattachment and out of the main recirculation region just downstream of the point of separation, are highest near the bed. Lateral separation eddies are more efficient at storing and exporting sediment than previously modeled. The input of sediment to the eddy recirculation zone occurs at the interface between the eddy and the main channel. Pulsation of the strength of the return current becomes a key factor in determining the rates of erosion and deposition in the main recirculation zone.
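For context, the transported quantity and near-bed boundary condition referred to above typically take the following form (one commonly cited version of the Smith and McLean relation; the study's exact coefficients are not reproduced here):

\frac{\partial c}{\partial t} + u_j\,\frac{\partial c}{\partial x_j} - w_s\,\frac{\partial c}{\partial z}
= \frac{\partial}{\partial x_j}\!\left(\frac{\nu_t}{\sigma_c}\,\frac{\partial c}{\partial x_j}\right),
\qquad
c_a = \frac{\gamma_0\,c_b\,S}{1+\gamma_0 S}, \quad S=\frac{\tau_b-\tau_{cr}}{\tau_{cr}},

where c is suspended-sediment concentration, w_s the particle settling velocity, nu_t the eddy viscosity, c_a the reference concentration applied in the grid cells attached to the bed, c_b the bed concentration, and gamma_0 an empirical constant of order 10^-3.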
Lam, Sean Shao Wei; Zhang, Ji; Zhang, Zhong Cheng; Oh, Hong Choon; Overton, Jerry; Ng, Yih Yng; Ong, Marcus Eng Hock
2015-02-01
Dynamically reassigning ambulance deployment locations throughout a day to balance ambulance availability and demands can be effective in reducing response times. The objectives of this study were to model dynamic ambulance allocation plans in Singapore based on the system status management (SSM) strategy and to evaluate the dynamic deployment plans using a discrete event simulation (DES) model. The geographical information system-based analysis and mathematical programming were used to develop the dynamic ambulance deployment plans for SSM based on ambulance calls data from January 1, 2011, to June 30, 2011. A DES model that incorporated these plans was used to compare the performance of the dynamic SSM strategy against static reallocation policies under various demands and travel time uncertainties. When the deployment plans based on the SSM strategy were followed strictly, the DES model showed that the geographical information system-based plans resulted in approximately 13-second reduction in the median response times compared to the static reallocation policy, whereas the mathematical programming-based plans resulted in approximately a 44-second reduction. The response times and coverage performances were still better than the static policy when reallocations happened for only 60% of all the recommended moves. Dynamically reassigning ambulance deployment locations based on the SSM strategy can result in superior response times and coverage performance compared to static reallocation policies even when the dynamic plans were not followed strictly. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Benjanirat, Sarun
Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory, and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is being enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model, and the k-epsilon, k-omega and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds for the National Renewable Energy Laboratory Phase VI rotor configuration, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras DES model.
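In the standard Spalart-Allmaras-based DES formulation referred to here (the generic definition, not a detail specific to this study), the RANS wall distance d_w is simply replaced by a hybrid length scale:

\tilde d = \min\!\left(d_w,\; C_{DES}\,\Delta\right), \qquad \Delta = \max(\Delta x, \Delta y, \Delta z),

with C_{DES} approximately 0.65 in the original calibration, so the model acts as RANS near walls and as a one-equation subgrid model where the grid is fine enough.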
Poder, Thomas G; Erraji, Jihane; Coulibaly, Lucien P; Koffi, Kouamé
2017-01-01
Drug-eluting stents (DESs) were considered a ground-breaking technology promising to eradicate restenosis and the necessity to perform multiple revascularization procedures subsequent to percutaneous coronary intervention. Soon after DESs were released on the market, however, there were reports of a potential increase in mortality and of early or late thrombosis. In addition, DESs are far more expensive than bare-metal stents (BMSs), which has led to their limited use in many countries. The technology has improved over the last few years with the second generation of DESs (DES-2). Moreover, costs have come down and an improved safety profile with decreased thrombosis has been reported. The objective was to perform a cost-benefit analysis of DES-2s versus BMSs in the context of a publicly funded university hospital in Quebec, Canada. A systematic review of meta-analyses was conducted between 2012 and 2016 to extract data on clinical effectiveness. The clinical outcome of interest for the cost-benefit analysis was target-vessel revascularization (TVR). Cost units are those used in the Quebec health-care system. The cost-benefit analysis was based on a 2-year perspective. Deterministic and stochastic models (discrete-event simulation) were used, and various risk factors of reintervention were considered. DES-2s are much more effective than BMSs with respect to TVR rate ratio (i.e., 0.29 to 0.62 in more recent meta-analyses). DES-2s seem to cause fewer deaths and in-stent thrombosis than BMSs, but results are rarely significant, with the exception of the cobalt-chromium everolimus DES. The rate ratio of myocardial infarction is systematically in favor of DES-2s and very often significant. Despite the higher cost of DES-2s, fewer reinterventions can lead to huge savings (i.e., -$479 to -$769 per patient). Moreover, the higher a patient's risk of reintervention, the higher the savings associated with the use of DES-2s. Despite the higher purchase cost of DES-2s compared to BMSs, generalizing their use, in particular for patients at high risk of reintervention, should enable significant savings.
Erraji, Jihane; Coulibaly, Lucien P.; Koffi, Kouamé
2017-01-01
Background: Drug-eluting stents (DESs) were considered a ground-breaking technology promising to eradicate restenosis and the necessity to perform multiple revascularization procedures subsequent to percutaneous coronary intervention. Soon after DESs were released on the market, however, there were reports of a potential increase in mortality and of early or late thrombosis. In addition, DESs are far more expensive than bare-metal stents (BMSs), which has led to their limited use in many countries. The technology has improved over the last few years with the second generation of DESs (DES-2). Moreover, costs have come down and an improved safety profile with decreased thrombosis has been reported. Objective: Perform a cost–benefit analysis of DES-2s versus BMSs in the context of a publicly funded university hospital in Quebec, Canada. Methods: A systematic review of meta-analyses was conducted between 2012 and 2016 to extract data on clinical effectiveness. The clinical outcome of interest for the cost–benefit analysis was target-vessel revascularization (TVR). Cost units are those used in the Quebec health-care system. The cost–benefit analysis was based on a 2-year perspective. Deterministic and stochastic models (discrete-event simulation) were used, and various risk factors of reintervention were considered. Results: DES-2s are much more effective than BMSs with respect to TVR rate ratio (i.e., 0.29 to 0.62 in more recent meta-analyses). DES-2s seem to cause fewer deaths and in-stent thrombosis than BMSs, but results are rarely significant, with the exception of the cobalt–chromium everolimus DES. The rate ratio of myocardial infarction is systematically in favor of DES-2s and very often significant. Despite the higher cost of DES-2s, fewer reinterventions can lead to huge savings (i.e., -$479 to -$769 per patient). Moreover, the higher a patient’s risk of reintervention, the higher the savings associated with the use of DES-2s. Conclusion: Despite the higher purchase cost of DES-2s compared to BMSs, generalizing their use, in particular for patients at high risk of reintervention, should enable significant savings. PMID:28498849
NASA Astrophysics Data System (ADS)
Cormier, Marianne
The weak science results of students in francophone minority settings on national and international assessments prompted a search for solutions. The purpose of this thesis was to create and test a pedagogical model for teaching science in a minority-language setting. Because students in this setting show varying degrees of French-language proficiency, several language elements (writing, discussion and reading) were integrated into science learning. We recommended beginning the learning process with rather informal language elements (journal writing, discussions in pairs, and so on) and progressing towards more formal language activities (writing scientific reports or explanations). With respect to science learning, the model advocated a socio-constructivist-inspired conceptual change approach while relying strongly on experiential learning. In testing the model, we wanted to know whether it produced conceptual change in students and whether, at the same time, their scientific vocabulary was enriched. We also sought to understand how students experienced their learning within this pedagogical model. A fifth-grade class at the Grande-Digue school, in southeastern New Brunswick, took part in the trial of the model by studying the local salt marshes. In initial interviews, we noticed that students' knowledge of salt marshes was limited. Although they were aware that marshes are natural places, they could not necessarily describe them precisely. We also found that students mostly used common words (plants, birds, insects) to describe the marsh. The results obtained indicate that students progressed in their conceptions of the marsh. Following the pedagogical intervention, they could describe the marsh in a way comparable to scientists, drawing on scientific terms (smooth cordgrass, detritus, greater yellowlegs). In our view, the students' learning is explained above all by the juxtaposition, within the pedagogical model, of the language elements with an experiential conceptual-change approach. During this process, students questioned themselves a great deal, wrote down their reflections, discussed their concerns and consulted documents. These language activities took place directly in the marsh as well as after visits to it, so the possibility of discovery was real for them. These different elements combined to create strong motivation, and together they enabled both conceptual and language development. The pedagogical model tested could thus prove very fruitful with students in minority-language settings.
Zancopé, Bruna R; Rodrigues, Lívia P; Parisotto, Thais M; Steiner-Oliveira, Carolina; Rodrigues, Lidiany K A; Nobre-dos-Santos, Marinês
2016-04-01
This study evaluated whether carbon dioxide (CO2, λ = 10.6 μm) laser irradiation combined with acidulated phosphate fluoride (APF) gel application enhances "CaF2" uptake by demineralized enamel specimens (DES) and inhibits enamel lesion progression. Two studies were conducted in which DES were subjected to APF gel with or without CO2 laser irradiation (11.3 or 20.0 J/cm2, 0.4 or 0.7 W) performed before, during, or after gel application. In study 1, 165 DES were allocated to 11 groups. Fluoride as "CaF2-like material" formed on enamel was determined in 100 DES (n = 10/group), and the surface morphologies of 50 specimens were evaluated by scanning electron microscopy (SEM) before and after "CaF2" extraction. In study 2, 165 DES (11 groups, n = 15), subjected to the same treatments as in study 1, were further subjected to a pH-cycling model to simulate a high cariogenic challenge. The progression of demineralization in DES was evaluated by cross-sectional microhardness and polarized light microscopy analyses. Laser irradiation at 11.3 J/cm2 applied during APF gel application increased "CaF2" uptake on the enamel surface. Laser irradiation and APF gel alone each arrested lesion progression compared with the control (p < 0.05). Areas of melting, fusion, and cracks were observed. CO2 laser irradiation combined with a single APF application enhanced "CaF2" uptake on the enamel surface, and a synergistic effect was found. However, regarding the inhibition of caries lesion progression, no synergistic effect could be demonstrated. In conclusion, the results have shown that irradiation with specific laser parameters significantly enhanced CaF2 uptake by demineralized enamel and inhibited lesion progression.
Nie, Lei; Hu, Mingming; Yan, Xu; Guo, Tingting; Wang, Haibin; Zhang, Sheng; Qu, Haibin
2018-05-03
This case study described a successful application of quality by design (QbD) principles to the development of a coupling process for insulin degludec. Failure mode and effects analysis (FMEA) was first used to identify critical process parameters (CPPs). Five CPPs, including coupling temperature (Temp), pH of the desB30 solution (pH), reaction time (Time), desB30 concentration (Conc), and molar equivalent of ester per mole of desB30 insulin (MolE), were then investigated using a fractional factorial design. The curvature effect was significant, indicating that second-order models were required. The design was then augmented with star and center points to form a central composite design. Regression models were developed from these experimental data to predict the purity and yield of predegludec. The R2 and adjusted R2 values were higher than 96% and 93%, respectively, for the two models, and the Q2 values were above 80%, indicating good predictive ability. MolE was found to be the most significant factor affecting both the yield and the purity of predegludec. Temp, pH, and Conc were also significant for predegludec purity, while Time noticeably influenced the yield model. The multi-dimensional design space and normal operating region (NOR), with a robust setpoint, were determined using a probability-based Monte Carlo simulation method. Verification experiments showed that the design space was reliable and effective. This study enriches the understanding of the acetylation process and is instructive for other complex operations in biopharmaceutical engineering.
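A minimal sketch of the probability-based Monte Carlo design-space step described above might look as follows (the response-surface coefficients, noise levels and specification limits are hypothetical placeholders, not the fitted models from the study):

import numpy as np

rng = np.random.default_rng(42)

def purity(temp, molE):   # hypothetical quadratic response surface (coded units)
    return 95.0 + 1.2 * molE - 0.8 * temp - 0.9 * molE**2 + rng.normal(0, 0.3)

def yield_(temp, molE):   # hypothetical quadratic response surface (coded units)
    return 80.0 + 2.0 * molE + 0.5 * temp - 1.1 * molE**2 + rng.normal(0, 0.5)

def prob_in_spec(temp, molE, n=2000, purity_min=94.0, yield_min=78.0):
    """Probability that both responses meet specification at a given setpoint."""
    ok = [(purity(temp, molE) >= purity_min) and (yield_(temp, molE) >= yield_min)
          for _ in range(n)]
    return np.mean(ok)

# Scan candidate setpoints (coded -1..+1) and keep those meeting a 95% assurance level
for temp in np.linspace(-1, 1, 5):
    for molE in np.linspace(-1, 1, 5):
        p = prob_in_spec(temp, molE)
        if p >= 0.95:
            print(f"temp={temp:+.1f}, MolE={molE:+.1f}: P(in spec)={p:.2f}")

The set of setpoints passing the probability threshold outlines the design space, and the NOR with a robust setpoint is then chosen inside it.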
3D Neutronic Analysis in MHD Calculations at ARIES-ST Fusion Reactors Systems
NASA Astrophysics Data System (ADS)
Hançerliogulları, Aybaba; Cini, Mesut
2013-10-01
In this study, we developed new models for the liquid first wall (FW) state in ARIES-ST fusion reactor systems. ARIES-ST is a 1,000 MWe fusion reactor system based on a low-aspect-ratio ST plasma. In this article, we analyzed the characteristic properties of magnetohydrodynamics (MHD) and the heat transfer conditions by using Monte-Carlo simulation methods (ARIES Team et al. in Fusion Eng Des 49-50:689-695, 2000; Tillack et al. in Fusion Eng Des 65:215-261, 2003). In fusion applications, liquid metals are traditionally considered to be the best working fluids. The working liquid must be a lithium-containing medium in order to breed enough tritium that the plasma is self-sustained and fusion remains a renewable energy source. For Flibe free-surface flows, the MHD effects caused by interaction with the mean flow are negligible, while a fairly uniform flow thickness can be maintained throughout the reactor, based on 3-D MHD calculations. In this study, neutronic parameters, that is to say, the energy multiplication factor, radiation, heat flux and fissile fuel breeding, were investigated for the fusion reactor with various thorium and uranium molten salts. A sufficient amount of tritium is needed for the reactor to sustain itself. While a tritium breeding ratio (TBR) >1.05 is required, in the ARIES-ST fusion model the TBR is >1.1, so that tritium self-sufficiency is maintained for DT fusion systems (Starke et al. in Fusion Energ Des 84:1794-1798, 2009; Najmabadi et al. in Fusion Energ Des 80:3-23, 2006).
NASA Astrophysics Data System (ADS)
Pollender-Moreau, Olivier
Within a conceptual framework, this document presents a chaining method linking the various steps required to simulate an aircraft from its geometric data and mass properties. Using the case of the Hawker 800XP business jet from Hawker Beechcraft, it demonstrates, through data, a batch-processing workflow and a simulation platform, how to (1) model the geometry of an aircraft as several surfaces, and (2) compute the aerodynamic forces using a technique known as
Genuis, Emerson D; Doan, Quynh
2013-11-01
Providing patient care and medical education are both important missions of teaching hospital emergency departments (EDs). With medical school enrollment rising and ED crowding becoming an increasingly prevalent issue, it is important for both pediatric EDs (PEDs) and general EDs to find a balance between these two potentially competing goals. The objective was to determine how the number of trainees in a PED affects patient wait time, total ED length of stay (LOS), and rates of patients leaving without being seen (LWBS) for PED patients overall and stratified by acuity level as defined by the Pediatric Canadian Triage and Acuity Scale (CTAS), using discrete event simulation (DES) modeling. A DES model of an urban tertiary care PED, which receives approximately 40,000 visits annually, was created and validated. Thirteen different trainee schedules, which ranged from averaging zero to six trainees per shift, were input into the DES model, and the outcome measures were determined using the combined output of five model iterations. An increase in LOS of approximately 7 minutes was noted to be associated with each additional trainee per attending emergency physician working in the PED. The relationship between the number of trainees and wait time varied with patients' level of acuity and with the degree of PED utilization. Patient wait time decreased as the number of trainees increased for low-acuity visits and when the PED was not operating at full capacity. With rising numbers of trainees, the PED LWBS rate decreased in the whole department and in the CTAS 4 and 5 patient groups, but it rose in patients triaged CTAS 3 or higher. A rising number of trainees was not associated with any change to flow outcomes for CTAS 1 patients. The results of this study demonstrate that trainees in PEDs have an impact mainly on patient LOS and that the effect on wait time differs between patients presenting with varying degrees of acuity. These findings will assist PEDs in finding a balance between providing high-quality medical education and timely patient care. © 2013 by the Society for Academic Emergency Medicine.
Numerical Simulation of Adaptive Control Application to Unstable Solid Rocket Motors
2001-06-01
NASA Astrophysics Data System (ADS)
Harvazinski, Matthew Evan
Self-excited combustion instabilities have been studied using a combination of two- and three-dimensional computational fluid dynamics (CFD) simulations. This work was undertaken to assess the ability of CFD simulations to generate the high-amplitude resonant combustion dynamics without external forcing or a combustion response function. Specifically, detached eddy simulations (DES), which allow for significantly coarser grid resolutions in wall bounded flows than traditional large eddy simulations (LES), were investigated for their capability of simulating the instability. A single-element laboratory rocket combustor which produces self-excited longitudinal instabilities is used for the configuration. The model rocket combustor uses an injector configuration based on practical oxidizer-rich staged-combustion devices; a sudden expansion combustion section; and uses decomposed hydrogen peroxide as the oxidizer and gaseous methane as the fuel. A better understanding of the physics has been achieved using a series of diagnostics. Standard CFD outputs like instantaneous and time averaged flowfield outputs are combined with other tools, like the Rayleigh index to provide additional insight. The Rayleigh index is used to identify local regions in the combustor which are responsible for driving and damping the instability. By comparing the Rayleigh index to flowfield parameters it is possible to connect damping and driving to specific flowfield conditions. A cost effective procedure to compute multidimensional local Rayleigh index was developed. This work shows that combustion instabilities can be qualitatively simulated using two-dimensional axisymmetric simulations for fuel rich operating conditions. A full three-dimensional simulation produces a higher level of instability which agrees quite well with the experimental results. In addition to matching the level of instability the three-dimensional simulation also predicts the harmonic nature of the instability that is observed in experiments. All fuel rich simulations used a single step global reaction for the chemical kinetic model. A fuel lean operating condition is also studied and has a lower level of instability. The two-dimensional results are unable to provide good agreement with experimental results unless a more expensive four-step chemical kinetic model is used. The three-dimensional simulation is able to predict the harmonic behavior but fails to capture the amplitude of the instability observed in the companion experiment, instead predicting lower amplitude oscillations. A detailed analysis of the three-dimensional results on a single cycle shows that the periodic heat release commonly associated with combustion instability can be interpreted to be a result of the time lag between the instant the fuel is injected and when it is burned. The time lag is due to two mechanisms. First, methane present near the backstep can become trapped and transported inside shed vortices to the point of combustion. The second aspect of the time lag arises due to the interaction of the fuel with upstream-running pressure waves. As the wave moves past the injection point the flow is temporarily disrupted, reducing the fuel flow into the combustor. A comparison between the fuel lean and fuel rich cases shows several differences. Whereas both cases can produce instability, the fuel-rich case is measurably more unstable. Using the tools developed differences in the location of the damping, and driving regions are evident. 
Because the peak driving area moves upstream of the damping region, the level of instability is lower in the fuel-lean case. The location of the mean heat release is also important: with the mean heat release located adjacent to the vortex impingement point, a higher level of instability is observed for the fuel-rich case. This research shows that DES-based instability modeling can be a valuable tool in the study of combustion instability. The lower grid-size requirement makes DES-based modeling a potential candidate for the modeling of full-scale rocket engines. Whereas three-dimensional simulations may be necessary for very good agreement, two-dimensional simulations allow efficient parametric investigation and tool development. The insights obtained from the simulations offer the possibility that their results can be used in the design of future engines to exploit damping and reduce driving.
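For reference, the local Rayleigh index used throughout to identify driving and damping regions is commonly defined as the time-averaged correlation of pressure and heat-release fluctuations (the thesis may use a normalized variant):

R(\mathbf{x}) = \frac{1}{T}\int_{T} p'(\mathbf{x},t)\,q'(\mathbf{x},t)\,dt,

with R > 0 marking regions where fluctuations are in phase (driving the instability) and R < 0 marking out-of-phase, damping regions.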
Development of a controller for a river hydrokinetic turbine, and optimization =
NASA Astrophysics Data System (ADS)
Tetrault, Philippe
Following the development of renewable energies, this study provides a theoretical basis for the fundamental principles required for the proper operation and implementation of a river hydrokinetic turbine. The problem posed by this new type of device is first presented. The electric machine used in the application, namely the permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing at the same time the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations in order to understand the available modulation strategies. A brief history of these strategies is given before focusing on space-vector modulation, which is the strategy used for the present application. The different modules are assembled in a Matlab simulation to confirm their proper operation and to compare the simulation results with the theoretical calculations. Various algorithms for tracking and maintaining an optimal operating point are presented. The behaviour of the river is studied in order to assess the magnitude of the disturbances the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.
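For background, the rotating-frame (dq) equations on which such a permanent-magnet synchronous machine model is built are the standard ones (sign conventions vary between texts, and the thesis's exact parameter values are not reproduced here):

v_d = R_s i_d + L_d\,\frac{di_d}{dt} - \omega_e L_q i_q,
\qquad
v_q = R_s i_q + L_q\,\frac{di_q}{dt} + \omega_e\!\left(L_d i_d + \lambda_m\right),
\qquad
T_e = \frac{3}{2}\,p\left[\lambda_m i_q + (L_d - L_q)\,i_d i_q\right],

where R_s is the stator resistance, L_d and L_q the direct- and quadrature-axis inductances, lambda_m the permanent-magnet flux linkage, omega_e the electrical speed and p the number of pole pairs. The space-vector modulator then synthesizes the commanded v_d, v_q with the two-level full-bridge inverter.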
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluquet, Alain
This thesis studies electron identification techniques in the D0 experiment at the Fermi laboratory near Chicago. The first chapter recalls some of the physics motivations of the experiment: jet physics, electroweak physics and top-quark physics. The D0 detector is described in detail in the second chapter. The third chapter studies the electron identification algorithms (trigger, reconstruction, filters) and their performance. The fourth chapter is devoted to the transition radiation detector (TRD) built by the Departement d'Astrophysique, de Physique des Particules, de Physique Nucleaire et d'Instrumentation Associee at Saclay; it presents its principle, its calibration and its performance. Finally, the last chapter describes the method developed for analysing data with the TRD and illustrates its use on a few examples: jets simulating electrons, and the top-quark search.
NASA Astrophysics Data System (ADS)
Levitz, P.; Korb, J.-P.; Bryant, R. G.
1999-10-01
We address the question of probing fluid dynamics in disordered interfacial media by pulsed field gradient (PFG) and magnetic relaxation dispersion (MRD) techniques. We show that the PFG method is useful for separating the effects of morphology from those of connectivity in disordered macroporous media. We propose simulations of molecular dynamics and of spectral density functions, J(ω), in a reconstructed mesoporous medium for different limiting conditions at the pore surface. An algebraic form is found for J(ω) in the presence of surface diffusion combined with local exploration of the pore network, and a logarithmic form of J(ω) is found in the presence of pure surface diffusion. We present MRD experiments for water and acetone in calibrated mesoporous media that support the main results of our simulations and theories.
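For readers less familiar with the notation, J(ω) is the Fourier transform of the time autocorrelation function G(t) of the fluctuating spin interactions, and the field-cycling (MRD) experiment probes it through the Larmor-frequency dependence of the relaxation rate; a standard like-spin dipolar form, with prefactors omitted and the exact combination depending on the mechanism, is:

J(\omega) = \int_{-\infty}^{+\infty} G(t)\,e^{-i\omega t}\,dt,
\qquad
\frac{1}{T_1(\omega_0)} \propto J(\omega_0) + 4\,J(2\omega_0),

so measuring 1/T_1 as a function of the Larmor frequency ω_0 maps out the algebraic or logarithmic behaviour of J(ω) discussed above.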
A proposed model for economic evaluations of major depressive disorder.
Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi
2012-08-01
In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling technique and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision-analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Given its greater flexibility with respect to handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of such studies.
Thermal optimization of injection molds built by generative (additive) processes
NASA Astrophysics Data System (ADS)
Boillat, E.; Glardon, R.; Paraschivescu, D.
2002-12-01
One of the most remarkable capabilities of generative (additive) production processes, such as selective laser sintering, is their ability to build injection molds equipped directly with conformal cooling channels perfectly adapted to the cavities. For the injection-molding industry to take full advantage of this new opportunity, mold makers need simulation software able to evaluate the productivity and quality gains achievable with better-adapted cooling systems. Such software should also be able, where appropriate, to design the optimal cooling system in situations where the injection cavity is complex. Given the lack of available tools in this area, the purpose of this article is to propose a simple model of injection molds. This model makes it possible to compare different cooling strategies and can be coupled with an optimization algorithm.
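As a purely illustrative example of the kind of quantity such a mold-cooling model trades in, the classical one-dimensional slab estimate of cooling time can be coded in a few lines (material values are hypothetical placeholders, and this is not the model proposed in the article):

import math

def cooling_time(thickness_m, alpha_m2_s, t_melt, t_mold, t_eject):
    """Classical slab-conduction estimate of the time for the part centreline
    to reach the ejection temperature."""
    return (thickness_m**2 / (math.pi**2 * alpha_m2_s)) * math.log(
        (4.0 / math.pi) * (t_melt - t_mold) / (t_eject - t_mold)
    )

# Hypothetical ABS-like part: 2 mm wall, thermal diffusivity ~ 9e-8 m^2/s
baseline  = cooling_time(2e-3, 9e-8, t_melt=230, t_mold=60, t_eject=95)
conformal = cooling_time(2e-3, 9e-8, t_melt=230, t_mold=45, t_eject=95)  # cooler cavity wall
print(f"conventional cooling: {baseline:.1f} s, conformal cooling: {conformal:.1f} s")

A full mold model replaces the single effective mold-wall temperature with the temperature field produced by the actual channel layout, which is where conformal channels and the optimization algorithm come in.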
Mohiuddin, Syed
2014-08-01
Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies using model-based economic evaluations of pharmacotherapeutics in BD patients were identified. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of modelling techniques in future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility in depicting disease progression over time. However, depending on the research question, modelling techniques other than DES might also be appropriate in some cases.
RF wave simulation for cold edge plasmas using the MFEM library
NASA Astrophysics Data System (ADS)
Shiraiwa, S.; Wright, J. C.; Bonoli, P. T.; Kolev, T.; Stowell, M.
2017-10-01
A newly developed generic electromagnetic (EM) simulation tool for modeling RF wave propagation in SOL plasmas is presented. The primary motivation of this development is to extend the domain-partitioning approach for incorporating arbitrarily shaped SOL plasmas and antennas into the TORIC core ICRF solver, previously demonstrated in 2D geometry [S. Shiraiwa et al., "HISTORIC: extending core ICRF wave simulation to include realistic SOL plasmas", Nucl. Fusion, in press], to larger and more complicated simulations that include a realistic 3D antenna and an integrated RF-rectified sheath potential model. Such an extension requires a scalable, high-fidelity 3D edge plasma wave simulation. We used the MFEM [
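For orientation, edge RF solvers of this kind discretize the frequency-domain wave equation driven by an antenna current, with the cold SOL plasma entering through a dielectric tensor (conventions for the time dependence and source term vary between codes, and this is the generic form rather than the specific weak form assembled with MFEM here):

\nabla\times\nabla\times\mathbf{E} \;-\; \frac{\omega^{2}}{c^{2}}\,\boldsymbol{\epsilon}_{\mathrm{cold}}\cdot\mathbf{E} \;=\; i\,\omega\,\mu_0\,\mathbf{J}_{\mathrm{ant}},

with the domain-partitioning approach splitting the problem between the core spectral solver and the finite-element SOL region and matching fields at the shared boundary.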
Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2012-01-01
Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.
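For reference, the hover figure of merit quoted above is conventionally defined from the rotor thrust and torque coefficients as

FM = \frac{C_T^{3/2}}{\sqrt{2}\,C_Q},

i.e. the ratio of ideal induced power to the actual power absorbed, which is why small errors in the predicted torque translate directly into FM error.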
SIERRA/Aero Theory Manual Version 4.46.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal/Fluid Team
2017-09-01
SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.
SIERRA/Aero Theory Manual Version 4.44
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal /Fluid Team
2017-04-01
SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.
2014-01-01
measure their experience. The group met with experts in the associated technologies: virtual environments, augmented reality, virtual agents..., training, ergonomics and human performance. The reflections and conclusions arising from these meetings and discussions are summarized in...with respect to EVS/ET (including the characteristics of the user, the mission and the environment), and training management is a
NASA Astrophysics Data System (ADS)
Benard, Pierre
We present a study of the magnetic fluctuations of the normal state of the superconducting copper oxide La_{2-x}Sr_{x}CuO_4. The compound is modelled by the two-dimensional Hubbard Hamiltonian with a second-neighbour hopping term (tt'U model). The model is studied using the Generalized Random Phase Approximation (GRPA), including the renormalization of the Hubbard interaction by Brueckner-Kanamori diagrams. In the approach presented in this work, the maxima of the magnetic structure factor observed in neutron-scattering experiments are associated with the lattice 2k_F anomalies of the structure factor of the non-interacting two-dimensional electron gas. These anomalies originate from scattering between particles located at points of the Fermi surface where the Fermi velocities are tangent, and they lead to divergences whose nature depends on the geometry of the Fermi surface in the vicinity of these points. These results are then applied to the tt'U model, of which the usual tU Hubbard model is a special case. In most cases, the interactions do not determine the position of the maxima of the structure factor. The role of the interaction is to increase the intensity of the structures of the magnetic structure factor associated with the magnetic instability of the system; these structures are often already present in the imaginary part of the non-interacting susceptibility. The intensity ratio between the absolute maxima and the other structures of the magnetic structure factor makes it possible to determine the ratio U_rn/U_c, which measures the proximity of a magnetic instability. The phase diagram is then studied in order to delimit the range of validity of the approximation. After discussing the collective modes and the effect of a non-zero imaginary part of the self-energy, the origin of the energy scale of the magnetic fluctuations is examined. It is then shown that the three-band model predicts the same positions for the structures of the magnetic structure factor as the one-band model, in the limit where the hybridization of the oxygen orbitals of the Cu-O_2 planes and the second-neighbour hopping amplitude vanish. It is further observed that the effect of the hybridization of the oxygen orbitals is well modelled by the second-neighbour hopping term. Even though they correctly describe the qualitative behaviour of the maxima of the magnetic structure factor, the three-band and one-band models do not yield positions of these structures consistent with the experimental measurements if one assumes a rigid band, that is, Hamiltonian parameters independent of the strontium concentration. This may be caused by a dependence of the Hamiltonian parameters on the strontium concentration. Finally, the results are compared with neutron-scattering experiments and with other theories, in particular those of Littlewood et al. (1993) and Q. Si et al. (1993). The comparison with the experimental results for the lanthanum compound suggests that the Fermi liquid has a disjoint Fermi surface and that it is located near an incommensurate magnetic instability.
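For readers less familiar with the GRPA, the interacting spin susceptibility underlying the computed structure factor has the standard RPA form, with the measured structure factor following from the fluctuation-dissipation theorem (prefactors omitted; the renormalized interaction plays the role of U here):

\chi(\mathbf{q},\omega) = \frac{\chi_0(\mathbf{q},\omega)}{1 - U\,\chi_0(\mathbf{q},\omega)},
\qquad
S(\mathbf{q},\omega) \propto \frac{\operatorname{Im}\chi(\mathbf{q},\omega)}{1 - e^{-\hbar\omega/k_B T}},

which makes explicit why features already present in Im chi_0, such as the 2k_F anomalies, are amplified rather than created by the interaction.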
NASA Astrophysics Data System (ADS)
Roy, G.; Buy, F.; Llorca, F.
2002-12-01
The study presented here is part of an effort leading to the construction of an analytical or semi-analytical elastic-viscoplastic damage model applicable to the loadings encountered in violent impact configurations that generate ductile spallation. Accounting for compressibility and micro-inertia effects is essential for modelling the growth phase. Global numerical simulations of the structure and local simulations at the scale of the heterogeneities make it possible to evaluate the loading levels in the zones likely to be damaged, to assess analytical criteria for damage nucleation, and to understand the interaction mechanisms between defects. Micro-inertial and compressibility effects are thus highlighted in the nucleation and coalescence phases of the micro-defects. This is a non-exhaustive illustration of work undertaken at CEA Valduc on tantalum within the framework of a thesis [10]. A materials programme in CEA-CNRS partnership on the multi-scale modelling of the behaviour of structures has also been initiated in this context.
Formation and evolution of galaxies: the role of their environment
NASA Astrophysics Data System (ADS)
Boselli, Alessandro
2016-08-01
The new panoramic detectors on large telescopes as well as the most performing space missions allowed us to complete large surveys of the Universe at different wavelengths and thus study the relationships between the different galaxy components at various epochs. At the same time, the increasing computing power allowed us to simulate the evolution of galaxies since their formation at an angular resolution never reached so far. In this article I will briefly describe how the comparison between the most recent observations and the predictions of models and simulations changed our view on the process of galaxy formation and evolution.
NASA Astrophysics Data System (ADS)
Mainberger, Sebastian; Kindlein, Moritz; Bezold, Franziska; Elts, Ekaterina; Minceva, Mirjana; Briesen, Heiko
2017-06-01
Deep eutectic solvents (DES) have gained a reputation as inexpensive and easy to handle ionic liquid analogues. This work employs molecular dynamics (MD) to simulate a variety of DES. The hydrogen bond acceptor (HBA) choline chloride was paired with the hydrogen bond donors (HBD) glycerol, 1,4-butanediol, and levulinic acid. Levulinic acid was also paired with the zwitterionic HBA betaine. In order to evaluate the reliability of data MD simulations can provide for DES, two force fields were compared: the Merck Molecular Force Field and the General Amber Force Field with two different sets of partial charges for the latter. The force fields were evaluated by comparing available experimental thermodynamic and transport properties against simulated values. Structural analysis was performed on the eutectic systems and compared to non-eutectic compositions. All force fields could be validated against certain experimental properties, but performance varied depending on the system and property in question. While extensive hydrogen bonding was found for all systems, details about the contribution of individual groups strongly varied among force fields. Interaction potentials revealed that HBA-HBA interactions weaken linearly with increasing HBD ratio, while HBD-HBD interactions grew disproportionally in magnitude, which might hint at the eutectic composition of a system.
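As an illustration of the kind of post-processing such MD studies rely on, the sketch below applies a common geometric hydrogen-bond criterion to a single hypothetical donor-hydrogen-acceptor triplet (the coordinates are invented and the distance/angle cutoffs are typical literature choices, not those of this paper):

import numpy as np

def is_hbond(donor, hydrogen, acceptor, r_cut=3.5, angle_cut=150.0):
    """Donor-acceptor distance (Angstrom) and D-H...A angle (degrees) criterion."""
    r_da = np.linalg.norm(acceptor - donor)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return (r_da <= r_cut) and (angle >= angle_cut)

# Hypothetical O-H...Cl geometry (Angstrom): a hydroxyl donating to a chloride ion
donor    = np.array([0.00, 0.00, 0.00])   # O of a glycerol hydroxyl
hydrogen = np.array([0.96, 0.00, 0.00])   # H bonded to that O
acceptor = np.array([3.10, 0.10, 0.00])   # Cl- of choline chloride
print(is_hbond(donor, hydrogen, acceptor))  # True for this near-linear contact

Counting such contacts frame by frame, per donor and acceptor group, is what produces the hydrogen-bond statistics whose force-field dependence the study discusses.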
Development of a High Level Architecture Federation of Ship Replenishment at Sea
2011-10-01
use a simulation infrastructure called High Level Architecture (HLA) in order to provide joint simulation environments... provide a simulation environment that models the interactions between the various components in order to simulate the conditions that lead to
Electromagnetic Gauge Study of Laser-Induced Shock Waves in Aluminium Alloys
NASA Astrophysics Data System (ADS)
Peyre, P.; Fabbro, R.
1995-12-01
The laser-shock behaviour of three industrial aluminum alloys has been analyzed with an Electromagnetic Gauge Method (EMV) for measuring the velocity of the back free surface of thin foils subjected to plane laser irradiation. Surface pressure, shock decay in depth, and the Hugoniot Elastic Limits (HEL) of the materials were investigated with increasing foil thickness. First, surface peak pressure values as a function of laser power density showed good agreement with conventional piezoelectric quartz measurements. Second, comparisons of the experimental results with computer simulations, using a 1D hydrodynamic Lagrangian finite-difference code, also showed good agreement. Lastly, the HEL values were compared with static and dynamic compressive tests in order to estimate the effects of a very large range of strain rates (10^{-3} s^{-1} to 10^6 s^{-1}) on the mechanical properties of the alloys. This article summarizes a recent study on the characterization of the laser-shock behaviour of three aluminum alloys widely used in industry, based on the so-called electromagnetic gauge method. This method measures the material velocities induced at the rear face of plates of varying thickness by a laser impact. Plate velocity measurements first allowed us to verify the validity of the surface impact pressures obtained, by comparing them with earlier results from quartz-gauge measurements. On plates of increasing thickness, we characterized the in-depth attenuation of the shock waves in the alloys studied and measured their elastic limits under shock (Hugoniot elastic limits). The results were successfully compared with numerical simulations using a one-dimensional Lagrangian code. Finally, the measured Hugoniot elastic limit values allowed us to plot the evolution of the plastic flow stresses as a function of strain rate for values between 10^{-3} s^{-1} and 10^6 s^{-1}.
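Two standard shock-physics relations connect the quantities reported above; they are generic (not specific to the alloys studied) and are given here only to show how an HEL is extracted from a free-surface velocity record:

```latex
\sigma_{HEL} \approx \tfrac{1}{2}\,\rho_0\, c_L\, u_{fs}^{HEL},
\qquad
\sigma_{HEL} = \frac{1-\nu}{1-2\nu}\,Y_{dyn},
```

where ρ_0 is the initial density, c_L the longitudinal (elastic) wave speed, u_fs^HEL the free-surface velocity at the elastic precursor (the factor 1/2 comes from free-surface velocity doubling), ν the Poisson ratio, and Y_dyn the dynamic yield strength inferred from the HEL.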
Nonlinear Dynamical Model of a Soft Viscoelastic Dielectric Elastomer
NASA Astrophysics Data System (ADS)
Zhang, Junshi; Chen, Hualing; Li, Dichen
2017-12-01
Actuated by alternating stimulation, dielectric elastomers (DEs) exhibit complicated nonlinear vibration, suggesting a potential application as dynamic electromechanical actuators. As is well known, the dynamic properties of a vibrational system, including a DE system, are significantly affected by its geometrical dimensions. In this article, a nonlinear dynamical model is derived to investigate the geometrical effects on the dynamic properties of viscoelastic DEs. DEs with square and arbitrary rectangular geometries are considered. In addition, the effects of tensile forces on the dynamic performance of rectangular DEs with comparably small and large geometrical dimensions are explored. Phase paths and Poincaré maps are utilized to detect the periodicity of the nonlinear vibrations of the DEs. The resonance characteristics of DEs incorporating geometrical effects are also investigated. The results indicate that the dynamic properties of DEs, including deformation response, vibrational periodicity, and resonance, are tuned as the geometrical dimensions vary.
Parametrization of Non-Standard Effects in Electroweak Phenomenology
NASA Astrophysics Data System (ADS)
Maksymyk, Ivan
This thesis by articles deals with the parametrization of non-standard effects in electroweak physics. In each analysis, we added several non-standard operators to the Lagrangian of the electroweak Standard Model. The non-standard operators describe new effects arising from an unspecified underlying model. In principle, the number of non-standard operators that can be included in such an analysis is unlimited, but for a specific class of underlying models the non-standard effects can be described by a reasonable number of operators. In each analysis we developed expressions for electroweak observables as functions of the coefficients of the new operators. By performing a statistical fit to a set of precise experimental data, we obtained phenomenological constraints on these coefficients. In "Model-Independent Global Constraints on New Physics", we adopted very weak assumptions about the underlying models. We truncated the effective Lagrangian at dimension five (inclusive). Aiming for the greatest possible generality, we allowed interactions that violate the discrete symmetries (C, P, and CP) as well as flavour-non-conserving interactions. The effective Lagrangian contains about forty new operators. We found that, for most of the coefficients of the new operators, the constraints are fairly tight (2 or 3%), but there are interesting exceptions. In "Bounding Anomalous Three-Gauge-Boson Couplings", we determined phenomenological constraints on deviations of the three-gauge-boson couplings from the interactions prescribed by the Standard Model. To do so, we computed the indirect contributions of the non-standard three-gauge-boson couplings to low-energy observables. Since the effective Lagrangian is non-renormalizable, certain technical difficulties arise: to regularize the Feynman integrals, researchers have generally used the cutoff method, but this method can lead to incorrect results. We opted for an alternative technique: dimensional regularization and "minimal subtraction with decoupling". In "Beyond S, T and U" we present the STUVWX formalism, which is an extension of the STU formalism of Peskin and Takeuchi. These formalisms are based on the hypothesis that the underlying theory manifests itself through gauge-boson self-energies. This type of effect is called "oblique". At the basis of the STU formalism lies the assumption that the scale of new physics, M, is much larger than q, the scale at which the measurements are made. As a result, the oblique effects are parametrized by the three variables S, T and U. In the STUVWX formalism, by contrast, we allowed for the possibility that M ~ q. In "A Global Fit to Extended Oblique Parameters", we performed two statistical fits to a set of high-precision electroweak measurements. In the first fit, we set V=W=X=0, thereby obtaining constraints on the set {S,T,U}. In the second fit, we included all six parameters.
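To make the oblique-parameter counting above concrete: when new physics enters only through the gauge-boson self-energies and the new-physics scale M is much larger than the measurement scale q, each self-energy can be expanded to linear order in q^2, which is what reduces the description to S, T and U. A schematic sketch of that expansion (conventions and normalizations vary between papers and are not the thesis's exact ones):

```latex
\Pi_{XY}(q^2) \simeq \Pi_{XY}(0) + q^2\,\Pi'_{XY}(0) + \dots,
\qquad XY \in \{WW,\; ZZ,\; Z\gamma,\; \gamma\gamma\}.
```

Three combinations of the leading coefficients are absorbed into the measured inputs α, G_F and M_Z; the surviving order-q^2 combinations define S, T and U. The additional parameters V, W and X account for the fact that, when M ~ q, the self-energies evaluated at q^2 = M_Z^2 or M_W^2 are no longer well approximated by this linear extrapolation from q^2 = 0.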
Kittipittayakorn, Cholada; Ying, Kuo-Ching
2016-01-01
Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
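As an illustration of the discrete-event half of the hybrid approach described above, the sketch below is a minimal event-list simulation of a clinic with mixed walk-in and scheduled arrivals. It is not the authors' model: the arrival rates, service time, and single-physician assumption are placeholders chosen only to show the mechanics that an agent-based layer would drive.

```python
import heapq
import random

# Minimal discrete-event simulation of a mixed-registration clinic.
# All parameters are hypothetical placeholders, not taken from the study.
SIM_MINUTES = 8 * 60      # one clinic day
WALKIN_MEAN_GAP = 6.0     # mean minutes between walk-in arrivals
SCHEDULE_SLOT = 10.0      # scheduled patients arrive every 10 minutes
SERVICE_MEAN = 7.0        # mean consultation time in minutes

def simulate(seed=0):
    random.seed(seed)
    events, seq = [], 0   # event list: (time, sequence, kind, payload)

    def push(time, kind, payload=None):
        nonlocal seq
        heapq.heappush(events, (time, seq, kind, payload))
        seq += 1

    # Pre-load all arrivals for the day.
    t = random.expovariate(1.0 / WALKIN_MEAN_GAP)
    while t < SIM_MINUTES:
        push(t, "arrival", "walk-in")
        t += random.expovariate(1.0 / WALKIN_MEAN_GAP)
    t = SCHEDULE_SLOT
    while t < SIM_MINUTES:
        push(t, "arrival", "scheduled")
        t += SCHEDULE_SLOT

    queue, waits = [], []   # waiting patients: (arrival_time, kind)
    doctor_busy = False

    def start_service(now):
        nonlocal doctor_busy
        arrival_time, _kind = queue.pop(0)
        waits.append(now - arrival_time)
        doctor_busy = True
        push(now + random.expovariate(1.0 / SERVICE_MEAN), "departure")

    while events:
        now, _, kind, payload = heapq.heappop(events)
        if kind == "arrival":
            queue.append((now, payload))
            if not doctor_busy:
                start_service(now)
        else:  # departure: free the physician and serve the next patient
            doctor_busy = False
            if queue:
                start_service(now)

    return sum(waits) / len(waits) if waits else 0.0

if __name__ == "__main__":
    print(f"mean wait: {simulate():.1f} minutes")
```

In the study the queueing logic is coupled to an agent-based layer that models patient behaviour; the sketch only shows the event-list core that such a coupling would sit on top of.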
Towards an entropy-based detached-eddy simulation
NASA Astrophysics Data System (ADS)
Zhao, Rui; Yan, Chao; Li, XinLiang; Kong, WeiXuan
2013-10-01
A concept of entropy increment ratio (s̄) is introduced for compressible turbulence simulation through a series of direct numerical simulations (DNS). s̄ represents the dissipation rate per unit mechanical energy with the benefit of independence of freestream Mach numbers. Based on this feature, we construct the shielding function f_s to describe the boundary layer region and propose an entropy-based detached-eddy simulation method (SDES). This approach follows the spirit of delayed detached-eddy simulation (DDES) proposed by Spalart et al. in 2005, but it exhibits much better behavior after their performances are compared in the following flows, namely, pure attached flow with thick boundary layer (a supersonic flat-plate flow with high Reynolds number), fully separated flow (the supersonic base flow), and separated-reattached flow (the supersonic cavity-ramp flow). The Reynolds-averaged Navier-Stokes (RANS) resolved region is reliably preserved and the modeled stress depletion (MSD) phenomenon which is inherent in DES and DDES is partly alleviated. Moreover, this new hybrid strategy is simple and general, making it applicable to other models related to the boundary layer predictions.
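For readers unfamiliar with the shielding idea mentioned above, the delayed-DES length scale referenced in the abstract blends the RANS and LES length scales through a shielding function; the SDES of this paper replaces the usual velocity-gradient-based argument of that function with one built from the entropy increment ratio s̄ (the exact form of f_s is given in the paper and is not reproduced here):

```latex
\tilde{\ell}_{DDES} = \ell_{RANS} - f_d\,\max\!\left(0,\; \ell_{RANS} - C_{DES}\,\Delta\right),
```

with f_d → 0 inside the attached boundary layer (pure RANS, which is what prevents modeled-stress depletion) and f_d → 1 away from walls (LES with filter width Δ).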
Turbulence modeling for Francis turbine water passages simulation
NASA Astrophysics Data System (ADS)
Maruzewski, P.; Hayashi, H.; Munch, C.; Yamaishi, K.; Hashii, T.; Mombelli, H. P.; Sugow, Y.; Avellan, F.
2010-08-01
Applying Computational Fluid Dynamics (CFD) to hydraulic machines requires the ability to handle turbulent flows and to take into account the effects of turbulence on the mean flow. Direct Numerical Simulation (DNS) is still not a practical candidate for hydraulic machine simulations because of its prohibitive computational cost. Large Eddy Simulation (LES), although in the same category as DNS, could be an alternative whereby only the small-scale turbulent fluctuations are modeled and the larger-scale fluctuations are computed directly. Nevertheless, Reynolds-Averaged Navier-Stokes (RANS) models have become the widespread standard basis for numerous hydraulic machine design procedures. However, for many applications involving wall-bounded flows and attached boundary layers, various hybrid combinations of LES and RANS are being considered, such as Detached Eddy Simulation (DES), whereby the RANS approximation is kept in the regions where the boundary layers are attached to the solid walls. Furthermore, the accuracy of CFD simulations is highly dependent on grid quality, in terms of grid uniformity in complex configurations. Moreover, any successful structured or unstructured CFD code has to offer a wide range of models, from classic RANS models to complex hybrid models. The aim of this study is to compare the behavior of turbulent simulations on structured and unstructured grid topologies with two different CFD codes applied to the same Francis turbine. The study is intended to outline the discrepancies encountered in predicting the wake of the turbine blades when using either the standard k-epsilon model or the SST shear-stress transport model in a steady CFD simulation. Finally, comparisons are made with experimental data from reduced-scale model measurements at the EPFL Laboratory for Hydraulic Machines.
Theory of Equilibrium Radiative Levitation in Hot White Dwarf Stars
NASA Astrophysics Data System (ADS)
Chayer, Pierre
1995-01-01
The results of new detailed radiative levitation calculations in hot white dwarfs using the TOPBASE atomic database are presented. Radiative accelerations and equilibrium abundances were computed for C, N, O, Ne, Na, Mg, Al, Si, S, Ar, Ca, and Fe on grids of hydrogen-rich and helium-rich stellar envelope models. The grid of DA models has gravities log g = 7.0, 7.5, 8.0, and 8.5, and covers effective temperatures 100,000 K >= T_eff >= 20,000 K in steps of 2,500 K. The grid of DO/DB models is similar but extends to T_eff = 130,000 K. Equilibrium abundance results for Ni in DA stars using the Kurucz database are also presented. We discuss in detail the physics included in the calculations in order to provide a sound physical understanding of radiative levitation under the conditions encountered in white dwarfs. We also discuss the shape and depth dependence of the element reservoirs created by an equilibrium between the radiative acceleration and the local effective gravity in different stellar envelopes. We emphasize the important role played in the morphology of these reservoirs by dominant ionization states in the noble-gas configuration. Our central results are presented in the form of figures showing the behavior of the estimated photospheric abundance of each element as a function of effective temperature and surface gravity. We also improve the radiative levitation calculations by using a model-atmosphere approach for the hydrogen-rich stars. We emphasize, in particular, the role of trace heavy elements that may be present in the plasma. We use a detailed monochromatic opacity table computed for a hydrogen plasma containing small amounts of C, N, O, and Fe to illustrate how the equilibrium abundances of the supported elements respond to the flux redistribution caused by the addition of this trace opaque material. We treat this problem first with an approach based on envelope models and then with an approach based on model atmospheres. We also consider two further improvements: first, a more sophisticated treatment of the redistribution of an ion's momentum following the absorption of a photon, and second, the use of a better value for the line-profile width associated with pressure broadening. Although only a handful of abundances are available from the analysis of the observations made so far, we are nevertheless able to conclude, with the help of detailed comparisons, that equilibrium radiative levitation theory fails to explain quantitatively the heavy-element abundance patterns observed in hot white dwarfs. At least one other mechanism must compete with radiative levitation and gravitational settling in the atmospheres/envelopes of hot white dwarfs. We suggest the possibility of a stellar wind or of accretion.
Denis, P; Le Pen, C; Umuhire, D; Berdeaux, G
2008-01-01
To compare the effectiveness of two treatment sequences, latanoprost-latanoprost timolol fixed combination (L-LT) versus travoprost-travoprost timolol fixed combination (T-TT), in the treatment of open-angle glaucoma (OAG) or ocular hypertension (OHT). A discrete event simulation (DES) model was constructed. Patients with either OAG or OHT were treated first-line with a prostaglandin, either latanoprost or travoprost. In case of treatment failure, patients were switched to the specific prostaglandin-timolol sequence LT or TT. Failure was defined as intraocular pressure higher than or equal to 18 mmHg at two visits. Time to failure was estimated from two randomized clinical trials. Log-rank tests were computed. Linear functions after log-log transformation were used to model time to failure. The time horizon of the model was 60 months. Outcomes included treatment failure and disease progression. Sensitivity analyses were performed. Latanoprost treatment resulted in more treatment failures than travoprost (p<0.01), and LT more than TT (p<0.01). At 60 months, the probability of starting a third treatment line was 39.2% with L-LT versus 29.9% with T-TT. On average, L-LT patients developed 0.55 new visual field defects versus 0.48 for T-TT patients. The probability of no disease progression at 60 months was 61.4% with L-LT and 65.5% with T-TT. Based on randomized clinical trial results and using a DES model, the T-TT sequence was more effective at avoiding starting a third line treatment than the L-LT sequence. T-TT treated patients developed less glaucoma progression.
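The "linear functions after log-log transformation" used above to model time to failure are consistent with a Weibull-type survival curve, since log(-log S(t)) is linear in log t for a Weibull survivor function. A small sketch of how such failure times could be drawn inside a DES (the shape and scale values are illustrative, not the fitted ones from the two trials):

```python
import math
import random

def sample_failure_time(shape, scale, rng=random):
    """Inverse-CDF draw from a Weibull: S(t) = exp(-(t/scale)**shape)."""
    u = rng.random()
    return scale * (-math.log(u)) ** (1.0 / shape)

# Illustrative comparison of two first-line treatments over a 60-month horizon.
random.seed(1)
horizon = 60.0
for label, shape, scale in [("sequence A", 1.2, 80.0), ("sequence B", 1.2, 95.0)]:
    times = [sample_failure_time(shape, scale) for _ in range(10_000)]
    failed = sum(t <= horizon for t in times) / len(times)
    print(f"{label}: {failed:.1%} fail within {horizon:.0f} months")
```

A full model would chain a second draw for the fixed-combination line after each first-line failure, which is how the probability of starting a third treatment line accumulates.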
Neonatal diethylstilbestrol exposure alters the metabolic profile of uterine epithelial cells
Yin, Yan; Lin, Congxing; Veith, G. Michael; Chen, Hong; Dhandha, Maulik; Ma, Liang
2012-01-01
Developmental exposure to diethylstilbestrol (DES) causes reproductive tract malformations, affects fertility and increases the risk of clear cell carcinoma of the vagina and cervix in humans. Previous studies on a well-established mouse DES model demonstrated that it recapitulates many features of the human syndrome, yet the underlying molecular mechanism is far from clear. Using the neonatal DES mouse model, the present study uses global transcript profiling to systematically explore early gene expression changes in individual epithelial and mesenchymal compartments of the neonatal uterus. Over 900 genes show differential expression upon DES treatment in either one or both tissue layers. Interestingly, multiple components of peroxisome proliferator-activated receptor-γ (PPARγ)-mediated adipogenesis and lipid metabolism, including PPARγ itself, are targets of DES in the neonatal uterus. Transmission electron microscopy and Oil-Red O staining further demonstrate a dramatic increase in lipid deposition in uterine epithelial cells upon DES exposure. Neonatal DES exposure also perturbs glucose homeostasis in the uterine epithelium. Some of these neonatal DES-induced metabolic changes appear to last into adulthood, suggesting a permanent effect of DES on energy metabolism in uterine epithelial cells. This study extends the list of biological processes that can be regulated by estrogen or DES, and provides a novel perspective for endocrine disruptor-induced reproductive abnormalities. PMID:22679223
Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Kirk, D.; Friedrich, O.; Amara, A.; Refregier, A.; Marian, L.; Dietrich, J. P.; Suchyta, E.; Aleksić, J.; Bacon, D.; Becker, M. R.; Bonnett, C.; Bridle, S. L.; Chang, C.; Eifler, T. F.; Hartley, W. G.; Huff, E. M.; Krause, E.; MacCrann, N.; Melchior, P.; Nicola, A.; Samuroff, S.; Sheldon, E.; Troxel, M. A.; Weller, J.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Evrard, A. E.; Neto, A. Fausti; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Zhang, Y.; DES Collaboration
2016-12-01
Shear peak statistics has gained a lot of attention recently as a practical alternative to the two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. To predict the peak counts as a function of cosmological parameters, we use a suite of N-body simulations spanning 158 models with varying Ω_m and σ_8, fixing w = -1, Ω_b = 0.04, h = 0.7 and n_s = 1, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure σ_8(Ω_m/0.3)^0.6 = 0.77 ± 0.07, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. We discuss prospects for future peak statistics analysis with upcoming DES data.
NASA Astrophysics Data System (ADS)
Yu, Hesheng; Thé, Jesse
2016-11-01
The prediction of the dispersion of air pollutants in urban areas is of great importance to public health, homeland security, and environmental protection. Computational Fluid Dynamics (CFD) has emerged as an effective tool for pollutant dispersion modelling. This paper reports and quantitatively validates the shear stress transport (SST) k-ω turbulence closure model and its transitional variant for pollutant dispersion in a complex urban environment for the first time. A sensitivity analysis is performed to establish recommendations for the proper use of turbulence models in urban settings. The current SST k-ω simulation is validated rigorously against extensive experimental data using the hit rate for the velocity components, and the fraction of predictions within a factor of two of observations (FAC2) and the fractional bias (FB) for the concentration field. The simulation results show that the current SST k-ω model can predict the flow field well, with an overall hit rate of 0.870, and the concentration dispersion with FAC2 = 0.721 and FB = 0.045. The flow simulation of the current SST k-ω model is slightly inferior to that of a detached eddy simulation (DES), but better than that of the standard k-ε model. However, the current approach performs best among these three when validated against measurements of pollutant dispersion in the atmosphere. This work aims to provide recommendations for the proper use of CFD to predict pollutant dispersion in urban environments.
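The two concentration metrics quoted above have standard definitions in the air-quality model-evaluation literature: FAC2 is the fraction of predictions within a factor of two of the observations, and FB is the fractional bias of the means. A small self-contained sketch with synthetic values (not the study's data):

```python
def fac2(observed, predicted):
    """Fraction of pairs with 0.5 <= predicted/observed <= 2 (observed > 0)."""
    pairs = [(o, p) for o, p in zip(observed, predicted) if o > 0]
    return sum(1 for o, p in pairs if 0.5 <= p / o <= 2.0) / len(pairs)

def fractional_bias(observed, predicted):
    """FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred)."""
    mo = sum(observed) / len(observed)
    mp = sum(predicted) / len(predicted)
    return 2.0 * (mo - mp) / (mo + mp)

# Synthetic example values only.
obs = [1.0, 2.0, 4.0, 0.5, 3.0]
pred = [1.2, 1.5, 3.6, 0.9, 2.4]
print(f"FAC2 = {fac2(obs, pred):.2f}, FB = {fractional_bias(obs, pred):+.3f}")
```

With this sign convention a positive FB indicates underprediction of the mean concentration; a perfect model gives FAC2 = 1 and FB = 0.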
NASA Astrophysics Data System (ADS)
L'Hostis, V.; Brunet, C.; Poupard, O.; Petre-Lazar, I.
2006-11-01
Several ageing models are available for the prediction of the mechanical consequences of rebar corrosion. They are used for service life prediction of reinforced concrete structures. Concerning corrosion diagnosis of reinforced concrete, some Non Destructive Testing (NDT) tools have been developed, and have been in use for some years. However, these developments require validation on existing concrete structures. The French project “Benchmark des Poutres de la Rance” contributes to this aspect. It has two main objectives: (i) validation of mechanical models to estimate the influence of rebar corrosion on the load bearing capacity of a structure, (ii) qualification of the use of the NDT results to collect information on steel corrosion within reinforced-concrete structures. Ten French and European institutions from both academic research laboratories and industrial companies contributed during the years 2004 and 2005. This paper presents the project that was divided into several work packages: (i) the reinforced concrete beams were characterized from non-destructive testing tools, (ii) the mechanical behaviour of the beams was experimentally tested, (iii) complementary laboratory analysis were performed and (iv) finally numerical simulations results were compared to the experimental results obtained with the mechanical tests.
Unstructured CFD Aerodynamic Analysis of a Generic UCAV Configuration
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Tormalm, Magnus; Schmidt, Stefan
2011-01-01
Three independent studies from the United States (NASA), Sweden (FOI), and Australia (DSTO) are analyzed to assess the state of current unstructured-grid computational fluid dynamic tools and practices for predicting the complex static and dynamic aerodynamic and stability characteristics of a generic 53-degree swept, round-leading-edge uninhabited combat air vehicle configuration, called SACCON. NASA exercised the USM3D tetrahedral cell-centered flow solver, while FOI and DSTO applied the FOI/EDGE general-cell vertex-based solver. The authors primarily employ the Reynolds Averaged Navier-Stokes (RANS) assumption, with a limited assessment of the EDGE Detached Eddy Simulation (DES) extension, to explore sensitivities to grids and turbulence models. Correlations with experimental data are provided for force and moments, surface pressure, and off-body flow measurements. The vortical flow field over SACCON proved extremely difficult to model adequately. As a general rule, the prospect of obtaining reasonable correlations of SACCON pitching moment characteristics with the RANS formulation is not promising, even for static cases. Yet, dynamic pitch oscillation results seem to produce a promising characterization of shapes for the lift and pitching moment hysteresis curves. Future studies of this configuration should include more investigation with higher-fidelity turbulence models, such as DES.
Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo
2013-07-01
The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
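A worked check of the headline figures above: under cost minimization, with effectiveness assumed equivalent, the comparison reduces to the difference in expected two-year costs. A minimal sketch using only the numbers quoted in the abstract:

```python
# Average two-year costs per person, in 2011 Korean won (from the abstract).
cost_des = 11_065_528
cost_bms = 9_647_647

incremental = cost_des - cost_bms
print(f"Incremental cost of DES vs BMS: {incremental:,} won/person")
# Prints 1,417,881 won/person with these rounded inputs; the abstract quotes
# 1,417,882, the small difference coming from rounding of the underlying averages.
```

Because the sign is positive, primary stenting with BMS is the cost-saving strategy in this model.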
NASA Astrophysics Data System (ADS)
Mijiyawa, Faycal
This study adapts wood-fiber thermoplastic composite materials to gears, manufactures new generations of gears, and predicts the thermal behavior of these gears. After an extensive literature review on thermoplastic materials (polyethylene and polypropylene) reinforced with wood fibers (birch and aspen), and on the formulation and thermomechanical behavior of plastic-composite gears, a connection was established with the present doctoral thesis. Indeed, many studies on the formulation and characterization of wood-fiber composite materials have already been carried out, but none has addressed the manufacture of gears. The various formulation techniques drawn from the literature made it possible to obtain a composite material with nearly the same properties as the plastics (nylon, acetal...) used in gear design. The wood-fiber-reinforced thermoplastics were formulated at the Centre de recherche en materiaux lignocellulosiques (CRML) of the Universite du Quebec a Trois-Rivieres (UQTR), in collaboration with the Mechanical Engineering department, by mixing the composites with two rolls on a Thermotron-C.W. Brabender machine (model T-303, Germany); parts were then fabricated by thermocompression. The thermoplastics used in this thesis are polypropylene (PP) and high-density polyethylene (HDPE), reinforced with birch and aspen fibers. Because of the incompatibility between wood fiber and thermoplastic, a chemical treatment with a coupling agent was carried out to increase the mechanical properties of the composite materials. For the polypropylene/wood composites: (1) The tensile elastic moduli and failure stresses of the PP/birch and PP/aspen composites vary linearly with fiber content, with or without coupling agent (maleated polypropylene, MAPP). Moreover, the adhesion between the wood fibers and the plastic is improved using only 3% MAPP, leading to an increase in the maximum stress, although no significant effect is observed on the elastic modulus. (2) The results show that, in general, the tensile properties of the polypropylene/birch, polypropylene/aspen, and polypropylene/birch/aspen composites are very similar. The wood-plastic composites (WPCs), in particular those containing 30% and 40% fibers, have higher elastic moduli than some plastics used in gear applications (e.g., nylon). For the polyethylene/wood composites, with 3% maleated polyethylene (MAPE): (1) Tensile tests: the elastic modulus increases from 1.34 GPa to 4.19 GPa for the HDPE/birch composite and from 1.34 GPa to 3.86 GPa for the HDPE/aspen composite. The maximum stress increases from 22 MPa to 42.65 MPa for the HDPE/birch composite and from 22 MPa to 43.48 MPa for the HDPE/aspen composite. (2) Bending tests: the elastic modulus increases from 1.04 GPa to 3.47 GPa for the HDPE/birch composite and to 3.64 GPa for the HDPE/aspen composite. The maximum stress increases from 23.90 MPa to 66.70 MPa for the HDPE/birch composite and to 59.51 MPa for the HDPE/aspen composite. (3) The Poisson ratio determined by the acoustic pulse method is around 0.35 for all the HDPE/wood composites. (4) Thermal degradation testing (TGA) shows that the composite materials have a thermal stability intermediate between the wood fibers and the HDPE matrix. (5) Wettability (contact angle) tests show that adding wood fibers does not significantly decrease the contact angle with water, because the wood fibers (birch or aspen) appear to be enveloped by the matrix at the composite surface, as shown by scanning electron microscope (SEM) images. (6) The Lavengood-Goettler model best predicts the elastic modulus of the thermoplastic/wood composite. (7) HDPE reinforced with 40% birch is best suited to gear manufacturing, because shrinkage during mold cooling is smaller. The numerical simulation appears to predict the equilibrium temperature well at a speed of 500 rpm, whereas at 1000 rpm a divergence of the model is observed. (Abstract shortened by ProQuest.)
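The abstract above compares measured moduli with micromechanical predictions and retains the Lavengood-Goettler model as the best fit. As a generic illustration of that kind of calculation (explicitly not the Lavengood-Goettler form, whose expression is not given here), the widely used Halpin-Tsai estimate for the longitudinal modulus of a short-fibre composite looks like this, with placeholder constituent properties:

```python
def halpin_tsai_modulus(e_fiber, e_matrix, v_fiber, zeta):
    """Halpin-Tsai composite modulus; zeta ~ 2 * fibre aspect ratio for E11."""
    ratio = e_fiber / e_matrix
    eta = (ratio - 1.0) / (ratio + zeta)
    return e_matrix * (1.0 + zeta * eta * v_fiber) / (1.0 - eta * v_fiber)

# Placeholder inputs (GPa, volume fraction); only the neat-HDPE modulus
# comes from the abstract, the rest are hypothetical.
e_wood_fiber = 15.0          # assumed effective wood-fibre modulus
e_hdpe = 1.34                # neat HDPE tensile modulus quoted above
aspect_ratio = 10.0          # assumed fibre length/diameter
for vf in (0.2, 0.3, 0.4):
    ec = halpin_tsai_modulus(e_wood_fiber, e_hdpe, vf, zeta=2.0 * aspect_ratio)
    print(f"Vf = {vf:.0%}: E_c ~ {ec:.2f} GPa")
```

If the fibre contents in the abstract are weight fractions, a density-based conversion to volume fraction would be needed before any direct comparison with the measured values.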
Kennedy Space Center Orion Processing Team Planning for Ground Operations
NASA Technical Reports Server (NTRS)
Letchworth, Gary; Schlierf, Roland
2011-01-01
Topics in this presentation are: Constellation Ares I/Orion/Ground Ops Elements; Orion Ground Operations Flow; and Orion Operations Planning Process and Toolset Overview, including: (1) Orion Concept of Operations by Phase; (2) Ops Analysis Capabilities Overview; (3) Operations Planning Evolution; (4) Functional Flow Block Diagrams; (5) Operations Timeline Development; (6) Discrete Event Simulation (DES) Modeling; and (7) Ground Operations Planning Document Database (GOPDb). Using Operations Planning Tools for Operability Improvements includes: (1) Kaizen/Lean Events; (2) Mockups; and (3) Human Factors Analysis.
Modeling Airport Ground Operations using Discrete Event Simulation (DES) and X3D Visualization
2008-03-01
scenes. It is written in open-source Java and XML using the Netbeans platform, which makes it suitable both as a standalone application...and as a plug-in module for the Netbeans integrated development environment (IDE). X3D Graphics is the tool used for the creation of...process is shown in Figure 2. To create a new event graph in Viskit, the Viskit tool must first be launched via Netbeans or from the executable
NASA Astrophysics Data System (ADS)
Beyhaghi, Saman
Because of the problems associated with increase of greenhouse gases, as well as the limited supplies of fossil fuels, the transition to alternate, clean, renewable sources of energy is inevitable. Renewable sources of energy can be used to decrease our need for fossil fuels, thus reducing impact to humans, other species and their habitats. The wind is one of the cleanest forms of energy, and it can be an excellent candidate for producing electrical energy in a more sustainable manner. Vertical- and Horizontal-Axis Wind Turbines (VAWT and HAWT) are two common devices used for harvesting electrical energy from the wind. Due to the development of a thin boundary layer over the ground surface, the modern commercial wind turbines have to be relatively large to be cost-effective. Because of the high manufacturing and transportation costs of the wind turbine components, it is necessary to evaluate the design and predict the performance of the turbine prior to shipping it to the site, where it is to be installed. Computational Fluid Dynamics (CFD) has proven to be a simple, cheap and yet relatively accurate tool for prediction of wind turbine performance, where the suitability of different designs can be evaluated at a low cost. High accuracy simulation methods such as Large Eddy Simulation (LES) and Detached Eddy Simulation (DES) are developed and utilized in the past decades. Despite their superior importance in large fluid domains, they fail to make very accurate predictions near the solid surfaces. Therefore, in the present effort, the possibility of improving near-wall predictions of CFD simulations in the near-wall region by using a modified turbulence model is also thoroughly investigated. Algebraic Stress Model (ASM) is employed in conjunction with Detached Eddy Simulation (DES) to improve Reynolds stresses components, and consequently predictions of the near-wall velocities and surface pressure distributions. The proposed model shows a slightly better performance as compared to the baseline DES. In the second part of this study, the focus is on improving the aerodynamic performance of airfoils and wind turbines in terms of lift and drag coefficients and power generation. One special type of add-on feature for wind turbines and airfoils, i.e., leading-edge slots are investigated through numerical simulation and laboratory experiments. Although similar slots are designed and employed for aircrafts, a special slot with a reversed flow direction is drilled in the leading edge of a sample wind turbine airfoil to study its influence on the aerodynamic performance. The objective is to vary the five main geometrical parameters of slot and characterize the performance improvement of the new design under different operating conditions. A number of Design of Experiment and optimization studies are conducted to determine the most suitable slot configuration to maximize the lift or lift-over-drag ratio. Results indicate that proper sizing and placement of slot can improve the lift coefficient, while it has negligible negative impact on the drag. Some recommendations for future investigation on slot are proposed at the end. The performance of a horizontal axis wind turbine blade equipped with leading-edge slot is also studied, and it is concluded that slotted blades can generate about 10% more power than solid blades, for the two operating conditions investigated. The good agreement between the CFD predictions and experimental data confirms the validity of the model and results.
Simulation of granular and gas-solid flows using discrete element method
NASA Astrophysics Data System (ADS)
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D fluidized bed simulations have been performed and the results have been shown to satisfactorily compare with those published in the literature. A comprehensive study of the effect of drag correlations on the simulation of fluidized beds has been performed. It has been found that nearly all the drag correlations studied make similar predictions of global quantities such as the time-dependent pressure drop, bubbling frequency and growth. In conclusion, discrete element simulation has been successfully coupled to continuum gas-phase. Though all the results presented in the thesis are two-dimensional, the present implementation is completely three dimensional and can be used to study 3D fluidized beds to aid in better design and understanding. Other industrially important phenomena like particle coating, coal gasification etc., and applications in emerging areas such as nano-particle/fluid mixtures can also be studied through this type of simulation. (Abstract shortened by UMI.)
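Two standard relations sit behind the vibrated-bed and contact discussion above (generic soft-sphere DEM forms, not necessarily the exact closures used in this work): the dimensionless acceleration amplitude that governs bed behaviour, and a linear spring-dashpot normal contact force between particles:

```latex
\Gamma = \frac{A\,\omega^{2}}{g},
\qquad
\mathbf{F}_{n} = k_{n}\,\delta_{n}\,\hat{\mathbf{n}} - \gamma_{n}\,\mathbf{v}_{n},
```

where A and ω are the vibration amplitude and angular frequency, g the gravitational acceleration, δ_n the particle overlap, v_n the relative normal velocity at the contact, and k_n, γ_n the contact stiffness and damping coefficients.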
On the Properties of the Normal State of the Two-Dimensional Hubbard Model
NASA Astrophysics Data System (ADS)
Lemay, Francois
Since their discovery, experimental studies have shown that high-temperature superconductors have a very strange normal phase. The properties of these materials are not well described by Fermi liquid theory. The two-dimensional Hubbard model, although still unsolved, is still considered a candidate for explaining the physics of these compounds. In this work, we highlight several electronic properties of the model that are incompatible with the existence of quasiparticles. In particular, we show that the susceptibility of free electrons on a lattice contains logarithmic singularities that decisively influence the low-frequency properties of the self-energy. These singularities are responsible for the destruction of the quasiparticles. In the absence of antiferromagnetic fluctuations, they are also responsible for the existence of a small pseudogap in the spectral weight at the Fermi level. The properties of the model are also studied for a Fermi surface similar to that of the high-temperature superconductors. A parallel is drawn between certain features of the model and those of these materials.
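For reference, the model discussed throughout the abstract is the two-dimensional Hubbard Hamiltonian, written here in its standard form (with the optional next-nearest-neighbour hopping t' of the tt'U variant that appears earlier in this collection):

```latex
H = -t\sum_{\langle i,j\rangle,\sigma}\bigl(c^{\dagger}_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\bigr)
    - t'\sum_{\langle\langle i,j\rangle\rangle,\sigma}\bigl(c^{\dagger}_{i\sigma}c_{j\sigma} + \mathrm{h.c.}\bigr)
    + U\sum_{i} n_{i\uparrow}n_{i\downarrow},
```

where the first two sums run over nearest- and next-nearest-neighbour bonds of the square lattice and U is the on-site repulsion whose treatment distinguishes the approximations compared in these theses.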
Unsteady RANS/DES analysis of flow around helicopter rotor blades at forward flight conditions
NASA Astrophysics Data System (ADS)
Zhang, Zhenyu; Qian, Yaoru
2018-05-01
In this paper, the complex flows around forward-flying helicopter blades are numerically investigated. Both the Reynolds-averaged Navier-Stokes (RANS) and the Detached Eddy Simulation (DES) methods are used to analyze characteristics such as local dynamic flow separation, the effects of radial sweeping, and reversed flow. The flow was solved by a highly efficient finite-volume solver on multi-block structured grids. Focusing on the complexity of the advance-ratio effects, the above properties are fully characterized. The results show close agreement between the RANS and DES methods during phases with attached flow. The DES results provide detailed information on the separating flow near the retreating-blade phases. The analysis of these blades under reversed flow reveals a significant interaction between the reversed flow and the spanwise sweeping.
A Search for Kilonovae in the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Doctor, Z.; Kessler, R.; Chen, H. Y.; Farr, B.; Finley, D. A.; Foley, R. J.; Goldstein, D. A.; Holz, D. E.; Kim, A. G.; Morganson, E.; Sako, M.; Scolnic, D.; Smith, M.; Soares-Santos, M.; Spinka, H.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Drlica-Wagner, A.; Eifler, T. F.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Krause, E.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.; Wester, W.; DES Collaboration
2017-03-01
The coalescence of a binary neutron star pair is expected to produce gravitational waves (GW) and electromagnetic radiation, both of which may be detectable with currently available instruments. We describe a search for a predicted r-process optical transient from these mergers, dubbed the "kilonova" (KN), using griz broadband data from the Dark Energy Survey Supernova Program (DES-SN). Some models predict KNe to be redder, shorter-lived, and dimmer than supernovae (SNe), but the event rate of KNe is poorly constrained. We simulate KN and SN light curves with the Monte-Carlo simulation code SNANA to optimize selection requirements, determine search efficiency, and predict SN backgrounds. Our analysis of the first two seasons of DES-SN data results in 0 events, and is consistent with our prediction of 1.1 ± 0.2 background events based on simulations of SNe. From our prediction, there is a 33% chance of finding 0 events in the data. Assuming no underlying galaxy flux, our search sets 90% upper limits on the KN volumetric rate of 1.0 × 10^7 Gpc^-3 yr^-1 for the dimmest KN model we consider (peak i-band absolute magnitude M_i = -11.4 mag) and 2.4 × 10^4 Gpc^-3 yr^-1 for the brightest (M_i = -16.2 mag). Accounting for anomalous subtraction artifacts on bright galaxies, these limits are ~3 times higher. This analysis is the first untriggered optical KN search and informs selection requirements and strategies for future KN searches. Our upper limits on the KN rate are consistent with those measured by GW and gamma-ray burst searches.
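The 90% upper limits quoted above follow the usual recipe for a counting experiment with zero observed events; a minimal sketch of the arithmetic (the search efficiency, effective volume-time product, and background treatment used by the authors are not reproduced, so the sensitivity below is a placeholder):

```python
import math

def poisson_upper_limit(n_observed, confidence=0.90):
    """Classical upper limit on a Poisson mean; only the n = 0 case is sketched."""
    if n_observed == 0:
        # P(0 | mu) = exp(-mu)  =>  mu_UL = -ln(1 - CL) ~= 2.30 at 90% CL
        return -math.log(1.0 - confidence)
    raise NotImplementedError("only the zero-event case is illustrated here")

effective_vt = 1.0e-7                      # hypothetical Gpc^3 yr surveyed, NOT the DES-SN value
limit_counts = poisson_upper_limit(0)      # ~2.30 events at 90% CL
rate_limit = limit_counts / effective_vt   # events per Gpc^3 per yr
print(f"90% CL volumetric rate limit: {rate_limit:.2e} Gpc^-3 yr^-1")
```

The model dependence of the published limits enters through the efficiency, which is much lower for the dim M_i = -11.4 mag model than for the bright M_i = -16.2 mag one, hence the large spread in the quoted rates.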
NASA Astrophysics Data System (ADS)
Caniaux, Guy; Planton, Serge
1998-10-01
A primitive equation model is used to simulate the mesoscale circulation associated with a portion of the Azores Front investigated during the intensive observation period (IOP) of the Structure des Echanges Mer-Atmosphere, Proprietes des Heterogeneites Oceaniques: Recherche Experimentale (SEMAPHORE) experiment in fall 1993. The model is a mesoscale version of the ocean general circulation model (OGCM) developed at the Laboratoire d'Océanographie Dynamique et de Climatologie (LODYC) in Paris and includes open lateral boundaries, a 1.5-level-order turbulence closure scheme, and fine mesh resolution (0.11° for latitude and 0.09° for longitude). The atmospheric forcing is provided by satellite data for the solar and infrared fluxes and by analyzed (or reanalyzed for the wind) atmospheric data from the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast model. The extended data set collected during the IOP of SEMAPHORE enables a detailed initialization of the model, a coupling with the rest of the basin through time dependent open boundaries, and a model/data comparison for validation. The analysis of model outputs indicates that most features are in good agreement with independent available observations. The surface front evolution is subject to an intense deformation different from that of the deep front system, which evolves only weakly. An estimate of the upper layer heat budget is performed during the 22 days of the integration of the model. Each term of this budget is analyzed according to various atmospheric events that occurred during the experiment, such as the passage of a strong storm. This facilitates extended estimates of mixed layer or relevant surface processes beyond those which are obtainable directly from observations. Surface fluxes represent 54% of the heat loss in the mixed layer and 70% in the top 100-m layer, while vertical transport at the mixed layer bottom accounts for 31% and three-dimensional processes account for 14%.
NASA Astrophysics Data System (ADS)
Zhang, Junshi; Chen, Hualing; Li, Dichen
2018-02-01
Subject to an AC voltage, dielectric elastomers (DEs) undergo complicated nonlinear vibration, implying potential applications as soft dynamic actuators and robots. In this article, by utilizing Lagrange's equation, a theoretical model is derived to investigate the dynamic performance of DEs by considering three internal properties: crosslinks, entanglements, and finite deformations of the polymer chains. Numerical calculations are employed to describe the dynamic response, stability, periodicity, and resonance properties of DEs. It is observed that the frequency and nonlinearity of the dynamic response are tuned by the internal properties of the DEs. Phase paths and Poincaré maps are utilized to detect the stability and periodicity of the nonlinear vibrations of the DEs, and demonstrate that transitions between aperiodic and quasi-periodic vibrations may occur when the three internal properties vary. The resonance of DEs involving the three internal properties of the polymer chains is also investigated.
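The modelling route named above (Lagrange's equation for a generalized stretch coordinate) has the generic form below; the specific free-energy, viscoelastic, and electrostatic terms the authors assemble for the DE membrane are not reproduced here:

```latex
\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right) - \frac{\partial \mathcal{L}}{\partial q} = Q_{nc},
\qquad \mathcal{L} = T - V,
```

where q is a generalized stretch coordinate, T and V the kinetic and potential (elastic plus electrostatic) energies, and Q_nc the non-conservative generalized force collecting the viscoelastic dissipation and the time-dependent voltage forcing.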
Consequences of Global Change for Groundwater in Southern Germany - Part 2: Socio-economic Aspects
NASA Astrophysics Data System (ADS)
Barthel, Roland; Krimly, Tatjana; Elbers, Michael; Soboll, Anja; Wackerbauer, Johann; Hennicker, Rolf; Janisch, Stephan; Reichenau, Tim G.; Dabbert, Stephan; Schmude, Jürgen; Ernst, Andreas; Mauser, Wolfram
2011-12-01
In order to account for complex interactions between humans, climate, and the water cycle, the research consortium GLOWA-Danube (www.glowa-danube.de) has developed the simulation system DANUBIA, which consists of 17 coupled models. DANUBIA was applied to investigate various impacts of global change between 2011 and 2060 in the Upper Danube Catchment. This article describes part 2 of an article series and investigates socio-economic aspects, while part 1 (Barthel et al. in Grundwasser 16(4), doi:10.1007/s007-011-01794, 2011) deals with natural-spatial aspects. The principles of socio-economic actor modeling and the interactions between socio-economic and natural-science model components are described here. We present selected simulations that show impacts on groundwater from changes in agriculture, tourism, the economy, domestic water users, and water supply. Despite decreases in water consumption, the scenario simulations show significant decreases in groundwater quantity. On the other hand, groundwater quality will likely be influenced more severely by land-use changes than by direct climatic causes. However, the overall changes will not be dramatic.
NASA Astrophysics Data System (ADS)
Armenio, Vincenzo; Fakhari, Ahmad; Petronio, Andrea; Padovan, Roberta; Pittaluga, Chiara; Caprino, Giovanni
2015-11-01
Massive flow separation is ubiquitous in industrial applications, governing drag and hydrodynamic noise. In spite of considerable effort, its numerical prediction still represents a challenge for the CFD models in use in engineering. Besides commercial software, in recent years the open-source software OpenFOAM (OF) has emerged as a valid tool for the prediction of complex industrial flows. In the present work, we simulate two flows representative of a class of situations occurring in industrial problems: the flow around a sphere and that around a wall-mounted square cylinder at Re = 10000. We compare the performance of two different tools, namely OF and ANSYS CFX 15.0 (CFX), using different unstructured grids and turbulence models. The grids have been generated using snappyHexMesh and ANSYS ICEM CFD 15.0 with different near-wall resolutions. The codes have been run in RANS mode using the k-ε model (OF) and the SST k-ω model (CFX), with and without wall-layer models. OF has also been used in LES, WMLES and DES mode. For the sphere, the RANS models were not able to capture separation, while good predictions of separation and of the stress distribution over the surface were obtained using LES, WMLES and DES. Results for the second test case are currently under analysis. Financial support from COSMO "cfd open source per opera mortta" PAR FSC 2007-2013, Friuli Venezia Giulia.
NASA Astrophysics Data System (ADS)
Leblois, T.; Tellier, C. R.; Messaoudi, T.
1997-03-01
The anisotropic etching behavior of quartz crystal in concentrated ammonium bifluoride solution is studied and analyzed in the framework of a tensorial model. This model allows two- or three-dimensional etched shapes to be simulated from the equation of the representative surface of the dissolution slowness. In this paper we present experimental results such as surface profiles and initially circular cross-sectional profiles of different singly- and doubly-rotated cuts. Polar diagrams of the dissolution slowness vector in several planes are deduced from the experimental data. The comparison between the predicted surface and cross-sectional profiles and the experimental results is detailed and shows good agreement. In particular, several examples give evidence that the final etched shapes are correlated with the extrema of the dissolution slowness. In several cases, however, the experimental shapes cannot be simply correlated with the presence of extrema; the simulations show that more gradual changes in the curvature of the slowness surface also play an important role, so the data must be analyzed carefully. Finally, the 3D model is applied to predict the etched shape of an initially circular hole in a cut rotated about the Y axis; the theoretical result is compared with the experimental etched shape and shows excellent agreement.
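The kinematic idea behind such simulations can be illustrated with a toy calculation: advance each point of an initially circular cross-section along its local normal at a rate equal to the reciprocal of an orientation-dependent dissolution slowness. The slowness function and all numerical values below are hypothetical; this is a naive sketch of the principle, not the tensorial model of the paper.

```python
# Minimal 2D sketch of orientation-dependent etching: the surface of an
# initially circular cylinder cross-section retreats inward along its local
# normal at a rate R(theta) = 1/L(theta), where L is a toy dissolution
# slowness for the local surface orientation.
import numpy as np

def slowness(theta):
    # hypothetical anisotropic slowness with three-fold symmetry
    return 1.0 + 0.4 * np.cos(3 * theta)

theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
profile = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit circle

dt, steps = 0.01, 40
for _ in range(steps):
    # outward normal estimated from neighbouring points (naive scheme)
    tangent = np.roll(profile, -1, axis=0) - np.roll(profile, 1, axis=0)
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    orientation = np.arctan2(normal[:, 1], normal[:, 0])
    rate = 1.0 / slowness(orientation)
    profile -= dt * rate[:, None] * normal   # material removal: surface moves inward

print(profile[:5])   # final cross-sectional profile after etching
```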
ERIC Educational Resources Information Center
Ruemper, Wendy, Ed.; And Others
Intended as a reference for preventing harassment and discrimination in Ontario colleges and universities, this resource guide describes a project to develop models of alternative instructional delivery and presents the models. Part 1 provides an introduction to the guide, reviews the goals of the project, and describes a related training video…
NASA Astrophysics Data System (ADS)
Chavez, Milagros
This thesis presents the trajectory and results of a research project whose overall objective is to develop an educational model integrating environmental ethics as a transversal dimension of science and technology education. In the face of the positivist paradigm that still dominates science teaching, it seemed useful to open a space for reflection and to propose, in the form of a formal model, a pedagogical orientation more in resonance with some of the fundamental concerns of our time: in particular, the relationship of humans with their environment and, more specifically, the role of science in shaping that relationship through its contribution to the transformation of living conditions, to the point of compromising natural equilibria. Within this general problem, the objectives of the research are: (1) to define the paradigmatic, theoretical and axiological elements of the educational model to be built, and (2) to define its strategic components. Theoretical and speculative in character, the research adopted the anasynthesis approach, situated within the critical perspective of educational research. The theoretical framework of the thesis was built around four pivotal concepts: educational model, science and technology education, educational transversality, and environmental ethics. These concepts were clarified from a textual corpus; on this basis theoretical choices were made, from which a prototype of the model was elaborated. The prototype was then submitted to a double validation (by experts and through a trial implementation) in order to improve it and, from there, to construct an optimal model. The latter comprises two dimensions: theoretical-axiological and strategic. The first rests on a conception of science and technology education as the appropriation of a cultural heritage, in a critical and emancipatory perspective. In this view, environmental ethics intervenes as a reflexive and existential process concerning our relationship with the environment, which can be integrated as a transversal dimension of the educational dynamic. To this end, the strategic dimension of the model suggests a transversal approach of an existential type, a global strategy of a dialogical type, and specific pedagogical strategies, including learning strategies and evaluation strategies. The development of this model highlighted several interesting perspectives, for example: (1) the need to cross the cognitive dimension of educational processes in science and technology with other dimensions of the human being (affective, ethical, existential, social, spiritual, etc.); (2) a vision of science and technology education as an act of fundamental freedom consisting of critically appropriating a certain cultural heritage; and (3) a vision of environmental ethics as a process of reflection that confronts us with basic existential questions.
A Concept Map Knowledge Model of Intelligence Analysis
2011-05-01
intelligence and brings together a number of relevant topics. The authors of this concept map knowledge model aspire to...intelligence analysis, the authors studied the available documents on intelligence analysis and consulted professionals in...intelligence. The authors' conceptual understanding of intelligence analysis is presented in the form of a concept map knowledge model
Inference from the small scales of cosmic shear with current and future Dark Energy Survey data
MacCrann, N.; Aleksić, J.; Amara, A.; ...
2016-11-05
Cosmic shear is sensitive to fluctuations in the cosmological matter density field, including on small physical scales where matter clustering is affected by baryonic physics in galaxies and galaxy clusters, such as star formation, supernova feedback and AGN feedback. While this muddies the cosmological information contained in small-scale cosmic shear measurements, it also means that cosmic shear has the potential to constrain baryonic physics and galaxy formation. We perform an analysis of the Dark Energy Survey (DES) Science Verification (SV) cosmic shear measurements, now extended to smaller scales, using the Mead et al. (2015) halo model to account for baryonic feedback. While the SV data have limited statistical power, we demonstrate using a simulated likelihood analysis that the final DES data will have the statistical power to differentiate among baryonic feedback scenarios. We also explore some of the difficulties in interpreting the small scales in cosmic shear measurements, presenting estimates of the size of several other systematic effects that make inference from small scales difficult, including uncertainty in the modelling of intrinsic alignments on nonlinear scales, 'lensing bias', and shape-measurement selection effects. For the latter two we make use of novel image simulations. While future cosmic shear datasets will have the statistical power to constrain baryonic feedback scenarios, several systematic effects require improved treatment in order to draw robust conclusions about baryonic feedback.
Shenoy, Erica S; Lee, Hang; Ryan, Erin E; Hou, Taige; Walensky, Rochelle P; Ware, Winston; Hooper, David C
2018-02-01
Hospitalized patients are assigned to available staffed beds based on patient acuity and services required. In hospitals with double-occupancy rooms, patients must be additionally matched by gender. Patients with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant Enterococcus (VRE) must be bedded in single-occupancy rooms or cohorted with other patients with similar MRSA/VRE flags. We developed a discrete event simulation (DES) model of patient flow through an acute care hospital. Patients are matched to beds based on acuity, service, gender, and known MRSA/VRE colonization. Outcomes included time to bed arrival, length of stay, patient-bed acuity mismatches, occupancy, idle beds, acuity-related transfers, rooms with discordant MRSA/VRE colonization, and transmission due to discordant colonization. Observed outcomes were well-approximated by model-generated outcomes for time-to-bed arrival (6.7 v. 6.2 to 6.5 h) and length of stay (3.3 v. 2.9 to 3.0 days), with overlapping 90% coverage intervals. Patient-bed acuity mismatches, where patient acuity exceeded bed acuity and where patient acuity was lower than bed acuity, ranged from 0.6 to 0.9 and 8.6 to 11.1 mismatches per h, respectively. Values for observed occupancy, total idle beds, and acuity-related transfers compared favorably to model-predicted values (91% v. 86% to 87% occupancy, 15.1 v. 14.3 to 15.7 total idle beds, and 27.2 v. 22.6 to 23.7 transfers). Rooms with discordant colonization status and transmission due to discordance were modeled without an observed value for comparison. One-way and multi-way sensitivity analyses were performed for idle beds and rooms with discordant colonization. We developed and validated a DES model of patient flow incorporating MRSA/VRE flags. The model allowed for quantification of the substantial impact of MRSA/VRE flags on hospital efficiency and potentially avoidable nosocomial transmission.
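A stripped-down version of this kind of bed-assignment DES can be written with SimPy. The sketch below keeps only a single matching constraint (the MRSA/VRE flag) and uses illustrative arrival, bed-count, and length-of-stay values that are not taken from the study.

```python
# Minimal sketch of a discrete event simulation in SimPy: patients wait for a
# bed with a compatible MRSA/VRE flag (simplified cohorting rule), occupy it
# for a sampled length of stay, then release it.
import random
import simpy

random.seed(1)
env = simpy.Environment()
beds = simpy.FilterStore(env)
for i in range(20):
    beds.put({"id": i, "flag": i % 4 == 0})   # ~25% of beds flagged MRSA/VRE

waits = []

def patient(env, flag):
    arrive = env.now
    bed = yield beds.get(lambda b: b["flag"] == flag)   # wait for a matching bed
    waits.append(env.now - arrive)
    yield env.timeout(random.expovariate(1 / 72.0))     # length of stay ~ 72 h
    yield beds.put(bed)

def arrivals(env):
    while True:
        yield env.timeout(random.expovariate(1 / 4.0))  # one arrival ~ every 4 h
        env.process(patient(env, flag=random.random() < 0.15))

env.process(arrivals(env))
env.run(until=24 * 90)   # simulate 90 days
print("mean time to bed (h):", sum(waits) / len(waits))
```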
Étude des perturbations conduites et rayonnées dans une cellule de commutation
NASA Astrophysics Data System (ADS)
Costa, F.; Forest, F.; Puzo, A.; Rojat, G.
1993-12-01
The switching principles used in static power conversion and the increasing performance of new switching devices contribute to raising the level of electromagnetic noise emitted by power electronic converters. We have studied how these perturbations are generated by a switching cell and coupled to its environment, in both conducted and radiated modes. The cell can operate in hard-switching, zero-current switching (ZCS), or zero-voltage switching (ZVS) mode. We first outline the general problem of electromagnetic pollution in converters and its metrology, and then describe the experimental set-up. We analyze the mechanisms that generate parasitic signals within a switching cell, introducing a number of parasitic components, in relation to the electrical stresses and the switching mode. The simulation results obtained from the resulting analytical models are compared with experiment. We then describe a method, validated experimentally, for calculating the near E and H fields analytically. Finally, using an original method for quantifying the disturbance signals, we summarize the main results as a function of the switching mode and of the electrical and dynamic stresses the cell is subjected to. These results should allow designers to take electromagnetic considerations into account from the design stage of a converter.
NASA Astrophysics Data System (ADS)
Kitterød, N.-O.; Colleuille, H.; Wong, W. K.; Pedersen, T. S.
2000-09-01
Standard geostatistical methods for the simulation of heterogeneity were applied to the Romeriksporten railway tunnel in Norway, where water was leaking through highly permeable fracture zones into the tunnel while it was under construction, causing drainage problems at the surface. After the tunnel was completed, artificial infiltration of water into wells drilled from the tunnel was implemented to control the leakage. Synthetic heterogeneity was generated at a scale sufficiently small to simulate the effects of the remedial actions proposed to control the leakage. The flow field depends on the variance of the permeabilities and on the covariance model used to generate the heterogeneity. Flow channeling is the most important flow mechanism when the variance of the permeability field is large compared with its expected value; this condition makes the tunnel leakage difficult to control. The main effects of permeability changes due to sealing injection are simulated by a simple perturbation of the log-normal probability density function of the permeability. If flow channeling is the major mechanism transporting water into the tunnel, artificial infiltration of water to control the leakage requires prior chemical sealing injection in order to succeed.
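The basic geostatistical step described above, generating a heterogeneous log-normal permeability field from a chosen covariance model and then perturbing it to mimic sealing, can be sketched as follows. The grid, covariance parameters, and "sealing" rule are all hypothetical.

```python
# Minimal sketch of unconditional geostatistical simulation (not the exact
# workflow of the study): draw a log-normal permeability field on a small 2D
# grid from a Gaussian random field with an exponential covariance model, then
# perturb the log-mean of the most permeable cells to mimic sealing injection.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, dx = 30, 30, 10.0                            # grid size and spacing (m)
corr_len, sigma_logk, mean_logk = 50.0, 1.5, -12.0   # hypothetical parameters

# coordinates and exponential covariance matrix between all grid nodes
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx)
pts = np.column_stack([x.ravel(), y.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = sigma_logk**2 * np.exp(-d / corr_len)

# simulate log-permeability by Cholesky factorisation, then exponentiate
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(C)))
log_k = mean_logk + L @ rng.standard_normal(len(C))
k = np.exp(log_k).reshape(ny, nx)

# crude "sealing" scenario: shift the log-mean of the highest-k cells downward
sealed = np.where(log_k > np.percentile(log_k, 90), log_k - 2.0, log_k)
print(k.mean(), np.exp(sealed).mean())
```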
Dynamique moléculaire et canaux ioniques
NASA Astrophysics Data System (ADS)
Crouzy, S.
2005-11-01
Neutron scattering and molecular dynamics (MD) are two closely related techniques because they probe the same time scales: the former provides structural or dynamical information on the physical or biological system, while the latter allows this information to be decoded through a model that facilitates the interpretation of the results. Beyond the direct relevance of MD to neutron experiments, it is interesting to understand how the models are built and how simulation techniques can go much further than simple modeling. In the remainder of this lecture we briefly describe the MD technique and the more sophisticated methods for computing free energies and potentials of mean force from MD simulations. We then show, with two examples taken from our theoretical work on ion channels, how these calculations can give access to reaction rates or to affinity and binding constants. The first study concerns an analogue of gramicidin A that forms an ion-conducting channel interrupted by the flipping of a dioxolane ring [1]. The second concerns the potassium channel KcsA, for which we studied block from the extracellular side by the tetraethylammonium ion [2].
NASA Astrophysics Data System (ADS)
Fournier, Patrick
The generalized critical state model (GCSM) is used to describe the magnetic and transport properties of polycrystalline YBa_2Cu_3O_7. This empirical model relates the critical current density to the density of flux lines penetrating the intergrain region. Two measurement techniques are used to characterize our materials. The first consists of measuring the field at the center of a hollow cylinder as a function of the applied magnetic field at temperatures between 20 and 85 K. By varying the wall thickness of the hollow cylinder, it is possible to follow the evolution of the hysteresis loops and to determine characteristic fields that vary with this dimension. By fitting the experimental results, we determine J_co, H_o and n, the parameters of the GCSM. The shape of the cylinders, whose length is comparable to the outer diameter, introduces a demagnetizing field that can be included in the theoretical model. This allows us to evaluate the screened volume fraction f_g as well as the demagnetizing factor N. We find that J_co, H_o and f_g depend on temperature, whereas n and N (for a fixed wall thickness) do not. The second technique consists of measuring the critical current of thin strips as a function of the applied field at different temperatures. We use a setup of our own design that allows these measurements to be made in direct contact with the coolant, i.e. in liquid nitrogen, with the bath temperature varied by changing the gas pressure above the nitrogen bath. This method allows us to scan temperatures between 65 K and the critical temperature of the material (~92 K). We again fit the critical current versus applied field curves with the GCSM to obtain its parameters. For three samples with different heat treatments the parameters differ, confirming that the variation in the macroscopic properties of these superconductors is intimately related to the nature of the intergrain junctions and of the grain surfaces. Prolonged oxygenation restores the initial properties of samples that degraded during the annealing of the contacts.
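The generalized critical state model referred to above is often written as J_c(B) = J_c0/(1 + |B|/B_0)^n. The sketch below integrates the corresponding Bean-like field profile through the wall of a hollow cylinder in a slab approximation (no demagnetizing correction) with hypothetical parameter values, to illustrate how the field measured at the center depends on the applied field.

```python
# Minimal sketch of a generalized-critical-state-model calculation (slab
# approximation, hypothetical parameters): Jc(B) = Jc0 / (1 + |B|/B0)**n and,
# during first penetration, dB/dx = -mu0 * Jc(B) across the cylinder wall, so
# the field at the inner surface follows from integrating inward from the
# applied field at the outer surface.
import numpy as np

MU0 = 4e-7 * np.pi
Jc0, B0, n = 1.0e7, 0.02, 1.0      # A/m^2, T, exponent (illustrative values)

def field_at_center(B_applied, wall=1e-3, steps=2000):
    """Integrate the Bean-like profile inward through a wall of given thickness."""
    B, dx = B_applied, wall / steps
    for _ in range(steps):
        Jc = Jc0 / (1.0 + abs(B) / B0)**n
        B = max(0.0, B - MU0 * Jc * dx)   # field cannot reverse during first penetration
    return B

for Ba in [0.005, 0.01, 0.02, 0.05]:
    print(f"B_applied = {Ba:.3f} T  ->  B_center = {field_at_center(Ba):.4f} T")
```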
Molecular dynamics study of vacancy-like defects in a model glass : static behaviour
NASA Astrophysics Data System (ADS)
Delaye, J. M.; Limoge, Y.
1993-10-01
The possibility of defining vacancy-like defects in a Lennard-Jones glass is investigated in a systematic manner. Considering different relaxation levels of the same system, as well as different external pressures, we use molecular dynamics simulations at constant volume or constant external pressure to study the relaxation of a "piece" of glass after the sudden removal of an atom. Three typical kinds of behaviour are observed: persistence of the empty volume left by the missing atom, its migration by clearly identifiable atomic jumps, and its dissipation "on the spot". A careful analysis of the probabilities of these three behaviours shows that a meaningful definition of vacancy-like defects can be given in a Lennard-Jones glass.
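For readers unfamiliar with the method, a minimal Lennard-Jones molecular dynamics step in reduced units looks like the sketch below. The starting configuration is a simple cubic lattice rather than a quenched glass, and removing one particle from `pos` before the loop would mimic the "sudden removal of an atom" probed in the study.

```python
# Minimal sketch of constant-volume Lennard-Jones MD (reduced units, velocity
# Verlet, minimum-image convention). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, L, dt = 64, 5.0, 0.002                      # particles, box length, time step
grid = (np.arange(4) + 0.5) * (L / 4)          # simple cubic lattice, not a quenched glass
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
vel = rng.normal(0.0, 0.5, (N, 3))
vel -= vel.mean(axis=0)                        # remove net drift

def lj_forces(pos):
    rij = pos[:, None, :] - pos[None, :, :]
    rij -= L * np.round(rij / L)               # minimum-image convention
    r2 = (rij**2).sum(-1) + np.eye(N)          # identity avoids division by zero on the diagonal
    inv6 = 1.0 / r2**3
    fmag = 24.0 * (2.0 * inv6**2 - inv6) / r2  # pair force magnitude / r for U = 4(r^-12 - r^-6)
    np.fill_diagonal(fmag, 0.0)
    return (fmag[:, :, None] * rij).sum(axis=1)

forces = lj_forces(pos)
for _ in range(200):                           # velocity Verlet integration
    pos = (pos + vel * dt + 0.5 * forces * dt**2) % L
    new_forces = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) * dt
    forces = new_forces

print("kinetic energy per particle:", 0.5 * (vel**2).sum() / N)
```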
Standfield, L; Comans, T; Raymer, M; O'Leary, S; Moretto, N; Scuffham, P
2016-08-01
Hospital outpatient orthopaedic services traditionally rely on medical specialists to assess all new patients to determine appropriate care. This has resulted in significant delays in service provision. In response, Orthopaedic Physiotherapy Screening Clinics and Multidisciplinary Services (OPSC) have been introduced to assess and co-ordinate care for semi- and non-urgent patients. To compare the efficiency of delivering increased semi- and non-urgent orthopaedic outpatient services through: (1) additional OPSC services; (2) additional traditional orthopaedic medical services with added surgical resources (TOMS + Surg); or (3) additional TOMS without added surgical resources (TOMS - Surg). A cost-utility analysis using discrete event simulation (DES) with dynamic queuing (DQ) was used to predict the cost effectiveness, throughput, queuing times, and resource utilisation, associated with introducing additional OPSC or TOMS ± Surg versus usual care. The introduction of additional OPSC or TOMS (±surgery) would be considered cost effective in Australia. However, OPSC was the most cost-effective option. Increasing the capacity of current OPSC services is an efficient way to improve patient throughput and waiting times without exceeding current surgical resources. An OPSC capacity increase of ~100 patients per month appears cost effective (A$8546 per quality-adjusted life-year) and results in a high level of OPSC utilisation (98 %). Increasing OPSC capacity to manage semi- and non-urgent patients would be cost effective, improve throughput, and reduce waiting times without exceeding current surgical resources. Unlike Markov cohort modelling, microsimulation, or DES without DQ, employing DES-DQ in situations where capacity constraints predominate provides valuable additional information beyond cost effectiveness to guide resource allocation decisions.
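The headline economic quantity in such an evaluation is the incremental cost-effectiveness ratio; a minimal calculation (with placeholder numbers, not results from the study) looks like this.

```python
# Minimal sketch of the cost-utility summary measure used in such analyses:
# the incremental cost-effectiveness ratio (ICER) of an expanded service
# versus usual care. All numbers are placeholders.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per quality-adjusted life-year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# hypothetical discounted lifetime costs (A$) and QALYs per cohort
print(icer(cost_new=1_250_000, qaly_new=842.0, cost_old=1_100_000, qaly_old=820.0))
# compare the result against a willingness-to-pay threshold (e.g. A$50,000/QALY)
```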
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.; Pujol, A.; Gaztañaga, E.
We measure the redshift evolution of galaxy bias from a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg^2 area of the Dark Energy Survey (DES) Science Verification data. This method was first developed in Amara et al. (2012) and later re-examined in a companion paper (Pujol et al., in prep) with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for a magnitude-limited galaxy sample. We find the galaxy bias and 1σ error bars in 4 photometric redshift bins to be 1.33±0.18 (z=0.2-0.4), 1.19±0.23 (z=0.4-0.6), 0.99±0.36 (z=0.6-0.8), and 1.66±0.56 (z=0.8-1.0). These measurements are consistent at the 1-2σ level with measurements on the same dataset using galaxy clustering and cross-correlation of galaxies with CMB lensing. In addition, our method provides the only σ_8-independent constraint among the three. We forward-model the main observational effects using mock galaxy catalogs by including shape noise, photo-z errors and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Furthermore, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
Using Variability to Search for Lensed Quasars in the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Buckley-Geer, Elizabeth J.; Dark Energy Survey Collaboration
2014-01-01
The Dark Energy Survey (DES) has just started its first season of a 5-year program using the DECam instrument on the Blanco 4m telescope at CTIO. Over the course of the 5-year survey we expect to discover about 120 lensed quasars brighter than i=21, including 20 high information-content quads (third-brightest image required to be i<21). Strongly lensed quasars can be used to measure cosmological parameters. The time delays between the multiple images can be measured via dedicated monitoring campaigns, while the gravitational potential of the lensing galaxy and of structures along the line of sight can be modeled and measured using deep high-resolution imaging and spectroscopy. The combination of these observables enables a distance, known as the time-delay distance (a combination of angular diameter distances), to be measured, which in turn can be converted into a measurement of cosmological parameters including those describing the Dark Energy equation of state. The first step in this measurement is to identify the lensed quasars. Traditionally, quasar candidates have been identified by their blue u-g color, which allows them to be separated from the much more numerous stellar contaminants. However, the Dark Energy Survey does not take data in the u-band, so other techniques must be employed. One such technique is based on the intrinsic variability of quasars (Schmidt et al., 2010, ApJ 714 1194). We have simulated what we would expect for the DES observing cadence in the first two seasons, where we expect four visits to a given patch of sky spread over the two years. We will show results from the simulations as well as a first look at the data from the Science Verification phase of DES.
1998-08-27
would be complementary to the "closed" software of the commercial PlY system and would make it possible to study the accuracy of the processing methods...for each pixel a constant grey level (modelling...). Numerical simulations using a representation
NASA Astrophysics Data System (ADS)
Boissonneault, Maxime
Circuit quantum electrodynamics is a promising architecture for quantum computing as well as for studying quantum optics. In this architecture, one or more superconducting qubits, playing the role of atoms, are coupled to one or more resonators playing the role of optical cavities. In this thesis I study the interaction between a single superconducting qubit and a single resonator, allowing however the qubit to have more than two levels and the resonator to have a Kerr nonlinearity. I am particularly interested in the readout of the qubit state and its improvement, in the back-action of the measurement process on the qubit, and in using the qubit to probe the quantum properties of the resonator. To do so, I use a reduced analytical model that I develop from the full description of the system, mainly through unitary transformations and an adiabatic elimination. I also use an in-house numerical library that allows the evolution of the complete system to be simulated efficiently. I compare the predictions of the reduced analytical model and the results of numerical simulations with experimental results obtained by the quantronics group at CEA Saclay. These results are from spectroscopy of a superconducting qubit coupled to a driven nonlinear resonator. In the low-power spectroscopy regime, the reduced model correctly predicts the position and width of the qubit line. The line position is subject to the Lamb and Stark shifts, and its width is dominated by measurement-induced dephasing. I show that, for typical circuit QED parameters, quantitative agreement requires a model of the nonlinear response of the intra-resonator field, such as the one developed here. In the high-power spectroscopy regime, sidebands appear; they are caused by quantum fluctuations of the intra-resonator electromagnetic field around its equilibrium value. These fluctuations arise from squeezing of the electromagnetic field due to the resonator nonlinearity, and the observation of their effect through qubit spectroscopy is a first. Following the quantitative success of the reduced model, I show that two parameter regimes marginally improve the dispersive readout of a qubit with a linear resonator, and significantly improve a bifurcation readout with a nonlinear resonator. I explain the operation of a qubit readout in a linear resonator developed by an experimental group at Yale University. This readout, which exploits the qubit-induced nonlinearities, has high fidelity but uses very high power and is destructive. In all these cases the multilevel structure of the qubit proves crucial for the measurement. By suggesting ways to improve the readout of superconducting qubits, and by quantitatively describing the physics of a multilevel system coupled to a driven nonlinear resonator, the results presented in this thesis are relevant both to the use of the circuit QED architecture for quantum information processing and to quantum optics. Keywords: circuit quantum electrodynamics, quantum information, measurement, superconducting qubit, transmon, Kerr nonlinearity
Modelisation de la diffusion sur les surfaces metalliques: De l'adatome aux processus de croissance
NASA Astrophysics Data System (ADS)
Boisvert, Ghyslain
This thesis is devoted to the study of surface diffusion processes, with the ultimate aim of understanding and modeling the growth of a thin film. Mastering growth is of prime importance given its role in the miniaturization of electronic circuits. Here we study the surfaces of the noble metals and of the late transition metals. We first consider the diffusion of a single adatom on a metal surface. Among other results, we show the appearance of correlations between successive events when the temperature is comparable to the diffusion barrier, i.e., diffusion can no longer be described as a random walk. We propose a simple phenomenological model that reproduces the simulation results well. These calculations also allow us to show that diffusion obeys the Meyer-Neldel law, which states that, for an activated process, the prefactor increases exponentially with the barrier. In addition, this work clarifies the physical origin of that law. By comparing dynamical and static results, we find that the barrier extracted from dynamical calculations is essentially the same as that obtained from a much simpler static approach. This barrier can therefore be obtained with more accurate, i.e., ab initio, methods such as density functional theory, which are unfortunately also much more computationally demanding. We have done this for several metallic systems, and our results with this approach compare very well with experiment. We then focus on the (111) surface of platinum. This surface exhibits several interesting peculiarities, such as a non-hexagonal equilibrium island shape and two different adsorption sites for the adatom. Moreover, previous ab initio calculations failed to confirm the equilibrium shape and greatly overestimated the barrier. Our calculations, which are more complete and use a formalism better suited to this kind of problem, correctly predict the equilibrium shape, which is in fact due to a different relief of the surface stress at the two types of steps that form the island edges. Our value for the barrier is also strongly reduced when the forces on the surface atoms are relaxed, bringing the theoretical result much closer to the experimental value. Our calculations for copper show that the diffusion of small islands during growth cannot be neglected in this case, casting doubt on the interpretation of the experimental measurements. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Le Rouzo, J.; Ribet-Mohamed, I.; Haidar, R.; Guérineau, N.; Tauvy, M.; Rosencher, E.
2006-10-01
Spectral response measurements with a very high dynamic range were performed on infrared multiple-quantum-well (MQW) detectors. These measurements reveal spectral structures that had never been observed before. Based on a Kronig-Penney model, classically used for periodic structures, our simulation results allow these structures to be unambiguously attributed to the presence of energy minibands. Moreover, the response enhancements at the band edges correspond to Van Hove singularities. This important result opens numerous perspectives in the field of infrared detection.
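The miniband picture invoked above comes from the textbook Kronig-Penney condition for a periodic well/barrier stack. The sketch below scans energies below the barrier and reports where |f(E)| <= 1, i.e. the allowed minibands, using hypothetical GaAs-like parameters rather than the detector structure of the paper.

```python
# Minimal sketch of the textbook Kronig-Penney miniband condition. For E < V0,
# allowed energies satisfy |f(E)| <= 1 with
# f(E) = cos(alpha*a)*cosh(beta*b) + (beta**2 - alpha**2)/(2*alpha*beta)*sin(alpha*a)*sinh(beta*b)
import numpy as np

HBAR = 1.054571817e-34        # J s
ME = 9.1093837015e-31         # kg
EV = 1.602176634e-19          # J

m_eff = 0.067 * ME            # effective mass (GaAs-like, assumption)
a, b = 5e-9, 3e-9             # well and barrier widths (assumption)
V0 = 0.3 * EV                 # barrier height (assumption)

E = np.linspace(1e-4, 0.299, 5000) * EV
alpha = np.sqrt(2 * m_eff * E) / HBAR
beta = np.sqrt(2 * m_eff * (V0 - E)) / HBAR
f = (np.cos(alpha * a) * np.cosh(beta * b)
     + (beta**2 - alpha**2) / (2 * alpha * beta) * np.sin(alpha * a) * np.sinh(beta * b))

allowed = np.abs(f) <= 1.0
edges = np.flatnonzero(np.diff(allowed.astype(int)))
print("miniband edges (eV):", np.round(E[edges] / EV, 4))
```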
Lessons Learned from Numerical Simulations of the F-16XL Aircraft at Flight Conditions
NASA Technical Reports Server (NTRS)
Rizzi, Arthur; Jirasek, Adam; Lamar, John; Crippa, Simone; Badcock, Kenneth; Boelens, Oklo
2009-01-01
Nine groups participating in the Cranked Arrow Wing Aerodynamics Project International (CAWAPI) project have contributed steady and unsteady viscous simulations of a full-scale, semi-span model of the F-16XL aircraft. Three different categories of flight Reynolds/Mach number combinations were computed and compared with flight-test measurements for the purpose of code validation and improved understanding of the flight physics. Steady-state simulations are done with several turbulence models of different complexity with no topology information required and which overcome Boussinesq-assumption problems in vortical flows. Detached-eddy simulation (DES) and its successor delayed detached-eddy simulation (DDES) have been used to compute the time accurate flow development. Common structured and unstructured grids as well as individually-adapted unstructured grids were used. Although discrepancies are observed in the comparisons, overall reasonable agreement is demonstrated for surface pressure distribution, local skin friction and boundary velocity profiles at subsonic speeds. The physical modeling, steady or unsteady, and the grid resolution both contribute to the discrepancies observed in the comparisons with flight data, but at this time it cannot be determined how much each part contributes to the whole. Overall it can be said that the technology readiness of CFD-simulation technology for the study of vehicle performance has matured since 2001 such that it can be used today with a reasonable level of confidence for complex configurations.
Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data
Kacprzak, T.; Kirk, D.; Friedrich, O.; ...
2016-08-19
Shear peak statistics has gained a lot of attention recently as a practical alternative to two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg^2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the range 0 < S/N < 4. To predict the peak counts as a function of cosmological parameters we use a suite of N-body simulations spanning 158 models with varying Ω_m and σ_8, fixing w = -1, Ω_b = 0.04, h = 0.7 and n_s = 1, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure σ_8(Ω_m/0.3)^0.6 = 0.77 ± 0.07, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. Finally, we discuss prospects for future peak statistics analyses with upcoming DES data.
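The peak-counting step itself is simple to illustrate: smooth a (here purely synthetic, noise-only) convergence map, convert it to signal-to-noise, and histogram the local maxima, as in the sketch below. This illustrates the statistic only, not the DES pipeline.

```python
# Minimal sketch of peak counting in a noisy convergence/aperture-mass map
# (toy Gaussian noise map, not DES data).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
kappa = rng.normal(0.0, 0.02, (512, 512))           # toy noisy convergence map
smooth = ndimage.gaussian_filter(kappa, sigma=3.0)  # aperture-like smoothing
snr = smooth / smooth.std()

# a pixel is a peak if it equals the maximum in its 3x3 neighbourhood
is_peak = (smooth == ndimage.maximum_filter(smooth, size=3))
peak_snr = snr[is_peak]

counts, edges = np.histogram(peak_snr, bins=np.arange(0.0, 4.5, 0.5))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.1f} <= S/N < {hi:.1f}: {n} peaks")
```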
Evaluation de la qualite osseuse par les ondes guidees ultrasonores =
NASA Astrophysics Data System (ADS)
Abid, Alexandre
Characterizing the mechanical properties of cortical bone is an area of interest for orthopaedic research. Such characterization can provide essential information for determining fracture risk, detecting the presence of microcracks, or screening for osteoporosis. The two main current techniques for characterizing these properties are Dual-energy X-ray Absorptiometry (DXA) and Quantitative Computed Tomography (QCT). These techniques are not optimal and have certain limitations: the effectiveness of DXA is questioned in the orthopaedic community, while QCT requires radiation levels that are problematic for a screening tool. Guided ultrasonic waves have been used for many years to detect cracks, geometry, and mechanical properties of cylinders, pipes, and other structures in industrial settings. Moreover, they are more affordable than DXA and involve no radiation, which makes them promising for probing the mechanical properties of bone. For less than a decade, many research laboratories have been attempting to transpose these techniques to the medical field by propagating guided ultrasonic waves in bone. The work presented here aims to demonstrate the potential of guided ultrasonic waves for tracking changes in the mechanical properties of cortical bone. It begins with a general introduction to guided ultrasonic waves and a literature review of the different techniques related to their use on bone. The article written during my master's degree is then presented. The objective of this article is to excite and detect particular guided-wave modes that are sensitive to the deterioration of the mechanical properties of cortical bone. This work is carried out by finite element modeling of the propagation of these waves in two cylindrical bone models. Both models consist of a peripheral layer of cortical bone filled with either trabecular bone or bone marrow. The two models provide two geometries, each suited to either circumferential or longitudinal propagation of the guided waves. The results, in which three different modes could be identified, are compared with experimental data obtained with bone phantoms and with theoretical data. The sensitivity of each mode to the different mechanical-property parameters is then studied, allowing conclusions to be drawn about the potential of each mode for predicting fracture risk or the presence of microcracks.
Modelisation de la Propagation des Ondes Sonores dans un Environnement Naturel Complexe
NASA Astrophysics Data System (ADS)
L'Esperance, Andre
This work is devoted to outdoor sound propagation in a complex natural environment, i.e. in the presence of real conditions of wind, temperature gradient, and atmospheric turbulence. More specifically, the work has two objectives. On the one hand, it aims to develop a heuristic sound propagation model (MHP) that takes into account all of the meteorological and acoustic phenomena influencing outdoor sound propagation. On the other hand, it aims to identify under which circumstances, and to what extent, meteorological conditions affect sound propagation. The work is divided into five parts. After a brief introduction identifying the motivation of the study (chapter 1), chapter 2 reviews previous work in the field of outdoor sound propagation. This chapter also presents the basics of geometrical acoustics, from which the acoustic part of the heuristic propagation model was developed, and describes how refraction and atmospheric turbulence can be treated within ray theory. Chapter 3 presents the heuristic propagation model (MHP) developed in this work. The first section of the chapter describes the acoustic propagation model, which assumes a linear sound-speed gradient and is based on a hybrid solution combining geometrical acoustics and residue theory. The second section deals more specifically with the modeling of the meteorological aspects and the determination of the sound-speed profiles and fluctuation indices associated with the meteorological conditions. The third section describes how the resulting sound-speed profiles are linearized for the calculations in the acoustic model, and the fourth section gives the general trends obtained with the model. Chapter 4 describes the measurement campaigns carried out at Rock-Spring (Pennsylvania, USA) during the summer of 1990 and at Bouin (Vendée, France) during the autumn of 1991. The Rock-Spring campaign focused more specifically on refraction effects, while the Bouin campaign focused on turbulence effects. Section 4.1 describes the equipment and the processing of the meteorological data in each case, and section 4.2 does the same for the acoustic results. Finally, chapter 5 compares the experimental results with those given by the MHP, for both the meteorological and the acoustic results. Comparisons with another model (the Fast Field Program) are also presented.
Getsios, Denis; Marton, Jenő P; Revankar, Nikhil; Ward, Alexandra J; Willke, Richard J; Rublee, Dale; Ishak, K Jack; Xenakis, James G
2013-09-01
Most existing models of smoking cessation treatments have considered a single quit attempt when modelling long-term outcomes. To develop a model to simulate smokers over their lifetimes accounting for multiple quit attempts and relapses which will allow for prediction of the long-term health and economic impact of smoking cessation strategies. A discrete event simulation (DES) that models individuals' life course of smoking behaviours, attempts to quit, and the cumulative impact on health and economic outcomes was developed. Each individual is assigned one of the available strategies used to support each quit attempt; the outcome of each attempt, time to relapses if abstinence is achieved, and time between quit attempts is tracked. Based on each individual's smoking or abstinence patterns, the risk of developing diseases associated with smoking (chronic obstructive pulmonary disease, lung cancer, myocardial infarction and stroke) is determined and the corresponding costs, changes to mortality, and quality of life assigned. Direct costs are assessed from the perspective of a comprehensive US healthcare payer ($US, 2012 values). Quit attempt strategies that can be evaluated in the current simulation include unassisted quit attempts, brief counselling, behavioural modification therapy, nicotine replacement therapy, bupropion, and varenicline, with the selection of strategies and time between quit attempts based on equations derived from survey data. Equations predicting the success of quit attempts as well as the short-term probability of relapse were derived from five varenicline clinical trials. Concordance between the five trials and predictions from the simulation on abstinence at 12 months was high, indicating that the equations predicting success and relapse in the first year following a quit attempt were reliable. Predictions allowing for only a single quit attempt versus unrestricted attempts demonstrate important differences, with the single quit attempt simulation predicting 19 % more smoking-related diseases and 10 % higher costs associated with smoking-related diseases. Differences are most prominent in predictions of the time that individuals abstain from smoking: 13.2 years on average over a lifetime allowing for multiple quit attempts, versus only 1.2 years with single quit attempts. Differences in abstinence time estimates become substantial only 5 years into the simulation. In the multiple quit attempt simulations, younger individuals survived longer, yet had lower lifetime smoking-related disease and total costs, while the opposite was true for those with high levels of nicotine dependence. By allowing for multiple quit attempts over the course of individuals' lives, the simulation can provide more reliable estimates on the health and economic impact of interventions designed to increase abstinence from smoking. Furthermore, the individual nature of the simulation allows for evaluation of outcomes in populations with different baseline profiles. DES provides a framework for comprehensive and appropriate predictions when applied to smoking cessation over smoker lifetimes.
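The core difference between single- and multiple-quit-attempt models can be illustrated with a small Monte Carlo loop; the success, relapse, and attempt-rate values below are hypothetical and are not the published equations.

```python
# Minimal sketch contrasting a single-quit-attempt model with one allowing
# repeated attempts and relapses: accumulate years of abstinence over a fixed
# horizon under hypothetical rates.
import random

random.seed(0)
HORIZON = 40          # years of follow-up
P_SUCCESS = 0.25      # probability a quit attempt reaches 12-month abstinence
ANNUAL_RELAPSE = 0.08 # yearly relapse probability after the first year
ATTEMPT_GAP = 2.0     # mean years between quit attempts while smoking

def abstinent_years(multiple_attempts=True):
    t, abstinent, total, attempts = 0.0, False, 0.0, 0
    while t < HORIZON:
        if abstinent:
            total += 1.0
            if random.random() < ANNUAL_RELAPSE:
                abstinent = False
            t += 1.0
        else:
            t += random.expovariate(1.0 / ATTEMPT_GAP)   # wait for next attempt
            if attempts >= 1 and not multiple_attempts:
                continue                                  # single-attempt model: no further attempts
            attempts += 1
            abstinent = random.random() < P_SUCCESS
    return min(total, HORIZON)

n = 10000
for mode in (True, False):
    mean = sum(abstinent_years(mode) for _ in range(n)) / n
    print("multiple attempts" if mode else "single attempt", round(mean, 1), "abstinent years")
```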
Modelisation par elements finis du muscle strie
NASA Astrophysics Data System (ADS)
Leonard, Mathieu
This research project produced a finite element model of human striated muscle with the aim of studying the mechanisms that lead to traumatic muscle injuries. The model constitutes a numerical platform able to discern the influence of the mechanical properties of the fasciae and of the muscle cell on the dynamic behaviour of the muscle during an eccentric contraction, in particular the Young's modulus and shear modulus of the connective tissue layer, the orientation of the collagen fibres of this membrane, and the Poisson's ratio of the muscle. In vitro experimental characterization of these parameters at high strain rates, from active human striated muscle, is essential for the study of traumatic muscle injuries. The numerical model developed is able to represent muscle contraction as a phase transition of the muscle cell, through a change in stiffness and volume, using the material constitutive laws predefined in the LS-DYNA software (v971, Livermore Software Technology Corporation, Livermore, CA, USA). The project thus introduces a physiological phenomenon that could explain common muscle injuries (cramps, soreness, strains, etc.), as well as diseases or disorders affecting connective tissue such as collagen diseases and muscular dystrophy. The predominance of muscle injuries during eccentric contractions is also discussed. The model developed in this project thus brings the concept of phase transition to the fore, opening the door to the development of new technologies for muscle activation in people with paraplegia, or of compact artificial muscles for prostheses or exoskeletons. Keywords: striated muscle, muscle injury, fascia, eccentric contraction, finite element model, phase transition
Lancelot, Renaud; Lesnoff, Matthieu
2016-01-01
Background Peste des petits ruminants (PPR) is an acute infectious viral disease affecting domestic small ruminants (sheep and goats) and some wild ruminant species in Africa, the Middle East and Asia. A global PPR control strategy based on mass vaccination—in regions where PPR is endemic—was recently designed and launched by international organizations. Sahelian Africa is one of the most challenging endemic regions for PPR control. Indeed, strong seasonal and annual variations in mating, mortality and offtake rates result in a complex population dynamics which might in turn alter the population post-vaccination immunity rate (PIR), and thus be important to consider for the implementation of vaccination campaigns. Methods In a context of preventive vaccination in epidemiological units without PPR virus transmission, we developed a predictive, dynamic model based on a seasonal matrix population model to simulate PIR dynamics. This model was mostly calibrated with demographic and epidemiological parameters estimated from a long-term follow-up survey of small ruminant herds. We used it to simulate the PIR dynamics following a single PPR vaccination campaign in a Sahelian sheep population, and to assess the effects of (i) changes in offtake rate related to the Tabaski (a Muslim feast following the lunar calendar), and (ii) the date of implementation of the vaccination campaigns. Results The persistence of PIR was not influenced by the Tabaski date. Decreasing the vaccination coverage from 100 to 80% had limited effects on PIR. However, lower vaccination coverage did not provide sufficient immunity rates (PIR < 70%). As a trade-off between model predictions and other considerations like animal physiological status, and suitability for livestock farmers, we would suggest to implement vaccination campaigns in September-October. This model is a first step towards better decision support for animal health authorities. It might be adapted to other species, livestock farming systems or diseases. PMID:27603710
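The dilution of post-vaccination immunity by demographic turnover can be illustrated with a stripped-down two-class monthly projection, a stand-in for the seasonal matrix population model with hypothetical rates.

```python
# Minimal sketch (hypothetical demographic rates, not the published model) of
# post-vaccination immunity rate (PIR) dynamics in a small-ruminant population:
# vaccinated-immune animals are removed by mortality/offtake while births and
# replacements enter unvaccinated after a single campaign at t = 0.
import numpy as np

months = 36
monthly_exit = 0.04      # combined mortality + offtake rate (assumption)
monthly_birth = 0.045    # recruitment rate of new, unvaccinated animals (assumption)
coverage = 0.80          # vaccination coverage of the campaign

immune, susceptible = coverage * 1000.0, (1 - coverage) * 1000.0
pir = []
for m in range(months):
    total = immune + susceptible
    pir.append(immune / total)
    births = monthly_birth * total
    immune *= (1 - monthly_exit)                        # no new immune animals after the campaign
    susceptible = susceptible * (1 - monthly_exit) + births

print([round(p, 2) for p in pir[::6]])   # PIR sampled every 6 months
```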
Modelisation des emissions de particules microniques et nanometriques en usinage
NASA Astrophysics Data System (ADS)
Khettabi, Riad
Shaping parts by machining emits particles, of microscopic and nanometric sizes, that can be hazardous to health. The aim of this work is to study the emission of these particles with a view to prevention and reduction at the source. The approach adopted is both experimental and theoretical, at the microscopic and macroscopic scales. The work begins with tests to determine the influence of the material, the tool, and the machining parameters on particle emission. A new parameter characterizing the emissions, named the Dust Unit, is then developed and a predictive model is proposed. The model is based on a new hybrid theory that integrates energy, tribology, and plastic-deformation approaches, and includes the tool geometry, the material properties, the cutting conditions, and chip segmentation. It was validated in turning on four materials: Al6061-T6, AISI 1018, AISI 4140, and grey cast iron.
Geostatistical mapping of leakance in a regional aquitard, Oak Ridges Moraine area, Ontario, Canada
NASA Astrophysics Data System (ADS)
Desbarats, A. J.; Hinton, M. J.; Logan, C. E.; Sharpe, D. R.
2001-01-01
The Newmarket Till forms a regionally extensive aquitard separating two major aquifer systems in the Greater Toronto area, Canada. The till is incised, and sometimes eroded entirely, by a network of sand- and gravel-filled channels forming productive aquifers and, locally, high-conductivity windows between aquifer systems. Leakage through the till may also be substantial in places. This study investigates the spatial variability of aquitard leakance in order to assess the relative importance of recharge processes to the lower aquifers. With a large database derived from water-well records and containing both hard and soft information, the Sequential Indicator Simulation method is used to generate maps of aquitard thickness and window probability. These can be used for targeting channel aquifers and for identifying potential areas of recharge to the lower aquifers. Conductivities are modeled from sparse data assuming that their correlation range is much smaller than the grid spacing. Block-scale leakances are obtained by upscaling nodal values based on simulated conductivity and thickness fields. Under the "aquifer-flow" assumption, upscaling is performed by arithmetic spatial averaging. Histograms and maps of upscaled leakances show that heterogeneities associated with aquitard windows have the largest effect on regional groundwater flow patterns.
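The block-scale upscaling step described above reduces to an arithmetic average of nodal leakances (vertical conductivity divided by aquitard thickness). A minimal sketch of that calculation is given below, assuming hypothetical simulated conductivity and thickness arrays on a regular grid and an illustrative block size; none of the values are the study's own.

```python
import numpy as np

def nodal_leakance(K_v, thickness, window_eps=1e-3):
    """Leakance at each node: vertical hydraulic conductivity / aquitard thickness.
    Where the aquitard is absent (a 'window'), thickness approaches zero and the
    leakance becomes very large; a small floor keeps the ratio finite."""
    b = np.maximum(thickness, window_eps)
    return K_v / b

def block_leakance(L_nodal, block=10):
    """Upscale nodal leakances to block scale by arithmetic spatial averaging,
    as done under the 'aquifer-flow' assumption described above."""
    ny, nx = L_nodal.shape
    ny_b, nx_b = ny // block, nx // block
    trimmed = L_nodal[:ny_b * block, :nx_b * block]
    return trimmed.reshape(ny_b, block, nx_b, block).mean(axis=(1, 3))

# Illustrative use with synthetic stand-ins for the simulated fields.
rng = np.random.default_rng(0)
K_v = 10 ** rng.normal(-8.0, 1.0, size=(200, 200))                      # m/s
thickness = np.clip(rng.normal(15.0, 5.0, size=(200, 200)), 0.0, None)  # m
L_block = block_leakance(nodal_leakance(K_v, thickness))
print(L_block.shape, float(L_block.mean()))
```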
Evaluation of Simulated RADARSAT-2 Polarimetry Products
2007-09-01
compared with the use of a single-channel radar for detecting ships, as well as the possible advantages of polarimetric target decomposition...calculation of the ROC can be applied to any designed probability of false alarm, such as PFA = 10^-9, provided there are enough ocean samples available.
Dark Energy Survey Year 1 results: curved-sky weak lensing mass map
NASA Astrophysics Data System (ADS)
Chang, C.; Pujol, A.; Mawdsley, B.; Bacon, D.; Elvin-Poole, J.; Melchior, P.; Kovács, A.; Jain, B.; Leistedt, B.; Giannantonio, T.; Alarcon, A.; Baxter, E.; Bechtol, K.; Becker, M. R.; Benoit-Lévy, A.; Bernstein, G. M.; Bonnett, C.; Busha, M. T.; Rosell, A. Carnero; Castander, F. J.; Cawthon, R.; da Costa, L. N.; Davis, C.; De Vicente, J.; DeRose, J.; Drlica-Wagner, A.; Fosalba, P.; Gatti, M.; Gaztanaga, E.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Hoyle, B.; Huff, E. M.; Jarvis, M.; Jeffrey, N.; Kacprzak, T.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Prat, J.; Rau, M. M.; Rollins, R. P.; Roodman, A.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sánchez, C.; Sevilla-Noarbe, I.; Sheldon, E.; Troxel, M. A.; Varga, T. N.; Vielzeuf, P.; Vikram, V.; Wechsler, R. H.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Estrada, J.; Neto, A. Fausti; Fernandez, E.; Flaugher, B.; Frieman, J.; García-Bellido, J.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kent, S.; Kirk, D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Martini, P.; Menanteau, F.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Petravick, D.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Tucker, D. L.; Walker, A. R.; Wester, W.; Zhang, Y.
2018-04-01
We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than the previous work, is constructed over a contiguous ≈1500 deg2, covering a comoving volume of ≈10 Gpc3. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogues, METACALIBRATION and IM3SHAPE, with sources at redshift 0.2 < z < 1.3, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal to noise in the E-mode map and the B-mode map is ˜1.5 (˜2) when smoothed with a Gaussian filter of σG = 30 (80) arcmin. The second and third moments of the convergence κ in the maps are in agreement with simulations. We also find no significant correlation of κ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation
Une nouvelle voie pour la conception des implants intervertébraux
NASA Astrophysics Data System (ADS)
Gradel, T.; Tabourot, L.; Arrieux, R.; Balland, P.
2002-12-01
The aim of our work is the design of a new generation of interbody implants that adapt perfectly to the geometry of the vertebral endplates by deforming. To this end, we used a new approach that consists of fully simulating the manufacturing process, in this case deep drawing. By retaining the history of the loads applied to the material during forming, this simulation makes it possible to validate its mechanical strength at the end of the cycle very precisely. During this study, we carried out two so-called "cooperative" analyses in parallel: one based on a phenomenological model of the HILL type and the other on a multi-scale model taking more physical phenomena into account, in order to gain a good knowledge of the material's behaviour during deformation. We chose T40 (pure titanium) as the material for its good strength, its biocompatibility and its radiological properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guchhait, Biswajit; Das, Suman; Daschakraborty, Snehasis
Here we investigate the solute-medium interaction and solute-centered dynamics in (RCONH2 + LiX) deep eutectics (DEs) via time-resolved fluorescence measurements and all-atom molecular dynamics simulations at various temperatures. The alkylamides (RCONH2) considered are acetamide (CH3CONH2), propionamide (CH3CH2CONH2), and butyramide (CH3CH2CH2CONH2); the electrolytes (LiX) are lithium perchlorate (LiClO4), lithium bromide (LiBr), and lithium nitrate (LiNO3). Differential scanning calorimetric measurements reveal that the glass transition temperatures (Tg) of these DEs are ∼195 K and show a very weak dependence on alkyl chain length and electrolyte identity. Time-resolved and steady-state fluorescence measurements with these DEs have been carried out at six to nine different temperatures that are ∼100-150 K above their individual Tg values. Four different solute probes providing a good spread of fluorescence lifetimes have been employed in steady-state measurements, revealing strong excitation wavelength dependence of probe fluorescence emission peak frequencies. The extent of this dependence, which shows sensitivity to anion identity, has been found to increase with increasing amide chain length and decreasing probe lifetime. Time-resolved measurements reveal a strong fractional power dependence of the average rates of solute solvation and rotation, the fractional power being relatively smaller (stronger viscosity decoupling) for DEs containing the longer amide and larger (weaker decoupling) for DEs containing the perchlorate anion. Representative all-atom molecular dynamics simulations of (CH3CONH2 + LiX) DEs at different temperatures reveal strongly stretched exponential relaxation of the wavevector-dependent acetamide self dynamic structure factor, with time constants dependent on both ion identity and temperature, providing justification for explaining the fluorescence results in terms of temporal heterogeneity and amide clustering in these multi-component melts.
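The "strongly stretched exponential relaxation" of the self dynamic structure factor mentioned above is conventionally modelled with a Kohlrausch-Williams-Watts (KWW) form, F_s(q,t) ≈ exp[-(t/τ)^β] with β < 1. A minimal sketch of fitting that form to relaxation data is shown below; the data arrays and parameter values are hypothetical stand-ins, not the study's results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts (stretched exponential) relaxation."""
    return np.exp(-(t / tau) ** beta)

# Hypothetical relaxation data standing in for F_s(q, t) from an MD trajectory.
rng = np.random.default_rng(1)
t = np.logspace(-1, 3, 60)                                  # ps
f_obs = kww(t, 50.0, 0.6) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(kww, t, f_obs, p0=(10.0, 0.8),
                    bounds=([1e-3, 0.1], [1e4, 1.0]))
tau_fit, beta_fit = popt
# Mean relaxation time of a KWW decay: <tau> = (tau / beta) * Gamma(1 / beta)
tau_mean = tau_fit / beta_fit * gamma(1.0 / beta_fit)
print(f"tau = {tau_fit:.1f} ps, beta = {beta_fit:.2f}, <tau> = {tau_mean:.1f} ps")
```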
NASA Astrophysics Data System (ADS)
Freuchet, Florian
In the marine environment, recruitment abundance depends on processes affecting both the adults and the larval stock. Under the influence of reliable cues about habitat quality, the mother can increase (anticipatory maternal effects, AME) or reduce (selfish maternal effects, SME) the physiological condition of the offspring. In tropical zones, which are generally more oligotrophic, nutrient supply and temperature are two important factors that can limit recruitment. This study tested the effects of nutritional supply and thermal stress on larval production and on the maternal strategy adopted. We targeted the barnacle Chthamalus bisinuatus (Pilsbry) as a biological model because it dominates the upper intertidal zones along the rocky shores of southeastern Brazil (a tropical region). The initial hypotheses were that nutritional supply allows adults to produce high-quality larvae and that thermal stress triggers early spawning, producing low-quality larvae. To test these hypotheses, populations of C. bisinuatus were reared under four experimental treatments combining levels of nutritional supply (high and low) and thermal stress (stressed and unstressed). Measurements of the survival and physiological condition of adults and larvae made it possible to identify parental responses that may be advantageous in a harsh tropical environment. Fatty acid profile analysis was used to assess the physiological quality of adults and larvae. The feeding treatment (high or low nutrient supply) produced no difference in neutral lipid accumulation, nauplius size, reproductive effort, or nauplius survival time under starvation. It appears that a low nutrient supply is compensated by mothers adopting an AME strategy, whereby mothers anticipate the environment in order to produce larvae with an appropriate phenotype. With the addition of thermal stress, larval production decreased by 47% and the larvae were 18 μm smaller. Mothers then appear to adopt an SME strategy characterized by reduced larval performance. Following these results, we hypothesize that in a subtropical zone, such as the coast of the state of São Paulo, the temperature rise experienced by barnacles is, a priori, not damaging to their organism if it is combined with a sufficient nutrient supply.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verification of internal validity, comparison with other models (external validity) and, ideally, validation of its predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when disease risk models are used to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
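A cost-effectiveness acceptability curve (CEAC), one of the concepts named above, plots the probability that an intervention is cost-effective against the willingness-to-pay threshold, using samples from a probabilistic sensitivity analysis. The sketch below illustrates the calculation with purely hypothetical cost and effect draws; none of the numbers relate to the cited publications.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # probabilistic sensitivity analysis draws (hypothetical)

# Incremental cost (EUR) and incremental effect (QALYs) of a new treatment
# versus a comparator; both distributions are purely illustrative.
d_cost = rng.normal(2000.0, 800.0, n)
d_effect = rng.normal(0.05, 0.03, n)

thresholds = np.linspace(0.0, 100_000.0, 11)   # willingness to pay per QALY
# Net monetary benefit NMB = lambda * d_effect - d_cost; the CEAC is the
# fraction of draws with NMB > 0 at each threshold lambda.
ceac = [float((lam * d_effect - d_cost > 0).mean()) for lam in thresholds]

for lam, p in zip(thresholds, ceac):
    print(f"WTP {lam:>9,.0f} EUR/QALY -> P(cost-effective) = {p:.2f}")
```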
Murphy, Elizabeth A.; Soong, David T.; Sharpe, Jennifer B.
2012-01-01
Digital flood-inundation maps for a 9-mile reach of the Des Plaines River from Riverwoods to Mettawa, Illinois, were created by the U.S. Geological Survey (USGS) in cooperation with the Lake County Stormwater Management Commission and the Villages of Lincolnshire and Riverwoods. The inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent of flooding corresponding to selected water levels (gage heights) at the USGS streamgage at Des Plaines River at Lincolnshire, Illinois (station no. 05528100). Current conditions at the USGS streamgage may be obtained on the Internet at http://waterdata.usgs.gov/usa/nwis/uv?05528100. In addition, this streamgage is incorporated into the Advanced Hydrologic Prediction Service (AHPS) flood warning system (http://water.weather.gov/ahps/) by the National Weather Service (NWS). The NWS forecasts flood hydrographs at many places that are often co-located at USGS streamgages. The NWS forecasted peak-stage information, also shown on the Des Plaines River at Lincolnshire inundation Web site, may be used in conjunction with the maps developed in this study to show predicted areas of flood inundation. In this study, flood profiles were computed for the stream reach by means of a one-dimensional step-backwater model. The hydraulic model was then used to determine seven water-surface profiles for flood stages at roughly 1-ft intervals referenced to the streamgage datum and ranging from the 50- to 0.2-percent annual exceedance probability flows. The simulated water-surface profiles were then combined with a Geographic Information System (GIS) Digital Elevation Model (DEM) (derived from Light Detection And Ranging (LiDAR) data) in order to delineate the area flooded at each water level. These maps, along with information on the Internet regarding current gage height from USGS streamgages and forecasted stream stages from the NWS, provide emergency management personnel and residents with information that is critical for flood response activities such as evacuations and road closures, as well as for post-flood recovery efforts.
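The final mapping step described above (combining simulated water-surface profiles with the LiDAR-derived DEM) amounts to flagging DEM cells that lie below the interpolated water-surface elevation for a given gage height. A much-reduced flat-grid sketch follows, with hypothetical arrays standing in for the DEM and the step-backwater profile; it is not the USGS workflow itself.

```python
import numpy as np

def inundation_mask(dem, ws_profile):
    """Flag DEM cells lying below the water-surface elevation for one profile.

    dem        : 2D array of ground elevations (ft), shape (ny, nx)
    ws_profile : 1D array of simulated water-surface elevations (ft), one value
                 per DEM column (x taken as the downstream direction); a real
                 workflow would first map the 1D profile onto the channel
                 centerline and spread it across the floodplain.
    """
    ws_grid = np.broadcast_to(ws_profile, dem.shape)
    return ws_grid > dem   # True where the cell is inundated

# Hypothetical reach: valley cross-slope plus a gentle downstream gradient.
ny, nx = 120, 300
downstream = 0.02 * np.arange(nx)[None, :]
cross_valley = 10.0 * np.abs(np.linspace(-1.0, 1.0, ny))[:, None]
dem = 640.0 + downstream + cross_valley                  # ft
ws = 642.5 + 0.02 * np.arange(nx)                        # one simulated profile (ft)

mask = inundation_mask(dem, ws)
print(f"inundated fraction of the grid: {mask.mean():.2%}")
```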
Planetary plasma data analysis and 3D visualisation tools of the CDPP in the IMPEx infrastructure
NASA Astrophysics Data System (ADS)
Gangloff, Michel; Génot, Vincent; Khodachenko, Maxim; Modolo, Ronan; Kallio, Esa; Alexeev, Igor; Al-Ubaidi, Tarek; Scherf, Manuel; André, Nicolas; Bourrel, Nataliya; Budnik, Elena; Bouchemit, Myriam; Dufourg, Nicolas; Beigbeder, Laurent
2015-04-01
The CDPP (Centre de Données de la Physique des Plasmas, http://cdpp.eu/), the French data center for plasma physics, has been engaged for more than a decade in the archiving and dissemination of plasma data products from space missions and ground observatories. Besides these activities, the CDPP developed services like AMDA (http://amda.cdpp.eu/), which enables in-depth analysis of a large amount of data through dedicated functionalities such as visualization, conditional search, and cataloguing, and 3DView (http://3dview.cdpp.eu/), which provides immersive visualisations in planetary environments and is being further developed to include simulation and observational data. Both tools provide an interface to the IMPEx infrastructure (http://impexfp7.oeaw.ac.at), which facilitates joint access to outputs of simulations (MHD or hybrid models) in planetary sciences from providers like LATMOS and FMI, as well as to planetary plasma observational data provided by the CDPP. Several magnetospheric models are implemented in 3DView (e.g. Tsyganenko for the Earth, and Cain for Mars). Magnetospheric models provided by SINP for the Earth, Jupiter, Saturn and Mercury, as well as Hess models for Jupiter, can also be used in 3DView through the IMPEx infrastructure. A use case demonstrating the new capabilities offered by these tools and their interaction, including magnetospheric models, will be presented together with the IMPEx simulation metadata model used for the interface to simulation databases and model providers.
Furushima, Daisuke; Yamada, Hiroshi; Kido, Michiko; Ohno, Yuko
2018-01-01
Improvement in patient waiting time in dispensing pharmacies is important for both patients and pharmacists. The One-Dose Package (ODP) of medicines was implemented in Japan to support medication adherence among elderly patients; however, it has also contributed to increased patient waiting times. Given the projected increase in ODP patients in the near future owing to rapid population aging, development of improved strategies is a key imperative. We conducted a cross-sectional survey at a single dispensing pharmacy to clarify the impact of ODP on patient waiting time. Further, we propose an improvement strategy developed with the use of a discrete event simulation (DES) model. A total of 673 patients received pharmacy services during the study period. A two-fold difference in mean waiting time was observed between ODP and non-ODP patients (22.6 and 11.2 min, respectively). The DES model was constructed with input parameters estimated from the observed data. Introduction of a fully automated ODP (A-ODP) system was projected to halve the waiting time for ODP patients (from 23.1 to 11.5 min). Furthermore, assuming that 40% of non-ODP patients would transfer to ODP, the waiting time was predicted to increase to 56.8 min; however, introduction of the A-ODP system decreased the waiting time to 20.4 min. Our findings indicate that ODP is one of the elements that increase waiting time and that waiting times might become longer in the future. Introduction of the A-ODP system may be an effective strategy to improve waiting time.
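The study's DES model is not published as code here; the sketch below is a minimal, hypothetical SimPy queueing model of a dispensing pharmacy with separate ODP and non-ODP service times, intended only to illustrate the kind of model the abstract describes. All parameter values (arrival rate, ODP share, service times, staffing) are illustrative, not the paper's fitted inputs.

```python
import random
import simpy

SIM_MINUTES = 8 * 60                 # one working day
ARRIVAL_MEAN = 2.0                   # mean inter-arrival time (min), illustrative
ODP_SHARE = 0.3                      # fraction of patients needing one-dose packages
SERVICE_MEAN = {"odp": 18.0, "non_odp": 8.0}   # illustrative service times (min)

waits = {"odp": [], "non_odp": []}

def patient(env, kind, pharmacists):
    arrive = env.now
    with pharmacists.request() as req:
        yield req                                    # queue for a free pharmacist
        waits[kind].append(env.now - arrive)         # time spent waiting in queue
        yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN[kind]))

def arrivals(env, pharmacists):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        kind = "odp" if random.random() < ODP_SHARE else "non_odp"
        env.process(patient(env, kind, pharmacists))

random.seed(7)
env = simpy.Environment()
pharmacists = simpy.Resource(env, capacity=3)        # illustrative staffing level
env.process(arrivals(env, pharmacists))
env.run(until=SIM_MINUTES)

for kind, w in waits.items():
    if w:
        print(f"{kind}: n = {len(w)}, mean queue wait = {sum(w) / len(w):.1f} min")
```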
Discrete Event Simulation of a Suppression of Enemy Air Defenses (SEAD) Mission
2008-03-01
component-based DES developed in Java® using the Simkit simulation package. Analysis of ship self air defense system selection (Turan, 1999) is another...Institute of Technology, Wright-Patterson AFB OH, March 2003 (ADA445279). Turan, Bulent. A Comparative Analysis of Ship Self Air Defense (SSAD) Systems
In a classical model of latent hormonal carcinogenesis, exposing female mice on neonatal days 1-5 to the synthetic estrogen diethylstilbestrol (DES; 1 mg/kg/day) results in high incidence of uterine carcinoma. However, the biological mechanisms driving DES-induced carcinogenesis ...
Standardisation for C2-Simulation Interoperation
2015-11-01
the continuation of MSG-048 made it possible, thanks in particular to the contribution of the operational community, to consolidate the requirement and to deepen a...
Embrace the Dark Side: Advancing the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Suchyta, Eric
The Dark Energy Survey (DES) is an ongoing cosmological survey intended to study the properties of the accelerated expansion of the Universe. In this dissertation, I present work of mine that has advanced the progress of DES. First is an introduction, which explores the physics of the cosmos, as well as how DES intends to probe it. Attention is given to developing the theoretical framework cosmologists use to describe the Universe, and to explaining observational evidence which has furnished our current conception of the cosmos. Emphasis is placed on the dark sector - dark matter and dark energy - the content of the Universe not explained by the Standard Model of particle physics. As its name suggests, the Dark Energy Survey has been specially designed to measure the properties of dark energy. DES will use a combination of galaxy cluster, weak gravitational lensing, angular clustering, and supernovae measurements to derive its state-of-the-art constraints, each of which is discussed in the text. The work described in this dissertation includes science measurements directly related to the first three of these probes. The dissertation presents my contributions to the readout and control system of the Dark Energy Camera (DECam); the name of this software is SISPI. SISPI uses client-server and publish-subscribe communication patterns to coordinate and command actions among the many hardware components of DECam - the survey instrument for DES, a 570 megapixel CCD camera, mounted at prime focus of the Blanco 4-m Telescope. The SISPI work I discuss includes coding applications for DECam's filter changer mechanism and hexapod, as well as developing the Scripts Editor, a GUI application for DECam users to edit and export observing sequences that SISPI can load and execute. Next, the dissertation describes the processing of early DES data, to which I contributed. This furnished the data products used in the first-completed DES science analysis, and contributed to improving the collaboration-wide treatment of the data. The science measurements themselves are also detailed. We verified DES's capabilities for performing weak lensing analyses by measuring the masses of four galaxy clusters, finding consistency with previous measurements, and utilized DECam's wide field-of-view for a photometric study of filament-like structures in the fields. Finally, my recent work with Balrog is presented. Balrog is a simulation toolkit for embedding fake objects into real survey images in order to correct for systematic biases. We have used Balrog to extend DES galaxy clustering measurements down to fainter limits than previously possible, finding results consistent with higher-resolution space-based data. The methodology used in this analysis generalizes beyond galaxy clustering alone, and promises to be useful in future imaging survey measurements.
NASA Astrophysics Data System (ADS)
Senthilkumar, M.; Elango, L.
A three-dimensional mathematical model to simulate regional groundwater flow was used in the lower Palar River basin, in southern India. The study area is characterised by heavy extraction of groundwater for agricultural, industrial and drinking water supplies. There are three major pumping stations on the riverbed apart from a number of wells distributed over the area. The model simulates groundwater flow over an area of about 392 km2 with 70 rows, 40 columns, and two layers. The model simulated a transient-state condition for the period 1991-2001 and was calibrated for steady- and transient-state conditions. There was a reasonable match between the computed and observed heads. The transient model was run until the year 2010 to forecast groundwater flow under various scenarios of overpumping and reduced recharge. Based on the modelling results, it is shown that the aquifer system is stable at the present rate of pumping, except for a few locations along the coast where the groundwater head drops from 0.4 to 1.81 m below sea level during the dry seasons. Further, there was a decline in the groundwater head of 0.9 to 2.4 m below sea level in the eastern part of the area when the aquifer system was subjected to an additional groundwater withdrawal of 2 million gallons per day (MGD) at a major pumping station.
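The regional model described above is a standard finite-difference groundwater flow simulation (70 rows by 40 columns, two layers). As a much-reduced illustration of the underlying numerics only, the sketch below advances hydraulic head on a single-layer confined grid with an explicit finite-difference step, including one pumping well; grid spacing, aquifer properties, and stresses are hypothetical, not the Palar basin inputs.

```python
import numpy as np

# Single-layer confined aquifer, explicit finite-difference head update:
#   S * dh/dt = T * (d2h/dx2 + d2h/dy2) + W,   W < 0 for pumping
ny, nx = 70, 40                  # rows, columns (echoing the model dimensions)
dx = dy = 500.0                  # m, illustrative cell size
T = 500.0                        # m2/day, transmissivity (illustrative)
S = 1e-3                         # storativity (illustrative)
dt = 0.1                         # day; below the explicit stability limit S*dx^2/(4T)
Q_well = -2000.0                 # m3/day pumped from one cell (illustrative)

h = np.full((ny, nx), 10.0)      # initial head, m above sea level
well = (35, 20)

for step in range(1000):         # 100 days
    lap = np.zeros_like(h)
    lap[1:-1, 1:-1] = ((h[1:-1, 2:] - 2.0 * h[1:-1, 1:-1] + h[1:-1, :-2]) / dx**2 +
                       (h[2:, 1:-1] - 2.0 * h[1:-1, 1:-1] + h[:-2, 1:-1]) / dy**2)
    W = np.zeros_like(h)
    W[well] = Q_well / (dx * dy)                 # volumetric rate per unit area
    h[1:-1, 1:-1] += dt / S * (T * lap + W)[1:-1, 1:-1]
    # Grid edges are never updated, so they act as fixed-head boundaries.

print(f"head at the pumped cell after 100 days: {h[well]:.2f} m")
```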
Prise en compte des ``courants de London'' dans la modélisation des supraconducteurs
NASA Astrophysics Data System (ADS)
Bossavit, Alain
1997-10-01
A model is given, in variational form, in which bulk "Bean currents", governed by Bean's law, and surface "London currents" coexist. This macroscopic model generalizes Bean's model by appending to the critical density j_c a second parameter, with the dimension of a length, similar to London's depth λ. The one-dimensional version of the model is investigated in order to link this parameter with the standard observable H-M characteristics.
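For reference, the Bean critical-state law that governs the bulk currents in the model above, and the London relation that produces surface currents confined within a penetration depth λ, are commonly written as follows. This is a hedged summary of the standard textbook forms, not the paper's exact variational formulation.

```latex
% Bean critical-state law for the bulk ("Bean") current density:
\[
  \lvert \mathbf{J}(\mathbf{r}) \rvert \le j_c ,
  \qquad \lvert \mathbf{J} \rvert = j_c \ \text{wherever flux has penetrated,}
\]
% and the second London equation for the surface (Meissner) currents,
% with penetration depth \lambda playing the role of the model's length parameter:
\[
  \nabla \times \mathbf{J}_{L} = -\frac{1}{\mu_0 \lambda^{2}}\,\mathbf{B} .
\]
```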
Abriata, Luciano A; Albanesi, Daniela; Dal Peraro, Matteo; de Mendoza, Diego
2017-06-20
Histidine kinases (HK) are the sensory proteins of two-component systems, responsible for a large fraction of bacterial responses to stimuli and environmental changes. Prototypical HKs are membrane-bound proteins that phosphorylate cognate response regulator proteins in the cytoplasm upon signal detection in the membrane or periplasm. HKs stand as potential drug targets but also constitute fascinating systems for studying proteins at work, specifically regarding the chemistry and mechanics of signal detection, transduction through the membrane, and regulation of catalytic outputs. In this Account, we focus on Bacillus subtilis DesK, a membrane-bound HK part of a two-component system that maintains appropriate membrane fluidity at low growth temperatures. Unlike most HKs, DesK has no extracytoplasmic signal-sensing domains; instead, sensing is carried out by 10 transmembrane helices (coming from two protomers) arranged in an unknown structure. The fifth transmembrane helix from each protomer connects, without any of the intermediate domains found in other HKs, into the dimerization and histidine phosphotransfer (DHp) domain located in the cytoplasm, which is followed by the ATP-binding domains (ABD). Throughout the years, genetic, biochemical, structural, and computational studies on wild-type, mutant, and truncated versions of DesK allowed us to dissect several aspects of DesK's functioning, pushing forward a more general understanding of its own structure/function relationships as well as those of other HKs. We have shown that the sensing mechanism is rooted in temperature-dependent membrane properties, most likely a combination of thickness, fluidity, and water permeability, and we have proposed possible mechanisms by which DesK senses these properties and transduces the signals. X-ray structures and computational models have revealed structural features of TM and cytoplasmic regions in DesK's kinase- and phosphatase-competent states. Biochemical and genetic experiments and molecular simulations further showed that reversible formation of a two-helix coiled coil in the fifth TM segment and the N-terminus of the cytoplasmic domain is essential for the sensing and signal transduction mechanisms. Together with other structural and functional works, the emerging picture suggests that diverse HKs possess distinct sensing and transduction mechanisms but share as rather general features (i) a symmetric phosphatase state and an asymmetric kinase state and (ii) similar functional outputs on the conserved DHp and ABD domains, achieved through different mechanisms that depend on the nature of the initial signal. We here advance (iii) an important role for TM prolines in transducing the initial signals to the cytoplasmic coiled coils, based on simulations of DesK's TM helices and our previous work on a related HK, PhoQ. Lastly, evidence for DesK, PhoQ, BvgS, and DctB HKs shows that (iv) overall catalytic output is tuned by a delicate balance between hydration potentials, coiled coil stability, and exposure of hydrophobic surface patches at their cytoplasmic coiled coils and at the N-terminal and C-terminal sides of their TM helices. This balance is so delicate that small perturbations, either physiological signals or induced by mutations, lead to large remodeling of the underlying conformational landscape achieving clear-cut changes in catalytic output, mirroring the required response speed of these systems for proper biological function.
NASA Astrophysics Data System (ADS)
Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.
2012-01-01
This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to "advanced" measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones like the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results are accurately validated thanks to the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation allowed many upgrades in understanding and simulation methods to be obtained. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.
NASA Astrophysics Data System (ADS)
Tremblay, Sarah-Eve
This thesis presents the development of a test rig simulating rain erosion for the evaluation of different icephobic coatings in the aerospace field. Although several coatings are effective at reducing ice adhesion and/or accumulation, they do not necessarily meet the standards for resistance to the erosion caused by raindrops striking them at high speed. There is only one facility in North America offering a test service that evaluates rain erosion resistance according to aerospace standards. Being the only institution able to certify paints used on aircraft with respect to rain erosion, this service is difficult to access and expensive. The Anti-icing Materials International Laboratory (LIMA) has developed a faster and less expensive test, thereby facilitating the development of icephobic coatings that must withstand rain erosion. This study presents the development of the rain erosion rig built at LIMA. In particular, tests on four coatings of known erosion resistance, and on three industrial coatings, were performed in order to adjust the various parameters of the rig, such as water pressure and temperature, and to assess its robustness. Sensitivity and reproducibility tests were then carried out to validate the rig and the experimental protocol. The water-jet rig that was developed consists mainly of a high-pressure pump that projects a continuous water jet through the orifices of a rotating disk. This generates a simulated raindrop that is projected onto a static coating sample. The test is based on the ASTM standard for Liquid Impingement Erosion Testing (G73-82). The erosion resistance of the material is determined from the number of impacts sustained by the sample before visible damage appears: to establish the erosion level, four out of five impact sites in the same row must be eroded at the same number of impacts. The analysis of the four coatings of known erosion resistance was completed by microscopic examination and a photograph of each impact site. The choice of four coatings of known erosion resistance, from the most resistant to the least resistant, made it possible to verify that the rig was sensitive enough to evaluate coatings commonly used in the aerospace field. In addition, the evaluation of the three industrial coatings confirmed the results obtained previously. Finally, to evaluate the sensitivity and reproducibility of the results, two to six repetitions were performed for each coating, giving erosion thresholds ranging from 100 to 100,000 impacts. The standard deviations range from ±0% to ±47%, with an average of ±17%. The failure criterion was determined from the number of impacts before the appearance of visible damage as well as from the repeatability of the results.
[High fidelity simulation : a new tool for learning and research in pediatrics].
Bragard, I; Farhat, N; Seghaye, M-C; Schumacher, K
2016-10-01
Caring for a sick child is a high-risk activity that requires technical and non-technical skills, owing to several factors such as the rarity of certain events and the stress of caring for a child. Under these conditions, medical simulation provides a learning environment without risk, control of variables, reproducibility of situations, and confrontation with rare events. In this article, we describe the steps of a simulation session and outline the current knowledge on the use of simulation in paediatrics. A simulation session includes seven phases following the model of Peter Dieckmann, particularly the scenario and the debriefing, which form the heart of the learning experience. Several studies have shown the advantages of simulation for paediatric training in terms of changes in attitudes, skills and knowledge. Some studies have demonstrated a beneficial transfer to practice. In conclusion, simulation offers great potential for training and research in paediatrics. The establishment of a collaborative research program by the whole simulation community would help ensure that this type of training improves the quality of care.
NASA Astrophysics Data System (ADS)
Dolez, Patricia
The research carried out in this doctoral project led to the development of an ac loss measurement method intended for the study of high-critical-temperature superconductors. In choosing the principles of this method, we drew on earlier work on conventional superconductors in order to propose an alternative to the electrical technique, which at the start of this thesis suffered from problems related to the variation of the measurement result with the position of the voltage contacts on the sample surface, and in order to measure ac losses under conditions simulating the reality of future industrial applications of superconducting tapes: in particular, the method uses the calorimetric technique combined with a simultaneous, in situ calibration. The validity of the method was verified theoretically and experimentally. On the one hand, measurements were made on Bi-2223 samples sheathed in silver or silver-gold alloy and compared with the theoretical predictions of Norris, indicating the predominantly hysteretic nature of the ac losses in our samples; on the other hand, an electrical measurement was performed in situ and its results agree very well with those given by our calorimetric method. In addition, we compared the current and frequency dependence of the ac losses of a sample before and after it was damaged. These measurements seem to indicate a relationship between the exponent of the power law modelling the dependence of the losses on the current and the longitudinal inhomogeneities of the critical current induced by the damage. Moreover, the frequency dependence shows that, at the large transverse fractures created by the damage in the superconducting core, the current is locally shared roughly equally between the few grains of superconducting material that remain attached at the core-sheath interface and the silver-alloy coating. The advantage of a calorimetric method over the electrical technique, which is faster, more sensitive and now reliable, lies in the possibility of performing ac loss measurements in complex environments reproducing the situation found, for example, in a power transmission cable or a transformer. In particular, the superposition of a dc current on the usual ac current allowed us to observe experimentally, for the first time to our knowledge, a particular behaviour of the ac losses as a function of the dc current value described theoretically by LeBlanc. From this we were able to deduce the presence of a Meissner screening current of 16 A, which allows us to determine the conditions under which a reduction in the ac loss level could be obtained by applying a dc current, a phenomenon known as the "Clem valley".
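The Norris predictions referred to above give the hysteretic transport-current loss per cycle and per unit length of conductor as a function of the reduced current amplitude F = I_a / I_c. A commonly quoted form (for a thin strip; an analogous expression exists for an elliptical cross-section) is reproduced below as a hedged reference, not as the thesis's own derivation.

```latex
% Norris hysteretic ac loss per cycle and per unit length of a thin
% superconducting strip carrying a sinusoidal transport current of amplitude I_a:
\[
  Q_{\mathrm{strip}} \;=\; \frac{\mu_0 I_c^{2}}{\pi}
  \left[ (1-F)\ln(1-F) + (1+F)\ln(1+F) - F^{2} \right],
  \qquad F = \frac{I_a}{I_c} \le 1 .
\]
% The dissipated power per unit length is P = f Q at frequency f, i.e. linear
% in frequency, which is the signature of purely hysteretic losses.
```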
NASA Astrophysics Data System (ADS)
Samuroff, S.; Bridle, S. L.; Zuntz, J.; Troxel, M. A.; Gruen, D.; Rollins, R. P.; Bernstein, G. M.; Eifler, T. F.; Huff, E. M.; Kacprzak, T.; Krause, E.; MacCrann, N.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; Desai, S.; Doel, P.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Kirk, D.; Kuehn, K.; Kuhlmann, S.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; Ogando, R. L. C.; Plazas, A. A.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Tucker, D. L.; DES Collaboration
2018-04-01
We use a suite of simulated images based on Year 1 of the Dark Energy Survey to explore the impact of galaxy neighbours on shape measurement and shear cosmology. The HOOPOE image simulations include realistic blending, galaxy positions, and spatial variations in depth and point spread function properties. Using the IM3SHAPE maximum-likelihood shape measurement code, we identify four mechanisms by which neighbours can have a non-negligible influence on shear estimation. These effects, if ignored, would contribute a net multiplicative bias of m ˜ 0.03-0.09 in the Year One of the Dark Energy Survey (DES Y1) IM3SHAPE catalogue, though the precise impact will be dependent on both the measurement code and the selection cuts applied. This can be reduced to percentage level or less by removing objects with close neighbours, at a cost to the effective number density of galaxies neff of 30 per cent. We use the cosmological inference pipeline of DES Y1 to explore the cosmological implications of neighbour bias and show that omitting blending from the calibration simulation for DES Y1 would bias the inferred clustering amplitude S8 ≡ σ8(Ωm/0.3)0.5 by 2σ towards low values. Finally, we use the HOOPOE simulations to test the effect of neighbour-induced spatial correlations in the multiplicative bias. We find the impact on the recovered S8 of ignoring such correlations to be subdominant to statistical error at the current level of precision.
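As context for the multiplicative bias m and the clustering amplitude S8 quoted above, the standard linear shear calibration convention and the S8 definition used in the abstract can be written as follows; this is a hedged restatement of common conventions, not equations taken from the paper.

```latex
% Linear shear calibration model: measured shear versus true shear with a
% multiplicative bias m and an additive bias c,
\[
  \gamma^{\mathrm{obs}} = (1+m)\,\gamma^{\mathrm{true}} + c ,
\]
% and the lensing clustering amplitude used in the abstract,
\[
  S_8 \equiv \sigma_8 \left( \Omega_{\mathrm{m}} / 0.3 \right)^{0.5} .
\]
```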
NASA Astrophysics Data System (ADS)
Durand, S.; Tellier, C. R.
1996-02-01
This paper constitutes the first part of a work devoted to applications of piezoresistance effects in germanium and silicon semiconductors. In this part, emphasis is placed on a formal explanation of non-linear effects. We propose a brief phenomenological description based on the multi-valley model of semiconductors before adopting a macroscopic tensorial model from which general analytical expressions for primed non-linear piezoresistance coefficients are derived. Graphical representations of linear and non-linear piezoresistance coefficients allow us to characterize the influence of the two angles of cut and of the directions of alignment. The second part will primarily deal with specific applications for piezoresistive sensors.
An approach to developing an integrated pyroprocessing simulator
NASA Astrophysics Data System (ADS)
Lee, Hyo Jik; Ko, Won Il; Choi, Sung Yeol; Kim, Sung Ki; Kim, In Tae; Lee, Han Soo
2014-02-01
Pyroprocessing has been studied for a decade as one of the promising fuel recycling options in Korea. We have built a pyroprocessing integrated inactive demonstration facility (PRIDE) to assess the feasibility of integrated pyroprocessing technology and the scale-up issues of the processing equipment. Even though such a facility cannot replace a real integrated facility using spent nuclear fuel (SF), many insights can be obtained in terms of the world's largest integrated pyroprocessing operation. In order to complement or overcome such limited test-based research, a pyroprocessing modelling and simulation study began in 2011. The Korea Atomic Energy Research Institute (KAERI) suggested a modelling architecture for the development of a multi-purpose pyroprocessing simulator consisting of three tiers of models: unit process, operation, and plant-level models. The unit process model can be addressed using governing equations or empirical equations as a continuous system (CS). In contrast, the operation model describes the operational behaviors as a discrete event system (DES). The plant-level model integrates the unit process and operation models with various analysis modules. An interface with different systems, the incorporation of different codes, a process-centered database design, and a dynamic material flow are discussed as necessary components for building a framework for the plant-level model. As a sample model containing methods that address the above engineering issues was thoroughly reviewed, the architecture for building the plant-level model was verified. By analyzing a combined process and operation model, we showed that the suggested approach is effective for comprehensively understanding an integrated dynamic material flow. This paper addresses the current status of the pyroprocessing modelling and simulation activity at KAERI and also predicts its path forward.
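The three-tier architecture above couples continuous unit-process models with a discrete-event operation model. The sketch below is a hypothetical miniature of that idea in SimPy: a batch step whose duration comes from integrating a simple continuous rate law, driven by a discrete-event operation loop that moves material lots through one piece of equipment. Process names, rate constants, and batch sizes are invented for illustration and are not KAERI's models.

```python
import simpy

def unit_process_duration(mass_kg, rate_constant=0.05, dt=0.1, cutoff=0.05):
    """Continuous-system piece: integrate dm/dt = -k*m until only `cutoff` of the
    charge remains, and return the elapsed time (hours)."""
    m, t = mass_kg, 0.0
    while m > cutoff * mass_kg:
        m += -rate_constant * m * dt
        t += dt
    return t

def operation(env, name, mass_kg, equipment, log):
    """Discrete-event piece: a material lot queues for the equipment, runs the
    continuous unit-process model for its duration, then releases the equipment."""
    with equipment.request() as req:
        yield req
        start = env.now
        yield env.timeout(unit_process_duration(mass_kg))
        log.append((name, mass_kg, start, env.now))

env = simpy.Environment()
refiner = simpy.Resource(env, capacity=1)            # one apparatus (illustrative)
log = []
for i, mass in enumerate([20.0, 25.0, 18.0]):        # three lots, kg (illustrative)
    env.process(operation(env, f"lot-{i}", mass, refiner, log))
env.run()

for name, mass, start, end in log:
    print(f"{name}: {mass:.0f} kg, {start:.1f} h -> {end:.1f} h")
```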
Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N
2018-06-01
An approach to building a hybrid simulation of patient flow is introduced, combining data-driven methods to automate model identification. The approach is described with a conceptual framework and basic methods for combining different techniques. An implementation of the proposed approach for simulating acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, and process mining techniques and machine learning approaches for the analysis of electronic health records (EHRs) with discrete-event simulation (DES) and queueing theory for the simulation of patient flow was proposed. The analysis of EHRs for ACS patients enabled the identification of several classes of clinical pathways (CPs), which were used to implement a more realistic simulation of the patient flow. The solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for ACS patient flow obtained from EHRs at the Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for simulating complex and diverse patient-flow scenarios for different purposes: decision making, training, management optimization, and others.
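The abstract mentions an implementation built on SimPy and SciPy, with clinical pathway (CP) classes identified from EHRs feeding a more realistic patient-flow model. The fragment below illustrates one ingredient of such a pipeline in a hypothetical way: fitting a length-of-stay distribution per CP class and sampling from it inside a simulation. Class labels, distributions, and parameters are invented for illustration and are not the study's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical length-of-stay observations (days) per clinical-pathway class,
# standing in for what process mining of EHRs would produce.
los_by_cp = {
    "uncomplicated_ACS": rng.lognormal(mean=1.6, sigma=0.30, size=400),
    "ACS_with_PCI":      rng.lognormal(mean=1.9, sigma=0.35, size=250),
    "ACS_complicated":   rng.lognormal(mean=2.4, sigma=0.50, size=90),
}

# Fit a lognormal to each class and keep the frozen distributions for sampling.
fitted = {}
for cp, sample in los_by_cp.items():
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)
    fitted[cp] = stats.lognorm(shape, loc=loc, scale=scale)

def sample_los(cp):
    """Inside a patient-flow DES, a newly admitted patient of class `cp` would
    draw a length of stay from the fitted class-specific distribution."""
    return float(fitted[cp].rvs(random_state=rng))

for cp in fitted:
    print(f"{cp}: mean LOS ~ {fitted[cp].mean():.1f} d, one draw = {sample_los(cp):.1f} d")
```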
Portal, Céline; Gouyer, Valérie; Gottrand, Frédéric; Desseyn, Jean-Luc
2017-01-01
Modification of mucous cell density and gel-forming mucin production are established hallmarks of mucosal diseases. Our aim was to develop and validate a mouse model to study live goblet cell density in pathological situations and under pharmacological treatments. We created a reporter mouse for the gel-forming mucin gene Muc5b. Muc5b-positive goblet cells were studied in the eye conjunctiva by immunohistochemistry and probe-based confocal laser endomicroscopy (pCLE) in living mice. A dry eye syndrome (DES) model was induced by topical application of benzalkonium chloride (BAK), and recombinant interleukin (rIL) 13 was administered to reverse the goblet cell loss in the DES model. Almost 50% of all conjunctival goblet cells are Muc5b+ in unchallenged mice. The decreased density of the Muc5b+ conjunctival goblet cell population in the DES model reflects the overall conjunctival goblet cell loss. Ten days of BAK in one eye followed by 4 days without any treatment induced an 18.3% decrease in conjunctival goblet cell density. Four days of rIL13 application in the DES model restored the normal goblet cell density. Muc5b is a biological marker of DES mouse models. We provide proof of concept that our model is unique, allows a better understanding of the mechanisms that regulate gel-forming mucin production/secretion and mucous cell differentiation in the conjunctiva of living mice, and can be used to test treatment compounds in mucosal disease models.
Cylinders out of a top hat: counts-in-cells for projected densities
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon
2018-06-01
Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields few per cent accurate analytical predictions for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders; and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.
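Counts-in-cells statistics of the kind predicted above are straightforward to measure from a simulated or observed catalogue: drop many circular apertures (cylinders in projection) of fixed angular radius, count tracers in each, and histogram the normalised density. A minimal flat-sky sketch with a hypothetical random catalogue follows; the large-deviation predictions themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical projected tracer catalogue on a 10 x 10 degree flat-sky patch.
ra = rng.uniform(0.0, 10.0, 50_000)
dec = rng.uniform(0.0, 10.0, 50_000)

def counts_in_cells(ra, dec, radius_deg=0.25, n_cells=2000):
    """Count tracers inside randomly placed circular apertures of fixed radius
    (cylinders in projection), staying away from the patch edges."""
    cx = rng.uniform(radius_deg, 10.0 - radius_deg, n_cells)
    cy = rng.uniform(radius_deg, 10.0 - radius_deg, n_cells)
    counts = np.empty(n_cells, dtype=int)
    for i in range(n_cells):
        d2 = (ra - cx[i]) ** 2 + (dec - cy[i]) ** 2      # flat-sky approximation
        counts[i] = np.count_nonzero(d2 < radius_deg ** 2)
    return counts

N = counts_in_cells(ra, dec)
density = N / N.mean()                    # normalised density rho = 1 + delta
pdf, edges = np.histogram(density, bins=40, density=True)
print(f"mean count per cell: {N.mean():.1f}, variance/mean: {N.var() / N.mean():.2f}")
```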
NASA Astrophysics Data System (ADS)
Greiner, Katharina; Egger, Jan; Großkopf, Stefan; Kaftan, Jens N.; Dörner, Ralf; Freisleben, Bernd
In this contribution, Active Appearance Models (AAMs) are used to segment the outer contour of aortic aneurysms. Owing to the low contrast with the surrounding tissue and to the structure of the partly thrombosed or calcified vessel walls in the region of an aneurysm, this task is complex enough that the great variety of contour shapes in CT angiography images justifies the use of a statistical model of shape and enclosed texture. To evaluate the method, several statistical models were trained from slices of nine CTA data sets, and the segmentation was assessed with leave-one-out tests.
1980-11-21
defensive , and both the question and the answer seemed to generate supporting reactions from the audience. Discrete Event Simulation The session on...R. Toscano / A. Maceri / F. Maceri (Italy) Analyse numerique de quelques problemes de contact en theorie des membranes 3:40 - 4:00 p.m. COFFEE BREAK...Switzerland Stockage de chaleur faible profondeur : Simulation par elements finis 3:40 - 4:00 p.m. A. Rizk Abu El-Wafa / M. Tawfik / M.S. Mansour (Egypt) Digital
Synthese de champs sonores adaptative
NASA Astrophysics Data System (ADS)
Gauthier, Philippe-Aubert
Sound field reproduction is a physical approach to the technological problem of sound spatialization. This thesis concerns the physical aspect of sound field reproduction. The main objective is to improve sound field reproduction by Wave Field Synthesis (WFS), a known approach based on free-field assumptions, by means of active control through the addition of reproduction-error sensors and a closed loop. A first technical chapter (Chapter 4) presents the results of an objective assessment of WFS through simulations and experimental measurements. The undesirable effect of the reproduction room on the objective qualities of WFS was illustrated. A first research question was then addressed (Chapter 5), namely whether it is possible to reproduce progressive fields in a room within a physical active-control paradigm: this possibility was proven. The chosen technical approach, Adaptive Wave Field Synthesis (AWFS), was defined and then simulated (Chapter 6). This AWFS approach contains an original contribution to active control and to sound field reproduction: the quadratic cost function representing the minimization of the reproduction errors includes a Tikhonov regularization with an a priori solution that comes from WFS. The study of AWFS by singular value decomposition (Chapter 7) made it possible to understand the mechanisms specific to AWFS; this is the second main original contribution of the thesis. The FXLMS algorithm (filtered-reference LMS) is modified for AWFS (Chapter 8). The decoupling of the system by singular value decomposition is illustrated in the signal-processing domain, and AWFS based on the independent control of the radiation modes is simulated (Chapter 8); this constitutes the third main original contribution of this thesis. These signal-processing simulations show the effectiveness of the algorithms and the ability of AWFS to attenuate errors attributable to acoustic reflections. The ninth chapter presents experimental AWFS results. The objective was to validate the method and to evaluate the performance of AWFS. Another promising algorithm is also tested. The results demonstrate the proper operation of AWFS and of the tested algorithms. Both for the reproduction of harmonic fields and for the reproduction of broadband fields, AWFS reduces the reproduction error of WFS and the undesirable effects caused by the reproduction environment.
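The central idea of the AWFS cost function described above can be written compactly. In the sketch below (standard regularized least-squares notation, not necessarily the thesis' exact symbols), p = Hq is the reproduced pressure at the error sensors for source strengths q, p_t is the target field, and q_WFS is the a priori WFS solution toward which the regularization pulls the adaptive solution:

\[ J(\mathbf{q}) = \lVert \mathbf{H}\mathbf{q} - \mathbf{p}_t \rVert^2 + \lambda \lVert \mathbf{q} - \mathbf{q}_{\mathrm{WFS}} \rVert^2, \qquad \mathbf{q}_{\mathrm{opt}} = \left(\mathbf{H}^{\mathsf H}\mathbf{H} + \lambda \mathbf{I}\right)^{-1} \left(\mathbf{H}^{\mathsf H}\mathbf{p}_t + \lambda\,\mathbf{q}_{\mathrm{WFS}}\right). \]

For large λ the solution reverts to plain WFS, while for small λ it becomes an unregularized least-squares fit to the measured field, which matches the qualitative behaviour described in the abstract.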
Hammons, Joshua A; Zhang, Fan; Ilavsky, Jan
2018-06-15
Many applications of deep eutectic solvents (DES) rely on exploitation of their unique yet complex liquid structures. Due to the ionic nature of the DES components, their diffuse structures are perturbed in the presence of a charged surface. We hypothesize that it is possible to perturb the bulk DES structure far (>100 nm) from a curved, charged surface with mesoscopic dimensions. We performed in situ, synchrotron-based ultra-small angle X-ray scattering (USAXS) experiments to study the solvent distribution near the surface of charged mesoporous silica particles (MPS) (≈0.5 µm in diameter) suspended in both water and a common type of DES (1:2 choline Cl-:ethylene glycol). A careful USAXS analysis reveals that the perturbation of electron density distribution within the DES extends ≈1 μm beyond the particle surface, and that this perturbation can be manipulated by the addition of salt ions (AgCl). The concentration of the pore-filling fluid is greatly reduced in the DES. Notably, we extracted the real-space structures of these fluctuations from the USAXS data using a simulated annealing approach that does not require a priori knowledge about the scattering form factor, and can be generalized to a wide range of complex small-angle scattering problems. Copyright © 2018 Elsevier Inc. All rights reserved.
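The simulated-annealing step mentioned above can be illustrated generically: a cost function comparing a model to the measured data is minimized by random perturbations accepted under a cooled Metropolis criterion. The sketch below is a generic annealer applied to a toy fitting problem, not the authors' scattering-specific implementation; the step size, cooling schedule and the toy decaying profile are assumptions.

import numpy as np

def simulated_annealing(cost, x0, step=0.05, t0=1.0, cooling=0.995, n_iter=20_000, seed=0):
    """Generic simulated annealing: random perturbations accepted with the
    Metropolis criterion under a geometrically cooled temperature."""
    rng = np.random.default_rng(seed)
    x, c = np.array(x0, dtype=float), cost(x0)
    best_x, best_c = x.copy(), c
    t = t0
    for _ in range(n_iter):
        trial = x + rng.normal(scale=step, size=x.shape)
        c_trial = cost(trial)
        if c_trial < c or rng.random() < np.exp(-(c_trial - c) / t):
            x, c = trial, c_trial
            if c < best_c:
                best_x, best_c = x.copy(), c
        t *= cooling
    return best_x, best_c

# Toy usage: recover the parameters of a noisy decaying profile, standing in for
# a measured scattering curve (the target function is purely illustrative).
q = np.linspace(0.01, 1.0, 200)
data = 3.0 * np.exp(-5.0 * q) + np.random.default_rng(1).normal(scale=0.05, size=q.size)
chi2 = lambda p: np.sum((p[0] * np.exp(-p[1] * q) - data) ** 2)
print(simulated_annealing(chi2, [1.0, 1.0]))

In the real-space reconstruction described in the abstract, the "model" would be a parametrized density profile whose scattering is compared with the USAXS data, so no analytic form factor needs to be assumed.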
NASA Astrophysics Data System (ADS)
Schober, Helmut; Fischer, Henry; Leclercq-Hugeux, Françoise
2003-09-01
The thematic school "Structure and Dynamics of Disordered Systems" [1] is part of the series of schools organized under the impetus of the Société Française de la Neutronique (SFN). It was held in May 2002 on the Giens peninsula (Var) as the first part of the 11th Journées de la Diffusion Neutronique. The publication of these lectures thus constitutes the fifth volume introducing neutron techniques and their contributions to various fields [2]. Disorder is a determining factor for practically all material properties. It is inherent in amorphous materials and liquids, but it also determines the mechanical and electronic properties of other technologically important compounds, such as metallic materials, whose "disordered" character is less obvious. Finally, disorder plays an essential role in everything connected with life. Indeed, apart from a few rare exceptions such as helium or high-purity silicon, it is difficult to imagine systems with no disorder at all. Beyond this practical aspect, the scientific description of atomic disorder still poses fundamental problems for lack of suitable concepts. One of the important tasks of the scientist or engineer is to specify what kind of disorder exists at a given length and time scale. A material may well be homogeneous on the atomic scale and yet exhibit defects or heterogeneities visible to the naked eye, and vice versa. Likewise, a system that is disordered at a given instant may appear ordered when averaged over time. Neutron scattering techniques are ideal for addressing these questions. Neutrons probe the nuclei directly and have both wavelengths close to interatomic distances and energies close to those of the elementary excitations of condensed matter. They therefore allow a direct and non-destructive observation of atomic positions and motions. The range of distances and times probed by neutrons makes it possible both to follow the motion of atoms on the interatomic scale and to track the evolution of a mesoscopic or macroscopic structure over times from less than a picosecond to several hours. The range of scattering techniques used to cover such a field of application is naturally very broad: different types of diffractometers allow very detailed, time-resolved studies of atomic structure; spectrometers give access to dynamics from about ten femtoseconds to several nanoseconds; small-angle scattering, reflectometry and, finally, tomography allow the structure and dynamics of larger objects to be examined. It would have been illusory to try to cover all disordered systems and all relevant neutron scattering techniques in a three-day school. We therefore chose to concentrate on liquids, glasses, plastic crystals and polymers. This choice was dictated by the wish to introduce the basic concepts, at the cost of sacrificing certain fields of application.
It goes without saying that the in-depth study of systems involving several length scales (molecular structure, range of intermolecular correlations) and time scales (characteristic relaxations of the various dynamical phenomena according to their spatial extent) requires a plurality of experimental techniques complementary to the various available neutron techniques. One of the objectives of this school was therefore to clarify the domains of relevance of the different neutron techniques available, situating their complementarity with other instrumental approaches (X-ray diffraction, EXAFS, NMR, etc.). Finally, confrontation with results obtained by simulation and numerical modelling is essential to the understanding of the elementary processes in these complex systems. The first part of this volume gives a general description of disordered systems and an introduction to neutron scattering techniques. For the part devoted to structure, a review of diffraction techniques is followed by applications to systems of increasing complexity (simple metallic liquids, liquid alloys, molten semiconductors, aqueous solutions, oxide glasses). The contribution of scattering-length variation methods (isotopic substitution for neutron measurements, anomalous X-ray scattering, or a combination of the two) allows the determination of partial structure factors. For the dynamics, an approach was favoured that introduces disorder progressively, going from solids to liquids. The concept of diffusion and then the dynamical peculiarities of glasses are presented and illustrated with concrete examples. The last part is devoted to the numerical simulation of the dynamical properties of glasses and liquids and to the modelling of structure by the Reverse Monte Carlo (RMC) method from all the experimental data available for a given system. The French community has played an important role in the development of the neutron techniques dedicated to the study of disordered systems and has acquired recognized expertise in this field. In this volume we have tried to emphasize the pedagogical aspects of the lectures, so as to make them accessible to French-speaking scientists who are not specialists in neutron scattering. We salute here the effort made by the various contributors to write in French, and we hope that this volume will contribute to the training and renewal of the French neutron community. Helmut Schober and Henry Fischer (Scientific Editors), Françoise Leclercq-Hugeux (President of the SFN)
Dark Energy Survey Year 1 Results: Curved-Sky Weak Lensing Mass Map
Chang, C.; Sheldon, E.; Pujol, A.; ...
2018-01-04
We construct the largest curved-sky galaxy weak lensing mass map to date from the DES first-year (DES Y1) data. The map, about 10 times larger than previous work, is constructed over a contiguous ≈1,500 deg², covering a comoving volume of ≈10 Gpc³. The effects of masking, sampling, and noise are tested using simulations. We generate weak lensing maps from two DES Y1 shear catalogs, METACALIBRATION and IM3SHAPE, with sources at redshift 0.2 < z < 1.3, and in each of four bins in this range. In the highest signal-to-noise map, the ratio between the mean signal-to-noise in the E-mode and the B-mode map is ~1.5 (~2) when smoothed with a Gaussian filter of σ_G = 30 (80) arcmin. The second and third moments of the convergence κ in the maps are in agreement with simulations. We also find no significant correlation of κ with maps of potential systematic contaminants. Finally, we demonstrate two applications of the mass maps: (1) cross-correlation with different foreground tracers of mass and (2) exploration of the largest peaks and voids in the maps.
NASA Astrophysics Data System (ADS)
Edwards, David C.; Nielsen, Steen B.; Jarzęcki, Andrzej A.; Spiro, Thomas G.; Myneni, Satish C. B.
2005-07-01
The deprotonation and iron complexation of the hydroxamate siderophore, desferrioxamine B (desB), and a model hydroxamate ligand, acetohydroxamic acid (aHa), were studied using infrared, resonance Raman and UV-vis spectroscopy. The experimental spectra were interpreted by comparison with DFT-calculated spectra of aHa (partly hydrated) and desB (reactive groups of the unhydrated molecule) at the B3LYP/6-31G* level of theory. The ab initio models include three water molecules surrounding the deprotonation site of aHa to account for partial hydration. Experiments and calculations were also conducted in D2O to verify spectral assignments. These studies of aHa suggest that the cis-keto form of aHa is dominant, and that its deprotonation occurs at the oxime oxygen atom in aqueous solutions. The stable form of iron-complexed aHa is identified as Fe(aHa)3 for a wide range of pH conditions. The spectral information of aHa and an ab initio model of desB were used to interpret the chemical state of the different functional groups in desB. Vibrational spectra of desB indicate that the oxime and amide carbonyl groups can be identified unambiguously. Vibrational spectral analysis of the oxime carbonyl after deprotonation and iron complexation of desB indicates that the conformational changes between the anion and the iron-complexed anion are small. Enhanced electron delocalization in the oxime group of Fe-desB compared to that of Fe(aHa)3 may be responsible for the higher stability constant of the former.
NASA Astrophysics Data System (ADS)
Kamli, Emna
High-frequency (HF) radars measure surface ocean currents with a range of up to 200 kilometres and a resolution of the order of one kilometre. The purpose of this study is to characterize the performance of HF radars, in terms of spatial coverage, for measuring surface currents in the partial presence of sea ice. To do so, current measurements from two CODAR-type radars on the south shore of the lower St. Lawrence Estuary and from one WERA-type radar on the north shore, taken during the winter of 2013, were used. First, the mean daily area of the zone where currents are measured by each radar was compared with the energy of the Bragg waves computed from the raw acceleration data provided by a buoy moored in the area covered by the radars. The coverage of the CODARs depends on the Bragg energy density, whereas the coverage of the WERA is practically insensitive to it. A fetch model called GENER was forced with the wind speed predicted by Environment Canada's GEM model to estimate the significant wave height and the modal period of the waves. From these parameters, the energy density of the Bragg waves was evaluated over the winter using the theoretical Bretschneider spectrum. These results make it possible to establish the normal coverage of each radar in the absence of sea ice. The sea ice concentration, predicted by the Canadian operational ice-ocean forecasting system, was averaged over the different wind fetches according to the mean daily direction of the waves predicted by GENER. Second, the relationship was established between the ratio of the daily coverages obtained during the winter of 2013 to the normal coverage of each radar, on the one hand, and the mean daily sea ice concentration, on the other. The coverage ratio decreases with increasing sea ice concentration for both types of radar, but for an ice concentration of 20% the coverage of the WERA is reduced by 34% whereas that of the CODARs is reduced by 67%. The empirical relationships established between HF radar coverage and the environmental parameters (wind and sea ice) will make it possible to predict the coverage that HF radars installed in other regions subject to the seasonal presence of sea ice could provide.
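For reference, a commonly used form of the Bretschneider spectrum mentioned above expresses the wave energy density in terms of the significant wave height H_s and the modal (peak) frequency f_m. This is the generic textbook form, written here as a sketch rather than the exact parametrization used with GENER:

\[ S(f) = \frac{5}{16}\, H_s^2\, \frac{f_m^4}{f^5}\, \exp\!\left[-\frac{5}{4}\left(\frac{f_m}{f}\right)^4\right]. \]

The Bragg wave energy relevant to an HF radar is then read off this spectrum at the frequency of the ocean waves whose wavelength is half the radar wavelength.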
Kolandaivelu, Kumaran; Bailey, Lynn; Buzzi, Stefano; Zucker, Arik; Milleret, Vincent; Ziogas, Algirdas; Ehrbar, Martin; Khattab, Ahmed A; Stanley, James R L; Wong, Gee K; Zani, Brett; Markham, Peter M; Tzafriri, Abraham R; Bhatt, Deepak L; Edelman, Elazer R
2017-04-20
Simple surface modifications can enhance coronary stent performance. Ultra-hydrophilic surface (UHS) treatment of contemporary bare metal stents (BMS) was assessed in vivo to verify whether such stents can provide long-term efficacy comparable to second-generation drug-eluting stents (DES) while promoting healing comparably to BMS. UHS-treated BMS, untreated BMS and corresponding DES were tested for three commercial platforms. A thirty-day and a 90-day porcine coronary model were used to characterise late tissue response. Three-day porcine coronary and seven-day rabbit iliac models were used for early healing assessment. In porcine coronary arteries, hydrophilic treatment reduced intimal hyperplasia relative to the BMS and corresponding DES platforms (1.5-fold to threefold reduction in 30-day angiographic and histological stenosis; p<0.04). Endothelialisation was similar on UHS-treated BMS and untreated BMS, both in swine and rabbit models, and lower on DES. Elevation in thrombotic indices was infrequent (never observed with UHS, rare with BMS, most often with DES), but, when present, correlated with reduced endothelialisation (p<0.01). Ultra-hydrophilic surface treatment of contemporary stents conferred good healing while moderating neointimal and thrombotic responses. Such surfaces may offer safe alternatives to DES, particularly when rapid healing and short dual antiplatelet therapy (DAPT) are crucial.
Instrumental Response Model and Detrending for the Dark Energy Camera
Bernstein, G. M.; Abbott, T. M. C.; Desai, S.; ...
2017-09-14
We describe the model for mapping from sky brightness to the digital output of the Dark Energy Camera (DECam) and the algorithms adopted by the Dark Energy Survey (DES) for inverting this model to obtain photometric measures of celestial objects from the raw camera output. This calibration aims for fluxes that are uniform across the camera field of view and across the full angular and temporal span of the DES observations, approaching the accuracy limits set by shot noise for the full dynamic range of DES observations. The DES pipeline incorporates several substantive advances over standard detrending techniques, including principal-components-based sky and fringe subtraction; correction of the "brighter-fatter" nonlinearity; use of internal consistency in on-sky observations to disentangle the influences of quantum efficiency, pixel-size variations, and scattered light in the dome flats; and pixel-by-pixel characterization of instrument spectral response, through combination of internal-consistency constraints with auxiliary calibration data. This article provides conceptual derivations of the detrending/calibration steps, and the procedures for obtaining the necessary calibration data. Other publications will describe the implementation of these concepts for the DES operational pipeline, the detailed methods, and the validation that the techniques can bring DECam photometry and astrometry within ≈2 mmag and ≈3 mas, respectively, of fundamental atmospheric and statistical limits. In conclusion, the DES techniques should be broadly applicable to wide-field imagers.
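The principal-components-based sky subtraction mentioned above can be illustrated with a minimal numpy sketch: a library of sky frames is decomposed with an SVD, and each science frame's background is modelled as a combination of the leading components fitted to its unmasked pixels. This is a generic PCA background model, not the DES pipeline implementation, and all array shapes and the synthetic inputs are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_sky, npix = 50, 4096                  # assumed: 50 sky frames, 4096 pixels each (flattened)
sky_library = rng.normal(size=(n_sky, npix)) + 100.0   # stand-in sky frames

# Build principal components of the mean-subtracted sky library.
mean_sky = sky_library.mean(axis=0)
_, _, vt = np.linalg.svd(sky_library - mean_sky, full_matrices=False)
n_comp = 5
pcs = vt[:n_comp]                       # leading sky eigen-modes, shape (n_comp, npix)

def subtract_sky(frame, mask):
    """Fit the leading sky modes to the unmasked (source-free) pixels of a
    science frame and subtract the resulting background model everywhere."""
    good = ~mask
    a = pcs[:, good].T                  # design matrix restricted to sky pixels
    b = (frame - mean_sky)[good]
    coeff, *_ = np.linalg.lstsq(a, b, rcond=None)
    return frame - (mean_sky + coeff @ pcs)

science = rng.normal(size=npix) + 100.0
science[1000:1020] += 500.0             # a fake bright source
mask = np.zeros(npix, dtype=bool)
mask[1000:1020] = True                  # mask the source when fitting the background
residual = subtract_sky(science, mask)
print("residual rms away from the source:", residual[~mask].std())

The production pipeline additionally handles fringe patterns, masking of defects and the other detrending steps listed in the abstract; this sketch only shows the low-rank background-fitting idea.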
Wilson, Gregory J; McGregor, Jennifer; Conditt, Gerard; Shibuya, Masahiko; Sushkova, Natalia; Eppihimer, Michael J; Hawley, Steven P; Rouselle, Serge D; Huibregtse, Barbara A; Dawkins, Keith D; Granada, Juan F
2018-02-20
Drug-eluting stents (DES) have evolved to use bioresorbable polymers as a method of drug delivery. The impact of bioresorbable polymer on long-term neointimal formation, inflammation, and healing has not been fully characterised. This study aimed to evaluate the biological effect of polymer resorption on vascular healing and inflammation. A comparative DES study was performed in the familial hypercholesterolaemic swine model of coronary stenosis. Permanent polymer DES (zotarolimus-eluting [ZES] or everolimus-eluting [EES]) were compared to bioresorbable polymer everolimus-eluting stents (BP-EES) and BMS. After implantation in 29 swine, stents were explanted and analysed up to 180 days. Area stenosis was reduced in all DES compared to BMS at 30 days. At 180 days, BP-EES had significantly lower area stenosis than EES or ZES. Severe inflammatory activity persisted in permanent polymer DES at 180 days compared to BP-EES or BMS. Qualitative para-strut inflammation areas (graded as none to severe) were elevated but similar in all groups at 30 days, peaked at 90 days in DES compared to BMS (p<0.05) and, at 180 days, were similar between BMS and BP-EES but significantly greater in permanent polymer DES. BP-EES resulted in a net long-term reduction in neointimal formation and inflammation compared to permanent polymer DES in an animal model. Long-term neointima formation deserves further study in human clinical trials.
1993-11-01
In this section, we recall definitions of dual linear incoherent KH,’ radar measurables, rainfall rate and the specific attenuation (7) due to...reflectivity data. Two different path lengths (d1,) 10 and 20 from a C-band dual linear polarization radar measurements, Km., have been considered...model for simulation of dual linear polarization radar 7. REFERENCES measurement fields", to be published on lEE 1. Leitao, M. J. and P. A. Watson
NASA Astrophysics Data System (ADS)
Savard, Stephane
The first studies of antennas based on high-critical-temperature superconductors emitting an electromagnetic pulse whose frequency content lies in the terahertz range date back to 1996. A superconducting antenna is formed by a micro-bridge of a superconducting thin film through which a DC current is applied. A visible laser beam is focused on the micro-bridge and drives the superconductor into a non-equilibrium state in which pairs are broken. Through the relaxation of the excess quasiparticles and, eventually, the re-formation of superconducting pairs, we can study the nature of superconductivity. Analysis of the temporal kinetics of the electromagnetic field emitted by such a superconducting terahertz antenna has proven useful for describing qualitatively its characteristics as a function of operating parameters such as the applied current, the temperature and the excitation power. Understanding the non-equilibrium state is the key to understanding the operation of high-critical-temperature superconducting terahertz antennas. With the ultimate aim of understanding this non-equilibrium state, we needed a method and a model to extract, more systematically, the intrinsic properties of the material composing the terahertz antenna from its emission characteristics. We developed a procedure to calibrate the time-domain spectrometer using proton-bombarded (H+) GaAs terahertz antennas as emitter and detector. Once the setup was calibrated, we inserted a YBa2Cu3O7-delta dipole emitting antenna. A model with exponential rise and decay functions of the signal is used to fit the spectrum of the electromagnetic field of the YBa2Cu3O7-delta antenna, which allows us to extract the intrinsic properties of the latter. To confirm the validity of the chosen model, we measured the intrinsic properties of the same YBa2Cu3O7-delta sample with the visible-pump/terahertz-probe technique, which also gives access to the characteristic times governing the non-equilibrium evolution of this material. In the best case, these characteristic times should correspond to those evaluated through the modelling of the antennas. Good control of the growth parameters of the superconducting thin films and of the device fabrication allowed us to produce terahertz emitting antennas with excellent characteristics in terms of emission bandwidth (typically 3 THz), usable for time-domain spectroscopy applications. The model developed and retained for fitting the terahertz spectrum describes the characteristics of the superconducting antenna well for all operating parameters. However, the link with the pump-probe technique when comparing intrinsic properties is not direct, even though both techniques show that the carrier relaxation time increases near the critical temperature. The pump-probe data indicate that the measured relaxation time depends on the probe frequency, which complicates the correspondence of the intrinsic properties between the two techniques. Likewise, the relaxation time extracted from the spectrum of the terahertz antenna increases on approaching the critical temperature (Tc) of YBa2Cu3O7-delta.
The temperature dependence of the relaxation time follows a power law in the inverse of the superconducting gap with an exponent of 5, i.e. 1/Delta^5(T). The work presented in this thesis provides a better description of the characteristics of high-critical-temperature superconducting antennas and relates them to the intrinsic properties of the material composing them. In addition, this thesis identifies the parameters to adjust, such as the applied current, the pump power and the operating temperature, in order to optimize the emission and performance of these superconducting antennas, in particular to maximize their frequency extent with a view to applications in terahertz spectroscopy. However, several of the results obtained highlight the difficulty of describing the non-equilibrium state and the need to develop a theory for the superconductor YBa2Cu3O7-delta.
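The exponential rise-and-decay fit described above can be sketched with scipy's curve_fit. The functional form and the synthetic trace below are generic assumptions intended only to show the fitting step, not the thesis' actual antenna model or data.

import numpy as np
from scipy.optimize import curve_fit

def rise_decay(t, a, tau_rise, tau_fall, t0):
    """Signal that switches on at t0, rises with tau_rise and decays with tau_fall."""
    s = np.where(t > t0,
                 (1.0 - np.exp(-(t - t0) / tau_rise)) * np.exp(-(t - t0) / tau_fall),
                 0.0)
    return a * s

t = np.linspace(0.0, 20.0, 400)                 # time axis (illustrative units, e.g. ps)
rng = np.random.default_rng(3)
data = rise_decay(t, 1.0, 0.5, 3.0, 2.0) + rng.normal(scale=0.02, size=t.size)

popt, pcov = curve_fit(rise_decay, t, data, p0=[1.0, 1.0, 5.0, 1.0])
print("fitted amplitude, tau_rise, tau_fall, t0:", popt)

Repeating such a fit as a function of temperature is what yields the relaxation-time trend with temperature discussed in the abstract.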
Adaptability in Coalition Teamwork (Facultes d’adaptation au travail d’equipe en coalition)
2008-04-01
the main results of the 30 theoretical and research papers were the following: • Training tools (games, simulations) work...military; • Feedback on the morale and performance of teams during operations is an instrument that is particularly valued...during operations is an instrument that is highly valued by commanders in the field; and • Differences in language proficiency in English confound
1994-09-30
The Commander-in-Chief of the British troops, General Sir Peter de la Billiere, reported that each vehicle of the Tenth Transport Regiment covered 400...Simulation des Reifenprofileinflusses für die Geländebeweglichkeit von Fahrzeugen C. W. FERVERS IKK-University of German Armed Forces Hamburg, Germany...of the Process) 731 Experimentelle und theoretische Analyse kohäsiven Erdreichs beim Verschiebevorgang (Optimierung des Vorganges) A. JARZEBOWSKI, J
Etude vibroacoustique d'un systeme coque-plancher-cavite avec application a un fuselage simplifie
NASA Astrophysics Data System (ADS)
Missaoui, Jemai
The objective of this work is to develop semi-analytical models to study the structural, acoustic and vibro-acoustic behaviour of a shell-floor-cavity system. The connection between the shell and the floor is ensured using the concept of artificial stiffness. This flexible modelling concept facilitates the choice of the functions used to expand the motion of each substructure. The results of this study will allow an understanding of the basic physical phenomena encountered in an aircraft structure. An integro-modal approach is developed to compute the acoustic modal characteristics. It uses a discretization of the irregular cavity into acoustic sub-cavities whose expansion bases are known a priori. This approach, which has a physical character, has the advantage of being efficient and accurate. Its validity has been demonstrated using results available in the literature. A vibro-acoustic model is developed with the aim of analysing and understanding the structural and acoustic effects of the floor in this configuration. The validity of the results, in terms of resonances and transfer functions, is verified with experimental measurements carried out in the laboratory.
Bäumler, Michael; Stargardt, Tom; Schreyögg, Jonas; Busse, Reinhard
2012-07-01
The high number of patients with acute myocardial infarction (AMI) has facilitated greater research, resulting in the development of innovative medical devices. So far, results from economic evaluations that compared drug-eluting stents (DES) and bare-metal stents (BMS) have not shown clear evidence that one intervention is more cost effective than the other. The aim of this study was to measure the cost effectiveness of DES compared with BMS in routine care. We used administrative data from a large German sickness fund to compare the costs and effectiveness of DES and BMS in patients with AMI. Patients with hospital admission after AMI in 2004 and 2005 were followed up for 1 year after hospital discharge. The cost of treatment and survival after 365 days were compared for patients treated with DES and BMS. We adjusted for covariates defined according to the Ontario Acute Myocardial Infarction Mortality Prediction Rules using propensity score matching. After matching, we calculated incremental cost-effectiveness ratios (ICERs) by (i) using sample means based on bootstrapping procedures and (ii) estimating generalized linear mixed models for costs and survival. After propensity score matching, the sample included 719 patients treated with DES and 719 patients treated with BMS. A comparison of sample means resulted in average costs of € 12 714 and € 11 714 for DES and BMS, respectively, in 2005 German euros. Difference in 365-day survival was not statistically significant (700 patients with DES and 701 with BMS). The ICER of DES versus BMS was -€ 718 709 per life saved. Bootstrapping resulted in DES being dominated by BMS in 54.5% of replications and DES being a dominant strategy in 2.7% of replications. Results from regression models and sensitivity analyses confirm these results. Treatment with DES after admission with AMI is less cost effective than treatment with BMS. Our results are in line with other cost-effectiveness analyses that used administrative data, i.e. under routine care conditions. However, our results do not preclude that DES may be cost effective in specific patient subgroups.
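The bootstrap comparison reported above boils down to resampling matched patients and recomputing the incremental cost-effectiveness ratio. The sketch below shows that calculation on synthetic cost and survival arrays; the distributions and parameters are illustrative assumptions, not the sickness-fund data.

import numpy as np

rng = np.random.default_rng(7)
n = 719                                          # matched patients per arm, as in the study
# Synthetic stand-ins for per-patient cost (EUR) and 365-day survival:
cost_des = rng.gamma(shape=4.0, scale=3200.0, size=n)
cost_bms = rng.gamma(shape=4.0, scale=2950.0, size=n)
alive_des = rng.random(n) < 700.0 / 719.0
alive_bms = rng.random(n) < 701.0 / 719.0

d_cost, d_eff = [], []
for _ in range(2000):                            # bootstrap replications
    i = rng.integers(0, n, n)                    # resample the DES arm with replacement
    j = rng.integers(0, n, n)                    # resample the BMS arm with replacement
    d_cost.append(cost_des[i].mean() - cost_bms[j].mean())
    d_eff.append(alive_des[i].mean() - alive_bms[j].mean())
d_cost, d_eff = np.array(d_cost), np.array(d_eff)

icer = d_cost.mean() / d_eff.mean()              # incremental cost per extra life saved
dominated = np.mean((d_cost > 0) & (d_eff < 0))  # DES costlier and less effective
dominant = np.mean((d_cost < 0) & (d_eff > 0))   # DES cheaper and more effective
print(f"ICER ~ {icer:,.0f} EUR per life saved")
print(f"DES dominated by BMS in {dominated:.1%} of replications, dominant in {dominant:.1%}")

Plotting the (d_eff, d_cost) pairs gives the usual cost-effectiveness plane on which the quadrant shares quoted in the abstract are read off.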
The Dark Energy Survey: more than dark energy – an overview
Abbott, T.
2016-03-21
This overview article describes the legacy prospect and discovery potential of the Dark Energy Survey (DES) beyond cosmological studies, illustrating it with examples from the DES early data. DES is using a wide-field camera (DECam) on the 4m Blanco Telescope in Chile to image 5000 sq deg of the sky in five filters (grizY). By its completion the survey is expected to have generated a catalogue of 300 million galaxies with photometric redshifts and 100 million stars. In addition, a time-domain survey search over 27 sq deg is expected to yield a sample of thousands of Type Ia supernovae and other transients. The main goals of DES are to characterise dark energy and dark matter, and to test alternative models of gravity; these goals will be pursued by studying large scale structure, cluster counts, weak gravitational lensing and Type Ia supernovae. However, DES also provides a rich data set which allows us to study many other aspects of astrophysics. In this paper we focus on additional science with DES, emphasizing areas where the survey makes a difference with respect to other current surveys. The paper illustrates, using early data (from `Science Verification', and from the first, second and third seasons of observations), what DES can tell us about the solar system, the Milky Way, galaxy evolution, quasars, and other topics. In addition, we show that if the cosmological model is assumed to be Lambda + Cold Dark Matter (LCDM) then important astrophysics can be deduced from the primary DES probes. Lastly, highlights from DES early data include the discovery of 34 Trans-Neptunian Objects, 17 dwarf satellites of the Milky Way, one published z > 6 quasar (and more confirmed) and two published superluminous supernovae (and more confirmed).
Day, T Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann
2014-04-01
To examine the effect of changes to the screening interval on the incidence of vision loss in a simulated cohort of Veterans with diabetic retinopathy (DR). This simulation allows us to examine potential interventions without putting patients at risk. Simulated randomized controlled trial. We develop a hybrid agent-based/discrete event simulation which combines a population of simulated Veterans, built using abstracted data from a retrospective cohort of real-world diabetic Veterans, with a discrete event simulation (DES) eye clinic at which they seek treatment for DR. We compare vision loss under varying screening policies in a simulated population of 5000 Veterans over 50 independent ten-year simulation runs for each group. Diabetic retinopathy-associated vision loss increased as the screening interval was extended from one to five years (p<0.0001). This increase was concentrated in the third year of the screening interval (p<0.01). There was no increase in vision loss associated with increasing the screening interval from one year to two years (p=0.98). Increasing the screening interval for diabetic patients who have not yet developed diabetic retinopathy from 1 to 2 years appears safe, while increasing the interval to 3 years heightens the risk of vision loss. Published by Elsevier Ltd.
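The screening-interval question above can be caricatured with a very small state-progression Monte Carlo: each simulated patient may develop retinopathy, progress, and lose vision unless a screening visit catches the disease first. The annual transition probabilities, the ten-year horizon and the detect-and-treat rule are purely illustrative assumptions, not the abstracted Veteran data or the published hybrid model.

import random

def run_cohort(screen_interval_years, n_patients=5000, years=10, seed=0):
    """Count vision-loss events under a given screening interval (toy model)."""
    rng = random.Random(seed)
    p_onset, p_progress, p_vision_loss = 0.06, 0.25, 0.30   # assumed annual probabilities
    vision_loss = 0
    for _ in range(n_patients):
        state = "no_dr"                  # no_dr -> dr -> severe_dr -> treated or vision loss
        for year in range(years):
            if state == "no_dr" and rng.random() < p_onset:
                state = "dr"
            elif state == "dr" and rng.random() < p_progress:
                state = "severe_dr"
            elif state == "severe_dr" and rng.random() < p_vision_loss:
                vision_loss += 1
                break
            if year % screen_interval_years == 0 and state != "no_dr":
                state = "treated"        # a screening visit detects and treats the disease
                break
    return vision_loss

for interval in (1, 2, 3, 5):
    print(f"screening every {interval} y: {run_cohort(interval)} vision-loss events")

Even this toy reproduces the qualitative pattern in the abstract: vision loss stays flat when moving from annual to biennial screening and rises once the interval leaves time for unobserved progression.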
NASA Astrophysics Data System (ADS)
Veres, Teodor
This thesis is devoted to the study of the structural evolution of the magnetic and transport properties of Ni/Fe multilayers and of Co- and Ag-based nanostructures. In a first, essentially bibliographic part, we introduce some basic concepts related to the magnetic and transport properties of metallic multilayers. We then give a brief description of the methods used to analyse the results. The second part is devoted to the study of the magnetic and transport properties of the ferromagnetic/ferromagnetic Ni/Fe multilayers. We show that a coherent interpretation of these properties requires taking interface effects into account. We set out to demonstrate, evaluate and study the effects of these interfaces and their evolution following thermal treatments such as deposition at elevated temperature and ion irradiation. The correlated analyses of the structure and of the magnetoresistance allow us to draw conclusions on the influence of the buffer layers between the interface and the substrate, as well as between the layers themselves, on the magnetic behaviour of the F/F layers. The third part is devoted to giant magnetoresistance (GMR) systems based on Co and Ag. We study the evolution of the microstructure following irradiation with 1 MeV Si+ ions, as well as the effects of these changes on the magnetic behaviour. This part begins with the analysis of the properties of a hybrid multilayer, intermediate between multilayers and granular materials. Using diffraction measurements, superparamagnetic relaxation and magnetoresistance, we analyse the structural evolutions produced by ion irradiation. We establish models that help us interpret the results for a series of multilayers covering a wide range of different magnetic behaviours as a function of the thickness of the magnetic Co layer. We will see that in these systems the effects of ion irradiation are strongly influenced by the surface energy as well as by the enthalpy of formation, which is largely positive for the Co/Ag system.
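The superparamagnetic relaxation measurements mentioned above are conventionally interpreted with the Néel-Arrhenius law, in which the relaxation time of a single-domain particle of volume V and anisotropy constant K grows exponentially with the ratio of the anisotropy barrier to the thermal energy. This is the generic textbook expression, given here for orientation rather than as a result of the thesis:

\[ \tau = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right), \]

with the blocking temperature for a given measurement time t_m obtained by setting τ(T_B) = t_m; changes in grain size induced by irradiation therefore shift the blocking behaviour probed in such experiments.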
NASA Astrophysics Data System (ADS)
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2018-05-01
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
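For orientation, the direct Kaiser-Squires inversion compared above is, in the flat-sky approximation, a simple Fourier-space relation between shear and convergence. The numpy sketch below applies it to a toy shear map on a periodic grid; the grid size and synthetic input are assumptions, and real survey maps need masks, noise weighting and curved-sky treatment, which is exactly what the other methods address.

import numpy as np

def kaiser_squires(gamma1, gamma2, pixel_scale=1.0):
    """Flat-sky Kaiser-Squires inversion: shear maps -> E-mode convergence map."""
    ny, nx = gamma1.shape
    lx, ly = np.meshgrid(np.fft.fftfreq(nx, d=pixel_scale) * 2.0 * np.pi,
                         np.fft.fftfreq(ny, d=pixel_scale) * 2.0 * np.pi)
    l_sq = lx**2 + ly**2
    l_sq[0, 0] = 1.0                    # avoid division by zero at the mean mode
    g1_hat, g2_hat = np.fft.fft2(gamma1), np.fft.fft2(gamma2)
    kappa_hat = ((lx**2 - ly**2) * g1_hat + 2.0 * lx * ly * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0               # the mean convergence is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))

# Toy round trip: start from a random convergence field, derive shear, invert.
rng = np.random.default_rng(0)
n = 128
kappa_true = rng.normal(size=(n, n))
k_hat = np.fft.fft2(kappa_true)
lx, ly = np.meshgrid(np.fft.fftfreq(n) * 2.0 * np.pi, np.fft.fftfreq(n) * 2.0 * np.pi)
l_sq = lx**2 + ly**2
l_sq[0, 0] = 1.0
g1 = np.real(np.fft.ifft2((lx**2 - ly**2) / l_sq * k_hat))
g2 = np.real(np.fft.ifft2(2.0 * lx * ly / l_sq * k_hat))
kappa_rec = kaiser_squires(g1, g2)
print("max reconstruction error (mean mode excluded):", np.abs(kappa_rec - kappa_true).max())

The Wiener filter and GLIMPSE replace the bare division by l_sq with noise- and prior-weighted inversions, which is why they behave better on masked, noisy data.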
Chang, C.; Pujol, A.; Gaztañaga, E.; ...
2016-04-15
We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
Modelling the angular correlation function and its full covariance in photometric galaxy surveys
NASA Astrophysics Data System (ADS)
Crocce, Martín; Cabré, Anna; Gaztañaga, Enrique
2011-06-01
Near-future cosmology will see the advent of wide-area photometric galaxy surveys, such as the Dark Energy Survey (DES), that extend to high redshifts (z ~ 1-2) but give poor radial distance resolution. In such cases splitting the data into redshift bins and using the angular correlation function w(θ), or the Cℓ power spectrum, will become the standard approach to extracting cosmological information or to studying the nature of dark energy through the baryon acoustic oscillations (BAO) probe. In this work we present a detailed model for w(θ) at large scales as a function of redshift and bin width, including all relevant effects, namely non-linear gravitational clustering, bias, redshift space distortions and photo-z uncertainties. We also present a model for the full covariance matrix, characterizing the angular correlation measurements, that takes into account the same effects as for w(θ) and also the possibility of a shot-noise component and partial sky coverage. Provided with a large-volume N-body simulation from the MICE collaboration, we built several ensembles of mock redshift bins with a sky coverage and depth typical of forthcoming photometric surveys. The model for the angular correlation and the one for the covariance matrix agree remarkably well with the mock measurements in all configurations. The prospects for a full shape analysis of w(θ) at BAO scales in forthcoming photometric surveys such as DES are thus very encouraging.
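In the notation of the abstract, the basic relations underlying such a model are the Legendre expansion of the angular correlation in terms of the angular power spectrum and its Gaussian covariance for a survey with sky fraction f_sky and mean galaxy density \bar{n} per steradian. These are the standard expressions, written here as a sketch; the full model adds the non-linear, bias, redshift-space and photo-z ingredients on top of C_ℓ:

\[ w(\theta) = \sum_{\ell} \frac{2\ell + 1}{4\pi}\, C_\ell\, P_\ell(\cos\theta), \qquad \mathrm{Cov}\!\left[w(\theta_i), w(\theta_j)\right] = \frac{2}{f_{\rm sky}} \sum_{\ell} \frac{2\ell + 1}{(4\pi)^2}\, P_\ell(\cos\theta_i)\, P_\ell(\cos\theta_j) \left(C_\ell + \frac{1}{\bar{n}}\right)^{2}. \]

The 1/\bar{n} term is the shot-noise contribution and the prefactor 2/f_sky accounts for the reduced number of independent modes on a partially covered sky.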
NASA Astrophysics Data System (ADS)
Mas, Sebastien
Satellite measurements of ocean colour are largely determined by the inherent optical properties (IOPs) of surface waters. Moreover, small phytoplankton (<20 μm) is most often dominant in the oceans and can therefore be an important source of variation of the IOPs in the oceans. In this context, the main goal of this doctorate was to define the impact of phytoplankton (<20 μm) on the variations of the optical properties of the Estuary and Gulf of St. Lawrence (Canada). To reach this objective, it was necessary to determine, under controlled conditions, the factors governing the variability of the cellular optical properties and of the IOPs of phytoplankton (<20 μm) of the St. Lawrence waters, and to evaluate the contribution of phytoplankton (<20 μm) to the total optical properties of the St. Lawrence waters. Laboratory experiments showed that the variations of the optical properties of phytoplankton cells subjected to a day-night cycle, as well as to concomitant changes in light intensity, can contribute significantly to the variability of the optical properties observed in the natural environment. Other experiments showed that the variations of the optical properties of phytoplankton cells due to growth phases can alter the IOPs of the oceans, particularly during bloom periods. In addition, the presence of bacteria and detrital particles can also affect the variability of the total IOPs, notably scattering. In spring, in the Estuary and Gulf of St. Lawrence, the contribution of phytoplankton <20 μm to the IOPs showed clear regional differences in the absorption and scattering properties. In addition to the spatial variability, the cellular optical properties showed daily variations, particularly for picophytoplankton. Finally, most of the differences observed in the bio-optical properties, particularly absorption, were attributable to the contribution of phytoplankton <20 μm. This confirms the importance of the size structure of phytoplankton communities in the bio-optical models applied to the St. Lawrence. Taken together, the results highlight the importance of photoacclimation mechanisms and of the synchronization of the phytoplankton cell cycle for the daily variations of the IOPs, as well as of the physiological state related to the growth stage for the long-term temporal variations of the IOPs. Furthermore, phytoplankton <20 μm contributes substantially to the IOPs and to their variability in the Estuary and Gulf of St. Lawrence, particularly for absorption. This doctoral study therefore underlines the importance of phytoplankton <20 μm for the variability of the IOPs of the oceans.
Mouse hypospadias: A critical examination and definition
Sinclair, Adriane Watkins; Cao, Mei; Shen, Joel; Cooke, Paul; Risbridger, Gail; Baskin, Laurence; Cunha, Gerald R.
2016-01-01
Hypospadias is a common malformation whose etiology is based upon perturbation of normal penile development. The mouse has been previously used as a model of hypospadias, despite an unacceptably wide range of definitions for this malformation. The current paper presents objective criteria and a definition of mouse hypospadias. Accordingly, diethylstilbestrol (DES) induced penile malformations were examined at 60 days postnatal (P60) in mice treated with DES over the age range of 12 days embryonic to 20 days postnatal (E12 to P20). DES-induced hypospadias involves malformation of the urethral meatus, which is most severe in DES E12-P10, DES P0-P10 and DES P5-P15 groups and less so or absent in the other treatment groups. A frenulum-like ventral tether between the penis and the prepuce was seen in the most severely affected DES-treated mice. Internal penile morphology was also altered in the DES E12-P10, DES P0-P10 and DES P5-P15 groups (with little effect in the other DES treatment groups). Thus, adverse effects of DES are a function of the period of DES treatment and most severe in the P0 to P10 period. In “estrogen mutant mice” (NERKI, βERKO, αERKO and AROM+) hypospadias was only seen in AROM+ male mice having genetically-engineered elevation in serum estrogen. Significantly, mouse hypospadias was only seen distally at and near the urethral meatus where epithelial fusion events are known to take place and never in the penile midshaft, where urethral formation occurs via an entirely different morphogenetic process. PMID:27068029
Aerodynamics of yacht sails: viscous flow features and surface pressure distributions
NASA Astrophysics Data System (ADS)
Viola, Ignazio Maria
2014-11-01
The present paper presents the first Detached-Eddy Simulation (DES) of yacht sails. Wind tunnel experiments on a 1:15th model-scale sailing yacht with an asymmetric spinnaker (fore sail) and a mainsail (aft sail) were modelled using several time and grid resolutions. The Reynolds-averaged Navier-Stokes (RANS) equations were also solved for comparison with DES. The computed forces and surface pressure distributions were compared with those measured with both flexible and rigid sails in the wind tunnel, and good agreement was found. For the first time it was possible to recognise the coherent and steady nature of the leading-edge vortex that develops on the leeward side of the asymmetric spinnaker and which contributes significantly to the overall drive force. The leading-edge vortex increases in diameter from the foot to the head of the sail, where it becomes the tip vortex and convects downstream in the direction of the far-field velocity. The tip vortex from the head of the mainsail rolls around that of the spinnaker. The spanwise twist of the spinnaker leads to a mid-span helicoidal vortex, not reported by previous authors, with a horizontal axis and rotating in the same direction as the tip vortex.
NASA Astrophysics Data System (ADS)
Yakirevich, Alexander; Dody, Avraham; Adar, Eilon M.; Borisov, Viacheslav; Geyh, Mebus
A new mathematical method based on a double-component model of kinematic wave flow and transport is developed to assess the dynamic isotopic distribution in arid rain storms and runoff. This model describes the transport and δ18O evolution of rainfall to overland flow and runoff in an arid rocky watershed with uniformly distributed shallow depression storage. The problem was solved numerically. The model was calibrated using a set of temporal discharge and δ18O distribution data for rainfall and runoff collected on a small rocky watershed at the Sede Boker Experimental Site, Israel. A reliable simulation with respect to the observations was obtained after parameter adjustment by trial and error. Sensitivity analysis and model application were performed. The model is sensitive to changes in the parameters characterizing the depression storage zones. The model reflects the effect of the isotopic memory of the water held in depression storage between sequential rain spells. The use of the double-component model of kinematic wave flow and transport provides an appropriate qualitative and quantitative fit between computed and observed δ18O distributions in runoff.
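As a point of reference only, a generic kinematic-wave overland-flow model coupled to an advective tracer balance for δ18O can be sketched as follows; this is a single-component illustration of the class of model described above, not the authors' double-component formulation, and h, q, r, i, C, C_rain are generic symbols.

```latex
% h: flow depth, q: discharge per unit width, r: rainfall rate,
% i: loss rate to depression storage/infiltration,
% C: delta^{18}O of the flow, C_rain: delta^{18}O of the rain.
\frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} = r - i,
\qquad q = \alpha\, h^{m},
\qquad
\frac{\partial (hC)}{\partial t} + \frac{\partial (qC)}{\partial x}
   = r\, C_{\rm rain} - i\, C
```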
Fabrication of non-volatile single-electron memory using a floating-nanogate approach
NASA Astrophysics Data System (ADS)
Guilmain, Marc
Single-electron transistors (SETs) are nanometre-scale devices that control electrons one at a time and therefore consume very little energy. One complementary application of SETs attracting attention is their use in memory circuits. A non-volatile single-electron memory (SEM) has the potential to operate at gigahertz frequencies, which would allow it to replace both FLASH-type non-volatile memory and DRAM-type volatile memory. An SEM chip could thus ultimately unify the two main types of computer memory. This thesis addresses the fabrication of non-volatile single-electron memories. The proposed fabrication process is based on the nanodamascene process developed by C. Dubuc et al. at the Université de Sherbrooke. One advantage of this process is its compatibility with the back-end-of-line (BEOL) of CMOS circuits, giving it the potential to build several layers of very dense memory circuits above CMOS wafers. This document presents, among other things, a single-electron memory simulator and simulation results for different structures. The optimization of the fabrication process for single-electron devices and the realization of several simple SEM architectures are addressed. Optimizations were made at several levels: electron-beam lithography, oxide etching, titanium lift-off, metallization and CMP planarization. Electrical characterization allowed an in-depth study of devices formed from Ti/TiO2 junctions and showed that these materials are not suitable. In contrast, a SET formed from TiN/Al2O3 junctions was successfully fabricated and characterized at low temperature. This demonstrates the potential of the fabrication process and of atomic layer deposition (ALD) for the fabrication of single-electron memories. Keywords: single-electron transistor (SET), single-electron memory (SEM), tunnel junction, retention time, nanofabrication, electron-beam lithography, chemical-mechanical planarization.
NASA Astrophysics Data System (ADS)
Nguimbus, Raphael
Determining the impact of the controllable and uncontrollable factors that influence the sales volumes of retail outlets selling homogeneous, highly substitutable products is at the heart of this thesis. The aim is to estimate a set of stable, asymptotically efficient coefficients uncorrelated with the random site-specific effects of gasoline stations in the Montreal market (Quebec, Canada) during the period 1993-1997. The econometric model specified and tested isolates a set of four variables: the posted retail price of regular gasoline at a site, the service capacity of the site during peak hours, the hours of service, and the number of competing sites within a two-kilometre radius. These four factors influence gasoline sales at service stations. Panel data and robust estimation methods (minimum-distance estimator) are used to estimate the parameters of the sales model. We start from the general hypothesis that each site develops an attractive force that draws motorist customers and allows it to generate sales. This attractive capacity varies from one site to another owing to the combination of marketing effort and the competitive environment around the site. The notions of neighbourhood and spatial competition explain the behaviour of the decision-makers who manage the sites. The goal of this thesis is to develop a decision-support tool (an analytical model) enabling managers of service-station chains to allocate commercial resources efficiently across their points of sale.
NASA Astrophysics Data System (ADS)
Hentabli, Kamel
This research is part of the Active Control Technology project between the École de technologie supérieure and the aircraft manufacturer Bombardier Aeronautics. The goal is to design multivariable, robust control strategies for aircraft dynamic models. These control strategies should give the aircraft high performance and satisfy desired handling qualities, namely good manoeuvrability, good stability margins, and damping of the phugoid and short-period motions of the aircraft. We first focused on LTI synthesis methods, more precisely the H-infinity approach and mu-synthesis. We then paid particular attention to LPV control techniques. To carry out this work, we adopted a frequency-domain approach, typically H-infinity. This approach is particularly attractive because the synthesis model is built directly from the various specifications of the design requirements. Indeed, these specifications are translated into frequency templates, corresponding to the input and output weightings found in classical H-infinity synthesis. Furthermore, we used a linear fractional transformation (LFT) representation, considered better suited to accounting for the various types of uncertainty that can affect the system. This representation is also well suited to robustness analysis via the tools of mu-analysis. In addition, to optimize the trade-off between robustness and performance specifications, we opted for a two-degree-of-freedom control structure with a reference model. Finally, these techniques are illustrated on realistic applications, demonstrating the relevance and applicability of each of them. Keywords: flight control, handling qualities and manoeuvrability, robust control, H-infinity approach, mu-synthesis, linear parameter-varying systems, gain scheduling, linear fractional transformation, linear matrix inequality.
Goh, Yang Miang; Askar Ali, Mohamed Jawad
2016-08-01
One of the key challenges in improving construction safety and health is the management of safety behavior. From a system point of view, workers work unsafely due to system-level issues such as poor safety culture, excessive production pressure, inadequate allocation of resources and time, and lack of training. These systemic issues should be eradicated or minimized during planning. However, there is a lack of detailed planning tools to help managers assess the impact of their upstream decisions on worker safety behavior. Even though simulation has been used in construction planning, the review conducted in this study showed that construction safety management research has not exploited the potential of simulation techniques. Thus, a hybrid simulation framework is proposed to facilitate the integration of safety management considerations into construction activity simulation. The hybrid framework uses discrete event simulation (DES) as the core, but heterogeneous, interactive and intelligent (decision-making) agents replace traditional entities and resources. In addition, some of the cognitive and physiological aspects of the agents are captured using a system dynamics (SD) approach. The combination of DES, agent-based simulation (ABS) and SD allows a more "natural" representation of the complex dynamics in construction activities. The proposed hybrid framework was demonstrated using a hypothetical case study. In addition, given the lack of application of the factorial experiment approach in safety management simulation, the case study demonstrated sensitivity analysis and a factorial experiment to guide future research. Copyright © 2015 Elsevier Ltd. All rights reserved.
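A minimal sketch of the kind of hybrid DES/agent structure described above, assuming the simpy package is available. The worker agents, the shared crane resource, the production-pressure parameter and the simple risk-perception "stock" are all illustrative placeholders, not the authors' model.

```python
# Sketch: DES core (queueing for a resource, task durations) with agents whose
# risk perception erodes under production pressure (an SD-like stock).
import random
import simpy

class Worker:
    def __init__(self, name, risk_perception=1.0):
        self.name = name
        self.risk_perception = risk_perception  # stock in [0, 1]

    def update_state(self, schedule_pressure, dt):
        # Pressure erodes risk perception; training/rest would restore it.
        self.risk_perception = max(0.0, self.risk_perception - 0.05 * schedule_pressure * dt)

    def acts_safely(self):
        # Agent decision tied to its internal state.
        return random.random() < self.risk_perception

def task(env, worker, crane, pressure, log):
    with crane.request() as req:              # DES: queue for a shared resource
        yield req
        duration = random.expovariate(1 / 2.0)
        yield env.timeout(duration)           # DES: task service time
        worker.update_state(pressure, duration)
        log.append((env.now, worker.name, worker.acts_safely()))

def run(n_workers=5, n_tasks=40, pressure=1.0, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    crane = simpy.Resource(env, capacity=1)
    workers = [Worker(f"w{i}") for i in range(n_workers)]
    log = []
    for k in range(n_tasks):
        env.process(task(env, workers[k % n_workers], crane, pressure, log))
    env.run()
    unsafe = sum(1 for _, _, safe in log if not safe)
    return unsafe / len(log)

if __name__ == "__main__":
    print("unsafe-act rate under high pressure:", run(pressure=2.0))
```

Varying `pressure` (or crane capacity) and recording the unsafe-act rate is the kind of factorial/sensitivity experiment the abstract alludes to.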
Quantifying the effect of complications on patient flow, costs and surgical throughputs.
Almashrafi, Ahmed; Vanderbloemen, Laura
2016-10-21
Postoperative adverse events are known to increase length of stay and cost. However, research on how adverse events affect patient flow and operational performance has been relatively limited to date. Moreover, there is a paucity of studies on the use of simulation in understanding the effect of complications on care processes and resources. In hospitals with scarce resources, postoperative complications can exert a substantial influence on hospital throughput. This paper describes an evaluation method for assessing the effect of complications on patient flow within a cardiac surgical department. The method is illustrated by a case study in which actual patient-level data are incorporated into a discrete event simulation (DES) model. The DES model uses patient data obtained from a large hospital in Oman to quantify the effect of complications on patient flow, costs and surgical throughput. We evaluated the incremental increase in resources due to the treatment of complications using Poisson regression. Several types of complications were examined, such as cardiac, pulmonary, infection and neurological complications. 48% of the patients in our dataset experienced one or more complications. The most common complications were ventricular arrhythmia (16%), followed by new atrial arrhythmia (15.5%) and prolonged ventilation longer than 24 h (12.5%). The total number of additional days associated with infections was the highest, while cardiac complications resulted in the lowest number of incremental days of hospital stay. Complications had a significant effect on perioperative operational performance, such as surgery cancellations and waiting time. The effect was most pronounced when complications occurred in the Cardiac Intensive Care Unit (CICU), where capacity was limited. The study provides evidence supporting the need to incorporate adverse-event data in resource planning to improve hospital performance.
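An illustrative sketch of the Poisson-regression step described above (incremental hospital days regressed on complication indicators), assuming the statsmodels and pandas packages. The data frame, coefficients and complication categories are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "infection": rng.binomial(1, 0.10, n),
    "cardiac":   rng.binomial(1, 0.16, n),
    "pulmonary": rng.binomial(1, 0.12, n),
})
# Synthetic outcome: incremental length of stay in days.
lam = np.exp(0.5 + 1.2 * df["infection"] + 0.3 * df["cardiac"] + 0.8 * df["pulmonary"])
df["extra_days"] = rng.poisson(lam)

model = smf.glm("extra_days ~ infection + cardiac + pulmonary",
                data=df, family=sm.families.Poisson()).fit()
# exp(coef) gives the multiplicative increase in expected incremental days.
print(np.exp(model.params))
```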
Cost-effectiveness of drug-eluting coronary stents in Quebec, Canada.
Brophy, James M; Erickson, Lonny J
2005-01-01
The aim of this investigation was to assess the incremental cost-effectiveness of replacing bare metal coronary stents (BMS) with drug-eluting stents (DES) in the Province of Quebec, Canada. The strategy used was a cost-effectiveness analysis from the perspective of the health-care provider in the province of Quebec, Canada (population 7.5 million). The main outcome measure was the cost per avoided revascularization intervention. Based on the annual Quebec rate of 14,000 angioplasties with an average of 1.7 stents per procedure and a purchase cost of $2,600 Canadian dollars (CDN) for DES, 100 percent substitution of BMS with DES would require an additional $45.1 million CDN of funding. After the benefits of reduced repeat revascularization interventions are included, the incremental cost would be $35.2 million CDN. The cost per avoided revascularization intervention (18 percent coronary artery bypass graft, 82 percent percutaneous coronary intervention [PCI]) would be $23,067 CDN. If DES were offered selectively to higher-risk populations, for example a 20 percent subgroup with a relative restenosis risk of 2.5 times the current bare metal rate, the incremental cost of the program would be $4.9 million CDN, at a cost of $7,800 per avoided revascularization procedure. Break-even costs for the program would occur at a DES purchase cost of $1,161 for 100 percent DES use and $1,627 for selective 20 percent DES use in patients at high risk of restenosis (RR = 2.5). Univariate and Monte Carlo sensitivity analyses indicate that the parameters most affecting the analysis are the capacity to select patients at high risk of restenosis, the average number of stents used per PCI, baseline restenosis rates for BMS, the effectiveness ratio of restenosis prevention for DES versus BMS, the cost of DES, and the revascularization rate after initial PCI. Sensitivity analyses suggest little additional health benefit but escalating cost-effectiveness ratios once a DES penetration of 40 percent has been attained. Under current conditions in Quebec, Canada, selective use of DES in high-risk patients is the most acceptable strategy in terms of cost-effectiveness. Results of such an analysis would be expected to be similar in other countries with key model parameters similar to those used in this model. This model provides an example of how to evaluate the cost-effectiveness of selective use of a new technology in high-risk patients.
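A back-of-the-envelope sketch of the arithmetic behind this kind of analysis. Only the figures quoted in the abstract (14,000 PCIs/year, 1.7 stents per procedure, $2,600 CDN per DES) are taken from it; the BMS price, the number of avoided revascularizations and the cost per revascularization below are hypothetical placeholders used only to exercise the functions, not the study's inputs.

```python
def incremental_program_cost(n_pci, stents_per_pci, price_des, price_bms,
                             avoided_revascs, cost_per_revasc):
    """Extra acquisition cost of DES minus savings from avoided revascularizations."""
    acquisition = n_pci * stents_per_pci * (price_des - price_bms)
    savings = avoided_revascs * cost_per_revasc
    return acquisition - savings

def cost_per_avoided_revasc(incremental_cost, avoided_revascs):
    return incremental_cost / avoided_revascs

# Placeholder inputs: BMS at $700, 1,500 avoided revascularizations at $10,000 each.
inc = incremental_program_cost(14000, 1.7, 2600, 700, 1500, 10000)
print(inc, cost_per_avoided_revasc(inc, 1500))
```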
Ruiz-Ramos, Margarita; Mínguez, M. Inés
2006-01-01
• Background Plant structural (i.e. architectural) models explicitly describe plant morphology by providing detailed descriptions of the display of leaf and stem surfaces within heterogeneous canopies and thus provide the opportunity for modelling the functioning of plant organs in their microenvironments. The outcome is a class of structural–functional crop models that combines advantages of current structural and process approaches to crop modelling. ALAMEDA is such a model. • Methods The formalism of Lindenmayer systems (L-systems) was chosen for the development of a structural model of the faba bean canopy, providing both numerical and dynamic graphical outputs. It was parameterized according to the results obtained through detailed morphological and phenological descriptions that capture the detailed geometry and topology of the crop. The analysis distinguishes between relationships of general application for all sowing dates and stem ranks and others valid only for all stems of a single crop cycle. • Results and Conclusions The results reveal that in faba bean, structural parameterization valid for the entire plant may be drawn from a single stem. ALAMEDA was formed by linking the structural model to the growth model ‘Simulation d'Allongement des Feuilles’ (SAF) with the ability to simulate approx. 3500 crop organs and components of a group of nine plants. Model performance was verified for organ length, plant height and leaf area. The L-system formalism was able to capture the complex architecture of canopy leaf area of this indeterminate crop and, with the growth relationships, generate a 3D dynamic crop simulation. Future development and improvement of the model are discussed. PMID:16390842
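A minimal L-system rewriter, to illustrate the formalism mentioned above. The axiom and production rules here are toy examples, not ALAMEDA's parameterization of the faba bean canopy.

```python
def lsystem(axiom, rules, iterations):
    """Apply simple (context-free) L-system production rules repeatedly."""
    state = axiom
    for _ in range(iterations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# Toy rule: an apex (A) produces an internode (I), a lateral leaf (L) and a new apex.
rules = {"A": "I[L]A"}
print(lsystem("A", rules, 3))  # -> I[L]I[L]I[L]A
```

In a structural crop model such strings are then interpreted geometrically (turtle graphics) to build the 3D canopy.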
NASA Astrophysics Data System (ADS)
Metiche, Slimane
The growing demand for poles for electricity and telecommunications networks has made it necessary to use innovative materials that preserve the environment. Most electrical poles in Canada, and around the world, are made from traditional materials such as wood, concrete or steel. Industry and researchers are motivated to consider other solutions for several reasons, including the limited length of wooden poles and the vulnerability of concrete and steel poles to climatic attack. New composite-material poles are good candidates in this respect; however, their structural behaviour is not well known, and thorough theoretical and experimental studies are needed before large-scale deployment. An intensive research program comprising several experimental, analytical and numerical projects is under way at the Université de Sherbrooke to evaluate the short- and long-term behaviour of these new fibre-reinforced polymer (FRP) poles. The present thesis is part of this context; our research aims to evaluate the flexural behaviour of new tapered tubular poles made of composite materials by filament winding, through a theoretical study and a series of full-scale bending tests, in order to understand the structural behaviour of these poles, optimize their design and propose a design procedure for users. The FRP poles studied in this thesis are made of an epoxy resin reinforced with E-glass fibres. Each type of pole consists mainly of three zones whose geometric properties (thickness, diameter) and mechanical properties differ from one zone to another. The differences between these properties arise from the number of layers used in each zone and the fibre orientation of each layer. A total of twenty-three prototypes of different dimensions were tested in bending to failure. Two types of glass fibre with different linear masses were used to evaluate the effect of fibre type on flexural behaviour. A new experimental setup allowing all types of FRP poles to be tested was designed and built according to the recommendations of ASTM D 4923-01 and ANSI C 136.20-2005. An analytical model based on linear-elastic beam theory is proposed in this thesis. This model predicts with good accuracy the experimental load-deflection behaviour and the maximum tip deflection of FRP poles composed of several zones with different geometric and mechanical characteristics. A design procedure for FRP poles, based on the experimental results obtained in this thesis, is also proposed. The results obtained here will support the development and improvement of practical design rules for designers and manufacturers of FRP poles.
The benefits of this research are both economic and technological: the results constitute a database that will contribute to the development of design standards, and consequently to the optimization of the materials used, and will serve to validate future experimental results and theoretical models.
Determination of the Atmospheric Parameters of DB White Dwarf Stars
NASA Astrophysics Data System (ADS)
Beauchamp, Alain
1995-01-01
White dwarf stars whose visible spectra are dominated by strong neutral helium lines are divided into three classes: DB (neutral helium lines only), DBA (neutral helium and hydrogen lines) and DBZ (neutral helium and heavy-element lines). We analyse three samples of observed spectra of these types of white dwarfs. The samples consist, respectively, of 48 spectra in the visible range (3700-5100 Å), 24 in the ultraviolet (1200-3100 Å) and four in the red part of the visible (5100-6900 Å). Among the objects of the visible sample, we identify four new DBA stars, as well as two new DBZ stars previously classified as DB. The analysis allows us to determine spectroscopically the atmospheric parameters, namely the effective temperature, the surface gravity, and, for the DBA stars, the relative hydrogen abundance N(H)/N(He). For objects hotter than ~15,000 K, the derived surface gravity is reliable, and we obtain stellar masses with a theoretical mass-radius relation. The requirements of the analysis of these objects called for major improvements in the modelling of their atmospheres and of the emitted radiation flux distributions. We included in the model atmospheres, for the first time to our knowledge, the effects of the He2+ molecular ion, as well as the Hummer and Mihalas (1988) equation of state, which accounts for perturbations between particles in the calculation of the populations of the atomic levels. Convection is treated within the mixing-length theory. Three grids of LTE (local thermodynamic equilibrium) model atmospheres were produced for a set of effective temperatures, surface gravities and hydrogen abundances covering the properties of the stars in our samples; they are characterized by different parameterizations of the mixing-length theory, called ML1, ML2 and ML3 respectively. We computed a grid of synthetic spectra with the same parameterizations as the model-atmosphere grid. Our treatment of neutral-helium line broadening has been significantly improved with respect to previous studies. On the one hand, we account for line broadening produced by interactions between the emitter and neutral particles (resonance and van der Waals broadening) in addition to that produced by charged particles (Stark broadening). On the other hand, we computed the Stark profiles ourselves with the best available broadening theories for the majority of the observed lines; these profiles surpass in quality what has been published to date. We computed the mass distribution of DB stars hotter than 15,000 K. The DB mass distribution is very narrow, with about three quarters of the stars contained in the interval 0.55-0.65 M_⊙. The mean mass of the DB stars is 0.58 M_⊙ with sigma = 0.07. The main difference between the DB and DA mass distributions is the small proportion of DB stars in the wings of the distribution, which implies that DA stars less massive than ~0.4 M_⊙ and more massive than ~0.8 M_⊙ do not convert into DB stars. The most massive objects in our sample are of DBA type, which suggests that high mass favours the visibility of hydrogen. (Abstract shortened by UMI.)
Kelsh, H.T.
1949-01-01
The Kelsh stereoplotter is a double-projection restitution instrument based on the anaglyph principle, like the Multiplex for example, but with greater precision than the latter, since the negatives are used directly for plotting without first having to be reduced. Such a solution becomes possible when the scale of the plastic image (model) is at least 7 times larger than that of the photographs. The instrument comprises a stand resting on the drawing table by four feet. Its upper part carries, through three levelling screws, a frame in which the two projection chambers are suspended. Control levers acting on the chambers allow the base, the swing and the transverse and longitudinal tilts to be introduced. The conjugate photographs or diapositives are placed directly in the projection chambers, which are fitted with lenses. By projecting the two photographs in complementary colours, the observer, wearing spectacles with complementary-coloured glasses, sees the plastic image above the drawing table, the relative orientation of the chambers having been established beforehand. To transfer points of the plastic image onto the manuscript map, the operator uses a movable tracing table with a luminous floating mark, which he moves by hand and whose height above the drawing table is controlled by a knob. To establish the absolute orientation of the plastic image, it suffices to tilt the suspension frame appropriately by means of the levelling screws, the relative orientation not being destroyed by this operation. The two photographs are illuminated by projectors fitted with cardan suspensions and linked to the tracing table by telescopic rods. By means of this arrangement, the illumination is concentrated on two small conjugate regions of the photographs, so that the observer sees only a small part of the spatial image, which has the advantage of not dazzling him. Moreover, the telescopic rods impart, depending on the tilt, a more or less pronounced vertical displacement to the projection lenses, which makes it possible to eliminate distortion errors. For aerial triangulation work, the author proposes applying the radial templet method. For passing from one model to the next, the spatial orientation of the plotting chambers is determined by means of a level placed on the photographs. A first test was carried out on 7 models (photograph scale 1/34,000) and, after adjustment, the probable height error of the twelve known points amounted to 1.50 m, while the maximum error was 4 m. The height error of plots made by the U.S. Forest Service in California with photographs at 1/48,000, where height differences sometimes exceeded 600 m per photograph, varies between 1 m and 2.70 m. © 1949.
NASA Astrophysics Data System (ADS)
Auban-Senzier, P.; Audouard, A.; Laukhin, V. N.; Rousseau, R.; Canadell, E.; Brossard, L.; Jérome, D.; Kushch, N. D.
1995-10-01
Magnetotransport measurements have been carried out in the layered organic metal α-(ET){2}TlHg(SeCN){4} at temperatures down to 0.4 K and under hydrostatic pressure. Only one series of Shubnikov-de Haas oscillations is observed in the pressure range from 3.5 to 11 kbar. The cyclotron mass decreases slowly as the pressure increases, and the oscillation frequency rises rapidly from (653 ± 3) T at ambient pressure to (790 ± 3) T at 11 kbar. Tight-binding modelling suggests that the pressure dependence of the area of the closed orbit of the Fermi surface is due to pressure-induced sliding of the molecules of the chains containing only one type of donor. Unlike its sister compound α-(ET){2}NH{4}Hg(SCN){4}, it exhibits slow oscillations, of frequency (47 ± 3) T, at ambient pressure. These are not observed between 3.5 and 11 kbar and could be related to a nesting of the open orbits at ambient pressure, which could destroy superconductivity in the selenium-based compound.
Stress-relaxation cracking of austenitic stainless steels in the vicinity of welds
NASA Astrophysics Data System (ADS)
Auzoux, Q.; Allais, L.; Gourgues, A. F.; Pineau, A.
2003-03-01
Intergranular cracks can develop in the vicinity of welds in austenitic stainless steels when they are reheated in the temperature range between 500 °C and 700 °C. At these temperatures, the post-weld residual stresses relax by viscoplastic deformation. The zones close to the weld can be so brittle that they cannot accommodate even this small deformation. To clarify which microstructural modifications lead to such embrittlement, we examined the microstructures of these zones and revealed residual work hardening, responsible for a strong increase in hardness. A similar microstructure could be reproduced by solution annealing followed by rolling between 400 °C and 600 °C. Mechanical tests (tension, creep, relaxation, on smooth and pre-cracked specimens) were carried out at 550 °C and 600 °C on these simulated heat-affected zones and on a solution-annealed reference state. They showed that work hardening reduces ductility in the intergranular fracture regime, without qualitatively modifying the damage mechanism. During pre-deformation, deformation incompatibilities between grains would generate high local stresses that promote the nucleation of intergranular cavities.
Electromagnetic studies of global geodynamic processes
NASA Astrophysics Data System (ADS)
Tarits, Pascal
1994-03-01
The deep electromagnetic sounding (DES) technique is one of the few geophysical methods, along with seismology, gravity and heat flow, that may be used to probe the structure of the Earth's mantle directly. The interpretation of DES data may provide electrical conductivity profiles down to the upper part of the lower mantle. The electrical conductivity is extremely sensitive to most of the thermodynamic processes believed to be acting in the Earth's mantle (temperature increases, partial melting, phase transitions and, to a lesser extent, pressure). Therefore, in principle, results from DES along with laboratory measurements could be used to constrain models of these processes. The DES technique is reviewed in the light of recent results obtained in a variety of domains: data acquisition and analysis, global induction modeling, and data inversion and interpretation. The mechanisms and the importance of surface distortions of the DES data are reviewed and techniques to model them are discussed. The recent results in terms of the conductivity distribution in the mantle from local and global DES are presented and a tentative synthesis is proposed. The geodynamic interpretations of the deep conductivity structures are reviewed. The existence of mantle lateral heterogeneities in conductivity at all scales and depths for which electromagnetic data are available is now well documented. A comparison with global results from seismology is presented.
NASA Astrophysics Data System (ADS)
Ostiguy, Pierre-Claude
Composite materials are increasingly used in aeronautics. Their excellent mechanical properties and low weight give them a clear advantage over metallic materials. Being subjected to various loading and environmental conditions, they are susceptible to several types of damage that compromise their integrity. Reliable inspection methods are therefore needed to assess that integrity. Nevertheless, few non-destructive, embedded and efficient approaches are currently in use. This research examines the effect of the composition of composite materials on detection and characterization by guided waves. The objective of the project is to develop an embedded mechanical characterization approach that improves the performance of a piezoelectric-array imaging approach on composite and metallic structures. The contribution of this project is to propose an embedded ultrasonic mechanical characterization approach that is non-destructive and does not require measurements on a multitude of samples. This thesis by articles is divided into four parts, with parts two to four presenting the published and submitted articles. The first part presents the state of knowledge needed to carry out this master's project; the main topics covered are composite materials, wave propagation, guided-wave modelling, guided-wave characterization and structural health monitoring. The second part presents a study of the effect of mechanical properties on the performance of the Excitelet imaging algorithm. The study is carried out on an isotropic structure. The results showed that the algorithm is sensitive to the accuracy of the mechanical properties used in the model. This sensitivity was also exploited to develop an embedded method for evaluating the mechanical properties of a structure. The third part presents a more rigorous study of the performance of the embedded mechanical characterization method. The accuracy, repeatability and robustness of the method are validated using an FEM simulator. The properties estimated with the characterization approach are within 1% of the properties used in the model, which rivals the uncertainty of ASTM methods. The experimental analysis proved accurate and repeatable for frequencies below 200 kHz, allowing the mechanical properties to be estimated within 1% of the supplier's values. The fourth part demonstrates the ability of the characterization approach to identify the mechanical properties of an orthotropic composite plate. The experimentally estimated values fall within the uncertainty bars of the properties estimated using ASTM tests. Finally, an FEM simulation demonstrated the accuracy of the approach, with mechanical properties within 4% of the properties of the simulated model.
Déjà vu experiences are rarely associated with pathological dissociation.
Adachi, Naoto; Akanuma, Nozomi; Akanu, Nozomi; Adachi, Takuya; Takekawa, Yoshikazu; Adachi, Yasushi; Ito, Masumi; Ikeda, Hiroshi
2008-05-01
We investigated the relation between déjà vu and dissociative experiences in nonclinical subjects. In 227 adult volunteers, déjà vu and dissociative experiences were evaluated by means of the Inventory of Déjà Vu Experiences Assessment and the Dissociative Experiences Scale (DES). Déjà vu experiences occurred in 162 (71.4%) individuals. In univariate correlation analysis, the frequency of déjà vu experiences, as well as 5 other Inventory of Déjà Vu Experiences Assessment symptoms and age at the time of evaluation, correlated significantly with the DES score. After exclusion of intercorrelative effects using multiple regression analysis, déjà vu experiences did not remain in the model. The DES score was best explained by a model that included age, jamais vu, depersonalization, and precognitive dreams. Two indices of pathological dissociation (DES-taxon and DES ≥ 30) were not associated with déjà vu experiences. Our findings suggest that déjà vu experiences are unlikely to be core pathological dissociative experiences.
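A sketch of the kind of multiple regression described above (DES score regressed on age, jamais vu, depersonalization and precognitive dreams), assuming statsmodels, pandas and numpy. The data are simulated placeholders, not the study's 227 volunteers, and the coefficients are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 227
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "jamais_vu": rng.integers(0, 5, n),
    "depersonalization": rng.integers(0, 5, n),
    "precog_dreams": rng.integers(0, 5, n),
})
# Synthetic DES score built from the same predictors plus noise.
df["des_score"] = (30 - 0.2 * df["age"] + 2 * df["jamais_vu"]
                   + 3 * df["depersonalization"] + 1.5 * df["precog_dreams"]
                   + rng.normal(0, 5, n))

fit = smf.ols("des_score ~ age + jamais_vu + depersonalization + precog_dreams",
              data=df).fit()
print(fit.summary())
```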
Mouse hypospadias: A critical examination and definition.
Sinclair, Adriane Watkins; Cao, Mei; Shen, Joel; Cooke, Paul; Risbridger, Gail; Baskin, Laurence; Cunha, Gerald R
2016-12-01
Hypospadias is a common malformation whose etiology is based upon perturbation of normal penile development. The mouse has been previously used as a model of hypospadias, despite an unacceptably wide range of definitions for this malformation. The current paper presents objective criteria and a definition of mouse hypospadias. Accordingly, diethylstilbestrol (DES) induced penile malformations were examined at 60 days postnatal (P60) in mice treated with DES over the age range of 12 days embryonic to 20 days postnatal (E12-P20). DES-induced hypospadias involves malformation of the urethral meatus, which is most severe in DES E12-P10, DES P0-P10 and DES P5-P15 groups, and less so or absent in the other treatment groups. A frenulum-like ventral tether between the penis and the prepuce was seen in the most severely affected DES-treated mice. Internal penile morphology was also altered in the DES E12-P10, DES P0-P10 and DES P5-P15 groups (with little effect in the other DES treatment groups). Thus, adverse effects of DES are a function of the period of DES treatment and most severe in the P0-P10 period. In "estrogen mutant mice" (NERKI, βERKO, αERKO and AROM+) hypospadias was only seen in AROM+ male mice having genetically-engineered elevation in serum estrogen. Significantly, mouse hypospadias was only seen distally at and near the urethral meatus where epithelial fusion events are known to take place and never in the penile midshaft, where urethral formation occurs via an entirely different morphogenetic process. Copyright © 2016 International Society of Differentiation. Published by Elsevier B.V. All rights reserved.
The effect of dense gas dynamics on loss in ORC transonic turbines
NASA Astrophysics Data System (ADS)
Durá Galiana, FJ; Wheeler, APS; Ong, J.; Ventura, CA de M.
2017-03-01
This paper describes a number of recent investigations into the effect of dense gas dynamics on ORC transonic turbine performance. We describe a combination of experimental, analytical and computational studies which are used to determine how, in particular, trailing-edge loss changes with the choice of working fluid. A Ludwieg-tube transient wind tunnel is used to simulate a supersonic base flow which mimics an ORC turbine vane trailing-edge flow. Experimental measurements of wake profiles and trailing-edge base pressure with different working fluids are used to validate high-order CFD simulations. In order to capture the correct mixing in the base region, Large-Eddy Simulations (LES) are performed and verified against the experimental data by comparing LES with different spatial and temporal resolutions. RANS and Detached-Eddy Simulation (DES) results are also compared with the experimental data. The effect of the different modelling methods and working fluids on mixed-out loss is then determined. Current results indicate that LES gives the closest agreement with the experimental results, and dense-gas effects are consistently predicted to increase loss.
High-resolution X-ray diffraction and reflectivity studies of epitaxial thin films
NASA Astrophysics Data System (ADS)
Baulès, P.; Casanove, M. J.; Roucau, C.; Ousset, J. C.; Bobo, J. F.; Snoeck, E.; Magnoux, D.; Gatel, C.
2002-07-01
The studies presented here relate to the general effort to understand the growth mechanisms of thin films deposited on oriented substrates. Among the various techniques for investigating thin layers, X-ray diffraction and reflectivity are discussed through two applications. First, diffraction, illustrated by reciprocal-space mapping, gives information about the state of strain of the layer and the lattice distortion of La{1-x}SrxMnO3 deposited on SrTiO3. The results obtained are discussed and compared with those given by electron microscopy. Second, we present an application of reflectivity to a very rough platinum surface deposited on MgO, showing an island growth process. We verify that reflectivity can lead to a determination of the coverage rate of the surface, even though some problems remain in the simulation of the complete data set. Results obtained by electron microscopy complement those derived from reflectivity.
Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine
2016-01-01
To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring a percutaneous coronary intervention in France, using a recent meta-analysis including second-generation DES. A cost-effectiveness analysis was performed in the French National Health Insurance setting. Effectiveness estimates were taken from a meta-analysis of 76 randomised trials comprising 117,762 patient-years. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of thresholds for willingness to pay per year free of major cardiac events. Deterministic and probabilistic sensitivity analyses were performed. Base-case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to fewer second revascularisations and the absence of myocardial infarction and stent thrombosis. These results are robust to uncertainty in one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the decrease in DES prices, the development of new-generation DES and recent meta-analysis results, DES can now be considered cost-effective in France regardless of selective indication, in line with European recommendations.
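A schematic three-state Markov cohort model (event-free, post-event, dead) run over a 5-year horizon, mirroring the structure described above. All transition probabilities, stent prices and event costs below are hypothetical placeholders, not the study's inputs; only the model structure is taken from the abstract.

```python
import numpy as np

def run_arm(p_event, p_death, stent_cost, event_cost, cycles=5):
    # Transition matrix over states [event-free, post-event, dead], one cycle = 1 year.
    P = np.array([[1 - p_event - p_death, p_event,     p_death],
                  [0.0,                   1 - p_death, p_death],
                  [0.0,                   0.0,         1.0]])
    state = np.array([1.0, 0.0, 0.0])            # cohort starts event-free
    cost, event_free_years = stent_cost, 0.0
    for _ in range(cycles):
        event_free_years += state[0]             # person-years free of events
        cost += state[0] * p_event * event_cost  # cost of new events this cycle
        state = state @ P
    return cost, event_free_years

c_des, e_des = run_arm(p_event=0.04, p_death=0.01, stent_cost=1500, event_cost=8000)
c_bms, e_bms = run_arm(p_event=0.08, p_death=0.01, stent_cost=600,  event_cost=8000)
# A negative incremental cost with a positive gain in event-free years indicates
# dominance, the situation reported in the base case above.
print("delta cost:", c_des - c_bms, "delta event-free years:", e_des - e_bms)
```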
Ma, Jian; Yang, Weiwei; Singh, Manpreet; Peng, Tianqing; Fang, Ningyuan; Wei, Meng
2011-01-01
In the treatment of chronic total occlusions (CTOs), some uncertainty exists regarding the effect of drug-eluting stents (DESs) compared with the effects of bare metal stents (BMSs). We reviewed outcomes of DES vs. BMS implantation for CTO lesions to evaluate the risk-benefit ratio of DES implantation. Relevant studies of long-term clinical outcomes or angiographic outcomes of both BMS and DES implantation were examined. The primary endpoint comprised major adverse cardiovascular events (MACEs), including all-cause deaths, myocardial infarctions (MIs), and target lesion revascularizations (TLRs). A fixed-effect model and a random-effect model were used to pool the results. Ten studies were included according to the selection criteria. Eight were nonrandomized controlled trials, and two were randomized controlled comparisons between DES and BMS implantation. No significant difference was evident in in-hospital MACE rates between the two groups (odds ratio [OR], 1.07; 95% confidence interval [CI], .53 to 2.13), but the long-term MACE rates in the DES group were significantly lower than in the BMS group (OR, .22; 95% CI, .13 to .38; P < .00001). The rates of stent restenosis and reocclusion were also significantly lower in the DES group (OR, .14; 95% CI, .09 to .20; and OR, .23; 95% CI, .12 to .41, respectively). Implantation of DES improves long-term angiographic and clinical outcomes compared with BMS in the treatment of CTO lesions. Copyright © 2011 Elsevier Inc. All rights reserved.
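A sketch of the inverse-variance, fixed-effect pooling that underlies pooled odds ratios of the kind quoted above. The four study ORs and confidence intervals below are made-up numbers for illustration only; the random-effects variant would add a between-study variance term not shown here.

```python
import math

def pool_fixed_effect(odds_ratios, ci_lowers, ci_uppers):
    """Pool ORs given their 95% CIs; returns pooled OR and its 95% CI."""
    z = 1.96
    log_ors = [math.log(o) for o in odds_ratios]
    # Recover each study's standard error from its confidence-interval width.
    ses = [(math.log(u) - math.log(l)) / (2 * z) for l, u in zip(ci_lowers, ci_uppers)]
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

print(pool_fixed_effect([0.25, 0.18, 0.30, 0.22],
                        [0.12, 0.08, 0.15, 0.10],
                        [0.52, 0.40, 0.60, 0.48]))
```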
Rejeb, Olfa; Pilet, Claire; Hamana, Sabri; Xie, Xiaolan; Durand, Thierry; Aloui, Saber; Doly, Anne; Biron, Pierre; Perrier, Lionel; Augusto, Vincent
2018-06-01
Innovation and health-care funding reforms have contributed to the deployment of Information and Communication Technology (ICT) to improve patient care. Many health-care organizations consider the application of ICT a crucial key to enhancing health-care management. The purpose of this paper is to provide a methodology to assess the organizational impact of a high-level Health Information System (HIS) on the patient pathway. We propose an integrated performance evaluation of the HIS through the combination of formal modeling using Architecture of Integrated Information Systems (ARIS) models, a micro-costing approach for cost evaluation, and a Discrete-Event Simulation (DES) approach. The methodology is applied to the consultation process for cancer treatment. Simulation scenarios are established to draw conclusions about the impact of the HIS on the patient pathway. We demonstrate that although a high-level HIS lengthens the consultation, the occupation rate of oncologists is lower and the quality of service is higher (measured by the amount of information available during the consultation to formulate the diagnosis). The method also makes it possible to determine the most cost-effective ICT elements for improving care-process quality while minimizing costs. The methodology is flexible enough to be applied to other health-care systems.
NASA Astrophysics Data System (ADS)
Dagallier, Adrien
The advent of ultrashort-pulse lasers and of nanotechnology has revolutionized our perception of, and our way of interacting with, the infinitely small. The enormous intensities generated by these pulses, shorter than the relaxation or diffusion times of the irradiated medium, induce numerous nonlinear phenomena, from frequency doubling to ablation, in volumes whose characteristic dimension is of the order of the laser wavelength. In biology and medicine, these phenomena are used for multiphoton imaging or to destroy living tissue. The introduction of plasmonic nanoparticles, which concentrate the incident electromagnetic field into nanometre-scale regions, down to a fraction of the wavelength, amplifies the nonlinear phenomena while offering much more precise control of energy deposition, opening the way to the detection of individual molecules in solution and to nanosurgery. Nanosurgery relies mainly on the formation of a vapour bubble near a cell membrane. This vapour bubble either pierces the membrane irreversibly, leading to cell death, or perturbs it temporarily, which makes it possible to introduce drugs or DNA strands into the cell for gene therapy. It is mainly the size of the bubble that decides the outcome of the laser irradiation. It is therefore necessary to finely control the laser parameters and the nanoparticle geometry in order to reach the chosen objective. At present, the most direct way to validate a set of experimental conditions is to carry out the experiment in the laboratory, which is long and costly. Existing bubble-dynamics models do not take the irradiation parameters into account and often adjust their initial conditions from experimental measurements, which limits the scope of each model to the case for which it was written. This thesis aims to predict the maximum size and the dynamics of bubbles generated by ultrashort pulses solely as a function of the particle geometry and the laser parameters, including the pulse duration, central wavelength and irradiation fluence.
Validation of an Acoustic Head Simulator for the Evaluation of Personal Hearing Protection Devices
2004-11-01
and covered with artificial skin. The cavities on each side allow the insertion of ear modules that reproduce the mechanisms of the ... to the published specifications. These differences did not affect the insertion loss. After correction to account for the effects of the ... an aluminum-filled epoxy head simulator covered with artificial skin. The head is supported by a flexible neck module attached to a
2015-04-01
coherent or co-located MIMO, to obtain a precise understanding of their potential benefits. The radiation pattern of the antenna ... of experiments and simulations (MESA), are also discussed. The main beams of the experimental radiation patterns ... are found to agree with those of the theoretical patterns. It is shown that MIMO-1 has the same bidirectional radiation pattern as the radar configuration with
NASA Astrophysics Data System (ADS)
Mariotte, F.; Sauviac, B.; Héliot, J. Ph.
1995-10-01
After a brief overview of the concept of electromagnetic chirality, this paper deals with a numerical simulation of isotropic composites with metallic chiral inclusions: computations of the permittivity, permeability and chirality parameter as functions of frequency are presented. The theoretical results are compared, step by step, with measurements of chiral composites at microwave frequencies. The application of such media to Radar Cross-Section (RCS) management and control is discussed. The introduction of chiral inclusions seems to make impedance matching possible and may lead to attractive shields with lower reflectivity and larger bandwidth. However, the optimization of the material characteristics needed to obtain a specified absorber remains a difficult task.
Numerical Model of Turbulence, Sediment Transport, and Sediment Cover in a Large Canyon-Bound River
NASA Astrophysics Data System (ADS)
Alvarez, L. V.; Schmeeckle, M. W.
2013-12-01
The Colorado River in Grand Canyon is confined by bedrock and coarse-grained sediments. Finer grain sizes are supply limited, and sandbars primarily occur in lateral separation eddies downstream of coarse-grained tributary debris fans. These sandbars are important resources for native fish and recreational boaters, and they supply sand for aeolian transport that prevents the erosion of archaeological resources by gully extension. Relatively accurate prediction of deposition and, especially, erosion of these sandbar beaches has proven difficult using two- and three-dimensional, time-averaged morphodynamic models. We present a parallelized, three-dimensional, turbulence-resolving model using the Detached-Eddy Simulation (DES) technique. DES is a hybrid of large-eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) modeling. RANS is applied to the near-bed grid cells, where grid resolution is not sufficient to fully resolve wall turbulence. LES is applied further from the bed and banks. We utilize the Spalart-Allmaras one-equation turbulence closure with a rough-wall extension. The model resolves large-scale turbulence using DES and simultaneously integrates the suspended sediment advection-diffusion equation. The Smith and McLean suspended sediment boundary condition is used to calculate the upward entrainment and downward settling sediment fluxes in the grid cells attached to the bed. The model calculates the entrainment of five grain sizes at every time step using a mixing layer model. Where the mixing layer depth becomes zero, the net entrainment is zero or negative. As such, the model is able to predict the exposure and burial of bedrock and coarse-grained surfaces by fine-grained sediments. A separate program was written to automatically construct the computational domain between the water surface and a triangulated surface of a digital elevation model of the given river reach. Model results compare favorably with ADCP measurements of flow taken on the Colorado River in Grand Canyon during the High Flow Experiment (HFE) of 2008. The model accurately reproduces the size and position of the major recirculation currents, and the error in velocity magnitude was found to be less than 17%, or 0.22 m/s absolute error. The mean deviation of the direction of velocity with respect to the measured velocity was found to be 20 degrees. Large-scale turbulence structures with vorticity predominantly in the vertical direction are produced at the shear layer between the main channel and the separation zone. However, these structures rapidly become three-dimensional with no preferred orientation of vorticity. Surprisingly, cross-stream velocities, into the main recirculation zone just upstream of the point of reattachment and out of the main recirculation region just downstream of the point of separation, are highest near the bed. Lateral separation eddies are more efficient at storing and exporting sediment than previously modeled. The input of sediment to the eddy recirculation zone occurs near the reattachment zone and is relatively continuous in time, while the export of sediment to the main channel by the return current occurs in pulses. Pulsation of the strength of the return current is a key factor in determining the rates of erosion and deposition in the main recirculation zone.
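The near-bed exchange described above is the kind of closure that can be sketched compactly: a reference concentration sets the upward entrainment flux while the locally resolved concentration settles back toward the bed. Below is a minimal Python sketch of such a flux balance, using a generic Smith-McLean-style relation with commonly quoted illustrative coefficients; the function names and parameter values are placeholders and not the exact implementation of the model described above.

import numpy as np

# Sketch of the near-bed sediment exchange used in suspended-load models:
# upward entrainment from a Smith-McLean-style reference concentration and
# downward settling of the locally resolved concentration. Coefficients are
# illustrative; the actual model described above may differ in detail.

GAMMA0 = 2.4e-3      # reference-concentration coefficient (commonly quoted value)
C_BED = 0.65         # volumetric concentration of the bed

def reference_concentration(tau_b, tau_cr):
    """Smith-McLean-style near-bed reference concentration."""
    S = max(tau_b - tau_cr, 0.0) / tau_cr          # transport stage (excess shear)
    return C_BED * GAMMA0 * S / (1.0 + GAMMA0 * S)

def net_upward_flux(tau_b, tau_cr, c_nearbed, w_s):
    """Net flux into suspension: entrainment minus settling (settling velocity * concentration)."""
    c_ref = reference_concentration(tau_b, tau_cr)
    return w_s * (c_ref - c_nearbed)

# Hypothetical usage for one bed cell and one grain size
flux = net_upward_flux(tau_b=1.2, tau_cr=0.2, c_nearbed=1.0e-4, w_s=0.02)

In the full model, a flux of this kind would supply the bottom boundary condition of the advection-diffusion equation for each of the five grain-size classes.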
Effect of tension and curvature on the chemical potential of lipids in lipid aggregates.
Grafmüller, Andrea; Lipowsky, Reinhard; Knecht, Volker
2013-01-21
Understanding the factors that influence the free energy of lipids in bilayer membranes is an essential step toward understanding exchange processes of lipids between membranes. In general, both lipid composition and membrane geometry can affect lipid exchange rates between bilayer membranes. Here, the free energy change ΔG(des) for the desorption of dipalmitoyl-phosphatidylcholine (DPPC) lipids from different lipid aggregates has been computed using molecular dynamics simulations and umbrella sampling. The value of ΔG(des) is found to depend strongly on the local properties of the aggregate, in that both tension and curvature lead to an increase in ΔG(des). A detailed analysis shows that the increased desorption free energy for tense bilayers arises from the increased conformational entropy of the lipid tails, which reduces the favorable component -TΔS(L) of the desorption free energy.
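The desorption free energy reported above is obtained from umbrella-sampling windows along the pull-out coordinate, which are then recombined into a single profile. The following is a minimal, self-contained WHAM-style sketch of that recombination step; the grid, window centers, force constant and histograms are synthetic placeholders rather than data from the study, and production work would rely on a tested WHAM or MBAR implementation.

import numpy as np

# Minimal WHAM sketch for reconstructing a 1-D free energy profile (PMF) from
# umbrella-sampling windows along a desorption coordinate z. Histograms, window
# centers and force constants below are placeholders, not data from the study.

kT = 2.494                                   # kJ/mol at ~300 K
z = np.linspace(0.0, 4.0, 200)               # reaction coordinate grid [nm]
centers = np.linspace(0.0, 4.0, 21)          # umbrella window centers [nm]
k_umb = 1000.0                               # force constant [kJ/mol/nm^2]

rng = np.random.default_rng(0)
# Placeholder per-window histograms n_i(z) and sample counts N_i
hist = np.array([np.histogram(rng.normal(c, 0.15, 5000), bins=200, range=(0, 4))[0]
                 for c in centers], dtype=float)
N = hist.sum(axis=1)

bias = 0.5 * k_umb * (z[None, :] - centers[:, None]) ** 2   # w_i(z)
f = np.zeros(len(centers))                                  # per-window free energies

for _ in range(500):                                        # self-consistent iteration
    denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
    p = hist.sum(axis=0) / denom                            # unbiased P(z)
    f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
    if np.max(np.abs(f_new - f)) < 1e-6:
        break
    f = f_new

pmf = -kT * np.log(p)
pmf -= pmf.min()        # PMF relative to its minimum; the plateau value gives Delta G_des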
Dynamics of Sheared Granular Materials
NASA Technical Reports Server (NTRS)
Kondic, Lou; Utter, Brian; Behringer, Robert P.
2002-01-01
This work focuses on the properties of sheared granular materials near the jamming transition. The project currently involves two aspects. The first of these is an experiment that is a prototype for a planned ISS (International Space Station) flight. The second is discrete element simulations (DES) that can give insight into the behavior one might expect in a reduced-g environment. The experimental arrangement consists of an annular channel that contains the granular material. One surface, say the upper surface, rotates so as to shear the material contained in the annulus. The lower surface controls the mean density/mean stress on the sample through an actuator or other control system. A novel feature under development is the ability to 'thermalize' the layer, i.e. create a larger amount of random motion in the material, by using the actuating system to provide vibrations as well as to control the mean volume of the annulus. The stress states of the system are determined by transducers on the non-rotating wall. These measure both shear and normal components of the stress on different size scales. Here, the idea is to characterize the system as the density varies through values spanning dense, almost solid states to relatively mobile granular states. This transition regime encompasses the regime usually thought of as the glass transition and/or the jamming transition. Motivation for this experiment springs from ideas of a granular glass transition, a related jamming transition, and from recent experiments. In particular, we note recent experiments carried out by our group to characterize this type of transition and also to demonstrate/characterize fluctuations in slowly sheared systems. These experiments give key insights into what one might expect in near-zero g. In particular, they show that the compressibility of granular systems diverges at a transition or critical point. It is this divergence, coupled to gravity, that makes it extremely difficult if not impossible to characterize the transition region in an earth-bound experiment. In the DE modeling, we analyze the dynamics of a sheared granular system in Couette geometry in two (2D) and three (3D) space dimensions. Here, the idea is both to better understand what we might encounter in a reduced-g environment, and at a deeper level to deduce the physics of sheared systems in a density regime that has not been addressed by past experiments or simulations. One aspect of the simulations addresses a sheared 2D system in a zero-g environment. For low volume fractions, the expected dynamics of this type of system is relatively well understood. However, as the volume fraction is increased, the system undergoes a phase transition, as explained above. The DES concentrate on the evolution of the system as the solid volume fraction is slowly increased, and in particular on the behavior of very dense systems. For these configurations, the simulations show that polydispersity of the sheared particles is a crucial factor that determines the system response. Figures 1 and 2 below, which present the total force on each grain, show that even a relatively small (10%) nonuniformity of the size of the grains (expected in typical experiments) may lead to significant modifications of the system properties, such as velocity profiles, temperature, force propagation, and the formation of shear bands. The simulations are extended in a few other directions, in order to provide additional insight into the experimental system analyzed above.
In one direction, both gravity, and driving due to vibrations are included. These simulations allow for predictions on the driving regime that is required in the experiments in order to analyze the jamming transition. Furthermore, direct comparison of experiments and DES will allow for verification of the modeling assumptions. We have also extended our modeling efforts to 3D. The (preliminary) results of these simulations of an annular system in zero-g environment will conclude the presentation.
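The discrete element simulations described above advance each grain by summing pairwise contact forces and integrating Newton's equations. A minimal two-dimensional sketch of the core ingredient, a linear spring-dashpot normal contact between two disks, is given below; the stiffness, damping and particle parameters are illustrative only and are not those used in the study.

import numpy as np

# Minimal 2-D discrete-element (DEM) contact-force sketch: linear spring-dashpot
# normal force between overlapping disks, integrated with explicit Euler.
# All parameters are illustrative placeholders.

k_n, gamma_n, dt = 1.0e4, 5.0, 1.0e-5        # spring stiffness, damping, time step

def contact_force(xi, xj, vi, vj, ri, rj):
    """Normal spring-dashpot force on particle i due to particle j (2-D disks)."""
    rij = xi - xj
    dist = np.linalg.norm(rij)
    overlap = (ri + rj) - dist
    if overlap <= 0.0:
        return np.zeros(2)                   # no contact
    n = rij / dist                           # unit normal, pointing from j to i
    vn = np.dot(vi - vj, n)                  # normal relative velocity
    return (k_n * overlap - gamma_n * vn) * n

# Two-particle usage example
x = np.array([[0.0, 0.0], [0.0095, 0.0]])    # positions [m]
v = np.array([[0.1, 0.0], [-0.1, 0.0]])      # velocities [m/s]
r = np.array([0.005, 0.005])                 # radii [m]
m = np.array([1.0e-3, 1.0e-3])               # masses [kg]

for _ in range(100):
    f = contact_force(x[0], x[1], v[0], v[1], r[0], r[1])
    a = np.array([f / m[0], -f / m[1]])      # equal and opposite forces
    v += a * dt
    x += v * dt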
Diabetes-associated dry eye syndrome in a new humanized transgenic model of type 1 diabetes.
Imam, Shahnawaz; Elagin, Raya B; Jaume, Juan Carlos
2013-01-01
Patients with Type 1 Diabetes (T1D) are at high risk of developing lacrimal gland dysfunction. We have developed a new model of human T1D using double-transgenic mice carrying HLA-DQ8 diabetes-susceptibility haplotype instead of mouse MHC-class II and expressing the human beta cell autoantigen Glutamic Acid Decarboxylase in pancreatic beta cells. We report here the development of dry eye syndrome (DES) after diabetes induction in our humanized transgenic model. Double-transgenic mice were immunized with DNA encoding human GAD65, either naked or in adenoviral vectors, to induce T1D. Mice monitored for development of diabetes developed lacrimal gland dysfunction. Animals developed lacrimal gland disease (classically associated with diabetes in Non Obese Diabetic [NOD] mice and with T1D in humans) as they developed glucose intolerance and diabetes. Animals manifested obvious clinical signs of dry eye syndrome (DES), from corneal erosions to severe keratitis. Histological studies of peri-bulbar areas revealed lymphocytic infiltration of glandular structures. Indeed, infiltrative lesions were observed in lacrimal/Harderian glands within weeks following development of glucose intolerance. Lesions ranged from focal lymphocytic infiltration to complete acinar destruction. We observed a correlation between the severity of the pancreatic infiltration and the severity of the ocular disease. Our results demonstrate development of DES in association with antigen-specific insulitis and diabetes following immunization with clinically relevant human autoantigen concomitantly expressed in pancreatic beta cells of diabetes-susceptible mice. As in the NOD mouse model and as in human T1D, our animals developed diabetes-associated DES. This specific finding stresses the relevance of our model for studying these human diseases. We believe our model will facilitate studies to prevent/treat diabetes-associated DES as well as human diabetes.
Lithium niobate at high temperature for ultrasonic applications
NASA Astrophysics Data System (ADS)
De Castilla, Hector
The objective of this master's work in applied sciences is to identify and then study a piezoelectric material that could potentially be used in high-temperature ultrasonic transducers. Such transducers are currently limited to operating temperatures below 300°C because of the piezoelectric element they contain. Overcoming this limitation would enable non-destructive ultrasonic testing at high temperature. With good electromechanical properties and a high Curie temperature (1200°C), lithium niobate (LiNbO3) is a good candidate. However, some studies claim that chemical processes such as the appearance of ionic conductivity or the emergence of a new phase prevent its use in ultrasonic transducers above 600°C. More recent studies, on the other hand, have shown that it can generate ultrasound up to 1000°C and that no conductivity was visible. A hypothesis therefore emerged: ionic conductivity is present in lithium niobate at high temperature (>500°C) but only weakly affects its properties at high frequencies (>100 kHz). A characterization of lithium niobate at high temperature is therefore necessary to verify this hypothesis. To this end, the resonance method was used. It allows characterization of most electromechanical coefficients with a simple electrochemical impedance spectroscopy measurement and a model that explicitly relates the properties to the impedance spectrum; the task is to find the model coefficients that best superimpose the model on the experimental measurements. An experimental bench was built to control the temperature of the samples and measure their electrochemical impedance. Unfortunately, the models currently used for the resonance method are imprecise in the presence of coupling between vibration modes, which requires several samples of different shapes in order to isolate each main vibration mode. Moreover, these models do not properly account for harmonics and shear modes. A new analytical model covering the entire frequency spectrum was therefore developed to predict shear resonances, harmonics and couplings between modes. Nevertheless, some resonance modes and some couplings are still not modeled. The characterization of square samples was carried out up to 750°C. The results confirm the promise of lithium niobate. The piezoelectric coefficients are stable as a function of temperature, and the elasticity and permittivity behave as expected. A thermoelectric effect resembling ionic conductivity was observed, which prevents quantifying the impact of the latter. Although complementary studies are needed, the intensity of the resonances at 750°C seems to indicate that lithium niobate can be used for high-frequency ultrasonic applications (>100 kHz).
Zhu, Y-Q; Cui, W-G; Cheng, Y-S; Chang, J; Chen, N-W; Yan, L
2013-05-01
Benign strictures at the cardia are troublesome for patients and often require repeated endoscopic treatments. Paclitaxel can reduce fibrosis. This study evaluated a biodegradable paclitaxel-eluting nanofibre-covered metal stent for the treatment of benign cardia stricture in vitro and in vivo. Drug release was investigated in vitro at pH 7·4 and 4·0. Eighty dogs were divided randomly into four groups (each n = 20): controls (no stent), bare stent (retained for 1 week), and two drug-eluting stent (DES) groups with retention for either 1 week (DES-1w) or 4 weeks (DES-4w). Lower oesophageal sphincter pressure (LOSP) and 5-min barium height (5-mBH) were assessed before, immediately after stent deployment, at 1 week, and 1, 3 and 6 months later. Five dogs in each group were killed for histological examination at each follow-up point. Stent migration rates were similar (0 bare stent versus 2 DES; P = 0·548). The percentage and amount of paclitaxel released in vitro were higher at pH 4·0 than at pH 7·4. After 6 months, LOSP and 5-mBH were both improved in the DES-1w (P = 0·004 and P = 0·049) and DES-4w (both P < 0·001) groups compared with the bare-stent group, with better relief when the stent was retained for 4 weeks (P = 0·004 and P = 0·007). The DES was associated with a reduced peak inflammatory reaction and less scar formation compared with bare stents, especially when inserted for 4 weeks. The DES was more effective for the treatment of benign cardia stricture than bare stents in a canine model. Retention of the DES for 4 weeks led to a better clinical and pathological outcome than 1 week.
Analysis of a Simulation Experiment on Optimized Crewing for Damage Control
2012-06-01
baseline gave higher performance than medium automation for flood response. Based on these analyses, the authors of the...and subsequent analysis of data for similar simulation experiments. Finally, the authors of the report identified avenues...DRDC Toronto. [6] Floyd, J., Hunt, S., Williams, F., & Tatem, P. (2004). Fire + Smoke Simulator (FSSIM), Version 1 - Theory manual (NRL/MR/6180-04
Teng, Monica; Zhao, Ying Jiao; Khoo, Ai Leng; Ananthakrishna, Rajiv; Yeo, Tiong Cheng; Lim, Boon Peng; Chan, Mark Y; Loh, Joshua P
2018-06-05
Compared with second-generation durable polymer drug-eluting stents (DP-DES), the cost-effectiveness of biodegradable polymer drug-eluting stents (BP-DES) remains unclear in the real-world setting. We assessed the cost-effectiveness of BP-DES in patients with coronary artery disease undergoing percutaneous coronary intervention (PCI). We developed a decision-analytic model to compare the cost-effectiveness of BP-DES to DP-DES over one year and five years from the healthcare payer's perspective. Relative treatment effects during the first year post-PCI were obtained from a real-world population analysis while clinical event risks in the subsequent four years were derived from a meta-analysis of published studies. At one year, based on the clinical data analysis of 497 propensity-score matched pairs of patients, BP-DES were associated with an incremental cost-effectiveness ratio (ICER) of USD20,503 per quality-adjusted life-year (QALY) gained. At five years, BP-DES yielded an ICER of USD4,062 per QALY gained. At the willingness-to-pay threshold of USD50,400 (one gross domestic product per capita in Singapore in 2015), BP-DES were cost-effective. Sensitivity analysis showed that the cost of stents had a significant impact on the cost-effectiveness of BP-DES. Threshold analysis demonstrated that if the cost difference between BP-DES and DP-DES exceeded USD493, BP-DES would not be cost-effective in patients with one year of follow-up. BP-DES were cost-effective compared with DP-DES in patients with coronary artery disease at one year and five years after PCI. It is worth noting that the cost of stents had a significant impact on the findings.
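The decision-analytic comparison above ultimately reduces to incremental cost-effectiveness arithmetic: divide the cost difference by the QALY difference and compare against the willingness-to-pay threshold. The short sketch below illustrates only that arithmetic; the input costs and QALYs are hypothetical placeholders, not the study's model inputs.

# Minimal sketch of the cost-effectiveness arithmetic used in such analyses.
# The incremental cost-effectiveness ratio (ICER) is the cost difference divided
# by the QALY difference, compared against a willingness-to-pay threshold.
# Inputs below are hypothetical placeholders, not the study's model inputs.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio (currency units per QALY gained)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

WTP = 50400.0                        # willingness-to-pay threshold (USD/QALY)

# Hypothetical one-year inputs for BP-DES vs DP-DES
value = icer(cost_new=12300.0, cost_old=11890.0, qaly_new=0.84, qaly_old=0.82)
verdict = "cost-effective" if value <= WTP else "not cost-effective"
print(f"ICER = {value:,.0f} USD/QALY -> {verdict}")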
Computational Simulations of Convergent Nozzles for the AIAA 1st Propulsion Aerodynamics Workshop
NASA Technical Reports Server (NTRS)
Dippold, Vance F., III
2014-01-01
Computational Fluid Dynamics (CFD) simulations were completed for a series of convergent nozzles as part of the American Institute of Aeronautics and Astronautics (AIAA) 1st Propulsion Aerodynamics Workshop. The simulations were performed using the Wind-US flow solver. Discharge and thrust coefficients were computed for four axisymmetric nozzles with nozzle pressure ratios (NPR) ranging from 1.4 to 7.0. The computed discharge coefficients showed excellent agreement with available experimental data; the computed thrust coefficients captured trends observed in the experimental data, but over-predicted the thrust coefficient by 0.25 to 1.0 percent. Sonic lines were computed for cases with NPR >= 2.0 and agreed well with experimental data for NPR >= 2.5. Simulations were also performed for a 25 deg conic nozzle bifurcated by a flat plate at NPR = 4.0. The jet plume shock structure, with and without the splitter plate, was compared to the experimental data. The Wind-US simulations predicted the shock structure well, though lack of grid resolution in the plume reduced the sharpness of the shock waves. Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulations and Detached Eddy Simulations (DES) were performed at NPR = 1.6 for the 25 deg conic nozzle with splitter plate. The simulations predicted vortex shedding from the trailing edge of the splitter plate. However, the vortices of the URANS and DES solutions appeared to dissipate earlier than observed experimentally. It is believed that a lack of grid resolution in the region of the vortex shedding may have caused the vortices to break down too soon.
NASA Astrophysics Data System (ADS)
Vérinaud, Christophe
2000-11-01
In the field of high angular resolution in astronomy, the techniques of optical interferometry and adaptive optics are developing rapidly. The main limitation of interferometry is atmospheric turbulence, which causes significant coherence losses that are detrimental to the sensitivity and accuracy of the measurements. Adaptive optics applied to interferometry will allow a considerable gain in sensitivity. The aim of this thesis is to study the influence of adaptive optics on interferometric measurements and its application to the Grand Interféromètre à Deux Télescopes (GI2T) located on Mont Calern in the south of France. Two main problems are studied theoretically through analytical developments and numerical simulations: the first is the real-time control of the variation of the optical path differences, also called differential piston, induced by adaptive optics; the second important problem is the calibration of fringe contrast measurements in the case of partial correction. I limit my study to the case of a multi-mode interferometer operating with short exposures, the main operating mode of the GI2T, also planned for the Very Large Telescope Interferometer installed at Cerro Paranal in Chile. I develop a method for calibrating spatio-temporal coherence losses given the structure function of the corrected wavefronts. In particular, I show that it is possible to estimate, frequency by frequency, the power spectral density of short-exposure images, a method that is very useful for increasing the coverage of the spatial frequency plane in the observation of extended objects. The last part of this thesis is devoted to the instrumental developments in which I participated. I built a qualification bench for the curvature adaptive optics system intended for the GI2T, and I studied the optical implementation of two systems in the beam-recombination table.
The Search for RR Lyrae Variables in the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Nielsen, Chandler; Marshall, Jennifer L.; Long, James
2017-01-01
RR Lyrae variables are stars with a characteristic relationship between magnitude and phase and whose distances can be easily determined, making them extremely valuable in mapping and analyzing galactic substructure. We present our method of searching for RR Lyrae variable stars using data extracted from the Dark Energy Survey (DES). The DES probes for stars as faint as i = 24.3. Finding such distant RR Lyrae allows for the discovery of objects such as dwarf spheroidal tidal streams and dwarf galaxies; in fact, at least one RR Lyrae has been discovered in each of the probed dwarf spheroidal galaxies orbiting the Milky Way (Baker & Willman 2015). In turn, these discoveries may ultimately resolve the well-known missing satellite problem, in which theoretical simulations predict many more dwarf satellites than are observed in the local Universe. Using the Lomb-Scargle periodogram to determine the period of the star being analyzed, we could display the relationship between magnitude and phase and visually determine if the star being analyzed was an RR Lyrae. We began the search in frequently observed regions of the DES footprint, known as the supernova fields. We then moved our search to known dwarf galaxies found during the second year of the DES. Unfortunately, we did not discover RR Lyrae in the probed dwarf galaxies; this method should be tried again once more observations are taken in the DES.
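The period search described above can be sketched with standard tools: a Lomb-Scargle periodogram of an unevenly sampled light curve, followed by phase-folding at the best period so the magnitude-phase relation can be inspected. The example below uses astropy on a synthetic light curve; in practice the DES photometry would be read from catalog files, and the period grid and amplitude here are illustrative assumptions.

import numpy as np
from astropy.timeseries import LombScargle

# Sketch of an RR Lyrae period search: Lomb-Scargle periodogram on an unevenly
# sampled light curve, then phase-folding at the best period. The light curve
# below is synthetic; real DES photometry would be loaded from catalog files.

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 200.0, 80))            # observation epochs [days]
true_period = 0.56                                  # typical RR Lyrae period [days]
mag = 20.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)
dmag = np.full(t.size, 0.02)

frequency, power = LombScargle(t, mag, dmag).autopower(
    minimum_frequency=1.0 / 1.0, maximum_frequency=1.0 / 0.2)   # search 0.2-1.0 day periods
best_period = 1.0 / frequency[np.argmax(power)]

phase = (t / best_period) % 1.0                     # fold the light curve
# A plot of mag versus phase would then be inspected for the characteristic RR Lyrae shape.
print(f"best period: {best_period:.3f} days")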
Photometric Supernova Classification with Machine Learning
NASA Astrophysics Data System (ADS)
Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.
2016-08-01
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
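The second stage of such a pipeline, training boosted decision trees on per-object features and scoring with the AUC, can be sketched briefly. In the example below the feature matrix is a random placeholder standing in for SALT2 or wavelet coefficients, and scikit-learn's GradientBoostingClassifier is used as a generic stand-in for the paper's BDTs.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Sketch of the classification stage: boosted decision trees on per-object
# feature vectors, scored with the area under the ROC curve (AUC). Features and
# labels below are random placeholders, not SALT2 or wavelet features.

rng = np.random.default_rng(42)
n_objects, n_features = 2000, 10
X = rng.normal(size=(n_objects, n_features))                              # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n_objects) > 0).astype(int)  # Ia vs non-Ia label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.3f}")   # 1.0 would represent perfect classification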
DES13S2cmm: The first superluminous supernova from the Dark Energy Survey
Papadopoulos, A.; Plazas, A. A.; D'Andrea, C. B.; ...
2015-03-23
We present DES13S2cmm, the first spectroscopically-confirmed superluminous supernova (SLSN) from the Dark Energy Survey (DES). We briefly discuss the data and search algorithm used to find this event in the first year of DES operations, and outline the spectroscopic data obtained from the European Southern Observatory (ESO) Very Large Telescope to confirm its redshift (z = 0.663 ± 0.001 based on the host-galaxy emission lines) and likely spectral type (type I). Using this redshift, we find a peak rest-frame U-band absolute magnitude of M_U = -21.05 (+0.10/-0.09), and find DES13S2cmm to be located in a faint, low-metallicity (sub-solar), low stellar-mass host galaxy (log(M/M⊙) = 9.3 ± 0.3), consistent with what is seen for other SLSNe-I. We compare the bolometric light curve of DES13S2cmm to fourteen similarly well-observed SLSNe-I in the literature and find it possesses one of the slowest declining tails (beyond +30 days rest frame past peak), and is the faintest at peak. Moreover, we find the bolometric light curves of all SLSNe-I studied herein possess a dispersion of only 0.2–0.3 magnitudes between +25 and +30 days after peak (rest frame) depending on redshift range studied; this could be important for 'standardising' such supernovae, as is done with the more common type Ia. We fit the bolometric light curve of DES13S2cmm with two competing models for SLSNe-I – the radioactive decay of ⁵⁶Ni, and a magnetar – and find that while the magnetar is formally a better fit, neither model provides a compelling match to the data. Although we are unable to conclusively differentiate between these two physical models for this particular SLSN-I, further DES observations of more SLSNe-I should break this degeneracy, especially if the light curves of SLSNe-I can be observed beyond 100 days in the rest frame of the supernova.
Piccolo, Lidia Del; Finset, Arnstein; Mellblom, Anneli V; Figueiredo-Braga, Margarida; Korsvold, Live; Zhou, Yuefang; Zimmermann, Christa; Humphris, Gerald
2017-12-01
To discuss the theoretical and empirical framework of VR-CoDES and potential future directions in research based on the coding system. The paper is based on a selective review of papers relevant to the construction and application of VR-CoDES. The VR-CoDES system is rooted in the patient-centered and biopsychosocial model of healthcare consultations and in a functional approach to emotion theory. According to the VR-CoDES, emotional interaction is studied in terms of sequences consisting of an eliciting event, an emotional expression by the patient and the immediate response by the clinician. The rationale for the emphasis on sequences, on detailed classification of cues and concerns, and on the choices of explicit vs. non-explicit responses and providing vs. reducing room for further disclosure, as basic categories of the clinician responses, is described. Results from research on VR-CoDES may help raise awareness of emotional sequences. Future directions in applying VR-CoDES in research may include studies on predicting patient and clinician behavior within the consultation, qualitative analyses of longer sequences including several VR-CoDES triads, and studies of effects of emotional communication on health outcomes. VR-CoDES may be applied to develop interventions to promote good handling of patients' emotions in healthcare encounters.
NASA Astrophysics Data System (ADS)
Ayoub, Simon
The electricity distribution and transmission grid is being modernized in several countries, including Canada. The new generation of this grid, known as the smart grid, enables, among other things, the automation of generation, of distribution and of load management at customer sites. At the same time, smart household appliances equipped with a communication interface for smart grid applications are beginning to appear on the market. These smart appliances could form a virtual community to optimize their consumption in a distributed way. Distributed management of these smart loads requires communication between a large number of electrical devices. This is a significant challenge, especially if the cost of infrastructure and maintenance is not to increase. In this thesis, two distinct systems were designed: a peer-to-peer communication system, called Ring-Tree, allowing communication between a large number of nodes (up to the order of a million), such as communicating electrical appliances, and a distributed technique for managing load on the electrical grid. The Ring-Tree communication system includes a new network topology that has never been defined or exploited before. It also includes algorithms for the creation, operation and maintenance of this network. It is simple enough to be implemented on controllers associated with devices such as water heaters, storage heating and electric charging stations. It does not use a centralized server (or only minimally, when a node wants to join the network). It offers a distributed solution that can be implemented without deploying any infrastructure other than the controllers on the targeted devices. Finally, a response time of a few seconds to reach the whole network can be obtained, which is sufficient for the needs of the targeted applications. The communication protocols rely on a transport protocol that can be one of those used on the Internet, such as TCP or UDP. To validate the operation of the distributed control technique and the Ring-Tree communication system, a simulator was developed; a water heater model, as an example load, was integrated into the simulator. The simulation of a community of smart water heaters showed that the load management technique, combined with energy storage in thermal form, makes it possible to obtain, without affecting user comfort, a variety of consumption profiles, including a uniform consumption profile corresponding to a load factor of 100%. Keywords: Distributed Algorithm, Demand Response, Electrical Load Management, M2M (Machine-to-Machine), P2P (Peer-to-Peer), Smart Grid, Ring-Tree
NASA Astrophysics Data System (ADS)
Coulibaly, Issa
As the main source of drinking water for the municipality of Edmundston, the Iroquois/Blanchette watershed is a critical asset for the city, hence the constant efforts made to preserve its water quality. Several studies have been carried out there. The most recent ones identified pollution threats of various origins, including those associated with climate change (e.g. Maaref 2012). Given the projected impacts of climate change across New Brunswick, the Iroquois/Blanchette watershed could be strongly affected, and in various ways. Several impact scenarios are possible, notably the risks of flooding, erosion and pollution through an increase in precipitation and runoff. In the face of all these potential threats, the objective of this study is to assess the potential impacts of climate change on erosion and pollution risks at the scale of the Iroquois/Blanchette watershed. To this end, the Canadian version of the Revised Universal Soil Loss Equation, RUSLE-CAN, and the SWAT (Soil and Water Assessment Tool) hydrological model were used to model erosion and pollution risks in the study area. The data used to carry out this work come from various sources (remote sensing, soil, topographic, meteorological, etc.). The simulations were carried out in two distinct stages, first under current conditions with 2013 chosen as the reference year, and then for 2025 and 2050. The results show an upward trend in sediment production in the coming years. The maximum annual production increases by 8.34% and 8.08% in 2025 and 2050 respectively under our most optimistic scenario, and by 29.99% in 2025 and 29.72% in 2050 under the most pessimistic scenario, compared with 2013. As for pollution, the observed concentrations (sediment, nitrate and phosphorus) evolve with climate change. The maximum sediment concentration decreases in 2025 and 2050 compared with 2013: from 11.20 mg/l in 2013, it falls to 9.03 mg/l in 2025 and then 6.25 mg/l in 2050. The maximum nitrate concentration is also expected to decrease over the years, more markedly in 2025: from 4.12 mg/l in 2013, it falls to 1.85 mg/l in 2025 and then 2.90 mg/l in 2050. The phosphorus concentration, by contrast, increases in the coming years relative to 2013, from 0.056 mg/l in 2013 to 0.234 mg/l in 2025 and then 0.144 mg/l in 2050.
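The RUSLE family of models used above estimates average annual soil loss as a product of empirical factors, A = R · K · LS · C · P. The sketch below shows only that bookkeeping and one way a wetter-climate scenario might be expressed through the rainfall-erosivity factor; all factor values are hypothetical, and the calibrated Canadian factors of RUSLE-CAN are not reproduced here.

# Minimal RUSLE-style soil loss estimate: A = R * K * LS * C * P
# Factor values are illustrative placeholders, not calibrated RUSLE-CAN inputs.

def soil_loss(R, K, LS, C, P):
    """Average annual soil loss (t/ha/yr) from the five RUSLE factors."""
    return R * K * LS * C * P

# Hypothetical factor values for one hillslope cell of the watershed
A_2013 = soil_loss(R=1200.0, K=0.03, LS=1.8, C=0.12, P=1.0)
A_2050 = soil_loss(R=1200.0 * 1.30, K=0.03, LS=1.8, C=0.12, P=1.0)  # ~30% higher erosivity scenario
print(f"baseline: {A_2013:.1f} t/ha/yr, 2050 scenario: {A_2050:.1f} t/ha/yr")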
Snap-through instability analysis of dielectric elastomers with consideration of chain entanglements
NASA Astrophysics Data System (ADS)
Zhu, Jiakun; Luo, Jun; Xiao, Zhongmin
2018-06-01
It is widely recognized that the extension limit of polymer chains has a significant effect on the snap-through instability of dielectric elastomers (DEs). The snap-through instability of DEs has been extensively studied with two limited-stretch models, i.e., the eight-chain model and the Gent model. However, real polymer networks usually have many entanglements due to the impenetrability of the network chains, as well as a finite extensibility resulting from the full stretching of the polymer chains. The effects of entanglements on the snap-through instability of DEs cannot be captured by the previous two limited-stretch models. In this paper, the nonaffine model proposed by Davidson and Goulbourne is adopted to characterize the influence of entanglements and the extension limit of the polymer chains. It is demonstrated that the nonaffine model is almost identical to the eight-chain model and is close to the Gent model if we ignore the effects of chain entanglements and adopt the affine assumption. The suitability of the nonaffine model to characterize the mechanical behavior of elastomers is validated by fitting the experimental results reported in the open literature. After that, the snap-through stability performance of an ideal DE membrane under equal-biaxial prestretches is studied with the nonaffine model. It is revealed that besides the prestretch and chain extension limit, the chain entanglements can markedly influence the snap-through instability and the path to failure of DEs. These results provide a more comprehensive understanding of the snap-through instability of a DE and may be helpful in guiding the design of DE devices.
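For context, the snap-through behavior discussed above is usually introduced through the voltage-stretch relation of an ideal dielectric elastomer under equal-biaxial, voltage-only loading. The sketch below evaluates that classic relation with the Gent strain-energy function mentioned in the abstract, not the nonaffine Davidson-Goulbourne model of the paper; the limiting parameter J_lim and the stretch range are illustrative choices.

import numpy as np

# Classic voltage-stretch relation for an ideal dielectric elastomer under
# equal-biaxial, voltage-only loading with the Gent strain-energy function.
# Normalized units: Phi_hat = (Phi/H) * sqrt(eps/mu). J_lim is an illustrative
# choice, not a fitted value from the paper.

def normalized_voltage(lam, J_lim):
    """Dimensionless nominal electric field versus equal-biaxial stretch (Gent, ideal DE)."""
    I1 = 2.0 * lam**2 + lam**-4
    return np.sqrt((lam**-2 - lam**-8) / (1.0 - (I1 - 3.0) / J_lim))

lam = np.linspace(1.01, 3.9, 2000)
phi = normalized_voltage(lam, J_lim=30.0)

# An N-shaped curve (local maximum, then a minimum, then rapid stiffening as I1
# approaches J_lim + 3) signals snap-through from the low-stretch branch to the
# stiffened high-stretch branch.
i_peak = int(np.argmax(np.where(lam < 2.0, phi, -np.inf)))
i_min = int(np.argmin(np.where(lam > 2.0, phi, np.inf)))
print(f"local peak: lambda = {lam[i_peak]:.2f}, Phi_hat = {phi[i_peak]:.3f}")
print(f"local minimum: lambda = {lam[i_min]:.2f}, Phi_hat = {phi[i_min]:.3f}")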
The Development of Cervical and Vaginal Adenosis as a Result of Diethylstilbestrol Exposure In Utero
Laronda, Monica M.; Unno, Kenji; Butler, Lindsey M.; Kurita, Takeshi
2012-01-01
Exposure to exogenous hormones during development can result in permanent health problems. In utero exposure to diethylstilbestrol (DES) is probably the most well documented case in human history. DES, an orally active synthetic estrogen, was believed to prevent adverse pregnancy outcomes and thus was routinely given to selected pregnant women from the 1940s to the 1960s. It has been estimated that 5 million pregnant women worldwide were prescribed DES during this period. In the early 1970s, vaginal clear cell adenocarcinomas (CCACs) were diagnosed in daughters whose mothers took DES during pregnancy (known as DES daughters). Follow-up studies demonstrated that exposure to DES in utero causes a spectrum of congenital anomalies in female reproductive tracts and CCACs. Among those, cervical and vaginal adenoses are most commonly found, which are believed to be the precursors of CCACs. Transformation related protein 63 (TRP63/p63) marks the cell fate decision of Müllerian duct epithelium (MDE) to become squamous epithelium in the cervix and vagina. DES disrupts TRP63 expression in mice and induces adenosis lesions in the cervix and vagina. This review describes mouse models that can be used to study the development of DES-induced anomalies, focusing on cervical and vaginal adenoses, and discusses their molecular pathogenesis. PMID:22682699
Gued, Lisa R; Thomas, Ronald D; Green, Mario
2003-01-01
Diallyl sulfide (DAS) is a component of garlic and prevents cancer in several animal models in various organs. The chemopreventive effects of DAS are attributed to modulation of enzymes to alter the bioactivation of xenobiotics. Diethylstilbestrol (DES) is a synthetic estrogen that causes breast cancer in female ACI rats subsequent to metabolism with concurrent free radical production. This study assessed the effect of DAS on DES-induced reactive oxygen species (ROS) using lipid peroxidation as an empirical endpoint. We have demonstrated that acute exposure to DES results in a significant increase in lipid hydroperoxides (LPH) in breast tissue and that DAS attenuated DES-induced LPH concentrations. Two-week exposure to DES caused significant increases in LPH concentrations in breast and liver tissues. DES-induced LPH concentrations were decreased by coadministration of DAS at this time point. There were no statistical differences in the concentrations of LPH in breast and liver tissues of rats treated for 4/6 weeks with DAS/DES. These results demonstrate that DAS inhibits the production of ROS, which suggests that DAS effectively inhibits DES bioactivation in female ACI rats; this may have implications for chemopreventive intervention strategies. Our results suggest that garlic consumption might be useful for the prevention of human breast cancers.
Focusing cosmic telescopes: systematics of strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci Lin; Sharon, Keren
2018-01-01
The use of strong gravitational lensing by galaxy clusters has become a popular method for studying the high redshift universe. While diverse in computational methods, lens modeling techniques have developed the means for determining statistical errors on cluster masses and magnifications. However, the systematic errors have yet to be quantified, arising from the number of constraints, availability of spectroscopic redshifts, and various types of image configurations. I will be presenting my dissertation work on quantifying systematic errors in parametric strong lensing techniques. I have participated in the Hubble Frontier Fields lens model comparison project, using simulated clusters to compare the accuracy of various modeling techniques. I have extended this project to understanding how changing the quantity of constraints affects the mass and magnification. I will also present my recent work extending these studies to clusters in the Outer Rim Simulation. These clusters are typical of the clusters found in wide-field surveys, in mass and lensing cross-section. These clusters have fewer constraints than the HFF clusters and thus are more susceptible to systematic errors. With the wealth of strong lensing clusters discovered in surveys such as SDSS, SPT, DES, and in the future, LSST, this work will be influential in guiding the lens modeling efforts and follow-up spectroscopic campaigns.
Risk Assessment of Anthrax Threat Letters
2001-09-01
extent of the hazard. In the experiments, envelopes containing Bacillus globigii spores (a simulant for anthrax) were opened in a mock mail room/office...Bacillus globigii spores (a bacterium simulating the anthrax agent) were opened in a location simulating a mail room or an...provide guidance to first responders and other government departments. In this study (non-pathogenic) Bacillus globigii (BG) spore contaminated
NASA Astrophysics Data System (ADS)
Salissou, Yacoubou
The overall objective of this thesis is to improve the characterization of the macroscopic properties of rigid- or limp-frame porous materials through inverse and indirect approaches based on acoustic measurements made in an impedance tube. The accuracy of the inverse and indirect approaches used today is mainly limited by the quality of the acoustic measurements obtained in an impedance tube. Consequently, this thesis addresses four problems that will help achieve the overall objective stated above. The first problem concerns the accurate characterization of the open porosity of porous materials. This property links the measured dynamic acoustic properties of a porous material to the effective properties of its fluid phase as described by semi-phenomenological models. The second problem deals with the assumption of symmetry of porous materials through their thickness, for which an index and a criterion are proposed to quantify the asymmetry of a material. This assumption is often a source of inaccuracy in inverse and indirect impedance-tube characterization methods. The proposed asymmetry criterion thus makes it possible to verify the applicability and accuracy of these methods for a given material. The third problem aims to better understand the sound transmission problem in an impedance tube by presenting, for the first time, an exact treatment of the problem by wave decomposition. This treatment clearly establishes the limits of the many existing methods based on transmission tubes with 2, 3 or 4 microphones. A better understanding of this transmission problem is important since it is through this type of measurement that methods can successively extract the transfer matrix of a porous material and its intrinsic dynamic properties such as its characteristic impedance and complex wavenumber. Finally, the fourth problem concerns the development of a new exact three-microphone transmission method applicable to symmetric and non-symmetric materials or systems. In the symmetric case, this approach is shown to provide a marked improvement in the characterization of the intrinsic dynamic properties of a material. Keywords: porous materials, impedance tube, sound transmission, sound absorption, acoustic impedance, symmetry, porosity, transfer matrix.
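For orientation, the simplest impedance-tube measurement alluded to above is the standard two-microphone transfer-function method for normal-incidence absorption. The sketch below implements that textbook relation, not the thesis's exact three-microphone transmission method; the microphone spacing, sample distance and synthetic transfer function are assumed placeholder values.

import numpy as np

# Standard two-microphone transfer-function method (ISO 10534-2 style) for
# normal-incidence absorption in an impedance tube. This is NOT the thesis's
# three-microphone transmission method; it only illustrates the measurement
# principle. H12 is the measured transfer function p2/p1 between microphones.

def absorption_coefficient(f, H12, s, x1, c0=343.0):
    """f: frequencies [Hz]; H12: complex transfer function p2/p1;
    s: microphone spacing [m]; x1: distance from sample face to the farther microphone [m]."""
    k = 2 * np.pi * f / c0                      # wavenumber (lossless tube assumed)
    H_i = np.exp(-1j * k * s)                   # incident-wave transfer function
    H_r = np.exp(+1j * k * s)                   # reflected-wave transfer function
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * x1)  # reflection coefficient at the sample face
    return 1.0 - np.abs(R) ** 2                 # normal-incidence absorption coefficient

# Hypothetical usage with synthetic data
f = np.linspace(200, 2000, 10)
H12 = np.exp(-1j * 2 * np.pi * f / 343.0 * 0.05)   # placeholder "measurement"
alpha = absorption_coefficient(f, H12, s=0.05, x1=0.10)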
Ma, Jie; Yun-guang, Hu; Zhang, Da-hui
2008-08-01
To observe the effect of acupuncture (Acu) on bone metabolism and serum estradiol (E2) in ovariectomized (OVX) rats for studying its underlying mechanism in treating osteoporosis. Forty female SD rats of six months were randomized into sham operation (sham), model, Acu and diethylstilbestrol (DES) groups, with 10 cases in each. A postmenopausal osteoporosis model was established by removing the ovaries under anesthesia. In the Acu group, bilateral "Dazhu" (BL 11), "Shenshu" (BL 23) and "Pishu" (BL 20) were punctured and stimulated for 30 minutes, once daily for 60 days. Rats of the DES group were drenched with saline + DES (22.5 microg/ml) 1 ml/100 g, once daily for 60 days. At the end of the experiments, blood samples were collected from the femoral artery for assaying serum alkaline phosphatase (ALP) and tartrate-resistant acid phosphatase (TRAP) contents by biochemistry, and serum bone gla protein (BGP) and E2 levels by immunoradioassay. Compared with the sham group, uterus wet weight and serum E2 content in the model group decreased significantly (P < 0.01), while body weight and serum ALP, BGP and TRAP levels in the model group increased significantly (P < 0.01, 0.05). Compared with the model group, uterus wet weight and serum E2 content in the Acu and DES groups increased significantly (P < 0.01), while body weight and serum ALP, BGP and TRAP levels decreased considerably (P < 0.01). No significant differences were found between the Acu and DES groups in serum E2, ALP, BGP and TRAP levels (P > 0.05). Acupuncture can suppress the abnormal increase of body weight and decrease of serum E2 level, and significantly downregulate serum ALP, BGP and TRAP levels in OVX rats, which may contribute to its effect in relieving osteoporosis.
NASA Astrophysics Data System (ADS)
Homier, Ram
In the current environmental context, photovoltaics benefits from increased research efforts in the field of renewable energy. To reduce the cost of electricity production by direct conversion of light into electricity, concentrated photovoltaics is attractive. The principle is to concentrate a large amount of light onto small areas of high-efficiency multi-junction solar cells. When fabricating a solar cell, it is essential to include a method for reducing the reflection of light at the surface of the device. The design of an antireflective coating (ARC) for multi-junction solar cells presents challenges because of the wide absorption band and the need to balance the current produced by each subcell. Silicon nitride deposited by PECVD under standard conditions is widely used in the silicon solar cell industry. However, this dielectric absorbs in the short-wavelength range. We propose the use of silicon nitride deposited by low-frequency PECVD (LFSiN), optimized to have a high refractive index and low optical absorption, for the ARC of III-V/Ge triple-junction solar cells. This material can also serve as a passivation/encapsulation layer. Simulations show that the SiO2/LFSiN double-layer ARC can be very effective at reducing reflection losses in the wavelength range of the limiting subcell, both for triple-junction solar cells limited by the top subcell and for those limited by the middle subcell. We also demonstrate that the performance of the structure is robust to fluctuations in the parameters of the PECVD layers (thicknesses, refractive index). Keywords: concentrated photovoltaics (CPV), multi-junction solar cells (MJSC), antireflective coating (ARC), III-V semiconductor passivation, silicon nitride (SiNy), PECVD.
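The reflection-loss simulations mentioned above rest on standard thin-film transfer-matrix optics. The sketch below computes the normal-incidence reflectance of a generic two-layer coating on a high-index substrate; the refractive indices and thicknesses are illustrative placeholders (dispersion and absorption are ignored), not the optimized LFSiN stack of the thesis.

import numpy as np

# Normal-incidence reflectance of a two-layer antireflective coating on a
# substrate via the standard thin-film characteristic-matrix method.
# Indices and thicknesses are illustrative placeholders, not measured LFSiN data.

def layer_matrix(n, d, wl):
    """Characteristic matrix of a non-absorbing layer: index n, thickness d [nm], wavelength wl [nm]."""
    delta = 2 * np.pi * n * d / wl
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(wl, layers, n_sub, n0=1.0):
    """layers: list of (n, d) ordered from the air side toward the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wl)
    B, C = M @ np.array([1.0, n_sub])
    r = (n0 * B - C) / (n0 * B + C)
    return np.abs(r) ** 2

# Hypothetical SiO2-like (n~1.46) / nitride-like (n~2.3) stack on a high-index substrate (n~3.3)
for wl in (450.0, 650.0, 900.0):
    R = reflectance(wl, layers=[(1.46, 100.0), (2.3, 55.0)], n_sub=3.3)
    print(f"{wl:.0f} nm: R = {R:.3f}")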
2010-12-01
Base (CFB) Kingston. The computer simulation developed in this project is intended to be used for future research and as a possible training platform...DRDC Toronto No. CR 2010-055 Development of an E-Prime based computer simulation of an interactive Human Rights Violation negotiation script...Abstract This report describes the method of developing an E-Prime computer simulation of an interactive Human Rights Violation (HRV) negotiation. An
Use of DES Modeling for Determining Launch Availability for SLS
NASA Technical Reports Server (NTRS)
Watson, Mike; Staton, Eric; Cates, Grant; Finn, Ron; Altino, Karen; Burns, Lee
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing new capabilities for human and scientific exploration beyond Earth's orbit. This effort includes the Space Shuttle derived Space Launch System (SLS), the Multi-Purpose Crew Vehicle (MPCV) "Orion", and the Ground Systems Development and Operations (GSDO). There are several requirements and Technical Performance Measures (TPMs) that have been levied by the Exploration Systems Development (ESD) upon the SLS, MPCV, and GSDO Programs including an integrated Launch Availability (LA) TPM. The LA TPM is used to drive into the SLS, Orion and GSDO designs a high confidence of successfully launching exploration missions that have narrow Earth departure windows. The LA TPM takes into consideration the reliability of the overall system (SLS, Orion and GSDO), natural environments, likelihood of a failure, and the time required to recover from an anomaly. A challenge with the LA TPM is that the interrelationships between SLS, Orion, GSDO and the natural environments during launch countdown and launch delays make it impossible to develop an analytical solution for calculating the integrated launch probability. This paper provides an overview of how Discrete Event Simulation (DES) modeling was used to develop the LA TPM, how it was allocated down to the individual programs, and how the LA analysis is being used to inform and drive the SLS, Orion, and GSDO designs to ensure adequate launch availability for future human exploration.
Use of DES Modeling for Determining Launch Availability for SLS
NASA Technical Reports Server (NTRS)
Staton, Eric; Cates, Grant; Finn, Ronald; Altino, Karen M.; Burns, K. Lee; Watson, Michael D.
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing new capabilities for human and scientific exploration beyond Earth's orbit. This effort includes the Space Shuttle derived Space Launch System (SLS), the Orion Multi-Purpose Crew Vehicle (MPCV), and the Ground Systems Development and Operations (GSDO). There are several requirements and Technical Performance Measures (TPMs) that have been levied by the Exploration Systems Development (ESD) upon the SLS, Orion, and GSDO Programs including an integrated Launch Availability (LA) TPM. The LA TPM is used to drive into the SLS, Orion and GSDO designs a high confidence of successfully launching exploration missions that have narrow Earth departure windows. The LA TPM takes into consideration the reliability of the overall system (SLS, Orion and GSDO), natural environments, likelihood of a failure, and the time required to recover from an anomaly. A challenge with the LA TPM is that the interrelationships between SLS, Orion, GSDO and the natural environments during launch countdown and launch delays make it impossible to develop an analytical solution for calculating the integrated launch probability. This paper provides an overview of how Discrete Event Simulation (DES) modeling was used to develop the LA TPM, how it was allocated down to the individual programs, and how the LA analysis is being used to inform and drive the SLS, Orion, and GSDO designs to ensure adequate launch availability for future human exploration.
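The launch-availability analysis described in these two papers lends itself to a very compact illustration: repeatedly simulate a launch window in which each attempt can scrub and each scrub consumes recovery time, then count the fraction of trials that get off the ground. The Monte Carlo sketch below shows only that idea; the window length, scrub probability and recovery time are made-up placeholders, not ESD, SLS, Orion or GSDO figures, and the real DES model is far more detailed.

import random

# Toy Monte Carlo illustration of launch availability: walk through the days of
# an Earth-departure window, scrub with some probability on each attempt, and
# spend a fixed recovery time after each scrub. All numbers are placeholders.

def launch_in_window(window_days=5, p_scrub=0.30, recovery_days=2, rng=random):
    day = 0
    while day < window_days:
        if rng.random() > p_scrub:        # successful launch attempt
            return True
        day += recovery_days              # stand down, recycle, try again
    return False                          # window closed without a launch

def launch_availability(trials=100_000, **kwargs):
    rng = random.Random(0)
    return sum(launch_in_window(rng=rng, **kwargs) for _ in range(trials)) / trials

print(f"estimated launch availability: {launch_availability():.3f}")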
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.; Sullivan, M.; D’Andrea, C. B.
2016-02-03
We present DES14X3taz, a new hydrogen-poor superluminous supernova (SLSN-I) discovered by the Dark Energy Survey (DES) supernova program, with additional photometric data provided by the Survey Using DECam for Superluminous Supernovae. Spectra obtained using Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy on the Gran Telescopio CANARIAS show DES14X3taz is an SLSN-I at z = 0.608. Multi-color photometry reveals a double-peaked light curve: a blue and relatively bright initial peak that fades rapidly prior to the slower rise of the main light curve. Our multi-color photometry allows us, for the first time, to show that the initial peak cools from 22,000 to 8000 K over 15 rest-frame days, and is faster and brighter than any published core-collapse supernova, reaching 30% of the bolometric luminosity of the main peak. No physical Ni-56-powered model can fit this initial peak. We show that a shock-cooling model followed by a magnetar driving the second phase of the light curve can adequately explain the entire light curve of DES14X3taz. Models involving the shock-cooling of extended circumstellar material at a distance of ≃400 R⊙ are preferred over the cooling of shock-heated surface layers of a stellar envelope. We compare DES14X3taz to the few double-peaked SLSN-I events in the literature. Although the rise times and characteristics of these initial peaks differ, there exists the tantalizing possibility that they can be explained by one physical interpretation.
Smith, M.
2016-02-03
Here, we present DES14X3taz, a new hydrogen-poor superluminous supernova (SLSN-I) discovered by the Dark Energy Survey (DES) supernova program, with additional photometric data provided by the Survey Using DECam for Superluminous Supernovae. Spectra obtained using Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy on the Gran Telescopio CANARIAS show DES14X3taz is an SLSN-I at z = 0.608. Multi-color photometry reveals a double-peaked light curve: a blue and relatively bright initial peak that fades rapidly prior to the slower rise of the main light curve. Our multi-color photometry allows us, for the first time, to show that the initial peak cools from 22,000 to 8000 K over 15 rest-frame days, and is faster and brighter than any published core-collapse supernova, reaching 30% of the bolometric luminosity of the main peak. No physical (56)Ni-powered model can fit this initial peak. We show that a shock-cooling model followed by a magnetar driving the second phase of the light curve can adequately explain the entire light curve of DES14X3taz. Models involving the shock-cooling of extended circumstellar material at a distance of ≃400 R⊙ are preferred over the cooling of shock-heated surface layers of a stellar envelope. We compare DES14X3taz to the few double-peaked SLSN-I events in the literature. Although the rise times and characteristics of these initial peaks differ, there exists the tantalizing possibility that they can be explained by one physical interpretation.
NASA Astrophysics Data System (ADS)
Tutashkonko, Sergii
The subject of this thesis is the fabrication of a new nanomaterial by bipolar electrochemical etching (BEE), mesoporous Ge, and the analysis of its physico-chemical properties with a view to its use in photovoltaic applications. The formation of mesoporous Ge by electrochemical etching has previously been reported in the literature. However, a major technological obstacle of the existing fabrication processes was obtaining thick layers (greater than 500 nm) of mesoporous Ge with a perfectly controlled morphology. Indeed, the physico-chemical characterization of thin layers is much more complicated, and the number of their possible applications is strongly limited. We developed an electrochemical model that describes the main pore formation mechanisms, which allowed us to produce thick mesoporous Ge structures (up to 10 μm) with a porosity tunable over a wide range, from 15% to 60%. In addition, the formation of porous nanostructures with variable and well-controlled morphologies has now become possible. Finally, mastering all these parameters has opened an extremely promising route toward multilayer porous structures based on Ge for numerous innovative and multidisciplinary applications, thanks to the technological flexibility now achieved. In particular, within this thesis, the mesoporous Ge layers were optimized with the aim of realizing a thin-film layer-transfer process for a triple-junction solar cell via a sacrificial porous Ge layer. Keywords: mesoporous germanium, bipolar electrochemical etching, semiconductor electrochemistry, thin-film transfer, photovoltaic cell.
2012-10-01
in the selection literature today is the Five Factor Model ( FFM ) or “Big 5” model of personality. This model includes: 1) Openness; 2...Conscientiousness; 3) Extraversion; 4) Agreeableness; and 5) Emotional Stability. Meta-analytic studies have found the FFM of personality to be predictive...is a self-report measure of the FFM that has demonstrated reliability and validity in numerous studies [18]. Another FFM measure, the Trait Self
Silvestro, Daniele; Zizka, Alexander; Bacon, Christine D; Cascales-Miñana, Borja; Salamin, Nicolas; Antonelli, Alexandre
2016-04-05
Methods in historical biogeography have revolutionized our ability to infer the evolution of ancestral geographical ranges from phylogenies of extant taxa, the rates of dispersals, and biotic connectivity among areas. However, extant taxa are likely to provide limited and potentially biased information about past biogeographic processes, due to extinction, asymmetrical dispersals and variable connectivity among areas. Fossil data hold considerable information about past distribution of lineages, but suffer from largely incomplete sampling. Here we present a new dispersal-extinction-sampling (DES) model, which estimates biogeographic parameters using fossil occurrences instead of phylogenetic trees. The model estimates dispersal and extinction rates while explicitly accounting for the incompleteness of the fossil record. Rates can vary between areas and through time, thus providing the opportunity to assess complex scenarios of biogeographic evolution. We implement the DES model in a Bayesian framework and demonstrate through simulations that it can accurately infer all the relevant parameters. We demonstrate the use of our model by analysing the Cenozoic fossil record of land plants and inferring dispersal and extinction rates across Eurasia and North America. Our results show that biogeographic range evolution is not a time-homogeneous process, as assumed in most phylogenetic analyses, but varies through time and between areas. In our empirical assessment, this is shown by the striking predominance of plant dispersals from Eurasia into North America during the Eocene climatic cooling, followed by a shift in the opposite direction, and finally, a balance in biotic interchange since the middle Miocene. We conclude by discussing the potential of fossil-based analyses to test biogeographic hypotheses and improve phylogenetic methods in historical biogeography. © 2016 The Author(s).
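For readers unfamiliar with this class of model, the sketch below sets up the continuous-time Markov chain at the core of a two-area dispersal-extinction process (states: present in area A, in area B, or in both, plus global extinction) and computes transition probabilities over a time bin with a matrix exponential. The rate values are arbitrary, and the preservation/sampling layer of the published DES model is omitted; this is an illustration, not the authors' implementation.

    # Minimal sketch of the CTMC underlying a two-area dispersal-extinction model.
    import numpy as np
    from scipy.linalg import expm

    def des_rate_matrix(dAB, dBA, eA, eB):
        # State order: 0 = extinct, 1 = A only, 2 = B only, 3 = AB
        Q = np.array([
            [0.0,      0.0,         0.0,         0.0       ],  # extinction is absorbing
            [eA,   -(eA + dAB),     0.0,         dAB       ],  # A -> extinct, or disperse to AB
            [eB,       0.0,     -(eB + dBA),     dBA       ],  # B -> extinct, or disperse to AB
            [0.0,      eB,          eA,     -(eA + eB)     ],  # AB loses one of the two areas
        ])
        return Q

    Q = des_rate_matrix(dAB=0.1, dBA=0.05, eA=0.02, eB=0.04)
    P = expm(Q * 5.0)        # transition probabilities over a 5 Myr interval
    print(P[1])              # probabilities of each end state, starting from "A only"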
Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm
NASA Technical Reports Server (NTRS)
Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama
2001-01-01
An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation model is also implemented and its performance compared to the one equation Spalart-Allmaras Reynolds Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.
Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...
2018-05-15
Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]
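To make the baseline method concrete, the following is a flat-sky Kaiser-Squires inversion sketch on a regular, periodic grid; it ignores survey masks and noise, which is precisely the KS limitation noted above, and it is an illustration rather than the pipeline used in the paper.

    # Flat-sky Kaiser-Squires inversion sketch (illustrative only).
    import numpy as np

    def kaiser_squires(gamma1, gamma2):
        ny, nx = gamma1.shape
        l1 = np.fft.fftfreq(nx)[np.newaxis, :]      # wavenumbers along x
        l2 = np.fft.fftfreq(ny)[:, np.newaxis]      # wavenumbers along y
        l_sq = l1**2 + l2**2
        l_sq[0, 0] = 1.0                            # avoid division by zero at l = 0
        g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
        # kappa_E + i*kappa_B = conj(D) * gamma_hat, with D = (l1^2 - l2^2 + 2i l1 l2) / l^2
        D_conj = (l1**2 - l2**2 - 2j * l1 * l2) / l_sq
        kappa = np.fft.ifft2(D_conj * g_hat)
        kappa.flat[0] = 0.0                         # mean convergence is unconstrained
        return kappa.real, kappa.imag               # E-mode and B-mode maps

    g1 = np.random.normal(0, 0.01, (64, 64))        # toy shear fields
    g2 = np.random.normal(0, 0.01, (64, 64))
    kE, kB = kaiser_squires(g1, g2)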
NASA Astrophysics Data System (ADS)
Bergeron, Jean
Snow cover estimation is a principal source of error for spring streamflow simulations in Québec, Canada. Optical and near-infrared remote sensing can improve snow cover area (SCA) estimation thanks to its high spatial resolution, but is limited by cloud cover and incoming solar radiation. Passive microwave remote sensing is complementary through its near-transparency to cloud cover and its independence from incoming solar radiation, but is limited by its coarse spatial resolution. The study aims to create an improved SCA product from blended passive microwave (AMSR-E daily L3 brightness temperature) and optical (MODIS Terra and Aqua daily snow cover L3) remote sensing data in order to improve the estimation of river streamflow caused by snowmelt with Québec's operational MOHYSE hydrological model, through direct insertion of the blended SCA product in a coupled snowmelt module (SPH-AV). SCA estimated from AMSR-E data is first compared with SCA estimated from MODIS, as well as with in situ snow depth measurements. Results show good agreement (+95%) between the AMSR-E-derived and MODIS-derived SCA products in spring, but comparisons with Environment Canada ground stations and with SCA derived from Advanced Very High Resolution Radiometer (AVHRR) data show lower agreement (83% and 74%, respectively). Results also show that AMSR-E generally underestimates SCA. Assimilating the blended snow product in SPH-AV coupled with MOHYSE yields significant improvement of simulated streamflow for the aux Écorces and au Saumon rivers overall when compared with simulations with no update during thaw events. These improvements are similar to results driven by biweekly ground data. Assimilation of remotely sensed passive microwave data was also found to have little positive impact on spring flood forecasts, due to the difficulty of differentiating melting snow from snow-free surfaces. Comparing the direct-insertion and Newtonian nudging assimilation methods, the study also shows the latter to be superior to the former, notably when assimilating noisy data. Keywords: snow cover, spring streamflow, MODIS, AMSR-E, hydrological model.
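A toy contrast of the two update rules compared in that study is sketched here; the gain value and variable names are hypothetical and stand in for the actual assimilation scheme.

    # Toy contrast between direct insertion and Newtonian nudging (illustrative only).
    def direct_insertion(model_sca, observed_sca):
        # The model state is simply replaced by the observation.
        return observed_sca

    def newtonian_nudging(model_sca, observed_sca, gain=0.3):
        # The model state is relaxed toward the observation; a gain < 1 damps the
        # influence of noisy observations, which is why nudging behaved better than
        # direct insertion when the blended SCA product was noisy.
        return model_sca + gain * (observed_sca - model_sca)

    print(direct_insertion(0.60, 0.85))      # -> 0.85
    print(newtonian_nudging(0.60, 0.85))     # -> 0.675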
Particle Transport and Atmospheres of Magnetic Stars
NASA Astrophysics Data System (ADS)
LeBlanc, Francis
1995-01-01
The phenomena related to atomic diffusion in stars have been studied intensively for about a quarter of a century. Diffusion can both modify the atomic abundances present and affect stellar structure and evolution. In this thesis we study three physical phenomena related to diffusion. We developed the theory of radiation-induced drift so that it can be readily applied in the context of stellar astrophysics. Detailed calculations were carried out to evaluate the importance of this effect on the relative diffusion of 3He and 4He; they show that radiation-induced drift accelerates the separation of these two isotopes in a star with an effective temperature of 18,000 K. When 4He is present, this phenomenon increases the drift velocity of 3He, which migrates outward, so that the overabundance of this isotope appears earlier in the evolution. Calculations for lithium at the base of the convective zone of a star with an effective temperature of 6700 K show that radiation-induced drift is not important in that case. The phenomenon also appears to be negligible for oxygen in A-type stars and for mercury in B-type stars. Second, we constructed model atmospheres of stars with a horizontal, constant magnetic field, including the interaction between this field and the ambipolar diffusion of hydrogen. This interaction compresses the hydrogen ionization zone. In a model with an effective temperature of 10,000 K and log g = 4.0, the effective gravity, that is, gravity plus the acceleration caused by the Lorentz force, in the presence of a 5 kG magnetic field is seven times larger than gravity. This phenomenon therefore strongly affects the structure of Ap stars. This modification of the structure of magnetic stars causes a greater broadening of the hydrogen Balmer lines. Since the observed magnetic field is not uniform over the surface of Ap stars, the structural modification caused by the interaction between the ambipolar diffusion of hydrogen and the magnetic field produces a variation of the Balmer-line broadening over a rotation period. The variation caused by this phenomenon is smaller than the observed variations. Other factors, such as horizontal and vertical gradients of metallicity and of the magnetic field configuration, may also influence the variation of the Balmer lines. Major improvements were made to the calculation of radiative accelerations. Thanks to more complete databases, it is now possible to calculate the acceleration caused by photoionization. We also calculated, in an approximate manner, the total monochromatic opacity, which is an essential ingredient in the calculation of the radiative acceleration. Improvements concerning line broadening and the distribution of the acceleration among the various ions of an element were also included. Detailed calculations of the radiative acceleration on iron show that an abundance consistent with observations can be supported in A- and F-type stars. The supported iron abundance depends on the effective temperature and surface gravity of the star. The radiative accelerations were tabulated so as to be easily usable in stellar evolution codes.
NASA Astrophysics Data System (ADS)
Leclaire, Sebastien
The computer assisted simulation of the dynamics of fluid flow has been a highly rewarding topic of research for several decades now, in terms of the number of scientific problems that have been solved as a result, both in the academic world and in industry. In the fluid dynamics field, simulating multiphase immiscible fluid flow remains a challenge, because of the complexity of the interactions at the flow phase interfaces. Various numerical methods are available to study these phenomena, and, the lattice Boltzmann method has been shown in recent years to be well adapted to solving this type of complex flow. In this thesis, a lattice Boltzmann model for the simulation of two-phase immiscible flows is studied. The main objective of the thesis is to develop this promising method further, with a view to enhancing its validity. To achieve this objective, the research is divided into five distinct themes. The first two focus on correcting some of the deficiencies of the original model. The third generalizes the model to support the simulation of N-phase immiscible fluid flows. The fourth is aimed at modifying the model itself, to enable the simulation of immiscible fluid flows in which the density of the phases varies. With the lattice Boltzmann class of models studied here, this density variation has been inadequately modeled, and, after 20 years, the issue still has not been resolved. The fifth, which complements this thesis, is connected with the lattice Boltzmann method, in that it generalizes the theory of 2D and 3D isotropic gradients for a high order of spatial precision. These themes have each been the subject of a scientific article, as listed in the appendix to this thesis, and together they constitute a synthesis that explains the links between the articles, as well as their scientific contributions, and satisfy the main objective of this research. Globally, a number of qualitative and quantitative test cases based on the theory of multiphase fluid flows have highlighted issues plaguing the simulation model. These test cases have resulted in various modifications to the model, which have reduced or eliminated some numerical artifacts that were problematic. They also allowed us to validate the extensions that were applied to the original model.
Newbold, Retha R.; Jefferson, Wendy N.; Grissom, Sherry F.; Padilla-Banks, Elizabeth; Snyder, Ryan J.; Lobenhofer, Edward K.
2008-01-01
Previously, we described a mouse model where the well-known reproductive carcinogen with estrogenic activity, diethylstilbestrol (DES), caused uterine adenocarcinoma following neonatal treatment. Tumor incidence was dose-dependent, reaching >90% by 18 mo following neonatal treatment with 1000 μg/kg/d of DES. These tumors followed the initiation/promotion model of hormonal carcinogenesis, with developmental exposure as initiator and exposure to ovarian hormones at puberty as the promoter. To identify molecular pathways involved in DES-initiation events, uterine gene expression profiles were examined in prepubertal mice exposed to DES (1, 10, or 1000 μg/kg/d) on days 1-5 and compared to controls. Of more than 20,000 transcripts, approximately 3% were differentially expressed in at least one DES treatment group compared to controls; some transcripts demonstrated dose-responsiveness. Assessment of gene ontology annotation revealed alterations in genes associated with cell growth, differentiation, and adhesion. When expression profiles were compared to published studies of uteri from 5-d-old DES-treated mice, or adult mice treated with 17β-estradiol, similarities were seen, suggesting persistent differential expression of estrogen-responsive genes following developmental DES exposure. Moreover, several altered genes were identified in human uterine adenocarcinomas. Four altered genes [lactotransferrin (Ltf), transforming growth factor beta inducible (Tgfb1), cyclin D1 (Ccnd1), and secreted frizzled-related protein 4 (Sfrp4)], selected for real-time RT-PCR analysis, correlated well with the directionality of the microarray data. These data suggest that the altered gene expression profiles observed 2 wk after treatment ceased were established at the time of developmental exposure and may be related to the initiation events resulting in carcinogenesis. PMID:17394237
2010-12-01
however, was the possibility for students to choose the role of insurgents. Two weeks prior to the start of the simulation, the 78 undergraduate ...King, 2009), students in a political science class participated in a week-long simulation of large-scale regional insurgency. Before the simulation... Students could choose to be government officials, such as the president or the secretary of defence of a country. Alternatively students could role-play
1998-04-01
The result of the project is a demonstration of the fusion process, the sensors management and the real-time capabilities using simulated sensors...demonstrator (TAD) is a system that demonstrates the core ele- ment of a battlefield ground surveillance system by simulation in near real-time. The core...Management and Sensor/Platform simulation . The surveillance system observes the real world through a non-collocated heterogene- ous multisensory system
Study of the photoacoustic response of bulk 3D objects
NASA Astrophysics Data System (ADS)
Séverac, H.; Mousseigne, M.; Franceschi, J. L.
1996-11-01
In sectors where reliability is of capital importance, such as microelectronics or materials physics, it is particularly useful to obtain information on the behaviour of a material without resorting to a destructive method such as chemical analysis or other mechanical tests. The non-destructive testing method presented here is based on the generation of waves by the impact of a laser beam focused on the surface of a sample, without reaching the ablation regime. Studying the propagation of the various waves in three-dimensional space provides quantitative measurements for analysing the response of the materials used. Modelling the thermoelastic phenomena allowed a rigorous analytical approach and gave rise to simulation software written in Turbo-Pascal for more general studies.
NASA Astrophysics Data System (ADS)
Guérin, Christophe; Marin, Gildas; Garnero, Line; Meunier, Gérard
1997-12-01
In order to compute the electric potential and the magnetic induction generated by the electrical activity of the brain, numerical methods based on the Finite Element Method have been developed. These methods, which can handle realistic head models and take into account anisotropic conductivity, for instance in the skull, are presented. Two numerical examples are then described: a model of concentric spheres and a realistic head model.
Navigation of an autonomous vehicle around an asteroid
NASA Astrophysics Data System (ADS)
Dionne, Karine
Planetary exploration missions use spacecraft to acquire the scientific data that advance our knowledge of the solar system. Since the 1990s, these missions have targeted not only planets but also smaller celestial bodies such as asteroids. These bodies pose a particular challenge for navigation systems because their dynamical environment is complex: a space probe must react quickly to the gravitational perturbations present, otherwise its safety could be compromised. Since communication delays with Earth can often reach several tens of minutes, software allowing greater operational autonomy must be developed for this type of mission. This thesis presents an autonomous navigation system that determines the position and velocity of a satellite in orbit around an asteroid. It is an adaptive extended Kalman filter with three degrees of freedom. The proposed system relies on optical imaging to detect previously mapped landmarks, which may be craters, boulders, or any physical feature discernible by the camera. The research focuses on the state-estimation techniques specific to autonomous navigation; appropriate software performing the image-processing functions is therefore assumed to exist. The main research contribution is the inclusion, at each estimation cycle, of a range measurement to improve navigation performance. An adaptive state estimator is required to process these measurements because their accuracy varies over time due to pointing error. Secondary contributions relate to the observability analysis of the system and to a sensitivity analysis of six main design parameters. Simulation results show that adding one range measurement per update cycle yields a significant improvement in navigation performance: it reduces the estimation error and the periods of non-observability, and counters the dilution of precision of the measurements. The sensitivity analyses confirm the contribution of the range measurements to the overall reduction of the estimation error over a wide range of design parameters. They also indicate that the mapping error is a critical parameter for the performance of the navigation system developed. Keywords: state estimation, adaptive Kalman filter, optical navigation, lidar, asteroid, numerical simulations
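To illustrate the kind of update the thesis adds at each cycle, here is a single extended Kalman filter measurement update for one range observation to a known landmark. It is a toy sketch, not the thesis implementation; the fixed measurement variance stands in for the adaptive handling of pointing error, and all numbers are invented.

    # Illustrative EKF measurement update for a range observation to a known landmark.
    import numpy as np

    def ekf_range_update(x, P, landmark, z_range, sigma_range):
        """x: 3-vector position estimate, P: 3x3 covariance, z_range: measured distance."""
        diff = x - landmark
        r_pred = np.linalg.norm(diff)          # predicted range
        H = (diff / r_pred).reshape(1, 3)      # Jacobian of the range w.r.t. position
        R = np.array([[sigma_range**2]])
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x + (K @ np.array([z_range - r_pred])).ravel()
        P_new = (np.eye(3) - K @ H) @ P
        return x_new, P_new

    x = np.array([1000.0, 200.0, -50.0])       # m, relative to the asteroid centre (toy values)
    P = np.diag([50.0**2, 50.0**2, 50.0**2])
    x, P = ekf_range_update(x, P, landmark=np.zeros(3), z_range=1015.0, sigma_range=5.0)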
NASA Astrophysics Data System (ADS)
Goienetxea Uriarte, A.; Ruiz Zúñiga, E.; Urenda Moris, M.; Ng, A. H. C.
2015-05-01
Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. Improving healthcare systems involves dealing with trade-off optimal solutions that take a large number of variables and objectives into consideration. Complementing DES with multi-objective optimization (simulation-based multi-objective optimization, SMO) creates a superior basis for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process.
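As a much-simplified analogue of combining DES with multi-objective search, the sketch below runs a tiny single-stream queue simulation, scans staffing levels, and keeps the non-dominated (staff, mean wait) pairs. The arrival and service rates are hypothetical and the model is far cruder than the Swedish ED model described in the paper.

    # Toy DES of a multi-server queue, scanned over staffing levels, with a simple
    # Pareto filter over (staff cost, mean waiting time). Illustrative numbers only.
    import heapq, random

    def simulate_ed(servers, n_patients=5000, arrival_rate=1.0, mean_service=3.0, seed=1):
        random.seed(seed)
        free_at = [0.0] * servers                  # min-heap of times each server is next free
        t, waits = 0.0, []
        for _ in range(n_patients):
            t += random.expovariate(arrival_rate)                  # next patient arrives
            start = max(t, free_at[0])                             # earliest available server
            waits.append(start - t)
            heapq.heapreplace(free_at, start + random.expovariate(1.0 / mean_service))
        return sum(waits) / len(waits)

    results = [(c, simulate_ed(c)) for c in range(4, 11)]          # (staff level, mean wait)
    pareto = [p for p in results
              if not any(q[0] <= p[0] and q[1] < p[1] for q in results)]
    print(pareto)      # the trade-off curve offered to the decision makers, in miniature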
NASA Astrophysics Data System (ADS)
Corbeil Therrien, Audrey
Positron emission tomography (PET) is a valuable tool in preclinical research and medical diagnosis. This technique produces a quantitative image of specific metabolic functions by detecting annihilation photons. Detection relies on two components: a scintillator first converts the energy of the 511 keV photon into photons in the visible spectrum, and a photodetector then converts the light into an electrical signal. Recently, single-photon avalanche diodes (SPADs) arranged in arrays have attracted much interest for PET. These arrays form sensitive, robust and compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and of the readout electronics must be optimized to reach optimal PET performance. Optimizing the array quickly becomes difficult, because the various parameters interact in complex ways with the avalanche and noise-generation processes. Moreover, readout electronics for SPAD arrays are still rudimentary, and it would be profitable to analyse different readout strategies. The most economical way to address this question is to use a simulator to converge toward the configuration giving the best performance. This thesis presents the development of such a simulator. It models the behaviour of a SPAD array based on semiconductor physics equations and probabilistic models, and includes the three main noise sources: thermal noise (dark counts), correlated afterpulsing and optical crosstalk. The simulator also makes it possible to test and compare new readout approaches better suited to this type of detector. Ultimately, the simulator aims to quantify the impact of the photodetector parameters on the energy and timing resolution, and thus to optimize the performance of the SPAD array. For example, increasing the active-area ratio improves performance, but only up to a point: other phenomena linked to the active area, such as thermal noise, then degrade the result, and the simulator makes it possible to find a compromise between these two extremes. Simulations with the initial parameters show a detection efficiency of 16.7%, an energy resolution of 14.2% FWHM and a timing resolution of 0.478 ns FWHM. Finally, although the proposed simulator targets PET, it can be adapted to other applications by changing the photon source and the performance objectives. Keywords: photodetectors, single-photon avalanche diodes, semiconductors, positron emission tomography, simulations, modelling, single-photon detection, scintillators, quenching circuit, SPAD, SiPM, Geiger-mode avalanche photodiodes
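A toy Monte Carlo of two of the noise sources named above (dark counts and optical crosstalk) over a single gate is sketched here. The array size, dark count rate, gate length and crosstalk probability are illustrative placeholders, not values from the thesis.

    # Toy Monte Carlo of dark counts and optical crosstalk in a SPAD array over one gate.
    import numpy as np
    rng = np.random.default_rng(0)

    def noisy_fired_cells(n_cells=1024, dark_rate_hz=50e3, gate_s=100e-9, p_crosstalk=0.03):
        # Thermal (dark) avalanches: each cell fires with Poisson probability within the gate.
        p_dark = 1.0 - np.exp(-dark_rate_hz * gate_s)
        fired = rng.random(n_cells) < p_dark
        # Optical crosstalk: each fired cell may trigger its right-hand neighbour.
        triggered = np.zeros(n_cells, dtype=bool)
        triggered[1:] = fired[:-1] & (rng.random(n_cells - 1) < p_crosstalk)
        return fired | triggered

    counts = [noisy_fired_cells().sum() for _ in range(1000)]
    print(np.mean(counts))   # average number of spurious fired cells per gate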
Simulations of the OzDES AGN reverberation mapping project
King, Anthea L.; Martini, Paul; Davis, Tamara M.; ...
2015-08-26
As part of the Australian spectroscopic dark energy survey (OzDES) we are carrying out a large-scale reverberation mapping study of ~500 quasars over five years in the 30 deg² area of the Dark Energy Survey (DES) supernova fields. These quasars have redshifts ranging up to 4 and have apparent AB magnitudes between 16.8 mag < r < 22.5 mag. The aim of the survey is to measure time lags between fluctuations in the quasar continuum and broad emission-line fluxes of individual objects in order to measure black hole masses for a broad range of active galactic nuclei (AGN) and constrain the radius–luminosity (R–L) relationship. Here we investigate the expected efficiency of the OzDES reverberation mapping campaign and its possible extensions. We expect to recover lags for ~35–45% of the quasars. AGN with shorter lags and greater variability are more likely to yield a lag measurement, and objects with lags ≲6 months or ~1 yr are expected to be recovered the most accurately. The baseline OzDES reverberation mapping campaign is predicted to produce an unbiased measurement of the R–L relationship parameters for Hβ, MgIIλ2798, and C IVλ1549. As a result, extending the baseline survey by either increasing the spectroscopic cadence, extending the survey season, or improving the emission-line flux measurement accuracy will significantly improve the R–L parameter constraints for all broad emission lines.
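To show what "recovering a lag" means in practice, the following sketch applies an interpolated cross-correlation, a common choice in reverberation mapping, to a toy continuum and a delayed copy of it. This is an illustration only, not the OzDES lag-recovery pipeline.

    # Sketch of lag recovery by interpolated cross-correlation (illustrative only).
    import numpy as np

    def iccf_lag(t_cont, f_cont, t_line, f_line, lags):
        r = []
        for lag in lags:
            # Shift the line curve back by the trial lag and interpolate the continuum onto it.
            f_interp = np.interp(t_line - lag, t_cont, f_cont)
            r.append(np.corrcoef(f_interp, f_line)[0, 1])
        return lags[int(np.argmax(r))], np.array(r)

    # Toy data: a noisy continuum plus a copy of it delayed by 30 days.
    t = np.arange(0.0, 500.0, 7.0)                       # weekly cadence, in days
    cont = np.sin(t / 40.0) + 0.1 * np.random.randn(t.size)
    line = np.interp(t - 30.0, t, cont)                  # delayed copy of the continuum
    best_lag, r = iccf_lag(t, cont, t, line, lags=np.arange(0.0, 100.0, 1.0))
    print(best_lag)                                      # should recover ~30 days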
Morais, Eduarda S; Mendonça, Patrícia V; Coelho, Jorge F J; Freire, Mara G; Freire, Carmen S R; Coutinho, João A P; Silvestre, Armando J D
2018-02-22
This work contributes to the development of integrated lignocellulosic-based biorefineries by the pioneering exploitation of hardwood xylans through solubilization and extraction in deep eutectic solvents (DES). DES formed by choline chloride and urea or acetic acid were initially evaluated as solvents for commercial xylan as a model compound. The effects of temperature, molar ratio, and concentration of the DES aqueous solutions were evaluated and optimized by using a response surface methodology. The results obtained demonstrated the potential of these solvents, with 328.23 g L⁻¹ of xylan solubilized using 66.7 wt% DES in water at 80 °C. Furthermore, xylans could be recovered by precipitation from the DES aqueous media in yields above 90%. The detailed characterization of the xylans recovered after solubilization in aqueous DES demonstrated that 4-O-methyl groups were eliminated from the 4-O-methylglucuronic acid moieties and that uronic acids (15%) were cleaved from the xylan backbone during this process. The similar Mw values of both pristine and recovered xylans confirmed the success of the reported procedure. DES recovery over four additional extraction cycles was also demonstrated. Finally, the successful extraction of xylans from Eucalyptus globulus wood by using aqueous solutions of DES was demonstrated. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
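The generic response-surface step used in studies like this one can be sketched as follows: fit a full quadratic model in the coded factors (here temperature, molar ratio and DES concentration) by least squares and read off the predicted optimum. The data and design below are hypothetical (a full three-level factorial stands in for the published Box-Behnken design), so this is only an illustration of the method.

    # Generic response-surface sketch with hypothetical data (not the published design).
    import numpy as np
    from itertools import product

    def quadratic_design_matrix(X):
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

    X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)   # stand-in design
    rng = np.random.default_rng(3)
    y = 300 - 20*X[:, 0]**2 - 10*X[:, 1]**2 - 5*X[:, 2]**2 + 8*X[:, 0] + rng.normal(0, 2, len(X))

    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
    pred = quadratic_design_matrix(grid) @ beta
    print(grid[np.argmax(pred)], pred.max())   # coded factor levels giving the best predicted response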
Li, Guizhen; Wang, Wei; Wang, Qian; Zhu, Tao
2016-02-01
Deep eutectic solvents (DES) were synthesized with choline chloride (ChCl), and DES-modified molecularly imprinted polymers (DES-MIPs), DES-modified non-imprinted polymers (DES-NIPs, without template), MIPs and NIPs were prepared by an identical procedure. Fourier transform infrared spectroscopy (FT-IR) and field emission scanning electron microscopy (FE-SEM) were used to characterize the obtained polymers. Rebinding experiments and solid-phase extraction (SPE) were used to demonstrate the highly selective adsorption properties of the polymers. A Box-Behnken design (BBD) with three factors was used to optimize the extraction of chlorogenic acid (CA) from honeysuckles. The optimum extraction conditions were found to be an ultrasonic time of 20 min, an ethanol volume fraction of 60% and a liquid-to-material ratio of 15 mL g⁻¹. Under these conditions, the mean extraction yield of CA was 12.57 mg g⁻¹, in good agreement with the value predicted by the BBD model. Purification of hawthorn extract was achieved by an SPE process, and SPE recoveries of CA were 72.56, 64.79, 69.34 and 60.08% for DES-MIPs, DES-NIPs, MIPs and NIPs, respectively. The results showed that DES-MIPs are a promising functional adsorption material for the purification of bioactive compounds. © The Author 2015. Published by Oxford University Press. All rights reserved.
Optimization of chemical heat pump operation: synchronization and control of the process
NASA Astrophysics Data System (ADS)
Cassou, T.; Amouroux, M.; Labat, P.
1995-04-01
We present the mathematical model of a chemical heat pump pilot plant and the corresponding numerical simulator. This simulator can determine the influence of various parameters, whether related to heat exchange or to chemical kinetics, and can also simulate the main operating modes. The objective is optimal management of the process: an optimized control of the system allows, through management of the different phases, a continuous and stable production of the power delivered by the machine.
Fabrication of conical parts by flexible injection for aerospace applications
NASA Astrophysics Data System (ADS)
Shebib Loiselle, Vincent
Composite materials have been used in the nozzles of space engines since the 1960s. Today, the advent of three-dimensional fabrics brings an innovative solution to the delamination problem that limited the mechanical properties of these composites. The use of these fabrics, however, requires the design of better-suited manufacturing processes. A new method for manufacturing composite parts for aerospace applications was studied throughout this work. It applies the principles of flexible injection (the Polyflex process) to the fabrication of thick conical parts. The validation part to be manufactured is a scaled-down model of a space-engine nozzle part, composed of a three-dimensional carbon-fibre reinforcement and a phenolic resin. The success of the project is defined by several criteria on the compaction and wrinkling of the reinforcement and on the formation of porosities in the manufactured part. A large number of steps were necessary before two validation parts could be manufactured. First, to meet the criterion on reinforcement compaction, a characterization tool was designed and a compaction study was carried out to obtain the information needed to understand the deformation of an axisymmetric 3D reinforcement. Next, the injection principle for the part was defined for this new process. To validate the proposed concepts, the permeability of the fibrous reinforcement and the viscosity of the resin had to be characterized. Using these data, a series of flow simulations of the part injection were performed and an approximate filling time computed. After this step, the design of the nozzle mould was undertaken, supported by a mechanical simulation of its resistance to the manufacturing conditions. Several pieces of tooling required for manufacturing were also designed and installed in the new CGD (large-scale composites) laboratory. In parallel, several studies were carried out to understand the phenomena influencing the polymerization of the resin.
NASA Astrophysics Data System (ADS)
Tremblay, Jose-Philippe
Avionics systems have evolved continuously since the appearance of digital technologies at the turn of the 1960s. After passing through several development paradigms, these systems have followed the Integrated Modular Avionics (IMA) approach since the early 2000s. Unlike earlier methods, this approach is based on modular design, the sharing of generic resources among several systems, and more extensive use of multiplexed buses. Most of the concepts used by the IMA architecture, although already known in the field of distributed computing, represent a marked change from previous models in the avionics world. They add to the important constraints of classical avionics such as determinism, real time, certification and high reliability targets. The adoption of the IMA approach has triggered a revision of several aspects of the design, certification and implementation of an IMA system in order to take advantage of it. This revision, slowed by avionics constraints, is still under way and still offers opportunities to develop new tools, methods and models at all levels of the implementation process of an IMA system. In the context of proposing and validating a new IMA architecture for a generic network of sensors on board an aircraft, we identified aspects of the traditional approaches to building this type of architecture that could be improved. To remedy some of the identified shortcomings, we proposed a validation approach based on a reconfigurable hardware platform, as well as a new redundancy-management approach for reaching reliability targets. Unlike the more limited static tools that satisfy the needs of a federated architecture, our validation approach is specifically developed to facilitate the design of an IMA architecture. Within this thesis, three main axes of original contributions emerged from the work carried out according to the research objectives stated above. The first is the proposal of a hierarchical sensor-network architecture based on the model of the IEEE 1451 standard. This standard facilitates the integration of smart sensors and actuators into any control system through standardized, generic interfaces.
NASA Astrophysics Data System (ADS)
Taschereau, Richard
This thesis concerns permanent prostate implants. The two isotopes used, 103Pd and 125I, appear to produce the same clinical results: the former because of more effective radiation and the latter because of its longer half-life. The research uses the theoretical framework of microdosimetry and Monte Carlo simulations. It proposes using the ejection spectrum in the calculation of effectiveness; this change lowers the relative effectiveness of 103Pd from 10% to 5%. It then shows that it is possible to improve the effectiveness of 125I radiation by exploiting the characteristic X-rays of the capsule. An improved source made of molybdenum and yttrium is given as an example; it provides radiation that is 5-7% more effective, outperforming the two existing sources. The applications are not limited to prostate treatment; the treatment of ocular melanoma and endovascular brachytherapy could also benefit.
NASA Astrophysics Data System (ADS)
Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio
2014-06-01
Data assimilation techniques not only enhance model simulations and forecasts, they also provide a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed, physically based hydrological model CATHY (CATchment HYdrology) and the study site is the Des Anglais watershed, a 690 km² river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices; these diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observations during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the one-day model forecast and the observations at both the outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
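The analysis step underlying this kind of assimilation can be sketched as a stochastic ensemble Kalman filter update with perturbed observations. The state and observation operators below are toy-sized placeholders, not the CATHY state vector or observation setup.

    # Stochastic EnKF analysis step with perturbed observations (generic sketch).
    import numpy as np
    rng = np.random.default_rng(42)

    def enkf_update(X, H, y, R):
        """X: (n_state, n_ens) forecast ensemble, H: obs operator, y: obs vector, R: obs covariance."""
        n_ens = X.shape[1]
        Xm = X - X.mean(axis=1, keepdims=True)
        HX = H @ X
        HXm = HX - HX.mean(axis=1, keepdims=True)
        Pxy = Xm @ HXm.T / (n_ens - 1)                 # state/observation cross-covariance
        Pyy = HXm @ HXm.T / (n_ens - 1) + R            # innovation covariance
        K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T   # perturbed obs
        return X + K @ (Y - HX)

    X = rng.normal(10.0, 2.0, size=(5, 40))            # e.g. storage / soil moisture states
    H = np.zeros((1, 5)); H[0, 0] = 1.0                # observe the first state (e.g. outlet flow)
    Xa = enkf_update(X, H, y=np.array([12.0]), R=np.array([[0.5]]))
    print(Xa.mean(axis=1))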
Luque, E.
2016-02-09
Here, the Dark Energy Survey (DES) is a 5000 sq. degree survey in the southern hemisphere, which is rapidly reducing the existing north-south asymmetry in the census of MW satellites and other stellar substructure. We use the first-year DES data down to previously unprobed photometric depths to search for stellar systems in the Galactic halo, therefore complementing the previous analysis of the same data carried out by our group earlier this year. Our search is based on a matched filter algorithm that produces stellar density maps consistent with stellar population models of various ages, metallicities, and distances over the survey area. The most conspicuous density peaks in these maps have been identified automatically and ranked according to their significance and recurrence for different input models. We report the discovery of one additional stellar system besides those previously found by several authors using the same first-year DES data. The object is compact, and consistent with being dominated by an old and metal-poor population. DES J0034-4902 is found at high significance and appears in the DES images as a compact concentration of faint blue point sources at ~87 kpc.
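In highly simplified form, the matched-filter idea amounts to weighting each star by the ratio of a trial population model to the field background in colour-magnitude space, then binning the weights on the sky. The sketch below is only a caricature of that idea; the population model, catalogue columns and binning are all hypothetical, and the real pipeline is far more careful.

    # Highly simplified matched-filter density map (illustrative only).
    import numpy as np

    def matched_filter_map(ra, dec, color, mag, signal_cmd, background_cmd, bins_sky):
        # signal_cmd / background_cmd: functions returning densities in (color, mag) space
        # for the trial stellar population (age, metallicity, distance) and for the field.
        weights = signal_cmd(color, mag) / np.maximum(background_cmd(color, mag), 1e-12)
        density, xedges, yedges = np.histogram2d(ra, dec, bins=bins_sky, weights=weights)
        return density

    # Toy example: an old, metal-poor population idealized as a narrow ridge in colour.
    signal = lambda c, m: np.exp(-0.5 * ((c - 0.5) / 0.05) ** 2)
    background = lambda c, m: np.ones_like(c)
    n = 20000
    ra, dec = np.random.uniform(0, 2, n), np.random.uniform(0, 2, n)
    color, mag = np.random.normal(0.7, 0.3, n), np.random.uniform(20, 24, n)
    dmap = matched_filter_map(ra, dec, color, mag, signal, background, bins_sky=40)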
In a classical model of latent hormonal carcinogenesis, exposing mice on neonatal days 1-5 to the synthetic estrogen diethylstilbestrol (DES; 1 mg/kg/day) results in high incidence of uterine carcinoma. However, the biological mechanisms driving DES-induced carcinogenesis remain ...
Keller, C; Nanda, R; Shannon, R L; Amit, A; Kaplan, A L
2001-01-01
Diethylstilbestrol (DES) was used widely in the late 1940s in an attempt to prevent adverse pregnancy outcomes. In 1971 the US Food and Drug Administration proscribed its use for pregnancy support secondary to its association with clear cell adenocarcinoma of the vagina. Several studies in animal models demonstrated an association with endometrial cancer among offspring following in utero DES exposure. To date, there is only one case report of endometrial cancer in women exposed to DES in utero. We present the first case, to our knowledge, of a woman exposed to DES in utero who presented with double primaries of clear cell cancer of the vagina concomitant with endometrial cancer.
NASA Astrophysics Data System (ADS)
Basri, M.; Mawengkang, H.; Zamzami, E. M.
2018-03-01
Limited local storage is one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are hidden in 8 random images using the Least Significant Bit (LSB) algorithm, from which the DES cipher can later be extracted and merged back into one.
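The two steps described can be sketched as follows: DES encryption (here done with the pycryptodome package, which is an assumption, since the paper does not name an implementation) followed by hiding the ciphertext in the least significant bits of an 8-bit image array.

    # Sketch of DES encryption plus LSB embedding (illustrative; library choice is assumed).
    import numpy as np
    from Crypto.Cipher import DES          # pycryptodome

    def des_encrypt(plaintext: bytes, key: bytes) -> bytes:
        pad = 8 - len(plaintext) % 8                          # pad to the 8-byte block size
        cipher = DES.new(key, DES.MODE_ECB)
        return cipher.encrypt(plaintext + bytes([pad]) * pad)

    def lsb_embed(image: np.ndarray, payload: bytes) -> np.ndarray:
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = image.flatten()                                # copy of the cover image
        if bits.size > flat.size:
            raise ValueError("cover image too small for payload")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSB only
        return flat.reshape(image.shape)

    cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in cover image
    ct = des_encrypt(b"data stored on the cloud", key=b"8bytekey")
    stego = lsb_embed(cover, ct)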
A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey
Tie, S. S.; Martini, P.; Mudd, D.; ...
2017-02-15
In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of Vista Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete up to this magnitude limit.
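A multi-band χ² constancy test of the kind referred to above can be sketched as follows: for each band, compute χ² of the light curve about its inverse-variance weighted mean, sum over bands, and convert to the probability that a constant source would yield a value this large. The data and the threshold are illustrative, not the DES Year 1 selection cuts.

    # Sketch of a multi-band chi^2 constancy test (illustrative data and threshold).
    import numpy as np
    from scipy.stats import chi2

    def constancy_probability(fluxes_by_band, errors_by_band):
        chi2_tot, dof = 0.0, 0
        for f, e in zip(fluxes_by_band, errors_by_band):
            w = 1.0 / e**2
            mean = np.sum(w * f) / np.sum(w)               # inverse-variance weighted mean
            chi2_tot += np.sum(((f - mean) / e) ** 2)
            dof += f.size - 1
        return chi2.sf(chi2_tot, dof)                      # P(a constant source gives chi^2 this large)

    rng = np.random.default_rng(7)
    griz_fluxes = [rng.normal(100, 5, 20) + 20 * np.sin(np.arange(20)) for _ in range(4)]
    griz_errors = [np.full(20, 5.0) for _ in range(4)]
    p_const = constancy_probability(griz_fluxes, griz_errors)
    is_variable_candidate = p_const < 1e-4                 # illustrative threshold
    print(p_const, is_variable_candidate)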
NASA Astrophysics Data System (ADS)
Garcia, M. H.
2016-12-01
Modeling Sediment Transport Using a Lagrangian Particle Tracking Algorithm Coupled with High-Resolution Large Eddy Simulations: a Critical Analysis of Model Limits and Sensitivity. Som Dutta (1), Paul Fischer (2), Marcelo H. Garcia (1); (1) Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801; (2) Department of Computer Science and Department of MechSE, University of Illinois at Urbana-Champaign, Urbana, IL 61801. Since the seminal work of Niño and Garcia [1994], one-way coupled Lagrangian particle tracking has been used extensively for modeling sediment transport. Over time, the Lagrangian particle tracking method has been coupled with Eulerian flow simulations ranging from Reynolds-Averaged Navier-Stokes (RANS) based models to Detached Eddy Simulations (DES) [Escauriaza and Sotiropoulos, 2011]. The advent of high-performance computing (HPC) platforms and faster algorithms has resulted in the work of Dutta et al. [2016], where Lagrangian particle tracking was coupled with high-resolution Large Eddy Simulations (LES) to model the complex and highly non-linear Bulle effect at diversions. Despite all the advancements in using Lagrangian particle tracking, there has not been a study that looks in detail at the limits of the model in the context of sediment transport and also analyzes the sensitivity of the various force formulations in the force-balance equation of the particles. Niño and Garcia [1994] did a similar analysis, but the vertical flow velocity distribution was modeled as the log law. The current study extends the analysis by modeling the flow using high-resolution LES at a Reynolds number comparable to the experiments of Niño et al. [1994]. References: Dutta et al. (2016), Large Eddy Simulation (LES) of flow and bedload transport at an idealized 90-degree diversion: insight into Bulle-Effect, River Flow 2016 (Constantinescu, Garcia & Hanes, Eds), Taylor & Francis Group, London, 101-109. Escauriaza and Sotiropoulos (2011), Lagrangian model of bed-load transport in turbulent junction flows, Journal of Fluid Mechanics, 666, 36-76. Niño and García (1994), Gravel saltation: 2. Modeling, Water Resources Research, 30(6), 1915-1924. Niño et al. (1994), Gravel saltation: 1. Experiments, Water Resources Research, 30(6), 1907-1914.
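A minimal one-way-coupled Lagrangian tracker of the kind this line of work builds on is sketched here: linear drag toward the local fluid velocity plus submerged weight, integrated with explicit Euler steps and a crude rebound at the bed. The velocity profile and coefficients are placeholders, not the LES coupling or the saltation closures of the cited studies.

    # Minimal one-way-coupled Lagrangian particle tracker (illustrative only).
    import numpy as np

    def fluid_velocity(x):
        # Placeholder log-law-like streamwise profile, zero vertical velocity.
        z = max(x[1], 1e-4)
        return np.array([0.4 * np.log(z / 1e-4), 0.0])

    def track_particle(x0, v0, dt=1e-3, n_steps=2000, tau_p=0.05,
                       g_reduced=9.81 * (2.65 - 1) / 2.65):
        x, v = np.array(x0, float), np.array(v0, float)
        path = []
        for _ in range(n_steps):
            drag = (fluid_velocity(x) - v) / tau_p          # linear (Stokes-like) drag
            gravity = np.array([0.0, -g_reduced])           # submerged weight per unit mass
            v = v + dt * (drag + gravity)
            x = x + dt * v
            if x[1] < 0.0:                                  # crude rebound at the bed
                x[1], v[1] = 0.0, -0.5 * v[1]
            path.append(x.copy())
        return np.array(path)

    trajectory = track_particle(x0=[0.0, 0.01], v0=[0.0, 0.0])
    print(trajectory[-1])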
NASA Astrophysics Data System (ADS)
Lasri, Abdel-Halim
In this design-based research project, we designed, developed and field-tested an interactive simulator to support the learning of the probabilistic laws involved in Mendelian genetics. This computer-based environment allows students to carry out simulated experiments, using statistics and probability as mathematical tools to model the transmission of hereditary traits. The didactic approach is essentially oriented toward the quantitative methods involved in experimenting with hereditary factors. By incorporating Nonnon's (1986) "cognitive lens" principle into the simulator, the student is placed in a situation where he or she can synchronize the perception of the iconic (concrete) and symbolic (abstract) representations of Mendel's probabilistic laws. Using this environment, students are led to identify the hereditary trait(s) of the parents to be crossed, to predict the probable phenotypic frequencies of the offspring of the cross, to observe the statistical results and their fluctuation on the frequency histogram, to compare these results with the anticipated predictions, to interpret the data, and to select further experiments accordingly. The steps of the inductive approach are emphasized from the beginning to the end of the proposed activities. The simulator and its accompanying documents were developed from some twenty guiding principles and an action model, derived from psychological, didactic and technological considerations. The thesis describes the structure of the different parts of the simulator. Its architecture is built around a central unit, the "Principale", whose links and ramifications with the other units give the whole simulator its flexibility and ease of use. The "Genetique" simulator, in prototype form, and its accompanying documentation underwent two trials: one functional, one empirical. The functional trial, conducted with a group of expert teachers, identified shortcomings in the material so that the necessary adjustments could be made. The empirical trial, conducted with a group of eleven (11) secondary-school students, aimed on the one hand to test the usability of the "Genetique" simulator and its accompanying documents, and on the other hand to verify whether the participants derived pedagogical benefits from the environment. Three techniques were used to collect the empirical trial data. The analysis of the results allowed a critical review of the concrete products of this research and led to the necessary modifications of both the simulator and the accompanying documents. The analysis also showed that the interactive simulator supports an inductive approach enabling students to appropriate Mendel's probabilistic laws. Finally, the conclusion outlines avenues for future research, particularly for studies aimed at developing simulators that integrate concrete and abstract representations presented in real time.
The diskettes of the "Genetique" simulator and the accompanying documents are appended to this thesis.
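A small example of the kind of simulated experiment the environment supports, a monohybrid cross compared against the expected 3:1 Mendelian ratio (this is an illustration only, not the "Genetique" software), is:

```python
# Small sketch of a simulated monohybrid cross Aa x Aa, comparing observed phenotype
# frequencies with the expected 3:1 Mendelian ratio.
import random

def cross(parent1="Aa", parent2="Aa", n=400):
    offspring = ["".join(sorted(random.choice(parent1) + random.choice(parent2)))
                 for _ in range(n)]
    dominant = sum("A" in geno for geno in offspring)   # phenotype shows the dominant trait
    return dominant / n, 0.75                            # observed vs expected frequency

obs, exp = cross()
print(f"observed dominant-phenotype frequency: {obs:.2f} (expected {exp})")
```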
Dry Eye Syndrome After Proton Therapy of Ocular Melanomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@gmail.com; Maschi, Celia; Lanteri, Sara
Purpose: To investigate whether proton therapy (PT) can be performed safely in superotemporal melanomas, in terms of the risk of dry-eye syndrome (DES). Methods and Materials: Tumor location, DES grade, and dose to ocular structures were analyzed in patients undergoing PT (2005-2015) with 52 Gy (prescribed dose, not accounting for the biologic effectiveness correction of 1.1). Prognostic factors of DES and severe DES (sDES, grades 2-3) were determined with Cox proportional hazard models. Visual acuity deterioration and enucleation rates were compared by sDES and tumor location. Results: Median follow-up was 44 months (interquartile range, 18-60 months). Of 853 patients (mean age, 64 years), 30.5% had temporal and 11.4% superotemporal tumors. Five-year incidence of DES and sDES was 23.0% (95% confidence interval [CI] 19.0%-27.7%) and 10.9% (95% CI 8.2%-14.4%), respectively. Multivariable analysis showed a higher risk for sDES in superotemporal (hazard ratio [HR] 5.82, 95% CI 2.72-12.45) and temporal tumors (HR 2.63, 95% CI 1.28-5.42), age ≥70 years (HR 1.90, 95% CI 1.09-3.32), distance to optic disk ≥5 mm (HR 2.71, 95% CI 1.52-4.84), ≥35% of retina receiving 12 Gy (HR 2.98, 95% CI 1.54-5.77), and eyelid rim irradiation (HR 2.68, 95% CI 1.49-4.80). The same risk factors were found for DES. Visual acuity deteriorated more in patients with sDES (0.86 ± 1.10 vs 0.64 ± 0.98 logMAR, P=.034) but did not differ between superotemporal/temporal and other locations (P=.890). Enucleation rates were independent of sDES (P=.707) and tumor location (P=.729). Conclusions: Severe DES was more frequent in superotemporal/temporal melanomas. The incidence of vision deterioration and enucleation was no higher in patients with superotemporal melanoma than in patients with tumors in other locations. Tumor location should not contraindicate PT.
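As a concrete illustration of the survival modeling used here, the sketch below fits a Cox proportional-hazards model with the lifelines Python package; the data frame and covariate names are invented placeholders, not the study data or its full covariate coding.

```python
# Hedged sketch of a Cox proportional-hazards fit of the kind described above.
# The toy data and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months": [12, 44, 60, 30, 18, 55, 24, 48, 36, 60],   # follow-up time [months]
    "sdes":   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],             # 1 = severe dry-eye syndrome
    "superotemporal": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0],     # tumor location indicator
    "age_ge_70":      [0, 1, 1, 0, 1, 0, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="sdes")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```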
Direct simulation of groundwater transit-time distributions using the reservoir theory
NASA Astrophysics Data System (ADS)
Etcheverry, David; Perrochet, Pierre
Groundwater transit times are of interest for the management of water resources, assessment of pollution from non-point sources, and quantitative dating of groundwaters by the use of environmental isotopes. The age of water is the time water has spent in an aquifer since it has entered the system, whereas the transit time is the age of water as it exits the system. Water at the outlet of an aquifer is a mixture of water elements with different transit times, as a consequence of the different flow-line lengths. In this paper, transit-time distributions are calculated by coupling two existing methods, the reservoir theory and a recent age-simulation method. Based on the derivation of the cumulative age distribution over the whole domain, the approach accounts for the whole hydrogeological framework. The method is tested using an analytical example and its applicability illustrated for a regional layered aquifer. Results show the asymmetry and multimodality of the transit-time distribution even in advection-only conditions, due to the aquifer geometry and to the velocity-field heterogeneity.
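The steady-state relation from the reservoir theory that underlies this coupling can be written, in one common textbook form (the notation here is illustrative and may differ from the paper's):

```latex
% Steady-state reservoir theory (schematic). V(t): cumulative volume of resident water
% with age <= t; Q_0: total steady flow rate through the aquifer; Phi(t): cumulative
% transit-time distribution at the outlet; phi(t) its density.
\[
  \frac{dV(t)}{dt} \;=\; Q_0\,\bigl[\,1-\Phi(t)\,\bigr]
  \qquad\Longrightarrow\qquad
  \phi(t) \;=\; -\,\frac{1}{Q_0}\,\frac{d^{2}V(t)}{dt^{2}} .
\]
```

Differentiating the cumulative resident-age distribution therefore yields the transit-time density directly, which is why the approach only requires the age field simulated over the whole domain.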
NASA Astrophysics Data System (ADS)
France, Lydéric; Demacon, Mickael; Gurenko, Andrey A.; Briot, Danielle
2016-09-01
The two main magmatic properties associated with explosive eruptions are high viscosity of silica-rich magmas and/or high volatile contents. The magmatic processes responsible for the genesis of such magmas are differentiation through crystallization and crustal contamination (or assimilation), as the latter has the potential to enhance crystallization and add volatiles to the initial budget. In the Chaîne des Puys series (French Massif Central), silica- and H2O-rich magmas were only emitted during the most recent eruptions (ca. 6-15 ka). Here, we use in situ measurements of oxygen isotopes in zircons from two of the main trachytic eruptions of the Chaîne des Puys to track the crustal contamination component in a sequence that was previously presented as an archetypal fractional crystallization series. Zircons from Sarcoui volcano and Puy de Dôme display homogeneous oxygen isotope compositions with δ18O = 5.6 ± 0.25‰ and 5.6 ± 0.3‰, respectively, and have therefore crystallized from homogeneous melts with δ18Omelt = 7.1 ± 0.3‰. Compared to mantle-derived melts resulting from pure fractional crystallization (δ18Odif.mant. = 6.4 ± 0.4‰), those δ18Omelt values are enriched in 18O and support a significant role of crustal contamination in the genesis of silica-rich melts in the Chaîne des Puys. Assimilation-fractional-crystallization models indicate that the degree of contamination was probably restricted to 5.5-9.5%, with the ratio of crystallization to assimilation rates (Rcrystallization/Rassimilation) varying between 8 and 14. The very strong intra-site homogeneity of the isotopic data shows that the magmas were well homogenized before eruption, and consequently that crustal contamination was not the trigger of the silica-rich eruptions in the Chaîne des Puys. The exceptionally strong inter-site homogeneity of the isotopic data reveals that Sarcoui volcano and Puy de Dôme were fed by a single large magma chamber. Our results, together with recent thermo-kinetic models and an experimental simulation (Martel et al., 2013), support the existence of a large (~6-15 km³), still partially molten mid-crustal reservoir (10-12 km deep) filled with silica-rich magma. [Figure captions: calculated δ18O of the trachytic melts that crystallized the analyzed zircons for Puy de Dôme, Sarcoui dome, and Sarcoui phreatomagmatic deposits, compared with the range for trachytes produced by pure fractional crystallization of mantle melts; the chemical differentiation trend of Chaîne des Puys magmas (data from Boivin et al., 2009) with the fractional crystallization model results (L1 after the first differentiation step, L2 after the second; Sarcoui trachytes marked by an X); supplementary figures S3.1-S3.2 on core-rim and domain-scale (cathodoluminescence dark versus bright, oscillatory versus sector zoning) zircon oxygen isotope variations, for which no systematic variation is observed.]
2015-01-01
...of the NMSG MORS M&S gaps determined that the lack of simulation interoperability constituted the priority gap to be filled in... terms of capability, an exploratory team (ET-027) was established within the NMSG to study simulation interoperability. ET-027 ... higher (that is, at the practical, dynamic and conceptual levels), as well as the relative automation of the development, of
Mandonnet, Emmanuel; Winkler, Peter A; Duffau, Hugues
2010-02-01
While the fundamental and clinical contribution of direct electrical stimulation (DES) of the brain is now well acknowledged, its advantages and limitations have not been re-evaluated for a long time. Here, we critically review exactly what DES can tell us about cerebral function. First, we show that DES is highly sensitive for detecting the cortical and axonal eloquent structures. Moreover, DES also provides a unique opportunity to study brain connectivity, since each area responsive to stimulation is in fact an input gate into a large-scale network rather than an isolated discrete functional site. DES, however, also has a limitation: its specificity is suboptimal. Indeed, DES may lead to interpretations that a structure is crucial because of the induction of a transient functional response when stimulated, whereas (1) this effect is caused by the backward spreading of the electro-stimulation along the network to an essential area and/or (2) the stimulated region can be functionally compensated owing to long-term brain plasticity mechanisms. In brief, although DES is still the gold standard for brain mapping, its combination with new methods such as perioperative neurofunctional imaging and biomathematical modeling is now mandatory, in order to clearly differentiate those networks that are actually indispensable to function from those that can be compensated.
Description and comparative evaluation of a proposed design for the low visibility approach study
DOT National Transportation Integrated Search
1985-10-01
This memorandum was prepared in support of the low visibility simulation study being conducted by the FAA as a basis for establishing the lowest RVR (runway visual range) required for safe, fail-passive auto landings in Category III weather. A des...
Disappearing and reappearing differences in drug-eluting stent use by race.
Federspiel, Jerome J; Stearns, Sally C; Reiter, Kristin L; Geissler, Kimberley H; Triplette, Matthew A; D'Arcy, Laura P; Sheridan, Brett C; Rossi, Joseph S
2013-04-01
Drug-eluting coronary stents (DES) rapidly dominated the marketplace in the United States after approval in 2003, but utilization rates were initially lower among African American patients. We assess whether racial differences persisted as DES diffused into practice. Medicare claims data were used to identify coronary stenting procedures among elderly patients with acute coronary syndromes (ACS). Regression models of the choice of DES versus bare metal stent controlled for demographics, ACS type, co-morbidities and hospital characteristics. Diffusion was assessed in the short run (2003-2004) and long run (2007), with the effect of race calculated to allow for time-varying effects. The sample included 381,887 Medicare beneficiaries treated with stent insertion; approximately 5% were African American. Initially (May 2003-February 2004), African American race was associated with lower DES use compared to other races (44.3% versus 46.5%, P < 0.01). Once DES usage was high in all patients (March-December 2004), differences were not significant (79.8% versus 80.3%, P = 0.45). Subsequent concerns regarding DES safety caused reductions in DES use, with African Americans having lower use than other racial groups in 2007 (63.1% versus 65.2%, P < 0.01). Racial disparities in DES use initially disappeared during a period of rapid diffusion and high usage rates; the reappearance of disparities in use by 2007 may reflect DES use tailored to unmeasured aspects of case mix and socio-economic status. Further work is needed to understand whether underlying differences in race reflect decisions regarding treatment appropriateness. © 2011 Blackwell Publishing Ltd.
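The stent-choice analysis described above is essentially a binary-outcome regression; the sketch below shows the general shape of such a model in statsmodels, on simulated placeholder data rather than Medicare claims, and without the study's hospital-level and ACS-type covariates.

```python
# Hedged sketch of a DES-vs-bare-metal-stent choice regression. Covariates, effect
# sizes and data are illustrative placeholders only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "african_american": rng.integers(0, 2, n),
    "age": rng.normal(75, 6, n),
    "period_2007": rng.integers(0, 2, n),
})
# Simulated outcome: probability of receiving a drug-eluting stent (DES)
logit_p = 0.8 - 0.1 * df["african_american"] - 0.3 * df["period_2007"]
df["des"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["african_american", "age", "period_2007"]])
model = sm.Logit(df["des"], X).fit(disp=False)
print(model.summary())
```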
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aerospatiale du Quebec) projects oriented toward the development of embedded approaches for damage detection in aeronautical structures. The originality of this thesis lies in the development and validation of a new method for detecting, quantifying and localizing a notch in a lap-joint structure through the propagation of vibration waves. The first part reviews the state of knowledge on damage identification in the context of Structural Health Monitoring (SHM), as well as lap-joint modeling. Chapter 3 develops the wave-propagation model of a lap joint damaged by a notch for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission line model (TLM) is built to represent a one-dimensional (1D) joint. This 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane incident wavefront perpendicular to the joint. A parametric identification method is then developed to allow both the calibration of the healthy lap-joint model and the detection and characterization of the notch located on the joint. This method is coupled with an algorithm that performs an exhaustive search of the entire parameter space. The technique extracts an uncertainty zone associated with the parameters of the optimal model. A sensitivity study of the identification is also carried out. Several measurement campaigns on 1D and 2D lap joints were performed, allowing the study of the repeatability of the results and of the variability of different damage cases. The results of this study first demonstrate that the proposed detection method is very effective and makes it possible to track damage progression. Very good notch quantification and localization results were obtained on the various joints tested (1D and 2D). The use of Lamb waves is expected to extend the range of validity of the method to smaller damage. This work primarily targets in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and more complex structures can also be considered. Keywords: lap joint, in-situ monitoring, damage localization and characterization
PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer
Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
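The classification stage described above can be illustrated with a boosted-decision-tree classifier scored by AUC; the sketch below uses scikit-learn on random placeholder features, since the SALT2/parametric/wavelet feature extraction is not reproduced here.

```python
# Hedged sketch of the classification stage: boosted decision trees on pre-extracted
# light-curve features, scored with the AUC. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, n_features = 2000, 20
X = rng.normal(size=(n, n_features))                                 # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)   # 1 = "type Ia"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
```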
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend - among other factors - on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. The central requirement is that VERDI serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers for data organization are shown through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in conducting volcanic risk assessment and management.
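Purely as an illustration of the kind of relational structure such a design implies (the actual VERDI schema is not given in the abstract, so all table and column names below are hypothetical), a minimal sketch might link event, monitoring, and exposure tables:

```python
# Hypothetical, minimal relational layout for a volcanic-risk database; not the VERDI schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volcanic_event (
    event_id INTEGER PRIMARY KEY,
    volcano TEXT, event_type TEXT, start_date TEXT, vei INTEGER
);
CREATE TABLE monitoring_record (
    record_id INTEGER PRIMARY KEY,
    event_id INTEGER REFERENCES volcanic_event(event_id),
    station TEXT, parameter TEXT, value REAL, recorded_at TEXT
);
CREATE TABLE exposure (
    zone_id INTEGER PRIMARY KEY,
    volcano TEXT, population INTEGER, buildings INTEGER
);
""")
# Placeholder row (values are illustrative only)
conn.execute("INSERT INTO volcanic_event VALUES (1, 'El Hierro', 'unrest', '2011-07-01', NULL)")
print(conn.execute("SELECT volcano, event_type FROM volcanic_event").fetchall())
```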
NASA Astrophysics Data System (ADS)
Quang Tran, Danh; Li, Jin; Xuan, Fuzhen; Xiao, Ting
2018-06-01
Dielectric elastomers (DEs) belong to a group of polymers that exhibit time-dependent deformation due to viscoelastic effects. In recent years, viscoelasticity has been accounted for in modeling in order to understand the complete electromechanical behavior of dielectric elastomer actuators (DEAs). In this paper, we investigate the actuation performance of a circular DEA under different equal and unequal biaxial pre-stretches, based on a nonlinear rheological model. The theoretical results are validated by experiments, which verify the electromechanical constitutive equation of the DEs. The viscoelastic mechanical characteristics are analyzed by simulation and experiment to describe the influence of frequency, voltage, pre-stretch, and waveform on the actuation response of the actuator. Our study indicates that a DEA with different equal or unequal biaxial pre-stretches exhibits different actuation performance when subjected to high voltage. Under an unequal biaxial pre-stretch, the DEA deforms unequally and shows different deformation abilities in the two directions. The relative creep strain of the DEA due to viscoelasticity can be reduced by increasing the pre-stretch ratio. A higher equal biaxial pre-stretch yields a larger deformation strain, improves the actuation response time, and reduces the drift of the equilibrium position in the dynamic response of the DEA when activated by step and periodic voltages, while increasing the frequency inhibits the output stretch amplitude. The results in this paper can provide theoretical guidance and application reference for the design and control of viscoelastic DEAs.
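The creep behavior referred to above can be illustrated with the simplest textbook viscoelastic element; the sketch below evaluates a Kelvin-Voigt creep response under a constant stress with illustrative parameter values, and is not the nonlinear rheological model used in the paper.

```python
# Textbook Kelvin-Voigt creep under constant stress: eps(t) = (sigma/E)*(1 - exp(-E*t/eta)).
# Parameter values are illustrative placeholders.
import numpy as np

E, eta = 0.5e6, 2.0e6     # modulus [Pa] and viscosity [Pa s]
sigma = 0.1e6             # constant applied stress, e.g. electrostatic (Maxwell) stress
t = np.linspace(0.0, 30.0, 301)

strain = sigma / E * (1.0 - np.exp(-E * t / eta))   # creep strain history
print(f"strain at t=0 s: {strain[0]:.3f}, strain at t=30 s: {strain[-1]:.3f}")
```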
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giannantonio, T.; et al.
Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.
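For orientation, the five two-point functions referred to here are, schematically, the three optical-survey correlations of the DES Collaboration et al. (2017) analysis plus the two cross-correlations with CMB lensing; the notation below is illustrative.

```latex
% Schematic list of the five two-point functions combined in an analysis of this kind:
% galaxy density delta_g, galaxy shear gamma, CMB lensing convergence kappa_CMB.
\[
  \bigl\{\,
  \langle \delta_g \delta_g \rangle,\;
  \langle \delta_g \gamma \rangle,\;
  \langle \gamma \gamma \rangle,\;
  \langle \delta_g \kappa_{\rm CMB} \rangle,\;
  \langle \gamma \kappa_{\rm CMB} \rangle
  \,\bigr\}
\]
```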
Is it beneficial to increase the provision of thrombolysis?-- a discrete-event simulation model.
Barton, M; McClean, S; Gillespie, J; Garg, L; Wilson, D; Fullerton, K
2012-07-01
Although thrombolysis has been licensed in the UK since 2003, it is still administered only to a small percentage of eligible patients. We consider the impact of investing in thrombolysis on acute stroke services, and the effect on quality of life. The concept is illustrated using data from the Northern Ireland Stroke Service. Retrospective study. We first present results of survival analysis utilizing length of stay (LOS) for discharge destinations, based on data from the Belfast City Hospital (BCH). None of these patients actually received thrombolysis, but from those who would have been eligible, we created two initial groups, the first representing a scenario where they received thrombolysis and the second comprising those who do not receive thrombolysis. On the basis of the survival analysis, we created several subgroups based on discharge destination. We then developed a discrete event simulation (DES) model, where each group is a patient pathway within the simulation. Coxian phase-type distributions were used to model the group LOS. Various scenarios were explored, focusing on the cost-effectiveness across hospital, community and social services had thrombolysis been administered to these patients, and on the possible improvement in quality of life should the proportion of patients who are administered thrombolysis be increased. Our aim in simulating various scenarios for this historical group of patients is to assess what the cost-effectiveness of thrombolysis would have been under different scenarios; from this we can infer the likely cost-effectiveness of future policies. The cost of thrombolysis is offset by reductions in hospital, community rehabilitation and institutional care costs, with a corresponding improvement in quality of life. Our model suggests that provision of thrombolysis would produce a moderate overall improvement to the service assuming current levels of funding.
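A central ingredient of the DES model above is sampling lengths of stay from Coxian phase-type distributions; a minimal sketch of such a sampler, with placeholder rates rather than the fitted Belfast City Hospital parameters, is shown below.

```python
# Hedged sketch of sampling a length of stay (LOS) from a Coxian phase-type distribution.
# Rates and exit probabilities are illustrative placeholders.
import numpy as np

def sample_coxian(rates, exit_probs, rng):
    """rates[i]: exponential rate of phase i; exit_probs[i]: probability of absorption
    (leaving the system) on completing phase i, otherwise move on to phase i+1."""
    los = 0.0
    for lam, p_exit in zip(rates, exit_probs):
        los += rng.exponential(1.0 / lam)
        if rng.random() < p_exit:
            break
    return los

rng = np.random.default_rng(7)
rates, exit_probs = [0.5, 0.2, 0.05], [0.6, 0.5, 1.0]   # last phase always absorbs
samples = [sample_coxian(rates, exit_probs, rng) for _ in range(10000)]
print("mean simulated LOS (days):", np.mean(samples))
```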
Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors
NASA Astrophysics Data System (ADS)
Seguin, Nicolas
2014-05-01
In view of the simulation of water flows in pressurized water reactors (PWR), many models are available in the literature and their complexity deeply depends on the required accuracy, see for instance [1]. A loss-of-coolant accident (LOCA) may occur when a pipe breaks. The coolant is composed of light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar); in case of LOCA it flashes and instantaneously becomes vapor. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose models that are accurate for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes each phase separately by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical model, these numerical schemes are also efficient in terms of CPU time. Eventually, simpler models can locally replace the more complex model in order to simplify the overall computation, using appropriate local error indicators developed in [5], without reducing the accuracy. References 1. Ishii, M., Hibiki, T., Thermo-fluid dynamics of two-phase flow, Springer, New York, 2006. 2. Gallouët, T., Hérard, J.-M., Seguin, N., Numerical modeling of two-phase flows using the two-fluid two-pressure approach, Math. Models Methods Appl. Sci., Vol. 14, 2004. 3. Seguin, N., Étude d'équations aux dérivées partielles hyperboliques en mécanique des fluides, Habilitation à diriger des recherches, UPMC-Paris 6, 2011. 4. Coquel, F., Hérard, J.-M., Saleh, K., Seguin, N., A Robust Entropy-Satisfying Finite Volume Scheme for the Isentropic Baer-Nunziato Model, ESAIM: Mathematical Modelling and Numerical Analysis, Vol. 48, 2013. 5. Mathis, H., Cancès, C., Godlewski, E., Seguin, N., Dynamic model adaptation for multiscale simulation of hyperbolic systems with relaxation, preprint, 2013.
Turbulent Flow Effects on the Biological Performance of Hydro-Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richmond, Marshall C.; Romero Gomez, Pedro DJ
2014-08-25
The hydro-turbine industry uses Computational Fluid Dynamics (CFD) tools to predict the flow conditions as part of the design process for new and rehabilitated turbine units. Typically the hydraulic design process uses steady-state simulations based on Reynolds-Averaged Navier-Stokes (RANS) formulations for turbulence modeling because these methods are computationally efficient and work well to predict averaged hydraulic performance, e.g. power output, efficiency, etc. However, in view of the increasing emphasis on environmental concerns, such as fish passage, the biological performance of hydro-turbines must also be considered in addition to hydraulic performance. This leads to the need to assess whether more realistic simulations of the turbine hydraulic environment (those that resolve unsteady turbulent eddies not captured in steady-state RANS computations) are needed to better predict the occurrence and extent of extreme flow conditions that could be important in the evaluation of fish injury and mortality risks. In the present work, we conduct unsteady, eddy-resolving CFD simulations of a Kaplan hydro-turbine at a normal operational discharge. The goal is to quantify the impact of turbulence conditions on both the hydraulic and biological performance of the unit. In order to achieve a high resolution of the incoming turbulent flow, the Detached Eddy Simulation (DES) turbulence model is used. These transient simulations are compared to RANS simulations to evaluate whether extreme hydraulic conditions are better captured with advanced eddy-resolving turbulence modeling techniques. The transient simulations of key quantities such as pressure and hydraulic shear flow that arise near the various components (e.g. wicket gates, stay vanes, runner blades) are then further analyzed to evaluate their impact on the statistics of the lowest absolute pressures (nadir pressures) and on the frequency of collisions that are known to cause mortal injury in fish passing through hydro-turbines.
Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis
NASA Technical Reports Server (NTRS)
Kopp, H.; Trettau, R.; Zolotar, B.
1984-01-01
The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.
Kazachkin, Dmitry; Nishimura, Yoshifumi; Irle, Stephan; Morokuma, Keiji; Vidic, Radisav D; Borguet, Eric
2008-08-05
The interaction of acetone with single-wall carbon nanotubes (SWCNTs) at low temperatures was studied by a combination of temperature programmed desorption (TPD) and dispersion-augmented density-functional-based tight binding (DFTB-D) theoretical simulations. On the basis of the results of the TPD study and theoretical simulations, the desorption peaks of acetone can be assigned to the following adsorption sites: (i) sites with energies of approximately 75 kJ mol⁻¹ (T_des ≈ 300 K): endohedral sites of small-diameter nanotubes (≈7.7 Å); (ii) sites with energies of 40-68 kJ mol⁻¹ (T_des ≈ 240 K): acetone adsorption on accessible interstitial and groove sites and on endohedral sites of larger nanotubes (≈14 Å); (iii) sites with energies of 25-42 kJ mol⁻¹ (T_des ≈ 140 K): acetone adsorption on the external walls of SWCNTs and multilayer adsorption. Oxidatively purified SWCNTs have limited access to endohedral sites due to the presence of oxygen functionalities. Oxygen functionalities can be removed by annealing to elevated temperature (900 K), opening access to endohedral sites of nanotubes. Nonpurified, as-received SWCNTs are characterized by limited access for acetone to endohedral sites even after annealing to elevated temperatures (900 K). Annealing of both purified and as-produced SWCNTs to high temperatures (1400 K) leads to reduced access for acetone molecules to endohedral sites of small nanotubes, probably due to defect self-healing and cap formation at the ends of SWCNTs. No chemical interaction between acetone and SWCNTs was detected in the low-temperature adsorption experiments. Theoretical simulations of acetone adsorption on finite pristine SWCNTs of different diameters suggest a clear relationship between the adsorption energy and the tube sidewall curvature. Adsorption of acetone is due to dispersion forces, with its C-O bond either parallel to the surface or with O pointing away from it. No significant charge transfer or polarization was found. Carbon black was used to model amorphous carbonaceous impurities present in as-produced SWCNTs. Desorption of acetone from carbon black revealed two peaks at approximately 140 and approximately 180-230 K, similar to two of the acetone desorption peaks from SWCNTs. The characteristic feature of acetone desorption from SWCNTs was the peak at approximately 300 K, which was not observed for carbon black. Care should be taken when assigning TPD peaks for molecules desorbing from carbon nanotubes, as amorphous carbon can interfere.
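A rough cross-check of the energy assignments above can be made with a first-order Redhead analysis; the attempt frequency and heating rate in the sketch below are assumed values (they are not stated in the abstract), so the numbers are only order-of-magnitude estimates.

```python
# First-order Redhead estimate relating a TPD peak temperature to a desorption energy.
# nu (attempt frequency) and beta (heating rate) are assumed, not taken from the paper.
import numpy as np

R = 8.314            # J mol^-1 K^-1
nu = 1e13            # assumed attempt frequency [s^-1]
beta = 1.0           # assumed heating rate [K s^-1]

def redhead_energy(T_peak):
    """Redhead (first-order) approximation: E = R*Tp*(ln(nu*Tp/beta) - 3.64)."""
    return R * T_peak * (np.log(nu * T_peak / beta) - 3.64)

for Tp in (140, 240, 300):
    print(f"T_des = {Tp} K  ->  E ~ {redhead_energy(Tp)/1e3:.0f} kJ/mol")
```

With these assumptions the estimates land near 36, 63, and 80 kJ/mol for the 140, 240, and 300 K peaks, broadly consistent with the ranges quoted above.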
airGR: a suite of lumped hydrological models in an R-package
NASA Astrophysics Data System (ADS)
Coron, Laurent; Perrin, Charles; Delaigue, Olivier; Andréassian, Vazken; Thirel, Guillaume
2016-04-01
Lumped hydrological models are useful and convenient tools for research, engineering and educational purposes. They provide catchment-scale representations of the precipitation-discharge relationship. Thanks to their limited data requirements, they can be easily implemented and run. With such models, it is possible to simulate a number of key hydrological processes over the catchment with limited structural and parametric complexity, typically evapotranspiration, runoff, underground losses, etc. The Hydrology Group at Irstea (Antony) has been developing a suite of rainfall-runoff models over the past 30 years with the main objectives of designing models as efficient as possible in terms of streamflow simulation, applicable to a wide range of catchments and having low data requirements. This resulted in a suite of models running at different time steps (from hourly to annual) applicable to various issues including water balance estimation, forecasting, simulation of impacts and scenario testing. Recently, Irstea has developed an easy-to-use R-package (R Core Team, 2015), called airGR, to make these models widely available. It includes: - the annual water balance model GR1A (Mouelhi et al., 2006), - the monthly GR2M model (Mouelhi, 2003), - three versions of the daily model, namely GR4J (Perrin et al., 2003), GR5J (Le Moine, 2008) and GR6J (Pushpalatha et al., 2011), - the hourly GR4H model (Mathevet, 2005), - a degree-day snow module, CemaNeige (Valéry et al., 2014). The airGR package has been designed to facilitate use by non-expert users and to allow the addition of evaluation criteria, models or calibration algorithms selected by the end-user. Each model core is coded in FORTRAN to ensure low computational time. The other package functions (i.e. mainly the calibration algorithm and the efficiency criteria) are coded in R. The package is already used for educational purposes. The presentation will detail the main functionalities of the package and present a case study application. References: - Le Moine, N. (2008), Le bassin versant de surface vu par le souterrain : une voie d'amélioration des performances et du réalisme des modèles pluie-débit ?, PhD thesis (in French), UPMC, Paris, France. - Mathevet, T. (2005), Quels modèles pluie-débit globaux pour le pas de temps horaire ? Développement empirique et comparaison de modèles sur un large échantillon de bassins versants, PhD thesis (in French), ENGREF - Cemagref (Antony), Paris, France. - Mouelhi, S. (2003), Vers une chaîne cohérente de modèles pluie-débit conceptuels globaux aux pas de temps pluriannuel, annuel, mensuel et journalier, PhD thesis (in French), ENGREF - Cemagref Antony, Paris, France. - Mouelhi, S., C. Michel, C. Perrin and V. Andréassian (2006), Stepwise development of a two-parameter monthly water balance model, Journal of Hydrology, 318(1-4), 200-214, doi:10.1016/j.jhydrol.2005.06.014. - Perrin, C., C. Michel and V. Andréassian (2003), Improvement of a parsimonious model for streamflow simulation, Journal of Hydrology, 279(1-4), 275-289, doi:10.1016/S0022-1694(03)00225-7. - Pushpalatha, R., C. Perrin, N. Le Moine, T. Mathevet and V. Andréassian (2011), A downward structural sensitivity analysis of hydrological models to improve low-flow simulation, Journal of Hydrology, 411(1-2), 66-76, doi:10.1016/j.jhydrol.2011.09.034. - R Core Team (2015), R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, URL https://www.R-project.org/ - Valéry, A., V. Andréassian and C. Perrin (2014), "As simple as possible but not simpler": What is useful in a temperature-based snow-accounting routine? Part 2 - Sensitivity analysis of the Cemaneige snow accounting routine on 380 catchments, Journal of Hydrology, doi:10.1016/j.jhydrol.2014.04.058.
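To illustrate the degree-day idea behind a snow routine such as CemaNeige, a toy sketch is given below; it is not the airGR/CemaNeige code (which is FORTRAN wrapped in R) and it omits the snowpack thermal state, spatial layers, and calibration.

```python
# Generic degree-day snow-accounting toy: melt proportional to the temperature excess
# over a threshold. Parameter values are illustrative only.
def degree_day_snow(precip, temp, melt_factor=3.0, t_melt=0.0):
    """precip [mm/d], temp [deg C]: daily series. Returns liquid water reaching the soil."""
    snowpack, liquid_out = 0.0, []
    for p, t in zip(precip, temp):
        snowfall = p if t <= t_melt else 0.0
        rainfall = p - snowfall
        snowpack += snowfall
        melt = min(snowpack, melt_factor * max(t - t_melt, 0.0))  # degree-day melt [mm/d]
        snowpack -= melt
        liquid_out.append(rainfall + melt)
    return liquid_out

print(degree_day_snow(precip=[10, 5, 0, 0, 8], temp=[-3, -1, 2, 5, 4]))
```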
Owuor, Theresa O; Reid, Michaela; Reschke, Lauren; Hagemann, Ian; Greco, Suellen; Modi, Zeel; Moley, Kelle H
2018-01-01
Thirty-eight percent of US adult women are obese, meaning that more children are now born to overweight and obese mothers, leading to an increased predisposition to several adult-onset diseases. To explore this phenomenon, we developed a maternal obesity animal model by feeding mice a high fat/high sugar (HF/HS) diet and assessed the effects of both maternal diet and offspring diet on the development of endometrial cancer (ECa). We show that maternal diet by itself did not lead to ECa initiation in wildtype offspring of the C57Bl/6J mouse strain. While offspring fed an HF/HS post-weaning diet showed poor metabolic health and decreased uterine weight (regardless of maternal diet), they did not develop ECa. We also investigated the effects of the maternal obesogenic diet on ECa development in a diethylstilbestrol (DES) carcinogenesis mouse model. All mice injected with DES had reproductive tract lesions including a decreased number of glands, condensed and hyalinized endometrial stroma, and fibrosis and increased collagen deposition that in some mice extended into the myometrium, resulting in extensive disruption and loss of the inner and outer muscular layers. Fifty percent of DES mice that were exposed to the maternal HF/HS diet developed several features indicative of the initial stages of carcinogenesis, including focal glandular and atypical endometrial hyperplasia, versus 0% of their Chow counterparts. There was an increase in phospho-Akt expression, a regulator of persistent proliferation in the endometrium, in DES mice exposed to the maternal HF/HS diet, and no difference in total Akt, phospho-PTEN and total PTEN expression. In summary, maternal HF/HS diet exposure induces endometrial hyperplasia and other precancerous phenotypes in mice treated with DES. This study suggests that maternal obesity alone is not sufficient for the development of ECa, but has an additive effect in the presence of a secondary insult such as DES.
Montgomery, Stephen M; Kusel, Jeanette; Nicholas, Richard; Adlard, Nicholas
2017-09-01
Patients with relapsing-remitting multiple sclerosis (RRMS) treated with disease modifying therapies (DMTs) who continue to experience disease activity may be considered for escalation therapies such as fingolimod, or may be considered for alemtuzumab. Previous economic modeling used Markov models; applying one alternative technique, discrete event simulation (DES) modeling, allows re-treatment and long-term adverse events (AEs) to be included in the analysis. A DES was adapted to model relapse-triggered re-treatment with alemtuzumab and the effect of including ongoing quality-adjusted life year (QALY) decrements for AEs that extend beyond previous 1-year Markov cycles. As the price to the NHS of fingolimod in the UK is unknown, due to a confidential patient access scheme (PAS), a variety of possible discounts were tested. The interaction of re-treatment assumptions for alemtuzumab with the possible discounts for fingolimod was tested to determine which DMT resulted in lower lifetime costs. The lifetime QALY results were derived from modeled treatment effect and short- and long-term AEs. Most permutations of fingolimod PAS discount and alemtuzumab re-treatment rate resulted in fingolimod being less costly than alemtuzumab. As the percentage of patients who are re-treated with alemtuzumab due to experiencing a relapse approaches 100% of those who relapse whilst on treatment, the discount required for fingolimod to be less costly drops below 5%. Consideration of treatment effect alone found alemtuzumab generated 0.2 more QALYs/patient; the inclusion of AEs up to a duration of 1 year reduced this advantage to only 0.14 QALYs/patient. Modeling AEs with a lifetime QALY decrement found that both DMTs generated very similar QALYs with the difference only 0.04 QALYs/patient. When the model captured alemtuzumab re-treatment and long-term AE decrements, it was found that fingolimod is cost-effective compared to alemtuzumab, assuming application of only a modest level of confidential PAS discount.
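The relapse-triggered re-treatment logic described above is naturally expressed as a discrete event simulation; the sketch below uses the simpy package with purely illustrative rates, probabilities, and costs, and is not the published model.

```python
# Hedged DES sketch: relapses arrive stochastically and each relapse while on treatment
# can trigger an alemtuzumab re-treatment course. All parameters are placeholders.
import random
import simpy

RELAPSE_RATE = 0.4          # relapses per patient-year (assumed)
P_RETREAT = 0.8             # probability a relapse triggers re-treatment (assumed)
COURSE_COST = 7_000.0       # placeholder cost per re-treatment course

def patient(env, stats):
    while env.now < 40.0:                                    # 40-year horizon
        yield env.timeout(random.expovariate(RELAPSE_RATE))  # time to next relapse
        stats["relapses"] += 1
        if random.random() < P_RETREAT:
            stats["retreatments"] += 1
            stats["cost"] += COURSE_COST

random.seed(0)
stats = {"relapses": 0, "retreatments": 0, "cost": 0.0}
env = simpy.Environment()
for _ in range(1000):
    env.process(patient(env, stats))
env.run(until=40.0)
print({k: round(v / 1000, 2) for k, v in stats.items()})     # per-patient averages
```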
NASA Astrophysics Data System (ADS)
Mahjoub, Mehdi
Solving the Boltzmann equation remains an important step in predicting the behavior of a nuclear reactor. Unfortunately, solving this equation is still a challenge for a complex geometry (a reactor) as well as for a simple geometry (a cell). Thus, to predict the behavior of a nuclear reactor, a two-step calculation scheme is required. The first step consists in obtaining the nuclear parameters of a reactor cell after a homogenization and condensation step. The second step is a diffusion calculation for the whole reactor, using the results of the first step and simplifying the reactor geometry to a set of homogeneous cells surrounded by reflector. During transients (accidents), these two steps are insufficient to predict the behavior of the reactor. Since solving the time-dependent form of the Boltzmann equation remains a major challenge for all types of geometries, another calculation scheme is needed. To circumvent this difficulty, the adiabatic hypothesis is used. It takes the form of a four-step calculation scheme. The first and second steps remain the same for nominal reactor conditions. The third step amounts to obtaining the new nuclear properties of the cell following the perturbation, to be used in the fourth step in a new reactor calculation to obtain the effect of the perturbation on the reactor. This project aims to verify this hypothesis. Thus, a new calculation scheme was defined. The first stage of this project was to create new software capable of solving the time-dependent Boltzmann equation by the stochastic Monte Carlo method, with the goal of obtaining cross sections that evolve in time. This code was used to simulate a LOCA accident in a CANDU-6 nuclear reactor. The time-dependent cross sections were then used in a space-time diffusion calculation for a CANDU-6 reactor undergoing a LOCA affecting half of the core, in order to observe its behavior during all phases of the perturbation. In the development phase, we chose to start with the OpenMC code, developed at MIT, as the initial development platform. The introduction and treatment of delayed neutrons during the simulation was a major challenge to overcome. It is important to note that the code developed using the Monte Carlo method can be used at large scale for the simulation of all types of nuclear reactors, provided the computational resources are available.
NASA Astrophysics Data System (ADS)
McKenna, Sean A.; Selroos, Jan-Olof
Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated on the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in the amount of diffusive capacity that can be accessed along the transport pathway.
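The characteristic-time comparison invoked above can be summarized with the usual matrix-diffusion scaling (notation illustrative; the paper's exact definitions of block size and diffusivity may differ):

```latex
% Characteristic time for matrix diffusion into a block of half-thickness L with
% apparent diffusivity D_a; the matrix behaves as an effectively infinite medium
% only at times much shorter than t_c.
\[
  t_c \;\sim\; \frac{L^{2}}{D_a},
  \qquad
  \text{infinite-medium approximation valid for } t \ll t_c .
\]
```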
NASA Astrophysics Data System (ADS)
Blais, Mathieu
In Quebec, aluminum smelters are major consumers of electric power, accounting for nearly 14% of Hydro-Québec's installed capacity. In this context, small gains in the energy efficiency of electrolysis cells could have a significant impact on the overall reduction of electricity consumption. The master's project described in this study addresses the following question: how can optimizing the geometry of a cathode block, in order to make the current density uniform, increase the energy efficiency and the service life of the aluminum cell? The primary goal of the project is to modify the geometry so as to improve the thermoelectric behavior of the cathode blocks and thereby increase the energy efficiency of the aluminum production process. The poor distribution of the current density in the cell is responsible for several energy problems with negative economic and environmental impacts. This non-uniformity of the current distribution induces premature wear of the cathode surface and contributes to reducing the magnetohydrodynamic stability of the liquid-metal pad. In order to quantify the impact of making the current density uniform across the cathode block, a model of a cathode block of an AP-30 technology cell was built and analyzed by finite elements. From its thermoelectric behavior and experimental data for an AP-30 cell taken from the literature, a correlation was established between the current-density profile at the surface of the block and the local erosion rate at the same location. This relationship constitutes a predictive model of the service life of any block of the same material based on its current-density profile. A program was then written incorporating into a single cost function the economic impacts of service life, cathodic voltage drop, and the use of new materials. This made it possible to evaluate the benefits of a modified block relative to the reference block. Several geometric parameters of the block were varied over a realistic domain, and the integration of a component made of a more conductive material was also studied. Using mathematical optimization tools, an optimal block design could be found. The results show that it is possible to generate savings by modifying the block. It is also shown that making the current density uniform across the block can bring major economic and environmental benefits to the aluminum electrolysis process. The results of this study will serve as arguments for researchers in industry in deciding whether it is worth investing in the fabrication of an experimental prototype, which is often very costly. Keywords: energy efficiency, aluminum electrolysis, cathode, thermoelectric simulation, current-density uniformity, optimization.
NASA Astrophysics Data System (ADS)
Lepage, Martin
1998-12-01
This thesis is presented to the Faculté de médecine of the Université de Sherbrooke for the degree of Ph.D. in Radiobiology. It contains experimental results recorded with a high-resolution electron spectrometer. These results concern the formation of electron resonances in the condensed phase and the different channels for their decay. We first present measurements of vibrational excitations of oxygen diluted in an argon matrix for incident electron energies of 1 to 20 eV. The results suggest that the lifetime of the oxygen resonances is modified by the density of electron states in the conduction band of argon. We also present electron energy-loss spectra of tetrahydrofuran (THF) and acetone. In both cases, the energy positions of the losses associated with vibrational excitations are in excellent agreement with results found in the literature. The excitation functions of these modes reveal the presence of several new electron resonances. We compare the resonances of THF with those of the cyclopentane molecule in the gas phase. We propose a common origin for these resonances, which implies that they are not necessarily attributed to excitation of the unpaired electrons of the THF oxygen. We propose a new method, based on electron energy-loss spectroscopy, to detect the production of neutral fragments that remain inside a thin film condensed at low temperature. This method relies on the detection of the electronic excitations of the neutral product. We present results on the production of CO in a methanol film. The CO production rate as a function of incident electron energy is calibrated in terms of a total electron-scattering cross section. The results indicate a linear increase of the CO production rate with film thickness and with the electron dose incident on the film. These experimental data fit a simple model in which a single electron causes the fragmentation of the molecule without reaction with neighbouring molecules. The proposed mechanism for the unimolecular fragmentation of methanol is the formation of resonances that decay into an excited electronic state. We suggest that the combined action of a hole in a core orbital of methanol and of two electrons in the first unoccupied orbital explains the complete dehydrogenation of methanol for electron energies between 8 and 18 eV. For higher energies, fragmentation via ionization of the molecule has already been suggested. The detection of electronic states offers an alternative to the detection of vibrational excitations, since electron energy-loss spectra are congested in that energy region for polyatomic molecules.
Mitchell, Dominic; Guertin, Jason R; Dubois, Anick; Dubé, Marie-Pierre; Tardif, Jean-Claude; Iliza, Ange Christelle; Fanton-Aita, Fiorella; Matteau, Alexis; LeLorier, Jacques
2018-04-01
Statin (HMG-CoA reductase inhibitor) therapy is the mainstay of dyslipidemia treatment and reduces the risk of a cardiovascular (CV) event (CVE) by up to 35%. However, adherence to statin therapy is poor. One reason patients discontinue statin therapy is musculoskeletal pain and the associated risk of rhabdomyolysis. Research is ongoing to develop a pharmacogenomics (PGx) test for statin-induced myopathy as an alternative to the current diagnosis method, which relies on creatine kinase levels. The potential economic value of a PGx test for statin-induced myopathy is unknown. We developed a lifetime discrete event simulation (DES) model for patients 65 years of age initiating a statin after a first CVE consisting of either an acute myocardial infarction (AMI) or a stroke. The model evaluates the potential economic value of a hypothetical PGx test for diagnosing statin-induced myopathy. We assessed the model over the spectrum of test sensitivity and specificity parameters. Our model showed that a strategy with a perfect PGx test had an incremental cost-utility ratio of 4273 Canadian dollars ($Can) per quality-adjusted life year (QALY). The probabilistic sensitivity analysis shows that when the payer willingness-to-pay per QALY reaches $Can12,000, the PGx strategy is favored in 90% of the model simulations. We found that a strategy favoring patients staying on statin therapy is cost effective even if patients maintained on statin are at risk of rhabdomyolysis. Our results are explained by the fact that statins are highly effective in reducing the CV risk in patients at high CV risk, and this benefit largely outweighs the risk of rhabdomyolysis.
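A minimal sketch of the cost-utility arithmetic behind the results quoted above: an incremental cost-utility ratio (ICUR) and a willingness-to-pay acceptance probability computed from probabilistic sensitivity analysis (PSA) draws. The cost and QALY draws are synthetic placeholders chosen only to land near the quoted ICUR; they are not outputs of the published DES model.

```python
# ICUR and cost-effectiveness acceptability from synthetic PSA draws.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000

# Hypothetical incremental cost ($Can) and incremental QALYs of the PGx strategy.
delta_cost = rng.normal(500.0, 200.0, n_draws)
delta_qaly = rng.normal(0.117, 0.05, n_draws)

icur = delta_cost.mean() / delta_qaly.mean()
print(f"ICUR ~ {icur:,.0f} $Can per QALY")

# The PGx strategy is favored when its net monetary benefit is positive,
# i.e. when WTP * delta_qaly - delta_cost > 0.
for wtp in (4_000, 12_000, 50_000):
    p = np.mean(wtp * delta_qaly - delta_cost > 0)
    print(f"P(cost-effective | WTP = {wtp:>6} $Can/QALY) = {p:.2f}")
```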
NASA Astrophysics Data System (ADS)
Lirette-Pitre, Nicole T.
2009-07-01
Girls' academic success increasingly leads them to pursue postsecondary education and to enter professions requiring a high level of scientific knowledge and expertise. However, very few girls consider a career in science (chemistry and physics), engineering, or ICT (information and communication technology), that is, a career tied to the new economy. For many girls, science and ICT are not school subjects they find interesting, even though they do very well in them. These girls admit that their learning experiences in science and ICT have not allowed them to develop an interest in these subjects or to feel confident in their ability to succeed in them. Consequently, few girls choose to pursue postsecondary studies in these disciplines. Social cognitive career theory was chosen as the theoretical model to better understand which variables come into play when girls choose their careers. Our study concerns the design and evaluation of the effectiveness of instructional material created specifically to improve the science and ICT learning experiences of Grade 9 girls in New Brunswick. The pedagogical approach adopted in our material implemented instructional strategies drawn from the best practices we identified, aimed in particular at increasing girls' self-efficacy and interest in these disciplines. This material, available online at http://www.umoncton.ca/lirettn/scientic, is directly linked to the New Brunswick Grade 9 natural science curriculum. The evaluation of the effectiveness of our instructional material was carried out in two main methodological stages: 1) evaluation of the usability and user-friendliness of the material, and 2) evaluation of the effect of the material on various variables related to girls' interest and self-efficacy in science and ICT. This research was conducted within a pragmatic research paradigm; pragmatism guided our choices regarding the research design and the techniques used. The research combined qualitative and quantitative techniques, particularly for data collection and analysis. The data collected in the first stage, the evaluation of the material's usability and user-friendliness by science teachers and by the girls, revealed that the material is highly usable and user-friendly. A few small improvements will nevertheless be made in a subsequent version to further ease navigation. As for the evaluation of the effects of the material on the variables related to self-efficacy and interest during the quasi-experimental stage, our qualitative data indicated that the material had positive effects on the self-efficacy and interest of the girls who used it. However, our quantitative data did not allow us to infer a direct causal link between use of the material and an increase in girls' self-efficacy and interest in science and ICT. In light of the results obtained, we concluded that the material had the intended effects. We therefore recommend the creation and use of material of this kind in all science classes from Grade 6 to Grade 12 in New Brunswick.
Measured and Modeled Long-Wave Infrared Signature of Quest in Q280
2008-03-01
warm water relative to cold air, constituted another case that helped to better understand the infrared signatures and to refine the... in infrared radiant intensity at the horizon and in the absolute value of the radiant intensity of the sea: the model predicts... Arctic ports strongly motivates further research on the infrared signatures of ships in cold waters and under winter conditions
Takahashi, Toshiaki; Friedmacher, Florian; Zimmer, Julia; Puri, Prem
2016-12-01
Congenital diaphragmatic hernia (CDH) is presumed to originate from defects in the primordial diaphragmatic mesenchyme, mainly comprising muscle connective tissue (MCT). Thus, normal diaphragmatic morphogenesis depends on the structural integrity of the underlying MCT. Developmental mutations that inhibit normal formation of diaphragmatic MCT have been shown to result in CDH. Desmin (DES) is a major filament protein in the MCT, which is essential for the tensile strength of the developing diaphragm muscle. DES -/- knockout mice exhibit significant reductions in stiffness and elasticity of the developing diaphragmatic muscle tissue. Furthermore, sequence changes in the DES gene have recently been identified in human cases of CDH, suggesting that alterations in DES expression may lead to diaphragmatic defects. This study was designed to investigate the hypothesis that diaphragmatic DES expression is decreased in fetal rats with nitrofen-induced CDH. Time-mated Sprague-Dawley rats were exposed to either nitrofen or vehicle on gestational day 9 (D9). Fetuses were harvested at selected time-points D13, D15 and D18, and dissected diaphragms (n = 72) were divided into control and nitrofen-exposed specimens (n = 12 per time-point and experimental group, respectively). Laser-capture microdissection was used to obtain diaphragmatic tissue elements. Diaphragmatic gene expression of DES was analyzed by quantitative real-time polymerase chain reaction. Immunofluorescence double staining for DES was combined with the mesenchymal marker GATA4 to evaluate protein expression and localization in developing fetal diaphragms. Relative mRNA expression levels of DES were significantly decreased in pleuroperitoneal folds on D13 (1.49 ± 1.79 vs. 3.47 ± 2.32; p < 0.05), developing diaphragms on D15 (1.49 ± 1.41 vs. 3.94 ± 3.06; p < 0.05) and fully muscularized diaphragms on D18 (2.45 ± 1.47 vs. 5.12 ± 3.37; p < 0.05) of nitrofen-exposed fetuses compared to controls. Confocal laser scanning microscopy demonstrated markedly diminished immunofluorescence of DES mainly in diaphragmatic MCT, which was associated with a reduction of proliferating mesenchymal cells in nitrofen-exposed fetuses on D13, D15 and D18 compared to controls. Decreased expression of DES in the fetal diaphragm may disturb the basic integrity of myofibrils and the cytoskeletal network during myogenesis, causing malformed MCT and leading to diaphragmatic defects in the nitrofen-induced CDH model.
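A minimal sketch of the group comparison reported above, using only the published summary statistics (mean ± SD, n = 12 per group). A Welch two-sample t-test from summary statistics is shown for illustration; the original study may have used a different statistical test.

```python
# Compare nitrofen-exposed vs. control DES expression from summary statistics.
from scipy.stats import ttest_ind_from_stats

timepoints = {
    "D13": ((1.49, 1.79), (3.47, 2.32)),
    "D15": ((1.49, 1.41), (3.94, 3.06)),
    "D18": ((2.45, 1.47), (5.12, 3.37)),
}

for day, ((m_nitrofen, sd_nitrofen), (m_control, sd_control)) in timepoints.items():
    t, p = ttest_ind_from_stats(m_nitrofen, sd_nitrofen, 12,
                                m_control, sd_control, 12,
                                equal_var=False)
    print(f"{day}: t = {t:.2f}, p = {p:.3f}")
```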
Minimisation des inductances propres des condensateurs à film métallisé
NASA Astrophysics Data System (ADS)
Joubert, Ch.; Rojat, G.; Béroual, A.
1995-07-01
In this article, we examine the different factors responsible for the equivalent series inductance in metallized-film capacitors and propose capacitor structures that reduce this inductance. After recalling the structure of metallized capacitors, we compare, by experimental measurements, the inductance due to the winding and that added by the connections; the latter can become dominant. In order to explain the experimentally observed evolution of the winding impedance with frequency, we describe an analytical model that gives the current density in the winding and its impedance. This model enables us to determine the self-resonant frequency for different types of capacitors, from which we infer the influence of the capacitor height and of the internal and external radii on performance. It appears that, to reduce the equivalent series inductance, it is better to use flat windings and annular windings.
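A minimal sketch of the self-resonance estimate discussed above: for a capacitor of capacitance C with total equivalent series inductance L, the impedance minimum falls near f_res = 1 / (2π√(LC)). The numerical values are illustrative, not measurements from the article.

```python
# Self-resonant frequency from capacitance and equivalent series inductance.
import math

def self_resonant_frequency(c_farads, l_esl_henries):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_esl_henries * c_farads))

# e.g. a 10 uF metallized-film capacitor with 20 nH total series inductance
print(f"{self_resonant_frequency(10e-6, 20e-9) / 1e6:.2f} MHz")
```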
Panaich, Sidakpal S; Badheka, Apurva O; Arora, Shilpkumar; Patel, Nileshkumar J; Thakkar, Badal; Patel, Nilay; Singh, Vikas; Chothani, Ankit; Deshmukh, Abhishek; Agnihotri, Kanishk; Jhamnani, Sunny; Lahewala, Sopan; Manvar, Sohilkumar; Panchal, Vinaykumar; Patel, Achint; Patel, Neil; Bhatt, Parth; Savani, Chirag; Patel, Jay; Savani, Ghanshyambhai T; Solanki, Shantanu; Patel, Samir; Kaki, Amir; Mohamad, Tamam; Elder, Mahir; Kondur, Ashok; Cleman, Michael; Forrest, John K; Schreiber, Theodore; Grines, Cindy
2016-01-01
We studied the trends and predictors of drug-eluting stent (DES) utilization from 2006 to 2011 to further examine the inter-hospital variability in their utilization. We queried the Healthcare Cost and Utilization Project's Nationwide Inpatient Sample (NIS) between 2006 and 2011 using ICD-9-CM procedure codes 36.06 (bare metal stent) and 36.07 (drug-eluting stent) for percutaneous coronary intervention (PCI). Annual hospital volume was calculated using unique identification numbers and divided into quartiles for analysis. We built a hierarchical two-level model adjusted for multiple confounding factors, with hospital ID incorporated as a random effect in the model. A total of 665,804 procedures (weighted n = 3,277,884) were analyzed. Safety concerns arising in 2006 reduced utilization of DES from 90% of all PCIs performed in 2006 to a nadir of 69% in 2008, followed by an increase (76% of all stents in 2009) and a plateau (75% in 2011). Significant between-hospital variation was noted in DES utilization irrespective of patient or hospital characteristics. Independent patient-level predictors of DES were (OR, 95% CI, P-value) age (0.99, 0.98-0.99, <0.001), female sex (1.12, 1.09-1.15, <0.001), acute myocardial infarction (0.75, 0.71-0.79, <0.001), shock (0.53, 0.49-0.58, <0.001), Charlson Co-morbidity Index (0.81, 0.77-0.86, <0.001), private insurance/HMO (1.27, 1.20-1.34, <0.001), and elective admission (1.16, 1.05-1.29, <0.001). The highest quartile of hospital volume (1.64, 1.25-2.16, <0.001) was associated with higher DES placement. There is significant between-hospital variation in DES utilization, and a higher annual hospital volume is associated with a higher utilization rate of DES. © 2015 Wiley Periodicals, Inc.
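A minimal sketch of how the odds ratios and 95% CIs quoted above relate to the fitted log-odds coefficients of a (hierarchical) logistic model: OR = exp(β), CI = exp(β ± 1.96·SE). The coefficient and standard error below are hypothetical, chosen only to land near the reported estimate for female sex.

```python
# Convert a log-odds coefficient and its standard error to an OR with 95% CI.
import math

def odds_ratio_ci(beta, se, z=1.96):
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

or_, (lo, hi) = odds_ratio_ci(beta=0.113, se=0.014)   # hypothetical values
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```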
Li, Yan; Tellez, Armando; Rousselle, Serge D; Dillon, Krista N; Garza, Javier A; Barry, Chris; Granada, Juan F
2016-07-01
To evaluate the biological effect of a paclitaxel-coated balloon (PCB) technology on vascular drug distribution and healing in a drug-eluting stent restenosis (DES-ISR) swine model. The mechanism of action and healing response of PCB technology in DES-ISR are not completely understood. A total of 27 bare metal stents were implanted in coronary arteries, and 30 days later the in-stent restenosis was treated with PCB. Treated segments were harvested at 1 hr, 14 days, and 30 days post treatment for pharmacokinetic analysis. In addition, 24 DES were implanted in coronary arteries for 30 days, after which all DES-ISRs were treated with either PCB (n = 12) or an uncoated balloon (n = 12). At day 60, vessels were harvested for histology following angiography and optical coherence tomography (OCT). The paclitaxel level in neointimal tissue was about 18 times higher (P = 0.0004) at the 1-hr Cmax, and remained about five times higher (P = 0.008) at day 60, than that in the vessel wall. A homogeneous distribution of paclitaxel in ISR was demonstrated using fluorescently labeled paclitaxel. Notably, in DES-ISR, both termination OCT and quantitative coronary angiography showed a significant neointimal reduction and less late lumen loss (P = 0.05 and P = 0.03, respectively) after PCB versus after an uncoated balloon. The PES-ISR + PCB group displayed higher peri-strut inflammation and fibrin scores compared with the -limus DES-ISR + PCB group. In ISR, paclitaxel is primarily deposited in neointimal tissue and effectively retained over time following PCB use. Despite the presence of metallic struts, a uniform distribution was characterized. PCB demonstrated an equivalent biological effect in DES-ISR without significantly increasing inflammation. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Saafi, Kais
The aerodynamic model of the L1011-500 aircraft was designed and simulated in Matlab and Simulink by Bombardier to help Esterline CMC Electronics improve its Flight Management System (FMS). In this model, implemented in FLSIM by CMC Electronics-Esterline, a longitudinal instability appears during the approach phase when the flap angle is greater than or equal to 4 degrees. The overall project at LARCASE consisted of improving the stability of the L1011-500 aerodynamic model under Matlab/Simulink, mainly for flap angles between 4 and 22 degrees. The global L1011-500 model was finalized in order to visualize and analyze its dynamic behavior. Once the global model of the L1011-500 was generated, corrections were added to the lift coefficient (CL), the drag coefficient (CD) and the pitching moment coefficient (CM) to ensure the trim of the aircraft. The results obtained are compared with the flight test data delivered by CMC Electronics-Esterline to validate our numerical studies.
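A minimal sketch of a longitudinal trim calculation of the kind implied above: with linear coefficient models CL = CL0 + CLα·α + CLδe·δe and CM = CM0 + CMα·α + CMδe·δe, trim requires CL to equal the value needed for level flight while CM = 0. All numerical coefficients are hypothetical, not the L1011-500 database.

```python
# Solve the 2x2 linear trim problem for angle of attack and elevator deflection.
import numpy as np

CL0, CLa, CLde = 0.30, 5.5, 0.35      # per rad (hypothetical)
CM0, CMa, CMde = 0.05, -1.2, -1.1     # per rad (hypothetical)
CL_required = 1.1                     # e.g. approach with flaps deployed (assumed)

A = np.array([[CLa, CLde],
              [CMa, CMde]])
b = np.array([CL_required - CL0, -CM0])
alpha, delta_e = np.linalg.solve(A, b)
print(f"trim alpha = {np.degrees(alpha):.2f} deg, elevator = {np.degrees(delta_e):.2f} deg")
```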
NASA Astrophysics Data System (ADS)
Podladchikova, O.
2002-02-01
The high temperature of the solar corona remains a puzzling problem of solar physics. However, recent observations from the SoHO, Yohkoh and TRACE satellites seem to indicate that the processes responsible for heating the closed regions are located in the low corona or in the chromosphere, thus close to the solar surface, and are associated with the dissipation of direct currents. Statistical data analysis suggests that the heating would therefore result from numerous dissipation events in small-scale, low-energy current sheets, at the resolution limit of modern instruments. We propose a statistical lattice model, resulting from an approach more physical than self-organized criticality, consisting of a magnetic energy source at small scales and of current-dissipation mechanisms that can be associated either with magnetic reconnection or with anomalous resistivity. The various types of sources and dissipation mechanisms make it possible to study their influence on the statistical properties of the system, in particular on the energy dissipation. In order to quantify this behavior and to allow detailed comparisons between models and observations, analysis techniques little used in solar physics, such as singular value decomposition, entropies, and the Pearson technique of PDF classification, are introduced and applied to the study of the spatial and temporal properties of the model.
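A minimal sketch of the singular value decomposition diagnostic mentioned above, applied to a spatiotemporal dissipation field E(x, t). The synthetic field is a placeholder for the lattice model's output, not a reproduction of it.

```python
# SVD of a synthetic space-time dissipation field: spatial modes, temporal modes, weights.
import numpy as np

rng = np.random.default_rng(1)
nx, nt = 64, 200
x = np.linspace(0, 1, nx)[:, None]
t = np.linspace(0, 1, nt)[None, :]
# Placeholder field: a few coherent modes plus small-scale noise
field = np.sin(2 * np.pi * x) * np.cos(4 * np.pi * t) + 0.3 * rng.standard_normal((nx, nt))

U, s, Vt = np.linalg.svd(field, full_matrices=False)   # U: spatial, Vt: temporal
energy_fraction = s**2 / np.sum(s**2)
print("fraction of variance in the first 3 modes:", energy_fraction[:3].round(3))
```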
NASA Technical Reports Server (NTRS)
Parker, D. E.; Reschke, M. F.
1988-01-01
An effort to develop preflight adaptation training (PAT) apparatus and procedures to adapt astronauts to the stimulus rearrangement of weightless spaceflight is being pursued. Based on the otolith tilt-translation reinterpretation model of sensory adaptation to weightlessness, two prototype preflight adaptation trainers (PAT) have been developed. These trainers couple pitch movement of the subject with translation of the visual surround. Subjects were exposed to this stimulus rearrangement for periods of 30 min. The hypothesis that exposure to the rearrangement would attenuate vertical eye movements was supported by two experiments using the Miami University Seesaw (MUS) PAT prototype. The Dynamic Environment Simulator (DES) prototype failed to support this hypothesis; this result is attributed to a peculiarity of the DES apparatus. A final experiment demonstrated that changes in vertical eye movements were not a consequence of fixation on an external target during exposure to a control condition. Together these experiments support the view that preflight adaptation training can alter eye movements in a manner consistent with adaptation to weightlessness. Following these initial studies, concepts for the development of operational preflight trainers were proposed. The trainers are intended to: demonstrate the stimulus rearrangement of weightlessness; allow astronauts to train in the altered sensory environment; modify sensory-motor reflexes; and reduce or eliminate space motion sickness symptoms.
NASA Astrophysics Data System (ADS)
Luque, E.; Santiago, B.; Pieres, A.; Marshall, J. L.; Pace, A. B.; Kron, R.; Drlica-Wagner, A.; Queiroz, A.; Balbinot, E.; Ponte, M. dal; Neto, A. Fausti; da Costa, L. N.; Maia, M. A. G.; Walker, A. R.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Davis, C.; Doel, P.; Eifler, T. F.; Flaugher, B.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Miquel, R.; Nichol, R. C.; Plazas, A. A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.
2018-04-01
We report the discovery of a new star cluster, DES 3, in the constellation of Indus, and deeper observations of the previously identified satellite DES J0222.7-5217 (Eridanus III). DES 3 was detected as a stellar overdensity in first-year Dark Energy Survey data, and confirmed with deeper photometry from the 4.1 metre Southern Astrophysical Research (SOAR) telescope. The new system was detected with a relatively high significance and appears in the DES images as a compact concentration of faint blue point sources. We determine that DES 3 is located at a heliocentric distance of ≃76.2 kpc and is dominated by an old (≃9.8 Gyr) and metal-poor ([Fe/H] ≃ -1.84) population. While the age and metallicity values of DES 3 are comparable to typical globular clusters (objects with a high stellar density, stellar mass of ~10^5 M⊙ and luminosity MV ~ -7.3), its half-light radius (rh ~ 6.87 pc) and luminosity (MV ~ -1.7) are more indicative of a faint star cluster. Based on its angular size, DES 3, with rh ~ 0.31 arcmin, is among the smallest faint star clusters known to date. Furthermore, using deeper imaging of DES J0222.7-5217 taken with the SOAR telescope, we update its structural parameters and perform the first isochrone modeling. Our analysis yields the first age (≃12.6 Gyr) and metallicity ([Fe/H] ≃ -2.01) estimates for this object. The half-light radius (rh ≃ 11.24 pc) and luminosity (MV ≃ -2.4) of DES J0222.7-5217 suggest that it is likely a faint star cluster. The discovery of DES 3 indicates that the census of stellar systems in the Milky Way is still far from complete, and demonstrates the power of modern wide-field imaging surveys to improve our knowledge of the Galaxy's satellite population.
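A minimal sketch of the angular-to-physical size conversion behind the quoted half-light radii: at a heliocentric distance d, an angular radius θ corresponds to r_h = d·tan(θ). The numbers follow the DES 3 values quoted above.

```python
# Convert an angular half-light radius (arcmin) to a physical radius (pc).
import math

def physical_radius_pc(theta_arcmin, distance_kpc):
    theta_rad = math.radians(theta_arcmin / 60.0)
    return distance_kpc * 1e3 * math.tan(theta_rad)

print(f"DES 3: r_h ~ {physical_radius_pc(0.31, 76.2):.2f} pc")  # ~6.9 pc, as quoted
```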
Modelisation frequentielle de la permittivite du beton pour le controle non destructif par georadar
NASA Astrophysics Data System (ADS)
Bourdi, Taoufik
Ground-penetrating radar (GPR) is an attractive non-destructive testing (NDT) technique for measuring the thickness of concrete slabs and characterizing fractures, owing to its resolution and penetration depth. GPR equipment is becoming easier to use, and interpretation software is becoming more readily accessible. However, several conferences and workshops on the application of GPR in civil engineering have concluded that further research is needed, in particular on modeling and on techniques for measuring the electrical properties of concrete. With better information on the electrical properties of concrete at GPR frequencies, instrumentation and interpretation techniques could be improved more effectively. The Jonscher model has proven effective in geophysics; its use in civil engineering is presented here for the first time. We first validated the application of the Jonscher model for characterizing the dielectric permittivity of concrete. The results clearly showed that this model can faithfully reproduce the variation of the permittivity of different types of concrete over the GPR frequency band (100 MHz-2 GHz). We then demonstrated the value of the Jonscher model by comparing it with other models (Debye and extended Debye) already used in civil engineering. We also showed how the Jonscher model can help predict shielding effectiveness and interpret GPR waves. The Jonscher model was found to give a good representation of the variation of the permittivity of concrete over the GPR frequency range considered; moreover, this modeling is valid for different types of concrete and at different water contents. In a final part, we present the use of the Jonscher model for estimating the thickness of a concrete slab with the GPR technique in the frequency domain. Keywords: NDT, concrete, GPR, permittivity, Jonscher
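A minimal sketch of a Jonscher-type permittivity model of the kind referred to above. One commonly cited form for the effective relative permittivity is ε(ω) = ε∞ + χr·(ω/ωr)^(n-1)·(1 - i·cot(nπ/2)) with 0 < n < 1; whether this is exactly the parameterization used in the thesis is an assumption, and the parameter values below are illustrative, not fitted to concrete data.

```python
# Complex permittivity vs. frequency for an assumed Jonscher parameterization.
import numpy as np

def jonscher_permittivity(freq_hz, eps_inf=4.5, chi_r=1.5, n=0.8, f_r=100e6):
    w_ratio = freq_hz / f_r
    return eps_inf + chi_r * w_ratio**(n - 1) * (1 - 1j / np.tan(n * np.pi / 2))

freqs = np.array([100e6, 500e6, 1e9, 2e9])   # the GPR band considered (100 MHz-2 GHz)
for f, e in zip(freqs, jonscher_permittivity(freqs)):
    print(f"{f/1e6:6.0f} MHz: eps' = {e.real:.2f}, eps'' = {-e.imag:.2f}")
```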
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, together with the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even when it succeeds, significant performance gains are often difficult or even impossible to obtain. The present project proposes the development and validation of a design methodology based on the meridional (through-flow) calculation for the preliminary design of mixed-flow turbomachines and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method at the core of the proposed design process is presented first. The theoretical framework is developed and, since the meridional calculation remains fundamentally iterative, the computational procedure is also presented, including the numerical methods used to solve the governing equations. The meridional code written as part of the master's project is validated against a meridional algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachine design methodology developed in this study is then presented as a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed design of the blades, and finally a 3D numerical analysis for validation and fine optimization of the geometry. The meridional calculation results are also compared with the simulation results for the 3D geometry in order to validate the use of the meridional calculation as a preliminary design tool.
The influence of liquid-gas velocity ratio on the noise of the cooling tower
NASA Astrophysics Data System (ADS)
Yang, Bin; Liu, Xuanzuo; Chen, Chi; Zhao, Zhouli; Song, Jinchun
2018-05-01
The noise from a cooling tower has a considerable influence on human psychological performance. Cooling tower noise mainly consists of fan noise, falling-water noise and mechanical noise. This work used a DES turbulence model together with the FW-H (Ffowcs Williams-Hawkings) acoustic model to simulate the flow and sound pressure fields in a cooling tower with the CFD software FLUENT, and analyzed how the different noise sources, affected by various factors, contribute to the overall cooling tower noise. It can be concluded that the addition of cooling water reduces the turbulence and vortex noise of the rotor flow field in the cooling tower to some extent, but increases the liquid-gas two-phase impact noise. In general, the cooling tower noise decreases as the liquid-to-gas velocity ratio increases, and reaches its lowest level when the liquid-to-gas velocity ratio approaches 1.
Climate and health: observation and modeling of malaria in the Ferlo (Senegal).
Diouf, Ibrahima; Deme, Abdoulaye; Ndione, Jacques-André; Gaye, Amadou Thierno; Rodríguez-Fonseca, Belén; Cissé, Moustapha
2013-01-01
The aim of this work, undertaken in the framework of the QWeCI (Quantifying Weather and Climate Impacts on health in developing countries) project, is to study how climate variability could influence the seasonal incidence of malaria. It will also assess the evolution of vector-borne diseases such as malaria through simulation analysis of climate models under various climate scenarios for the coming years. Climate variability appears to be a determinant of the risk of malaria development (Freeman and Bradley, 1996 [1]; Lindsay and Birley, 1996 [2]; Kuhn et al., 2005 [3]). Climate can affect the epidemiology of malaria through several mechanisms: directly, via the development rates and survival of both pathogens and vectors, and indirectly, through changes in vegetation and land-surface characteristics such as the variability of breeding sites like ponds. Copyright © 2013 Académie des sciences. Published by Elsevier SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Minakov, A.; Sentyabov, A.; Platonov, D.
2017-01-01
We performed numerical simulation of the flow in a laboratory model of a Francis hydroturbine at startup regimes. The numerical technique for calculating low-frequency pressure pulsations in a water turbine is based on the DES (k-ω Shear Stress Transport) turbulence model and the "frozen rotor" approach. The structure of the flow behind the turbine runner was analysed, showing the effect of the flow structure on the frequency and intensity of non-stationary processes in the flow path. Two versions of the inlet boundary condition were considered. The first corresponded to the measured time dependence of the discharge; comparison of the calculation results with the experimental data shows a considerable delay of the discharge in this calculation. The second version corresponded to a linear approximation of the time dependence of the discharge; this calculation shows good agreement with the experimental results.
Arcanjo, Daniel D R; Vasconcelos, Andreanne G; Nascimento, Lucas A; Mafud, Ana Carolina; Plácido, Alexandra; Alves, Michel M M; Delerue-Matos, Cristina; Bemquerer, Marcelo P; Vale, Nuno; Gomes, Paula; Oliveira, Eduardo B; Lima, Francisco C A; Mascarenhas, Yvonne P; Carvalho, Fernando Aécio A; Simonsen, Ulf; Ramos, Ricardo M; Leite, José Roberto S A
2017-10-20
The vasoactive proline-rich oligopeptide termed BPP-BrachyNH2 (H-WPPPKVSP-NH2) induces in vitro inhibitory activity of angiotensin I-converting enzyme (ACE) in rat blood serum. In the present study, the removal of the N-terminal tryptophan or the C-terminal proline from BPP-BrachyNH2 was investigated in order to predict which structural components are important or required for interaction with ACE. Furthermore, the toxicological profile was assessed by in silico prediction and an in vitro MTT assay. Two BPP-BrachyNH2 analogues (des-Trp1-BPP-BrachyNH2 and des-Pro8-BPP-BrachyNH2) were synthesized, and in vitro and in silico ACE inhibitory activity and the toxicological profile were assessed. des-Trp1-BPP-BrachyNH2 and des-Pro8-BPP-BrachyNH2 were respectively 3.2- and 29.5-fold less active than BPP-BrachyNH2 in inducing ACE inhibition. Molecular dynamics and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) simulations demonstrated that the ACE/BPP-BrachyNH2 complex had lower binding and van der Waals energies than the ACE/des-Pro8-BPP-BrachyNH2 complex, and therefore better stability. The removal of the N-terminal tryptophan increased the in silico predicted toxicological effects and the cytotoxicity when compared with BPP-BrachyNH2 or des-Pro8-BPP-BrachyNH2. Conversely, des-Pro8-BPP-BrachyNH2 was 190-fold less cytotoxic than BPP-BrachyNH2. Thus, the removal of the C-terminal proline residue markedly decreased both the BPP-BrachyNH2-induced ACE inhibitory and cytotoxic effects as assessed by in vitro and in silico approaches. In conclusion, the amino acid sequence of BPP-BrachyNH2 is essential for its ACE inhibitory activity and is associated with an acceptable toxicological profile. The insight into the interactions of BPP-BrachyNH2 with ACE gained in the present study can be used for the development of drugs with a therapeutic profile different from that of current ACE inhibitors. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Rational use of drug-eluting stents: a comparison of different policies.
Remmel, Marko; Hartmann, Franz; Harland, Lars C; Schunkert, Heribert; Radke, Peter W
2007-06-01
Long-term results of recent landmark trials document both benefits and risks of drug-eluting stents (DES) for coronary revascularization. Interestingly, the conclusions drawn from these data vary widely, since significant differences in DES penetration rates become obvious when the utilization of this technology is compared between hospitals or even countries. Based on the recommendations of the European Society of Cardiology and the FDA, as well as data derived from the BASKET-LATE study, we propose that a maximum penetration rate of 50% for DES seems appropriate at present. Analysis of the lesion length/diameter distribution, combined with the use of validated restenosis reference charts, allows identification of patients at high risk of restenosis and modeling of DES use depending on financial resources and clinical indication. Such an algorithm provides the rationale for preprocedural risk stratification and efficient use of resources.
Some physical approaches to protein folding
NASA Astrophysics Data System (ADS)
Bascle, J.; Garel, T.; Orland, H.
1993-02-01
To understand how a protein folds is a problem with important biological implications. In this article, we present a physics-oriented point of view, which is twofold. First, we introduce simple statistical mechanics models which display, in the thermodynamic limit, folding and related transitions. These models can be divided into (i) crude spin-glass-like models (with their Mattis analogs), where one may look for possible correlations between the chain self-interactions and the folded structure, and (ii) glass-like models, where one emphasizes the geometrical competition between one- or two-dimensional local order (mimicking α-helix or β-sheet structures) and the requirement of global compactness. Both models are too simple to predict the spatial organization of a realistic protein, but are useful for the physicist and should have some feedback in other glassy systems (glasses, collapsed polymers, ...). These remarks lead us to the second physical approach, namely a new Monte Carlo method, where one grows the protein atom by atom (or residue by residue), using a standard form (CHARMM, ...) for the total energy. A detailed comparison with other Monte Carlo schemes, or with Molecular Dynamics calculations, is then possible; we sketch such a comparison for poly-alanines. Our twofold approach illustrates some of the difficulties one encounters in the protein folding problem, in particular those associated with the existence of a large number of metastable states.
Isotope shift constant and nuclear charge model
NASA Astrophysics Data System (ADS)
Fang, Z.; Redi, O.; Stroke, H. H.
1992-04-01
We use the method of Zimmermann [Z. Phys. A 321 (1985) 23-30], which he used to calculate the isotope shift constant for a uniform nuclear charge distribution, to obtain it for a diffuse nuclear charge model. The two models give results that differ only slightly at the level of precision of current experiments. The same parameters are used to calculate the model sensitivity of the contributions to the isotope shifts of higher moments of the nuclear charge distribution, as formulated by Seltzer [Phys. Rev. 188 (1969) 1916-1919]. These are found to be essentially model independent. Tables of the numerical calculations are given.
Contribution a l'etude et au developpement de nouvelles poudres de fonte
NASA Astrophysics Data System (ADS)
Boisvert, Mathieu
Obtaining free graphite in parts made by powder metallurgy (P/M) is a challenge that several researchers have addressed. The presence of graphite after sintering improves the machinability of the parts, thereby reducing production costs, and can also improve tribological properties. The approach used in this thesis to obtain free graphite after sintering is the use of new water-atomized cast iron powders. Since water atomization is a relatively inexpensive powder-production process with high production capacity, transferring the findings of this doctorate to industrial applications will be economically more favourable. Beyond the objective of obtaining free graphite after sintering, another important aspect of the work is controlling the morphology of the free graphite after sintering. Indeed, it is known from the literature on cast/molded irons that the graphite morphology influences the properties of cast irons, which is also true for P/M parts. Ductile irons, in which the graphite takes the form of spheroidal nodules isolated from one another, have mechanical properties superior to those of grey irons, in which the graphite is lamellar and continuous within the matrix. The results presented in this thesis show that it is possible, in mixes containing cast iron powders, to control the graphite morphology and hence the properties of the parts. Control of the graphite morphology was achieved mainly through the type of sintering and through the uphill diffusion of carbon caused by silicon gradients. For solid-state sintering, all the graphite nodules are present inside the powder grains after sintering. For liquid-phase sintering, the intensity of uphill diffusion makes it possible to retain more or less nodular graphite inside the silicon-rich regions, while the remaining graphite precipitates in lamellar/vermicular form in the interparticle regions. The study of cast iron powders and the search for the mechanisms governing graphite morphology in cast/molded irons led us to produce cast iron powders treated with magnesium before atomization. Several fundamental results were obtained from the characterization of the magnesium-treated powders and their comparison with powders of similar chemistry not treated with magnesium. First, it was possible to observe silicon oxide bifilms in the structure of the primary graphite of a hypereutectic grey iron powder. These are the first images showing the double structure of these defects, supporting the theory developed by Professor John Campbell. Next, it was shown that the magnesium treatment forms a self-generated protective gas atmosphere that prevents oxidation of the surface of the liquid bath and thus the formation and entrainment of bifilms. The role of magnesium in the graphite morphology is to reduce the sulfur in solution by forming magnesium sulfide precipitates, thereby increasing the graphite-liquid interface energy. In response to this high graphite-liquid interface energy, the graphite seeks to minimize its surface-to-volume ratio, which favours the formation of spheroidal graphite.
In addition, two types of nucleation were observed in the magnesium-treated hypereutectic cast iron powder. The first type is heterogeneous nucleation on a limited number of particles made of magnesium, aluminum, silicon and oxygen. The second type is homogeneous nucleation of nodules in certain regions of the liquid richer in silicon. Observation by high-resolution transmission electron microscopy of the true three-dimensional centre of one of the homogeneously nucleated nodules confirmed that the growth mode of spheroidal graphite follows the cabbage-leaf growth model. (Abstract shortened by ProQuest.)
Distribution of angiographic measures of restenosis after drug-eluting stent implantation.
Byrne, R A; Eberle, S; Kastrati, A; Dibra, A; Ndrepepa, G; Iijima, R; Mehilli, J; Schömig, A
2009-10-01
A bimodal distribution of measures of restenosis has been demonstrated at 6-8 months after bare metal stent implantation. Drug-eluting stent (DES) treatment has attenuated the impact of certain factors (e.g., diabetes) on restenosis, but its effect on the distribution of indices of restenosis is not known. To perform a detailed analysis of the metrics of restenosis indices after DES implantation. Design and setting: prospective observational study of patients undergoing DES implantation (Cypher, sirolimus-eluting stent; or Taxus, paclitaxel-eluting stent) at two German centres, with repeat angiography scheduled at 6-8 months after coronary stenting. In-stent late luminal loss (LLL) and in-segment percentage diameter stenosis (%DS) were determined by quantitative coronary angiography at recatheterisation. Paired cineangiograms were available for 2057 patients. Overall mean (SD) LLL was 0.31 (0.50) mm; mean (SD) %DS was 30.3 (15.7)%. The distribution of both LLL and %DS differed significantly from normal (Kolmogorov-Smirnov test; p<0.001 for each). For both parameters a mixed distribution model better described the data (likelihood ratio test with 3 df; p<0.001 for each). This consisted of two normally distributed subpopulations with means (SD) of 0.10 (0.25) mm and 0.69 (0.60) mm for LLL, and means (SD) of 22.2 (8.6)% and 40.1 (16.6)% for %DS. The results were consistent across subgroups of DES type, "on-label" versus "off-label" indication, and presence or absence of diabetes. LLL and %DS at follow-up angiography after DES implantation have a complex mixed distribution that may be accurately represented by a bimodal distribution model. The introduction of DES treatment has not resulted in elimination of a variable propensity to restenosis among subpopulations of patients with stented lesions.
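A minimal sketch of the mixture analysis described above: fit one- and two-component Gaussian models to late luminal loss (LLL) values and compare them with a likelihood ratio test on 3 degrees of freedom, as in the study. The data below are synthetic draws from the reported subpopulation means (0.10 and 0.69 mm), not the study data.

```python
# Two-component Gaussian mixture vs. single Gaussian for late luminal loss.
import numpy as np
from scipy.stats import chi2
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
lll = np.concatenate([rng.normal(0.10, 0.25, 1400),
                      rng.normal(0.69, 0.60, 650)]).reshape(-1, 1)

g1 = GaussianMixture(n_components=1, random_state=0).fit(lll)
g2 = GaussianMixture(n_components=2, random_state=0).fit(lll)

# score() returns the average log-likelihood per sample
ll1 = g1.score(lll) * len(lll)
ll2 = g2.score(lll) * len(lll)
lr_stat = 2 * (ll2 - ll1)
p_value = chi2.sf(lr_stat, df=3)   # 3 extra parameters, following the paper's test
print(f"component means: {g2.means_.ravel().round(2)}, LR = {lr_stat:.1f}, p = {p_value:.2g}")
```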
Change in genetic correlation due to selection using animal model evaluation.
Strandén, I; Mäntysaari, E A; Mäki-Tanila, A
1993-01-12
Monte Carlo simulation and analytical calculations were used to study the effect of selection on the genetic correlation between two traits. The simulated breeding program was based on a closed adult multiple ovulation and embryo transfer (MOET) nucleus breeding scheme. Selection was on an index calculated using a multi-trait animal model (AM). Analytical formulae applicable to any evaluation method were derived to predict the change in genetic (co)variance due to multi-trait selection using different evaluation methods. Two formulae were investigated, one assuming phenotypic selection and the other based on a recursive two-generation AM selection index. The recursive AM method approximated the information due to relatives by a relationship matrix of two generations. The genetic correlation after selection was compared under different levels of initial genetic and environmental correlations with two different selection criteria. Changes in genetic correlation were similar in the simulation and in the analytical predictions. After one round of selection, the recursive AM method and the simulation gave similar predictions, while phenotypic selection usually predicted more change in genetic correlation. After several rounds of selection, both analytical formulae predicted more change in genetic correlation than the simulation. © 1993 Blackwell Verlag GmbH.
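A toy Monte Carlo illustration of the kind of selection effect studied above: selection on an index changes the genetic (co)variances, and hence the genetic correlation, among the selected animals. This is a deliberately simplified sketch (truncation selection on a phenotypic index in a single generation), not the paper's MOET nucleus scheme or animal-model index.

```python
# Effect of index selection on the genetic correlation between two traits.
import numpy as np

rng = np.random.default_rng(42)
n_animals, selected_fraction = 20_000, 0.10
rg_before = 0.5                        # initial genetic correlation (assumed)
h2 = np.array([0.3, 0.3])              # heritabilities (assumed)

# Breeding values and phenotypes for two traits (unit genetic variances)
G = np.array([[1.0, rg_before], [rg_before, 1.0]])
bv = rng.multivariate_normal([0, 0], G, n_animals)
env = rng.multivariate_normal([0, 0], np.diag(1 / h2 - 1), n_animals)
phen = bv + env

# Select the top fraction on an equally weighted phenotypic index
index = phen.sum(axis=1)
keep = index >= np.quantile(index, 1 - selected_fraction)

rg_after = np.corrcoef(bv[keep].T)[0, 1]
print(f"genetic correlation: {rg_before:.2f} before, {rg_after:.2f} after selection")
```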
Conductivite dans le modele de Hubbard bi-dimensionnel a faible couplage
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often considered the minimal model for copper-oxide high-temperature superconductors (cuprates). On a square lattice, this model exhibits the phases common to all cuprates: the antiferromagnetic phase, the superconducting phase, and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized in the cuprates and are therefore good candidates for validating a theoretical model and for better understanding the physics of these materials. This thesis concerns the calculation of these properties for the 2D Hubbard model at weak to intermediate coupling. The calculation method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the conductivity expression in the TPSC approach is presented. This expression contains what are called vertex corrections, which account for correlations between quasiparticles. To make the numerical calculation of these corrections possible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. Calculations are performed for the square lattice with nearest-neighbour hopping around the antiferromagnetic quantum critical point. At dopings lower than the critical point, the optical conductivity exhibits a mid-infrared bump at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, an insulating behaviour is found in the pseudogap regime when vertex corrections are neglected, and a metallic behaviour when they are taken into account. Near the critical point, the resistivity is linear in T at low temperature and becomes progressively proportional to T^2 at high doping. Some results with longer-range hopping are also presented. Keywords: Hubbard, quantum critical point, conductivity, vertex corrections
How Many Kilonovae Can Be Found in Past, Present, and Future Survey Data Sets?
NASA Astrophysics Data System (ADS)
Scolnic, D.; Kessler, R.; Brout, D.; Cowperthwaite, P. S.; Soares-Santos, M.; Annis, J.; Herner, K.; Chen, H.-Y.; Sako, M.; Doctor, Z.; Butler, R. E.; Palmese, A.; Diehl, H. T.; Frieman, J.; Holz, D. E.; Berger, E.; Chornock, R.; Villar, V. A.; Nicholl, M.; Biswas, R.; Hounsell, R.; Foley, R. J.; Metzger, J.; Rest, A.; García-Bellido, J.; Möller, A.; Nugent, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Davis, C.; Doel, P.; Drlica-Wagner, A.; Eifler, T. F.; Flaugher, B.; Fosalba, P.; Gaztanaga, E.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; James, D. J.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Neilsen, E.; Plazas, A. A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, R. C.; Tucker, D. L.; Walker, A. R.; DES Collaboration
2018-01-01
The discovery of a kilonova (KN) associated with the Advanced LIGO (aLIGO)/Virgo event GW170817 opens up new avenues of multi-messenger astrophysics. Here, using realistic simulations, we provide estimates of the number of KNe that could be found in data from past, present, and future surveys without a gravitational-wave trigger. For the simulation, we construct a spectral time-series model based on the DES-GW multi-band light curve from the single known KN event, and we use an average of BNS rates from past studies of $10^{3}\,\mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$, consistent with the one event found so far. Examining past and current data sets from transient surveys, the number of KNe we expect to find for ASAS-SN, SDSS, PS1, SNLS, DES, and SMT is between 0 and 0.3. We predict the number of detections per future survey to be 8.3 from ATLAS, 10.6 from ZTF, 5.5/69 from LSST (the Deep Drilling/Wide Fast Deep), and 16.0 from WFIRST. The maximum redshift of KNe discovered for each survey is z=0.8 for WFIRST, z=0.25 for LSST, and z=0.04 for ZTF and ATLAS. This maximum redshift for WFIRST is well beyond the sensitivity of aLIGO and some future GW missions. For the LSST survey, we also provide contamination estimates from Type Ia and core-collapse supernovae: after light curve and template-matching requirements, we estimate a background of just two events. More broadly, we stress that future transient surveys should consider how to optimize their search strategies to improve their detection efficiency and to consider similar analyses for GW follow-up programs.
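The scaling behind such expectation values can be checked with a rough back-of-the-envelope estimate: the expected number of detections goes as the volumetric rate times the comoving volume probed, the survey duration, a (1+z) time-dilation factor, and a detection efficiency. The sketch below is only that crude scaling, not the light-curve-level simulation of the paper; the efficiency, survey duration, and mean-redshift correction are placeholder assumptions.

```python
import astropy.units as u
from astropy.cosmology import Planck15

# Back-of-the-envelope scaling only (not the paper's simulation): expected
# detections ~ rate x comoving volume x duration x efficiency / (1 + <z>).

rate = 1e3 / (u.Gpc**3 * u.yr)       # BNS rate assumed in the paper
z_max = 0.25                         # e.g. the quoted maximum KN redshift for LSST
duration = 2.0 * u.yr                # hypothetical survey length
efficiency = 0.01                    # hypothetical fraction of KNe actually detected

V = Planck15.comoving_volume(z_max)  # full-sky comoving volume out to z_max
# Crude (1+z) correction using a volume-weighted mean redshift ~ 0.8 * z_max
n_expected = (rate * V * duration * efficiency / (1.0 + 0.8 * z_max)).decompose()
print(f"rough expected KN detections: {float(n_expected):.1f}")
```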
Hill, Kevin D.; Sampson, Mario R.; Li, Jennifer S.; Tunks, Robert D.; Schulman, Scott R.; Cohen-Wolkowiez, Michael
2015-01-01
Aims Sildenafil is frequently prescribed to children with single ventricle heart defects. These children have unique hepatic physiology with elevated hepatic pressures, which may alter drug pharmacokinetics. We sought to determine the impact of hepatic pressure on sildenafil pharmacokinetics in children with single ventricle heart defects. Methods A population pharmacokinetic model was developed using data from 20 children with single ventricle defects receiving single-dose intravenous sildenafil during cardiac catheterization. Nonlinear mixed-effects modeling was used for model development, and covariate effects were evaluated based on estimated precision and clinical significance. Results The analysis included a median (range) of 4 (2-5) pharmacokinetic samples per child. The final structural model was a two-compartment model for sildenafil with a one-compartment model for des-methyl-sildenafil (the active metabolite), assuming 100% conversion of sildenafil to des-methyl-sildenafil. Sildenafil clearance was unaffected by hepatic pressure (clearance = 0.62 L/h/kg); however, clearance of des-methyl-sildenafil (1.94 × (hepatic pressure/9)^(−1.33) L/h/kg) was predicted to decrease ~7-fold as hepatic pressure increased from 4 to 18 mm Hg. Predicted drug exposure was increased by ~1.5-fold in subjects with hepatic pressures ≥ 10 mm Hg versus < 10 mm Hg (median area under the curve = 792 μg*h/L versus 533 μg*h/L). Discussion Elevated hepatic pressure delays clearance of the sildenafil metabolite, des-methyl-sildenafil, and increases drug exposure. We speculate that this results from impaired biliary clearance. Hepatic pressure should be considered when prescribing sildenafil to children. These data demonstrate the importance of pharmacokinetic assessment in patients with unique cardiovascular physiology that may affect drug metabolism. PMID:26197839
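The covariate relationship quoted above can be sanity-checked directly; the short sketch below simply evaluates the reported power function of hepatic pressure at 4 and 18 mm Hg to reproduce the ~7-fold change in metabolite clearance. It is not the population pharmacokinetic model itself, only the published covariate equation.

```python
# Quick check of the covariate relationship stated in the abstract:
# CL(des-methyl-sildenafil) = 1.94 * (hepatic pressure / 9) ** (-1.33) L/h/kg.

def dms_clearance(hepatic_pressure_mmHg: float) -> float:
    return 1.94 * (hepatic_pressure_mmHg / 9.0) ** (-1.33)

cl_low, cl_high = dms_clearance(4.0), dms_clearance(18.0)
print(f"CL at  4 mmHg: {cl_low:.2f} L/h/kg")
print(f"CL at 18 mmHg: {cl_high:.2f} L/h/kg")
print(f"fold decrease: {cl_low / cl_high:.1f}")   # ~7-fold, as stated
```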
NASA Astrophysics Data System (ADS)
Faure, Bastien
The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using the "fundamental mode theory", the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows a precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it assumes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong assumption, and it cannot, at first sight, take into account neutron leakage between two different zones or out of the core. Leakage models correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation having been limited by computing resources (processor speed and memory size), the leakage term is in most cases modeled by a homogeneous and isotropic probability, the so-called "homogeneous leakage model". Driven by advances in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport codes. This work focuses on a study of some of those models, including the TIBERE model of the DRAGON-3 code developed at École Polytechnique de Montréal, as well as the heterogeneous model of the APOLLO-3 code developed at the Commissariat à l'énergie atomique et aux énergies alternatives. Studies on sodium-cooled fast reactors and light water reactors demonstrate the benefit of these models over a homogeneous leakage model. In particular, it is shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimation of the transport-equation eigenvalue k_eff. Neutron streaming between two zones of different composition was also shown to be computed more accurately.
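Schematically, and as a standard textbook statement rather than the formulation of this particular work, the homogeneous leakage model adds a $D_g B^2$ term to the multigroup fundamental-mode balance, with the buckling $B^2$ adjusted so that the lattice is exactly critical; heterogeneous models such as TIBERE instead use direction- and region-dependent leakage coefficients:

$$
\left(\Sigma_{t,g} + D_g B^{2}\right)\phi_g \;=\; \sum_{g'} \Sigma_{s,\,g'\to g}\,\phi_{g'} \;+\; \frac{\chi_g}{k_{\mathrm{eff}}}\sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'},
\qquad B^{2}\ \text{chosen so that}\ k_{\mathrm{eff}} = 1 .
$$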
Reliability of digital circuits under the influence of intrinsic noise
NASA Astrophysics Data System (ADS)
Kleeberger, V. B.; Schlichtmann, U.
2011-08-01
The continuing miniaturization of integrated circuits leads to an increase in intrinsic noise. To analyze the influence of intrinsic noise on the reliability of future digital circuits, methods are needed that rely on CAD techniques such as analog simulation rather than on rough estimates. This paper presents a new method that can analyze the influence of intrinsic noise in digital circuits for a given process technology. The amplitudes of thermal, 1/f, and shot noise are determined with a SPICE simulator. The influence of the noise on circuit reliability is then analyzed by simulation. In addition to the analysis, ways are shown in which the effects caused by noise can be taken into account during circuit design. In contrast to the state of the art, the presented method can be applied to arbitrary logic implementations and process technologies. It is also shown that previous approaches overestimate the influence of noise by up to a factor of four.
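For orientation only, the snippet below evaluates textbook RMS amplitudes for the three noise sources the method extracts from SPICE simulations (thermal, shot, and integrated 1/f noise). The resistance, bias current, bandwidth, and 1/f coefficient are placeholder values; this is not the authors' CAD flow.

```python
import numpy as np

# Illustrative only: textbook RMS amplitudes of thermal, shot, and 1/f noise
# for placeholder device parameters and bandwidth.

k_B, q = 1.380649e-23, 1.602176634e-19
T, R, I_dc, bandwidth = 300.0, 10e3, 1e-6, 1e9        # K, ohm, A, Hz
Kf, f_lo, f_hi = 1e-12, 1e3, 1e9                      # 1/f coefficient (V^2), band

v_thermal = np.sqrt(4 * k_B * T * R * bandwidth)      # Johnson-Nyquist, V rms
i_shot    = np.sqrt(2 * q * I_dc * bandwidth)         # shot noise, A rms
v_flicker = np.sqrt(Kf * np.log(f_hi / f_lo))         # integrated 1/f, V rms

print(f"thermal: {v_thermal*1e3:.3f} mV rms")
print(f"shot:    {i_shot*1e9:.3f} nA rms")
print(f"1/f:     {v_flicker*1e3:.4f} mV rms")
```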
Semi-relativistic quark-antiquark potential models
NASA Astrophysics Data System (ADS)
Basdevant, J. L.; Boukraa, S.
We study the qualitative and quantitative properties of the spectrum of a two-body hamiltonian with relativistic kinematics. We show that this kinematics leads in a natural way to the observed features of light flavour (u, d, s) spectroscopy. After having established the basic properties of the operator √(p² + m²) + V(r) in the cases of linear or logarithmic potentials, we show that, to first approximation, all q1 q2 meson states can be reproduced with a very simple universal flavour-independent potential whose parameters are directly related to basic physical quantities: the Regge slopes of light flavours and the quasi-logarithmic coupling strength of heavy quarks. We can derive equivalent effective non-relativistic hamiltonians which justify the successes of NR approaches. The main difficulties encountered, in particular in incorporating spin effects, appear to be due to the fact that, in phenomenological potential models, chiral symmetry and the ensuing Goldstone nature of the pion cannot be implemented in a natural way. Hence, such an approach can take its full predictive power only if it is based on a deeper field-theoretic level. Given the spectacular success of potential models for the spectroscopy of states made of heavy quarks (cc, bb, etc.), it is tempting to try to extend such models to states containing the light quarks u, d and s. In particular, the spectroscopy of "heavy" meson states is phenomenologically dominated by the existence of a universal confining potential, quasi-logarithmic between 0.1 fm and 1 fm, whereas that of the light states is described simply by linear Regge trajectories with a universal slope. It is therefore legitimate to ask whether these are two disjoint sectors of the hadronic world, or whether some unity exists in meson spectroscopy at the phenomenological level. However, in the presence of light, or even massless, quarks the non-relativistic models traditionally used are open to obvious criticism. It is therefore natural to test semi-relativistic models, that is, hamiltonians with relativistic kinematics, which constitute the simplest extension of non-relativistic models. We study here the qualitative and quantitative properties of the spectra of hamiltonians of this type. We show that the use of relativistic kinematics accounts in a natural way for the main features of light-flavour (u, d, s) spectroscopy. After establishing the essential properties of the operator (p² + m²)^(1/2) + V(r) for the linear and the logarithmic potentials, we show that, to first approximation, all q1 q2 states can be obtained with an extremely simple universal, flavour-independent potential. The parameters of this potential are directly related to basic physical quantities: the universal slope of the Regge trajectories of the light states on the one hand, and the coupling constant of the logarithmic potential of the heavy states on the other. We obtain equivalent effective non-relativistic hamiltonians which, in a certain sense, justify the success of non-relativistic approaches.
The main difficulties encountered, in particular when trying to incorporate fine- and hyperfine-structure corrections, seem to stem from the fact that chiral symmetry, and its consequence that the pion is essentially a Goldstone boson, cannot be incorporated simply and naturally in a hamiltonian model. Consequently, such an approach can reach its full predictive power (with a limited number of parameters) only if it is founded on deeper grounds derived from a field theory.
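For reference, the two-body hamiltonian with relativistic kinematics discussed above has the form below, with the linear and logarithmic potentials being the two cases studied; the specific parameter values (string tension, logarithmic coupling) are those fitted in the paper and are not reproduced here:

$$
H \;=\; \sqrt{\mathbf{p}^{2}+m_{1}^{2}} \;+\; \sqrt{\mathbf{p}^{2}+m_{2}^{2}} \;+\; V(r),
\qquad
V(r) \;=\; a\,r \quad\text{or}\quad V(r) \;=\; C\,\ln\!\left(r/r_{0}\right).
$$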
The Origin of Inlet Buzz in a Mach 1.7 Low Boom Inlet Design
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Weir, Lois
2014-01-01
Supersonic inlets with external compression, having a good level of performance at the critical operating point, exhibit a marked flow instability in some subcritical operation below a critical value of the capture mass flow ratio. This takes the form of severe oscillations of the shock system, commonly known as "buzz". The underlying purpose of this study is to indicate how Detached Eddy Simulation (DES) analysis of supersonic inlets will alter how we envision unsteady inlet aerodynamics, particularly inlet buzz. Presented in this paper is a discussion of the physical explanation underlying inlet buzz as indicated by DES analysis. The controlling mechanism that determines the onset of inlet buzz is the normal-shock/boundary-layer separation along the spike surface, which reduces the capture mass flow; the aerodynamic characteristics of a choked nozzle provide the feedback mechanism that sustains the buzz cycle by imposing a fixed mean corrected inlet weight flow. Comparisons between the DES analysis of the Lockheed Martin Corporation (LMCO) N+2 inlet and schlieren photographs taken during the test of the Gulfstream Large Scale Low Boom (LSLB) inlet in the NASA 8x6 ft. Supersonic Wind Tunnel (SWT) show a strong similarity in both turbulent flow field structure and shock wave formation during the buzz cycle. This demonstrates the value of DES analysis for the design and understanding of supersonic inlets.
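As background to the feedback mechanism described above (these are standard compressible-flow relations, not equations taken from the paper), the corrected weight flow and the choking condition that fixes it can be written as

$$
\dot{W}_{c} \;=\; \frac{\dot{W}\,\sqrt{T_{t}/T_{\mathrm{ref}}}}{P_{t}/P_{\mathrm{ref}}},
\qquad
\frac{\dot{W}\,\sqrt{T_{t}}}{A^{*}P_{t}} \;=\; \sqrt{\frac{\gamma}{R}}\left(\frac{2}{\gamma+1}\right)^{\frac{\gamma+1}{2(\gamma-1)}}
\quad\text{(choked throat)},
$$

so a choked exit nozzle holds the mean corrected flow through the duct essentially constant, forcing the shock system to oscillate when separation reduces the captured flow.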
NASA Astrophysics Data System (ADS)
Grosdidier, Yves
2000-12-01
The spectra of Population I Wolf-Rayet (WR) stars show broad emission lines produced by hot stellar winds in rapid expansion (terminal velocities of the order of 1000 km/s). The standard model of WR stars qualitatively reproduces the general profile and intensity of the observed lines. However, intensive medium-resolution spectroscopy of these stars reveals stochastic variations within the lines (moving, accelerating sub-peaks on time scales of roughly 10-100 min). These variations are not understood within the standard model and suggest an intrinsic fragmentation of the winds. This doctoral thesis presents a study of the variability of the emission lines of Population II WR stars; the question of the impact of a fragmented WR wind on the circumstellar medium is also studied: 1) from intensive spectroscopic monitoring of the CIII λ5696 and CIV λ5801/12 lines, we quantitatively analyze (via Temporal Variance Spectra) the winds of 5 central stars of Galactic planetary nebulae (PNe) exhibiting the WR phenomenon; 2) we study the impact of wind fragmentation for two Population I WR stars on the circumstellar medium via: i) IR imaging (NICMOS2/HST) of WR 137, and ii) H-alpha imaging (WFPC2/HST) and H-alpha Fabry-Perot interferometry (SIS-CFHT) of the nebula M 1-67 (central star: WR 124). The main results are as follows. POPULATION II WR WINDS: (1) We demonstrate the intrinsic spectroscopic variability of the winds of the PN nuclei HD 826 ([WC 8]), BD +30 3639 ([WC 9]) and LSS 3169 ([WC 9]), observed during 22, 15 and 1 nights respectively, and report indications of variability for the [WC 9] nuclei HD 167362 and He 2-142. The variability of HD 826 and BD +30 3639 sometimes appears more sustained ("bursts" maintained over several nights); (2) The kinematics of the sub-peaks of BD +30 3639 suggest a transient anisotropy in the distribution of fragments within the wind; (3) The WR phenomenon appears to be purely atmospheric: the sub-peak kinematics, the amplitudes and characteristic time scales of the variations, and the observed accelerations are similar for the two populations. However, for HD 826 a maximum acceleration of about 70 m/s2 is detected, significantly larger than the values reported for other Population I and II WR stars (about 15 m/s2); the small radius of HD 826 is probably the cause; (4) As for Population I WR stars, large parameters (β greater than or equal to 3-10) are required to fit the observed accelerations with a beta-type velocity law. The beta law systematically underestimates the velocity gradients within the CIII λ5696 line-formation region; (5) Since Population II WR winds are fragmented, estimating present-day mass-loss rates with methods that assume homogeneous atmospheres leads to an overestimation of i) the mass-loss rates themselves, and ii) the initial stellar masses before the stars enter the WR phase. IMPACT OF THE WINDS: (1) At periastron, dust is detected in the environment of the WC+OB binary WR 137.
Dust formation is either facilitated or triggered by the collision of the two hot winds; the key role of wind fragmentation (providing additional localized compression of the plasma) is suggested. (2) The nebula M 1-67 displays a non-negligible interaction with the interstellar medium (ISM), in the form of a bow shock. The density and velocity fields are strongly perturbed. These perturbations are related, on the one hand, to the history of the winds from WR 124 during its evolution and, on the other hand, to the interaction with the ISM. The structure functions of the density and velocity fields of M 1-67 show no evidence of turbulence within the nebula. (3) 2D hydrodynamic simulations performed with the ZEUS-3D code show that a dense fragment formed near the hydrostatic stellar core probably cannot reach nebular distances without invoking radiative shielding and confinement effects.
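Point (4) above refers to the beta-type velocity law; the sketch below evaluates it with placeholder values (terminal velocity ~1000 km/s, an arbitrary stellar radius, an evaluation radius of 10 R*) simply to show how the inferred local acceleration v dv/dr depends on beta. It is an illustration of the functional form, not a fit to any of the stars discussed.

```python
# Beta velocity law illustration with placeholder parameters:
# v(r) = v_inf * (1 - R_star / r) ** beta, acceleration a = v * dv/dr.

v_inf = 1.0e6      # terminal velocity, m/s (~1000 km/s)
R_star = 7.0e8     # stellar radius, m (placeholder)
r = 10.0 * R_star  # representative line-formation radius (placeholder)

for beta in (1.0, 3.0, 5.0, 10.0):
    w = 1.0 - R_star / r
    v = v_inf * w ** beta
    dv_dr = v_inf * beta * w ** (beta - 1.0) * R_star / r**2
    print(f"beta = {beta:4.1f}:  v = {v/1e3:6.1f} km/s,  a = v dv/dr = {v*dv_dr:5.1f} m/s^2")
```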
Masked areas in shear peak statistics. A forward modeling approach
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-03-09
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
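The forward-modeling idea can be illustrated with a toy peak-count pipeline: the same mask and the same aperture-like smoothing are applied to both a "data" map and a "simulated" map before peaks are counted, so the mask enters the prediction rather than being corrected out of the data. Everything below (Gaussian toy maps, smoothing scale, threshold) is an illustrative stand-in, not the aperture-mass reconstruction of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Toy forward-modeling illustration: identical masking and smoothing are
# applied to data-like and simulated maps, then peaks are counted in both.

rng = np.random.default_rng(0)

def peak_counts(kappa, mask, smoothing_pix=2.0, threshold=2.0):
    """Smooth a masked convergence map and count peaks above a S/N threshold."""
    smoothed = gaussian_filter(kappa * mask, smoothing_pix)
    norm = gaussian_filter(mask.astype(float), smoothing_pix)     # mask response
    snr = np.where(norm > 0.5, smoothed / np.maximum(norm, 1e-6), 0.0)
    snr /= snr[norm > 0.5].std()
    is_peak = (snr == maximum_filter(snr, size=5)) & (snr > threshold) & (norm > 0.5)
    return int(is_peak.sum())

mask = rng.random((512, 512)) > 0.3          # ~30% of pixels masked
data_map = gaussian_filter(rng.normal(size=(512, 512)), 3.0)
sim_map  = gaussian_filter(rng.normal(size=(512, 512)), 3.0)

print("peaks in data-like map:", peak_counts(data_map, mask))
print("peaks in simulated map:", peak_counts(sim_map, mask))
```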
NASA Astrophysics Data System (ADS)
Ollivier, J.; Farhi, E.; Ferrand, M.; Benoit, M.
2005-11-01
The thematic school "Neutrons and Biology" was held from 22 to 26 May 2004 in Praz/Arly (Haute-Savoie, France), within the framework of the 12th Journées de la Diffusion Neutronique of the Société Française de Neutronique. The school was organized with financial support from the CNRS (continuing education), the Laboratoire Léon Brillouin (CEA Saclay), the Rhône-Alpes region, the Haute-Savoie departmental council and the Université Joseph Fourier of Grenoble. About fifty participants, including some twenty speakers, contributed greatly to the success of the school. Scientifically, the school was organized into seven major thematic sessions: - a first introductory session was devoted to an overall review of the biophysical methods with strong impact for studying the structure and dynamics of biological macromolecules (J. Parello). Particular emphasis was placed on describing neutrons as an important component of the panoply of techniques commonly used in molecular biophysics (J. Schweitzer). - a session dedicated to dynamic measurements by incoherent neutron scattering was developed at length. From molecular vibrations and relaxations in proteins (J.M. Zanotti) to the global dynamics of proteins (G. Zaccaï) and the dynamics of hydration water (F. Gabel), numerous examples illustrated the relevance of neutrons for studying the functional dynamics of proteins on the picosecond-nanosecond time scale. The analysis of inelastic neutron scattering data cannot do without analytical theoretical modeling of the dynamic properties of biomolecules (D. Bicout). - a large place was reserved for structural studies in biology. This third session gathered contributions on small-angle neutron scattering for structural studies in solution (D. Lairez), neutron reflectometry for studying membrane systems or proteins interacting with membranes (G. Fragneto), and fiber diffraction applied to the study of DNA (T. Forsyth). - molecular dynamics simulations constitute a unique theoretical method for studying, at the atomic level, the internal motions of biological macromolecules, whether at equilibrium (G. Kneller, S. Crouzy) or out of equilibrium (B. Gilquin). Molecular dynamics trajectories now extend to hundreds of nanoseconds and can therefore be used by certain programs to compute the experimental observables provided by neutron scattering (G. Kneller, T. Hinsen). - the opening of neutron techniques to instrumentation allowing, on the one hand, out-of-equilibrium states to be approached through kinetic studies coupled to rapid mixing for phase-growth (I. Grillo) or protein-folding studies, and, on the other hand, extreme experimental conditions (high pressures, M. Plazanet), seemed to us to be promising developments. In this respect, a review of protein folding (V. Forge) clarified the importance of many techniques in the field (intrinsic fluorescence, dichroism, infrared, NMR) while giving a glimpse of the interest of neutron studies.
- alongside the purely "neutron" sessions, it seemed important to us to present techniques and methods often recognized as highly complementary to neutrons, with a "structural studies" component and a "dynamical studies" component. On the dynamics side, NMR (M. Blackledge) was positioned as a technique for studying molecular flexibility on slower time scales (ms). On the structural side, X-ray bio-crystallography applied to the study of viral structures (P. Gouet) highlighted complementary aspects of X-rays and neutrons and underlined the respective advantages and drawbacks of these techniques. - the thematic school ended with a session held jointly with the Journées Rossat-Mignod, during which J. Helliwell gave a comparative X-ray/neutron review of recent developments in protein bio-crystallography, followed by two presentations on the incoherent and coherent dynamics of membrane systems (F. Natali, M. Rheinstadter). Unlike in previous years, the content of this volume is intended to reflect the applications of neutrons in biology and molecular biophysics, referring to specific scientific work, rather than being a collection of lectures that, however pedagogical, are sometimes too far removed from experiment. We hope that this choice will satisfy the reader and encourage new biologists to use neutrons as soon as possible for their systems of interest. Happy reading and good experiments!
NASA Astrophysics Data System (ADS)
Bossavit, A.
1993-03-01
Macroscopic modelling of superconductors demands a substitution of some nonlinear behavior law for Ohm's law. For this, a version of Bean's "critical state" model, derived from the setting of a convex functional of the current density field, valid in dimension 3 without any previous assumption about the direction of currents, is proposed. It is shown how two standard three-dimensional finite element methods ("h-formulation" and "e-formulation"), once fitted with this model, can deal with situations where superconductors are present.
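For context (a standard statement of the critical-state law, not the specific convex functional constructed in the paper), Bean's model replaces Ohm's law by the multivalued constitutive relation

$$
|\mathbf{J}| \le J_c, \qquad
\mathbf{E} = \mathbf{0}\ \text{ where } |\mathbf{J}| < J_c, \qquad
\mathbf{E} \parallel \mathbf{J},\ \ \mathbf{E}\cdot\mathbf{J} \ge 0\ \text{ where } |\mathbf{J}| = J_c,
$$

which can be written compactly as $\mathbf{E} \in \partial\varphi(\mathbf{J})$, with $\varphi$ the indicator function of the convex set $\{|\mathbf{J}| \le J_c\}$; this is the kind of convex formulation the abstract alludes to.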
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The goal of this study is to develop a tool for estimating the level of risk of vigour loss in the forest stands of the Gounamitz region in northwestern New Brunswick, using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delimited. Within these, the level of risk of vigour loss was determined for trees with a DBH greater than or equal to 9 cm. To characterize the risk of vigour loss, the spatial positions of the trees were recorded with a GPS, taking into account stem defects. For this work, the vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables. The level of risk of vigour loss obtained per tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands in the study region, a supervised classification of the images was carried out using the maximum likelihood algorithm. The level of risk of vigour loss per tree type was then estimated with neural networks, using a multilayer perceptron: a neural network model composed of 11 neurons in the input layer (corresponding to the independent variables), 35 neurons in the hidden layer, and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix from which quantitative estimation measures are obtained, notably an overall classification accuracy of 91.7% for predicting the risk of vigour loss in the softwood stand and 89.7% for the hardwood stand. Evaluation of the neural network performance gives an overall MSE (mean square error) of 0.04 and an overall RMSE (root mean square error) of 0.20 for the hardwood stand. For the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted level of risk of vigour loss was compared with the reference risk of vigour loss. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
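The network architecture described (11 inputs, one hidden layer of 35 neurons, 4 output classes) can be sketched in a few lines. The snippet below uses scikit-learn's MLPClassifier on random stand-in data rather than the actual inventory and airborne-image features, so the numbers it prints are meaningless except as a demonstration of the setup.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.neural_network import MLPClassifier

# Sketch of the described architecture on random stand-in data: 11 inputs,
# one hidden layer of 35 neurons, 4 vigour-loss risk classes.

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 11))            # 11 independent variables (placeholder)
y = rng.integers(0, 4, size=500)          # 4 risk levels (placeholder labels)

model = MLPClassifier(hidden_layer_sizes=(35,), max_iter=2000, random_state=1)
model.fit(X, y)

pred = model.predict(X)
print("overall accuracy:", (pred == y).mean())
print(confusion_matrix(y, pred))
```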
Revankar, Nikhil; Ward, Alexandra J; Pelligra, Christopher G; Kongnakorn, Thitima; Fan, Weihong; LaPensee, Kenneth T
2014-10-01
The economic implications from the US Medicare perspective of adopting alternative treatment strategies for acute bacterial skin and skin structure infections (ABSSSIs) are substantial. The objective of this study is to describe a modeling framework that explores the impact of decisions related to both the location of care and switching to different antibiotics at discharge. A discrete event simulation (DES) was developed to model the treatment pathway of each patient through various locations (emergency department [ED], inpatient, and outpatient) and the treatments prescribed (empiric antibiotic, switching to a different antibiotic at discharge, or a second antibiotic). Costs are reported in 2012 USD. The mean number of days on antibiotic in a cohort assigned to a full course of vancomycin was 11.2 days, with 64% of the treatment course being administered in the outpatient setting. Mean total costs per patient were $8671, with inpatient care accounting for 58% of the costs accrued. The majority of outpatient costs were associated with parenteral administration rather than drug acquisition or monitoring. Scenarios modifying the treatment pathway to increase the proportion of patients receiving the first dose in the ED, and then managing them in the outpatient setting or prescribing an oral antibiotic at discharge to avoid the cost associated with administering parenteral therapy, therefore have a major impact and lower the typical cost per patient by 11-20%. Since vancomycin is commonly used as empiric therapy in clinical practice, based on these analyses, a shift in treatment practice could result in substantial savings from the Medicare perspective. The choice of antibiotic and location of care influence the costs and resource use associated with the management of ABSSSIs. The DES framework presented here can provide insight into the potential economic implications of decisions that modify the treatment pathway.
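As an illustration of the modeling framework described above, the sketch below runs a minimal discrete event simulation of a patient pathway (ED dose, inpatient stay, then either outpatient parenteral therapy or an oral switch at discharge). The structure loosely mirrors the description in the abstract, but every cost, duration, and probability is a placeholder assumption, not a published model input; the simpy package is used simply as a convenient DES engine.

```python
import random
import simpy

# Minimal DES sketch of an ABSSSI-like patient pathway; all numbers are
# illustrative placeholders, not the published model inputs.

COST_PER_INPATIENT_DAY = 2000.0
COST_PER_OUTPATIENT_IV_DAY = 150.0
COST_ORAL_PER_DAY = 10.0

def patient(env, results, p_discharge_oral=0.3):
    cost = 0.0
    yield env.timeout(0.2)                       # ED visit: first antibiotic dose
    cost += 500.0
    los = random.uniform(2, 6)                   # inpatient stay on empiric IV therapy
    yield env.timeout(los)
    cost += los * COST_PER_INPATIENT_DAY
    remaining = max(0.0, 11.2 - los)             # complete ~11 days of total therapy
    if random.random() < p_discharge_oral:       # switch to oral at discharge
        cost += remaining * COST_ORAL_PER_DAY
    else:                                        # continue parenteral therapy as outpatient
        cost += remaining * COST_PER_OUTPATIENT_IV_DAY
    yield env.timeout(remaining)
    results.append(cost)

random.seed(0)
results = []
env = simpy.Environment()
for _ in range(1000):
    env.process(patient(env, results))
env.run()
print(f"mean cost per patient: ${sum(results)/len(results):,.0f}")
```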