Sample records for key iter relevant

  1. Final case for a stainless steel diagnostic first wall on ITER

    NASA Astrophysics Data System (ADS)

    Pitts, R. A.; Bazylev, B.; Linke, J.; Landman, I.; Lehnen, M.; Loesser, D.; Loewenhoff, Th.; Merola, M.; Roccella, R.; Saibene, G.; Smith, M.; Udintsev, V. S.

    2015-08-01

    In 2010 the ITER Organization (IO) proposed to eliminate the beryllium armour on the plasma-facing surface of the diagnostic port plugs and instead to use bare stainless steel (SS), simplifying the design and providing significant cost reduction. Transport simulations at the IO confirmed that charge-exchange sputtering of the SS surfaces would not affect burning plasma operation through core impurity contamination, but a second key issue is the potential melt damage/material loss inflicted by the intense photon radiation flashes expected at the thermal quench of disruptions mitigated by massive gas injection. This paper addresses this second issue through a combination of ITER relevant experimental heat load tests and qualitative theoretical arguments of melt layer stability. It demonstrates that SS can be employed as material for the port plug plasma-facing surface and this has now been adopted into the ITER baseline.

  2. Traditional Tales and Imaginary Contexts in Primary Design and Technology: A Case Study

    ERIC Educational Resources Information Center

    McLain, Matt; McLain, Mel; Tsai, Jess; Martin, Mike; Bell, Dawne; Wooff, David

    2017-01-01

    Working with contexts is a key component to design and technology activity and education. The most recent iteration of the national curriculum programme of study for design and technology, in England, sets out that children between the ages of 5 and 7 "should work in a range of relevant contexts" (DfE, 2013, p.193); suggested contexts…

  3. Developing DIII-D To Prepare For ITER And The Path To Fusion Energy

    NASA Astrophysics Data System (ADS)

    Buttery, Richard; Hill, David; Solomon, Wayne; Guo, Houyang; DIII-D Team

    2017-10-01

    DIII-D pursues the advancement of fusion energy through scientific understanding and discovery of solutions. Research targets two key goals. First, to prepare for ITER we must resolve how to use its flexible control tools to rapidly reach Q = 10, and develop the scientific basis to interpret results from ITER for fusion projection. Second, we must determine how to sustain a high performance fusion core in steady state conditions, with minimal actuators and a plasma exhaust solution. DIII-D will target these missions with: (i) increased electron heating and balanced-torque neutral beams to simulate burning plasma conditions; (ii) new 3D coil arrays to resolve control of transients; (iii) off-axis current drive to study physics in steady state regimes; (iv) divertor configurations to promote detachment with low upstream density; and (v) a reactor-relevant wall to qualify materials and resolve physics in reactor-like conditions. With new diagnostics and leading-edge simulation, this will position the US for success in ITER and provide unique knowledge to accelerate the approach to fusion energy. Supported by the US DOE under DE-FC02-04ER54698.

  4. Mobile sociology. 2000.

    PubMed

    Urry, John

    2010-01-01

    This article seeks to develop a manifesto for a sociology concerned with the diverse mobilities of peoples, objects, images, information, and wastes; and of the complex interdependencies between, and social consequences of, such diverse mobilities. A number of key concepts relevant for such a sociology are elaborated: 'gamekeeping', networks, fluids, scapes, flows, complexity and iteration. The article concludes by suggesting that a 'global civil society' might constitute the social base of a sociology of mobilities as we move into the twenty-first century.

  5. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead–Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs to be tested in ITER as part of the TBM program. The Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to the optimization of the first wall coolant channel design and the size of the breeder unit module, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  6. Experiences with a generator tool for building clinical application modules.

    PubMed

    Kuhn, K A; Lenz, R; Elstner, T; Siegele, H; Moll, R

    2003-01-01

    To elaborate the main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. During a deployment phase of 3 years in a 1200-bed university hospital, in which the system underwent significant improvements, the system's functionality and its software design were analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to that of other commercial products; its components are embedded in a vendor-specific application framework, and standard interfaces are used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool-generated applications are workflow-enabled and embedded into the overall database schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. More work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to health care processes and changing requirements.

  7. Teachers Supporting Teachers in Urban Schools: What Iterative Research Designs Can Teach Us.

    PubMed

    Shernoff, Elisa S; Maríñez-Lora, Ane M; Frazier, Stacy L; Jakobsons, Lara J; Atkins, Marc S; Bonner, Deborah

    2011-12-01

    Despite alarming rates and negative consequences associated with urban teacher attrition, mentoring programs often fail to target the strongest predictors of attrition: effectiveness around classroom management and engaging learners; and connectedness to colleagues. Using a mixed-method iterative development framework, we highlight the process of developing and evaluating the feasibility of a multi-component professional development model for urban early career teachers. The model includes linking novices with peer-nominated key opinion leader teachers and an external coach who work together to (1) provide intensive support in evidence-based practices for classroom management and engaging learners, and (2) connect new teachers with their larger network of colleagues. Fidelity measures and focus group data illustrated varying attendance rates throughout the school year and that although seminars and professional learning communities were delivered as intended, adaptations to enhance the relevance, authenticity, level, and type of instrumental support were needed. Implications for science and practice are discussed.

  8. Exploring emerging learning needs: a UK-wide consultation on environmental sustainability learning objectives for medical education.

    PubMed

    Walpole, Sarah C; Mortimer, Frances; Inman, Alice; Braithwaite, Isobel; Thompson, Trevor

    2015-12-24

    This study aimed to engage wide-ranging stakeholders and develop consensus learning objectives for undergraduate and postgraduate medical education. A UK-wide consultation garnered the opinions of healthcare students, healthcare educators and other key stakeholders about environmental sustainability in medical education. The policy Delphi approach informed this study. Draft learning objectives were revised iteratively during three rounds of consultation: online questionnaire or telephone interview, face-to-face seminar and email consultation. Twelve draft learning objectives were developed based on review of relevant literature. In round one, 64 participants' median ratings of the learning objectives were 3.5 for relevance and 3.0 for feasibility on a Likert scale of one to four. Revisions were proposed, e.g. to highlight relevance to public health and professionalism. Thirty-three participants attended round two. Conflicting opinions were explored. Added content areas included health benefits of sustainable behaviours. To enhance usability, restructuring provided three overarching learning objectives, each with subsidiary points. All participants from rounds one and two were contacted in round three, and no further edits were required. This is the first attempt to define consensus learning objectives for medical students about environmental sustainability. Allowing a wide range of stakeholders to comment on multiple iterations of the document stimulated their engagement with the issues raised and ownership of the resulting learning objectives.

  9. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

    The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The realization of the NBTF and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signing of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurements of several MITICA components are at a well-advanced stage.

  10. Technological Distractions (Part 2): A Summary of Approaches to Manage Clinical Alarms With Intent to Reduce Alarm Fatigue.

    PubMed

    Winters, Bradford D; Cvach, Maria M; Bonafide, Christopher P; Hu, Xiao; Konkani, Avinash; O'Connor, Michael F; Rothschild, Jeffrey M; Selby, Nicholas M; Pelter, Michele M; McLean, Barbara; Kane-Gill, Sandra L

    2018-01-01

    Alarm fatigue is a widely recognized safety and quality problem where exposure to high rates of clinical alarms results in desensitization, leading to dismissal of or slowed response to alarms. Nonactionable alarms are thought to be especially problematic. Despite these concerns, the number of clinical alarm signals has been increasing as an ever-increasing number of medical technologies are added to the clinical care environment. PubMed, SCOPUS, Embase, and CINAHL. We performed a systematic review of the literature focused on clinical alarms. We asked a primary key question: "What interventions have been attempted and have succeeded in reducing alarm fatigue?" and three secondary key questions: "What are the negative effects on patients/families?", "What are the balancing outcomes (unintended consequences of interventions)?", and "What human factors approaches apply to making an effective alarm?" Articles relevant to the key questions were selected through an iterative review process, and relevant data were extracted using a standardized tool. We found 62 articles that had relevant and usable data for at least one key question. We found that no study used or developed a clear definition of "alarm fatigue." For our primary key question, the relevant studies focused on three main areas: quality improvement/bundled activities; intervention comparisons; and analysis of algorithm-based false and total alarm suppression. All sought to reduce the number of total alarms and/or false alarms to improve the positive predictive value. Most studies were successful to varying degrees. None measured alarm fatigue directly. There is no agreed-upon valid metric for alarm fatigue, and the current methods are mostly indirect. Assuming that reducing the number of alarms and/or improving positive predictive value can reduce alarm fatigue, there are promising avenues to address this patient safety and quality problem. Further investment is warranted not only in interventions that may reduce alarm fatigue but also in defining how best to measure it.

  11. Teachers Supporting Teachers in Urban Schools: What Iterative Research Designs Can Teach Us

    PubMed Central

    Shernoff, Elisa S.; Maríñez-Lora, Ane M.; Frazier, Stacy L.; Jakobsons, Lara J.; Atkins, Marc S.; Bonner, Deborah

    2012-01-01

    Despite alarming rates and negative consequences associated with urban teacher attrition, mentoring programs often fail to target the strongest predictors of attrition: effectiveness around classroom management and engaging learners; and connectedness to colleagues. Using a mixed-method iterative development framework, we highlight the process of developing and evaluating the feasibility of a multi-component professional development model for urban early career teachers. The model includes linking novices with peer-nominated key opinion leader teachers and an external coach who work together to (1) provide intensive support in evidence-based practices for classroom management and engaging learners, and (2) connect new teachers with their larger network of colleagues. Fidelity measures and focus group data illustrated varying attendance rates throughout the school year and that although seminars and professional learning communities were delivered as intended, adaptations to enhance the relevance, authenticity, level, and type of instrumental support were needed. Implications for science and practice are discussed. PMID:23275682

  12. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.

  13. Closed form of the Baker-Campbell-Hausdorff formula for the generators of semisimple complex Lie algebras

    NASA Astrophysics Data System (ADS)

    Matone, Marco

    2016-11-01

    Recently, an algorithm for the Baker-Campbell-Hausdorff (BCH) formula has been introduced that extends recent results of Van Brunt and Visser, leading to new closed forms of the BCH formula. More recently, it has been shown that there are 13 types of such commutator algebras. We show, by providing the explicit solutions, that these include the generators of the semisimple complex Lie algebras. More precisely, for any pair X, Y of the Cartan-Weyl basis, we find W, a linear combination of X and Y, such that exp(X) exp(Y) = exp(W). The derivation of such closed forms follows, in part, from the above-mentioned recent results. The complete derivation is provided by considering the structure of the root system. Furthermore, if X, Y, and Z are three generators of the Cartan-Weyl basis, we find, for a wide class of cases, W, a linear combination of X, Y and Z, such that exp(X) exp(Y) exp(Z) = exp(W). It turns out that the relevant commutator algebras are of type 1c-i, type 4 and type 5. A key result concerns an iterative application of the algorithm, leading to relevant extensions of the cases admitting closed forms of the BCH formula. Here we provide the main steps of such an iteration, which will be developed in a forthcoming paper.
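
    For orientation, the simplest closed form of this kind is the textbook special case in which [X, Y] commutes with both X and Y; the results summarized above go further and reduce W to a linear combination of the generators themselves. A minimal illustration (standard special case, not taken from the paper):

    ```latex
    % Standard BCH special case: if [X,Y] commutes with both X and Y, then
    \[
      e^{X}\, e^{Y} \;=\; \exp\!\Big( X + Y + \tfrac{1}{2}[X,Y] \Big).
    \]
    % The closed forms discussed in the abstract are stronger statements of the type
    \[
      e^{X}\, e^{Y} \;=\; e^{W}, \qquad W \;=\; \alpha X + \beta Y ,
    \]
    % with coefficients alpha, beta fixed by the root-system structure of the algebra.
    ```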

  14. Enabling Incremental Iterative Development at Scale: Quality Attribute Refinement and Allocation in Practice

    DTIC Science & Technology

    2015-06-01

    abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010...relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with...Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional "Big Up-Front

  15. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending one pixel into 2×2 pixels, and each element in M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying the public-key encryption algorithm, the key distribution is facilitated; moreover, compared with image hiding methods based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.
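
    A minimal sketch of the superimposition/extraction step described above, assuming real-valued encodings of the two masks and an illustrative coefficient alpha; the CIFT phase retrieval and the RSA encryption of M2 are omitted:

    ```python
    import numpy as np

    def embed(host, m1, m2p, alpha=0.05):
        """Enlarge the host 2x2 and superimpose the weighted mask elements.
        host: (h, w) grayscale image; m1, m2p: (h, w) real-valued mask encodings."""
        stego = np.kron(host.astype(float), np.ones((2, 2)))  # each pixel -> 2x2 block
        # add the weighted element to one sub-pixel and subtract it from another,
        # so the pair can later be differenced without knowing the host
        stego[0::2, 0::2] += alpha * m1
        stego[1::2, 1::2] -= alpha * m1
        stego[0::2, 1::2] += alpha * m2p
        stego[1::2, 0::2] -= alpha * m2p
        return stego

    def extract(stego, alpha=0.05):
        """Recover the mask encodings from the stego-image alone."""
        m1 = (stego[0::2, 0::2] - stego[1::2, 1::2]) / (2 * alpha)
        m2p = (stego[0::2, 1::2] - stego[1::2, 0::2]) / (2 * alpha)
        return m1, m2p
    ```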

  16. Development of a Targeted Smoking Relapse-Prevention Intervention for Cancer Patients.

    PubMed

    Meltzer, Lauren R; Meade, Cathy D; Diaz, Diana B; Carrington, Monica S; Brandon, Thomas H; Jacobsen, Paul B; McCaffrey, Judith C; Haura, Eric B; Simmons, Vani N

    2018-04-01

    We describe the series of iterative steps used to develop a smoking relapse-prevention intervention customized to the needs of cancer patients. Informed by relevant literature and a series of preliminary studies, an educational tool (DVD) was developed to target the unique smoking relapse risk factors among cancer patients. Learner verification interviews were conducted with 10 cancer patients who recently quit smoking to elicit feedback and inform the development of the DVD. The DVD was then refined using iterative processes and feedback from the learner verification interviews. Major changes focused on visual appeal, and the inclusion of additional testimonials and graphics to increase comprehension of key points and further emphasize the message that the patient is in control of their ability to maintain their smoking abstinence. Together, these steps resulted in the creation of a DVD titled Surviving Smokefree®, which represents the first smoking relapse-prevention intervention for cancer patients. If found effective, the Surviving Smokefree® DVD is an easily disseminable and low-cost portable intervention which can assist cancer patients in maintaining smoking abstinence.

  17. Plasma facing components: a conceptual design strategy for the first wall in FAST tokamak

    NASA Astrophysics Data System (ADS)

    Labate, C.; Di Gironimo, G.; Renno, F.

    2015-09-01

    Satellite tokamaks are conceived with the main purpose of developing new or alternative ITER- and DEMO-relevant technologies, able to contribute to resolving the pending issues of plasma operation. In particular, a high criticality is associated with the design of the plasma facing components, i.e. the first wall (FW) and the divertor, for physical, topological and thermo-structural reasons. It is in this context that the design of the FW of the FAST fusion device, whose operational range is close to that of ITER, takes place. In line with the mission of experimental satellites, the FW design strategy presented in this paper relies on a series of innovative design choices and proposals, with particular attention to the typical key points of plasma facing component design. Such an approach, taking into account the physical constraints and functional requirements to be fulfilled, marks a clear borderline with the FW solution adopted in ITER in terms of basic ideas, manufacturing aspects, remote maintenance procedure, manifold management, cooling cycle and support system configuration.

  18. Recent progress of the JT-60SA project

    NASA Astrophysics Data System (ADS)

    Shirai, H.; Barabaschi, P.; Kamada, Y.; the JT-60SA Team

    2017-10-01

    The JT-60SA project has been implemented for the purpose of an early realization of fusion energy. With a powerful and versatile NBI and ECRF system, a flexible plasma-shaping capability, and various kinds of in-vessel coils to suppress MHD instabilities, JT-60SA plays an essential role in addressing the key physics and engineering issues of ITER and DEMO. It aims to achieve the long sustainment of high integrated performance plasmas under the high βN condition required in DEMO. The fabrication and installation of components and systems of JT-60SA procured by the EU and Japan are steadily progressing. The installation of toroidal field (TF) coils around the vacuum vessel started in December 2016. The commissioning of the cryogenic system and power supply system has been implemented in the Naka site, and JT-60SA will start operation in 2019. The JT-60SA research plan covers a wide area of issues in ITER and DEMO relevant operation regimes, and has been regularly updated on the basis of intensive discussion among European and Japanese researchers.

  19. Work function measurements during plasma exposition at conditions relevant in negative ion sources for the ITER neutral beam injection.

    PubMed

    Gutser, R; Wimmer, C; Fantz, U

    2011-02-01

    Cesium-seeded sources for surface-generated negative hydrogen ions are major components of the neutral beam injection systems of future large-scale fusion experiments such as ITER. The stability and the delivered current density depend strongly on the work function during the vacuum and plasma phases of the ion source, making it one of the most important quantities affecting source performance. A modified photocurrent method was developed to measure the temporal behavior of the work function during and after cesium evaporation. The investigation of cesium-exposed Mo and MoLa samples under surface and plasma conditions relevant to ITER's negative-hydrogen-ion-based neutral beam injection showed the influence of impurities, which results in a fast degradation when the plasma exposure or the cesium flux onto the sample is stopped. A minimum work function close to that of bulk cesium was obtained under plasma exposure, while a significantly higher work function was observed under ITER-like vacuum conditions.

  20. Improving Retrieval Performance by Relevance Feedback.

    ERIC Educational Resources Information Center

    Salton, Gerard; Buckley, Chris

    1990-01-01

    Briefly describes the principal relevance feedback methods that have been introduced over the years and evaluates the effectiveness of the methods in producing improved query formulations. Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback. (24 references) (Author/CLB)
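
    As a rough illustration of the iterative query-reformulation idea (a Rocchio-style reweighting over term vectors; the weights below are assumed defaults, not the paper's prescriptions):

    ```python
    import numpy as np

    def feedback_query(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Rocchio-style relevance feedback: move the query toward judged-relevant
        documents and away from judged-nonrelevant ones (term-weight vectors)."""
        q_new = alpha * np.asarray(query, dtype=float)
        if len(relevant):
            q_new += beta * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q_new -= gamma * np.mean(nonrelevant, axis=0)
        return np.clip(q_new, 0.0, None)  # negative term weights are usually dropped
    ```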

  1. Comparing direct and iterative equation solvers in a large structural analysis software system

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
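
    As a concrete illustration of one of the iterative solvers named above, here is a minimal Jacobi (diagonal) preconditioned conjugate gradient loop; it is a generic textbook formulation, not the code from the structural analysis system:

    ```python
    import numpy as np

    def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
        """Preconditioned conjugate gradient for a symmetric positive definite A,
        with the Jacobi preconditioner M = diag(A)."""
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        m_inv = 1.0 / np.diag(A)          # apply M^-1 by elementwise scaling
        z = m_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            step = rz / (p @ Ap)
            x += step * p
            r -= step * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = m_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p     # update search direction
            rz = rz_new
        return x
    ```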

  2. EDITORIAL: ECRH physics and technology in ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.

    2008-05-01

    It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: EC wave physics: current understanding and extrapolation to ITER; application of EC waves to confinement and stability studies, including active control techniques for ITER; transmission systems/launchers: state of the art and ITER relevant techniques; gyrotron development towards ITER needs; and system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O. Sauter: EC wave physics and applications, M. Thumm: Source and transmission line development, and S. Cirant: ITER specific system designs). These summaries are included in this issue to give a more complete view of the technical meeting. Finally, it is appropriate to mention the future of this meeting series. With the ratification of the ITER agreement and the formation of the ITER International Organization, it was recognized that meetings conducted by outside agencies with an exclusive focus on ITER would be somewhat unusual. However, the participants at this meeting felt that the gathering of international experts with diverse specialities within EC wave physics and technology to focus on using EC waves in future fusion devices like ITER was extremely valuable. It was therefore recommended that this series of meetings continue, but with the broader focus on the application of EC waves to steady-state and burning plasma experiments including demonstration power plants. As the papers in this special issue show, the EC community is already taking seriously the challenges of applying EC waves to fusion devices with high neutron fluence and continuous operation at high reliability.

  3. Security of quantum key distribution with iterative sifting

    NASA Astrophysics Data System (ADS)

    Tamaki, Kiyoshi; Lo, Hoi-Kwong; Mizutani, Akihiro; Kato, Go; Lim, Charles Ci Wen; Azuma, Koji; Curty, Marcos

    2018-01-01

    Several quantum key distribution (QKD) protocols employ iterative sifting. After each quantum transmission round, Alice and Bob disclose part of their setting information (including their basis choices) for the detected signals. This quantum phase then ends when the basis-dependent termination conditions are met, i.e., the numbers of detected signals per basis exceed certain pre-agreed threshold values. Recently, however, Pfister et al (2016 New J. Phys. 18 053001) showed that the basis-dependent termination condition makes QKD insecure, especially in the finite-key regime, and they suggested disclosing all the setting information only after the quantum phase is finished. However, this protocol has two main drawbacks: it requires that Alice possess a large memory, and she also needs some a priori knowledge about the transmission rate of the quantum channel. Here we solve these two problems by introducing a basis-independent termination condition to the iterative sifting in the finite-key regime. The use of this condition, in combination with Azuma's inequality, provides a precise estimation of the amount of privacy amplification that needs to be applied, thus leading to the security of QKD protocols, including the loss-tolerant protocol (Tamaki et al 2014 Phys. Rev. A 90 052314), with iterative sifting. Our analysis indicates that announcing the basis information after each quantum transmission round does not compromise the key generation rate of the loss-tolerant protocol. Our result allows the implementation of wider classes of classical post-processing techniques in QKD with quantified security.
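
    A toy illustration of the two termination rules being contrasted above (a hedged sketch with a made-up detection model and thresholds, not the protocol or its security analysis):

    ```python
    import random

    def sift_round(n_pulses, p_detect=0.1):
        """One quantum transmission round in a toy model: random basis choices, random detections."""
        out = []
        for _ in range(n_pulses):
            if random.random() < p_detect:
                out.append((random.choice("ZX"), random.choice("ZX")))  # (Alice basis, Bob basis)
        return out

    def run_until(stop, n_pulses=1000):
        """Repeat rounds until the supplied termination condition says to stop."""
        counts = {"Z": 0, "X": 0, "total": 0}
        while not stop(counts):
            for basis_alice, basis_bob in sift_round(n_pulses):
                counts["total"] += 1
                if basis_alice == basis_bob:      # sifting keeps only matched-basis detections
                    counts[basis_alice] += 1
        return counts

    def basis_dependent(counts):
        # the problematic rule: stop only once each basis separately reaches its threshold
        return counts["Z"] >= 500 and counts["X"] >= 500

    def basis_independent(counts):
        # the kind of rule introduced here: stop on the total number of detections only
        return counts["total"] >= 1200
    ```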

  4. Long-pulse stability limits of the ITER baseline scenario

    DOE PAGES

    Jackson, G. L.; Luce, T. C.; Solomon, W. M.; ...

    2015-01-14

    DIII-D has made significant progress in developing the techniques required to operate ITER, and in understanding their impact on performance when integrated into operational scenarios at ITER-relevant parameters. We demonstrated long-duration plasmas, stable to m/n = 2/1 tearing modes (TMs), with an ITER-similar shape and Ip/aBT, in DIII-D, that evolve to stationary conditions. The operating region most likely to reach stable conditions has normalized pressure βN ≈ 1.9–2.1 (compared to the ITER baseline design of 1.6–1.8), and a Greenwald normalized density fraction fGW = 0.42–0.70 (the ITER design is fGW ≈ 0.8). The evolution of the current profile, using the internal inductance (li) as an indicator, is found to produce a smaller fraction of stable pulses when li is increased above ≈ 1.1 at the beginning of the βN flattop. Stable discharges with co-neutral beam injection (NBI) are generally accompanied by a benign n=2 MHD mode. However, if this mode exceeds ≈ 10 G, the onset of an m/n=2/1 tearing mode occurs with a loss of confinement. In addition, stable operation with low applied external torque, at or below the extrapolated value expected for ITER, has also been demonstrated. With electron cyclotron (EC) injection, the operating region of stable discharges has been further extended at ITER-equivalent levels of torque, and to ELM-free discharges at higher torque but with the addition of an n=3 magnetic perturbation from the DIII-D internal coil set. Lastly, the characterization of the ITER baseline scenario evolution for long pulse duration, its extension to more ITER-relevant values of torque and electron heating, and the suppression of ELMs have significantly advanced the physics basis of this scenario, although significant effort remains in the simultaneous integration of all these requirements.

  5. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
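
    A rough Python analogue of the nested-iteration idea behind Iterator::Hash (an illustration only, not the Perl API): a mapping whose values may themselves be series is expanded into every combination of those series.

    ```python
    from itertools import product

    def hash_iterator(spec):
        """Yield one dict per combination of the embedded series; plain scalar
        values are treated as single-element series."""
        keys = list(spec)
        series = [v if isinstance(v, (list, tuple, range)) else [v] for v in spec.values()]
        for combo in product(*series):
            yield dict(zip(keys, combo))

    # usage: all permutations of the embedded iterations
    for record in hash_iterator({"site": "A", "year": range(2009, 2011), "band": ["vis", "ir"]}):
        print(record)
    ```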

  6. Thermo-mechanical analysis of ITER first mirrors and its use for the ITER equatorial visible/infrared wide angle viewing system optical design.

    PubMed

    Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C

    2012-10-01

    ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FM heating and optical surface deformation induced during ITER operation, the use of suitable materials and of a cooling system is foreseen. Calculations carried out for different materials and FM designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, together with a complex integrated cooling system, can efficiently limit the FM heating and reduce their optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide angle viewing system, the impact of the change in FM properties during operation on the instrument's main optical performances. The results obtained are presented and discussed.

  7. Nonperturbative measurement of the local magnetic field using pulsed polarimetry for fusion reactor conditions (invited)

    NASA Astrophysics Data System (ADS)

    Smith, Roger J.

    2008-10-01

    A novel diagnostic technique for the remote and nonperturbative sensing of the local magnetic field in reactor relevant plasmas is presented. Pulsed polarimetry [Patent No. 12/150,169 (pending)] combines optical scattering with the Faraday effect. The polarimetric light detection and ranging (LIDAR)-like diagnostic has the potential to be a local Bpol diagnostic on ITER and can achieve spatial resolutions of millimeters on high energy density (HED) plasmas using existing lasers. The pulsed polarimetry method is based on nonlocal measurements and subtle effects are introduced that are not present in either cw polarimetry or Thomson scattering LIDAR. Important features include the capability of simultaneously measuring local Te, ne, and B∥ along the line of sight, a resiliency to refractive effects, a short measurement duration providing near instantaneous data in time, and location for real-time feedback and control of magnetohydrodynamic (MHD) instabilities and the realization of a widely applicable internal magnetic field diagnostic for the magnetic fusion energy program. The technique improves for higher neB∥ product and higher ne and is well suited for diagnosing the transient plasmas in the HED program. Larger devices such as ITER and DEMO are also better suited to the technique, allowing longer pulse lengths and thereby relaxing key technology constraints making pulsed polarimetry a valuable asset for next step devices. The pulsed polarimetry technique is clarified by way of illustration on the ITER tokamak and plasmas within the magnetized target fusion program within present technological means.
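
    For reference, the line-integrated Faraday rotation relation that the diagnostic exploits (and which pulsed polarimetry resolves locally along the chord) can be written, in the standard cold-plasma form not quoted in the abstract, as:

    ```latex
    % Faraday rotation of the polarization plane along a chord of length L:
    \[
      \theta_{\mathrm{F}} \;\simeq\; 2.6\times10^{-13}\;\lambda^{2}
      \int_{0}^{L} n_{e}(l)\, B_{\parallel}(l)\,\mathrm{d}l \quad [\mathrm{rad}],
    \]
    % with wavelength \lambda in m, electron density n_e in m^{-3},
    % parallel field B_\parallel in T, and path length l in m.
    ```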

  8. Towards Current Profile Control in ITER: Potential Approaches and Research Needs

    NASA Astrophysics Data System (ADS)

    Schuster, E.; Barton, J. E.; Wehner, W. P.

    2014-10-01

    Many challenging plasma control problems still need to be addressed in order for the ITER Plasma Control System (PCS) to be able to successfully achieve the ITER project goals. For instance, setting up a suitable toroidal current density profile is key for one possible advanced scenario characterized by noninductive sustainment of the plasma current and steady-state operation. The nonlinearity and high dimensionality exhibited by the plasma demand a model-based current-profile control synthesis procedure that can accommodate this complexity through embedding the known physics within the design. The development of a model capturing the dynamics of the plasma relevant for control design enables not only the design of feedback controllers for regulation or tracking but also the design of optimal feedforward controllers for a systematic model-based approach to scenario planning, the design of state estimators for a reliable real-time reconstruction of the plasma internal profiles based on limited and noisy diagnostics, and the development of a fast predictive simulation code for closed-loop performance evaluation before implementation. Progress towards control-oriented modeling of the current profile evolution and associated control design has been reported following both data-driven and first-principles-driven approaches. An overview of these two approaches will be provided, as well as a discussion on research needs associated with each one of the model applications described above. Supported by the US Department of Energy under DE-SC0001334 and DE-SC0010661.

  9. Development of the ITER magnetic diagnostic set and specification.

    PubMed

    Vayakis, G; Arshad, S; Delhom, D; Encheva, A; Giacomin, T; Jones, L; Patel, K M; Pérez-Lasala, M; Portales, M; Prieto, D; Sartori, F; Simrock, S; Snipes, J A; Udintsev, V S; Watts, C; Winter, A; Zabeo, L

    2012-10-01

    ITER magnetic diagnostics are now in their detailed design and R&D phase. They have passed their conceptual design reviews and a working diagnostic specification has been prepared aimed at the ITER project requirements. This paper highlights specific design progress, in particular, for the in-vessel coils, steady state sensors, saddle loops and divertor sensors. Key changes in the measurement specifications, and a working concept of software and electronics are also outlined.

  10. Modeling of Steady-state Scenarios for the Fusion Nuclear Science Facility, Advanced Tokamak Approach

    NASA Astrophysics Data System (ADS)

    Garofalo, A. M.; Chan, V. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Meneghini, O.

    2013-10-01

    A Fusion Nuclear Science Facility (FNSF) would complement ITER in addressing the community-identified science and technology gaps to a commercially attractive DEMO, including breeding tritium and completing the fuel cycle, qualifying nuclear materials for high fluence, developing suitable materials for the plasma-boundary interface, and demonstrating power extraction. Steady-state plasma operation is highly desirable to address the requirements for fusion nuclear technology testing [1]. The Advanced Tokamak (AT) is a strong candidate for an FNSF as a consequence of its mature physics base, its capability to address the key issues with a more compact device, and its direct relevance to an attractive target power plant. Key features of the AT are fully noninductive current drive, strong plasma cross-section shaping, internal profiles consistent with a high bootstrap fraction, and operation at high beta, typically above the free-boundary limit, βN > 3. Work supported by GA IR&D funding, DE-FC02-04ER54698, and DE-FG02-95ER43309.

  11. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio

    2016-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
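
    A minimal, hedged sketch of the "minimum parameter uncertainty" criterion: comparing candidate injection signals through the Fisher information of an assumed single-parameter linear response model (illustrative only, not the mission's identification chain):

    ```python
    import numpy as np

    def fisher_information(signal, sensitivity, noise_var=1.0):
        """Fisher information of one parameter theta for the toy model
        y = theta * (sensitivity * signal) + white noise of variance noise_var."""
        grad = sensitivity * np.asarray(signal, dtype=float)   # dy/dtheta, sample by sample
        return np.sum(grad**2) / noise_var

    def pick_optimal(signals, sensitivity, noise_var=1.0):
        """Return the index of the candidate signal with maximum Fisher information,
        i.e. the smallest Cramer-Rao bound var(theta) >= 1/I, and that bound."""
        infos = [fisher_information(s, sensitivity, noise_var) for s in signals]
        best = int(np.argmax(infos))
        return best, 1.0 / infos[best]
    ```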

  12. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals in terms of minimum parameter uncertainty to be injected to these instruments during its calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  13. Development of a multimedia educational programme for first-time hearing aid users: a participatory design.

    PubMed

    Ferguson, Melanie; Leighton, Paul; Brandreth, Marian; Wharrad, Heather

    2018-05-02

    To develop content for a series of interactive video tutorials (or reusable learning objects, RLOs) for first-time adult hearing aid users, to enhance knowledge of hearing aids and communication. RLO content was based on an electronically-delivered Delphi review, workshops, and iterative peer-review and feedback using a mixed-methods participatory approach. An expert panel of 33 hearing healthcare professionals, and workshops involving 32 hearing aid users and 11 audiologists. This ensured that social, emotional and practical experiences of the end-user alongside clinical validity were captured. Content for evidence-based, self-contained RLOs based on pedagogical principles was developed for delivery via DVD for television, PC or internet. Content was developed based on Delphi review statements about essential information that reached consensus (≥90%), visual representations of relevant concepts relating to hearing aids and communication, and iterative peer-review and feedback of content. This participatory approach recognises and involves key stakeholders in the design process to create content for a user-friendly multimedia educational intervention, to supplement the clinical management of first-time hearing aid users. We propose participatory methodologies are used in the development of content for e-learning interventions in hearing-related research and clinical practice.

  14. ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.

    PubMed

    Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel

    2008-03-15

    Affymetrix SNP arrays can be used to measure the DNA copy number of 11 000-500 000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies, they are influenced by non-relevant sources of variation, requiring correction. Moreover, the amplitude of variation induced by non-relevant effects is similar to or greater than the biologically relevant effect (i.e. true copy number), making it difficult to estimate non-relevant effects accurately without including the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both the biological and the non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods, and found that ITALICS outperformed these methods on several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
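
    A hedged sketch of the alternating estimation idea described above (not the ITALICS R package itself): iterate between a crude estimate of the biological signal and a regression of the remaining variation on known non-relevant covariates (e.g. GC content, fragment length), here passed as a generic covariate matrix.

    ```python
    import numpy as np

    def running_median(x, window=101):
        """Crude smoother used as a stand-in estimate of the copy-number profile."""
        half = window // 2
        padded = np.pad(x, half, mode="edge")
        return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

    def alternating_normalization(log_ratios, covariates, n_iter=3):
        """log_ratios: (n,) probe signal in genome order; covariates: (n, k) matrix."""
        log_ratios = np.asarray(log_ratios, dtype=float)
        corrected = log_ratios.copy()
        for _ in range(n_iter):
            # (1) estimate the biological effect from the currently corrected data
            signal = running_median(corrected)
            # (2) fit the non-relevant effects on what remains once the biology is removed
            residual = log_ratios - signal
            coef, *_ = np.linalg.lstsq(covariates, residual, rcond=None)
            corrected = log_ratios - covariates @ coef
        return corrected, signal
    ```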

  15. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and then encrypted with different random phases, respectively. The encrypted phase-only images are inverse Fourier transformed, generating new object functions. These new functions are placed in different blocks and zero-padded to give a sparse distribution; they are then propagated to a specific region over different distances by angular spectrum diffraction and superposed to form a single image. The single image is multiplied by a random phase in the frequency domain, after which the phase part of the frequency spectrum is truncated and the amplitude information is retained. The random phases, propagation distances and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iterative processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images. The superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a highly desired security level.
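
    A minimal sketch of the angular-spectrum propagation step used in the scheme (a generic transfer-function implementation for a square field sampled with pitch dx; names and defaults are illustrative):

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a complex field over distance z with the angular spectrum method."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies [1/m]
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components clipped
        transfer = np.exp(1j * kz * z)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)
    ```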

  16. Iteration in Early-Elementary Engineering Design

    ERIC Educational Resources Information Center

    McFarland Kendall, Amber Leigh

    2017-01-01

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect…

  17. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison at BATMAN of the source performance of an ITER-relevant extraction system equipped with chamfered apertures of 14 mm diameter and of the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density (consistent with ion trajectory calculations) or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  18. The ESA Nanosatellite Beacons for Space Weather Monitoring Study

    NASA Astrophysics Data System (ADS)

    Hapgood, M.; Eckersley, S.; Lundin, R.; Kluge, M.

    2008-09-01

    This paper will present final results from this ESA-funded study that has investigated how current and emerging concepts for nanosats may be used to monitor space weather conditions and provide improved access to data needed for space weather services. The study has reviewed requirements developed in previous ESA space weather studies to establish a set of service and measurement requirements appropriate to nanosat solutions. The output is conveniently represented as a set of five distinct classes of nanosat constellations, each in a different orbit location and able to address a specific group of measurement requirements. One example driving requirement for several of the constellations was the need for real-time data reception. Given this background, the study then iterated a set of instrument and spacecraft solutions to address each of the nanosat constellations from the requirements. Indeed, iteration has proved to be a critical aspect of the study. The instrument solutions have driven a refinement of requirements through assessment of whether or not the physical parameters to be measured dictate instrument components too large for a nanosat. In addition, the study has also reviewed miniaturization trends for instruments relevant to space weather monitoring by nanosats, looking at the near-, mid- and far-term timescales. Within the spacecraft solutions the study reviewed key technology trends relevant to space weather monitoring by nanosats: (a) micro- and nano-technology devices for spacecraft communications, navigation, propulsion and power, and (b) development and flight experience with nanosats for science and for engineering demonstration. These requirements and solutions were then subject to an iterative system and mission analysis including key mission design issues (e.g. launch/transfer, mission geometry, instrument accommodation, numbers of spacecraft, communications architectures, de-orbit, nanosat reliability and constellation robustness) and the impact of nanosat fundamental limitations (e.g. mass, volume/size, power, communications). As a result, top-level Strawman mission concepts were developed for each constellation, and ROM costs were derived for programme development, operation and maintenance over a ten-year period. Nanosat reliability and constellation robustness were shown to be a key driver in deriving mission costs. In parallel with the mission analysis the study results have been reviewed to identify key issues that determine the prospects for a space weather nanosat programme and to make recommendations on measures to enable implementation of such a programme. As a follow-on to this study, a student MSc project was initiated by Astrium at Cranfield University to analyse a potential space weather precursor demonstration mission in GTO (one of the recommendations from this ESA study), comprising a reduced constellation of nanosats launched on ASAP or by some other low-cost method. The demonstration would include: 1/ low-cost multiple manufacture techniques for a fully industrial nanosat constellation programme; 2/ real-time datalinks and a fully operational mission for space weather; 3/ miniaturised payloads to fit in a nanosat for space weather monitoring; 4/ other possible demonstrations of advanced technology. The aim was to comply with ESA demonstration mission (i.e. PROBA-type) requirements and to be representative on issues such as cost and risk.

  19. Supporting interoperability of collaborative networks through engineering of a service-based Mediation Information System (MISE 2.0)

    NASA Astrophysics Data System (ADS)

    Benaben, Frederick; Mu, Wenxin; Boissel-Dallier, Nicolas; Barthe-Delanoe, Anne-Marie; Zribi, Sarah; Pingaud, Herve

    2015-08-01

    The Mediation Information System Engineering project is currently finishing its second iteration (MISE 2.0). The main objective of this scientific project is to provide any emerging collaborative situation with methods and tools to deploy a Mediation Information System (MIS). MISE 2.0 aims at defining and designing a service-based platform, dedicated to initiating and supporting the interoperability of collaborative situations among potential partners. This MISE 2.0 platform implements a model-driven engineering approach to the design of a service-oriented MIS dedicated to supporting the collaborative situation. This approach is structured in three layers, each providing its own key innovations: (i) the gathering of individual and collaborative knowledge to provide appropriate collaborative business behaviour (key point: knowledge management, including semantics, exploitation and capitalisation), (ii) the deployment of a mediation information system able to computerise the previously deduced collaborative processes (key point: the automatic generation of collaborative workflows, including connection with existing devices or services), and (iii) the management of the agility of the obtained collaborative network of organisations (key point: supervision of collaborative situations and relevant exploitation of the gathered data). MISE covers business issues (through BPM), technical issues (through an SOA) and agility issues of collaborative situations (through EDA).

  20. Overview of the JET results in support to ITER

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. 
L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. 
M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. 
S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. 
D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors

    2017-10-01

    The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.

  1. Beryllium for fusion application - recent results

    NASA Astrophysics Data System (ADS)

    Khomutov, A.; Barabash, V.; Chakin, V.; Chernov, V.; Davydov, D.; Gorokhov, V.; Kawamura, H.; Kolbasov, B.; Kupriyanov, I.; Longhurst, G.; Scaffidi-Argentina, F.; Shestakov, V.

    2002-12-01

    The main issues for the application of beryllium in fusion reactors are analyzed taking into account the latest results since ICFRM-9 (Colorado, USA, October 1999) and presented at the 5th IEA Be Workshop (10-12 October 2001, Moscow, Russia). Considerable progress has been made recently in understanding the problems connected with the selection of beryllium grades for different applications, the characterization of beryllium under relevant operational conditions (irradiation effects, thermal fatigue, etc.), and the development of the required manufacturing technologies. The key remaining problems related to the application of beryllium as an armour in near-term fusion reactors (e.g. ITER) are discussed. The features of the application of beryllium and beryllides in pebble-bed form as a neutron multiplier in the breeder blanket for power reactors (e.g. DEMO) are described.

  2. Scientific Hybrid Reality Environments (SHyRE): Bringing Field Work into the Laboratory

    NASA Technical Reports Server (NTRS)

    Miller, M. J.; Graff, T.; Young, K.; Coan, D.; Whelley, P.; Richardson, J.; Knudson, C.; Bleacher, J.; Garry, W. B.; Delgado, F.

    2018-01-01

    The use of analog environments in preparing for future planetary surface exploration is key to ensuring that we both understand the processes shaping other planetary surfaces and develop the technology, systems, and concepts of operations necessary to operate in these geologic environments. While conducting fieldwork and testing technology in relevant terrestrial field environments is crucial to this development, operational testing often requires a time-intensive iterative process that is hampered by the rigorous conditions (e.g. terrain, weather, location, etc.) found in most field environments. Additionally, field deployments can be costly and must be scheduled months in advance, thereby limiting the testing opportunities required to investigate and compare science operational concepts to only once or twice per year.

  3. Negotiating the question: using science-manager communication to develop management-relevant science products

    NASA Astrophysics Data System (ADS)

    Beechie, T. J.; Snover, A. K.

    2014-12-01

    Natural resource managers often ask scientists to answer questions that cannot be answered, and scientists commonly offer research that is not useful to managers. To produce management-relevant science, managers and scientists must communicate clearly to identify research that is scientifically doable and will produce results that managers find useful. Scientists might also consider that journals with high impact scores are rarely used by managers, while managers might consider that publishing in top tier journals is important to maintain scientific credentials. We offer examples from climate change and river restoration research, in which agency scientists and managers worked together to identify key management questions that scientists could answer and which could inform management. In our first example, we describe how climate scientists worked with agency staff to develop guidance for selecting appropriate climate change scenarios for use in ecological impacts assessments and Endangered Species Act decision making. Within NOAA Fisheries, agency researchers provide science to guide agency managers, and a key question has been how to adapt river restoration efforts for climate change. Based on discussions with restoration practitioners and agency staff, we developed adaptation guidance that summarizes current science to lead managers to develop climate-resilient restoration plans, as well as maps of population vulnerability for endangered steelhead. From these experiences we have learned that collaborative definition of relevant and producible knowledge requires (1) iterative discussions that go beyond simply asking managers what they need or scientists what they can produce, and (2) candid conversation about the intended applications and potential limitations of the knowledge.

  4. ELM-induced transient tungsten melting in the JET divertor

    NASA Astrophysics Data System (ADS)

    Coenen, J. W.; Arnoux, G.; Bazylev, B.; Matthews, G. F.; Autricque, A.; Balboa, I.; Clever, M.; Dejarnac, R.; Coffey, I.; Corre, Y.; Devaux, S.; Frassinetti, L.; Gauthier, E.; Horacek, J.; Jachmich, S.; Komm, M.; Knaup, M.; Krieger, K.; Marsen, S.; Meigs, A.; Mertens, Ph.; Pitts, R. A.; Puetterich, T.; Rack, M.; Stamp, M.; Sergienko, G.; Tamain, P.; Thompson, V.; Contributors, JET-EFDA

    2015-02-01

    The original goals of the JET ITER-like wall included the study of the impact of an all-W divertor on plasma operation (Coenen et al 2013 Nucl. Fusion 53 073043) and fuel retention (Brezinsek et al 2013 Nucl. Fusion 53 083023). ITER has recently decided to install a full-tungsten (W) divertor from the start of operations. One of the key inputs required in support of this decision was the study of the possibility of W melting and melt splashing during transients. Damage of this type can lead to modifications of surface topology which could lead to higher disruption frequency or compromise subsequent plasma operation. Although every effort will be made to avoid leading edges, ITER plasma stored energies are sufficient that transients can drive shallow melting on the top surfaces of components. JET is able to produce ELMs large enough to allow access to transient melting in a regime of relevance to ITER. Transient W melt experiments were performed in JET using a dedicated divertor module and a sequence of I_p = 3.0 MA / B_T = 2.9 T H-mode pulses with an input power of P_IN = 23 MW, a stored energy of ~6 MJ and regular type I ELMs at ΔW_ELM = 0.3 MJ and f_ELM ~ 30 Hz. By moving the outer strike point onto a dedicated leading edge in the W divertor the base temperature was raised within ~1 s to a level allowing transient, ELM-driven melting during the subsequent 0.5 s. Such ELMs (δW ~ 300 kJ per ELM) are comparable to mitigated ELMs expected in ITER (Pitts et al 2011 J. Nucl. Mater. 415 (Suppl.) S957-64). Although significant material losses in terms of ejections into the plasma were not observed, there is indirect evidence that some small droplets (~80 µm) were released. Almost 1 mm (~6 mm³) of W was moved by ~150 ELMs within 7 subsequent discharges. The impact on the main plasma parameters was minor and no disruptions occurred. The W melt gradually moved along the leading edge towards the high-field side, driven by j × B forces. The evaporation rate determined from spectroscopy is 100 times less than expected from steady-state melting and is thus consistent only with transient melting during the individual ELMs. Analysis of IR data and spectroscopy, together with modelling using the MEMOS code (Bazylev et al 2009 J. Nucl. Mater. 390-391 810-13), points to transient melting as the main process. 3D MEMOS simulations of the consequences of multiple ELMs for damage to tungsten castellated armour have been performed. These experiments provide the first experimental evidence for the absence of significant melt splashing at transient events resembling mitigated ELMs on ITER and establish a key experimental benchmark for the MEMOS code.

  5. Iterative key-residues interrogation of a phytase with thermostability increasing substitutions identified in directed evolution.

    PubMed

    Shivange, Amol V; Roccatano, Danilo; Schwaneberg, Ulrich

    2016-01-01

    Bacterial phytases have attracted industrial interest as an animal feed supplement due to their high activity and sufficient thermostability (required for feed pelleting). We devised an approach named KeySIDE, an iterative Key-residues interrogation of the wild type with Substitutions Identified in Directed Evolution, for improving Yersinia mollaretii phytase (Ymphytase) thermostability by combining key beneficial substitutions and elucidating their individual roles. Directed evolution yielded the discovery of nine positions in Ymphytase, which were combined iteratively to identify the key positions. The "best" combination (M6: T77K, Q154H, G187S, and K289Q) resulted in significantly improved thermal resistance; the residual activity improved from 35 % (wild type) to 89 % (M6) at 58 °C and 20-min incubation. The melting temperature increased by 3 °C in M6 without a loss of specific activity. Molecular dynamics simulation studies revealed reduced flexibility in the loops located next to the helices (B, F, and K) which possess substitutions (Helix-B: T77K, Helix-F: G187S, and Helix-K: K289E/Q). The reduced flexibility in the loops might be caused by a strengthened hydrogen-bonding network (e.g., G187S and K289E/K289Q) and a salt bridge (T77K). Our results demonstrate a promising approach to designing phytases in food research, and we hope that KeySIDE might become an attractive approach for understanding structure-function relationships of enzymes.
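
    The iterative combination step can be pictured as a simple greedy loop over candidate substitutions. The sketch below is only an illustration of that idea under stated assumptions, not the authors' protocol: the candidate list is a subset of the reported positions, and the assay function stands in for an experimental thermostability measurement.

    ```python
    # Illustrative greedy sketch of iteratively combining beneficial substitutions
    # (in the spirit of KeySIDE); "assay" is a placeholder for an experimental
    # readout such as residual activity after a 20-min incubation at 58 C.

    CANDIDATES = ["T77K", "Q154H", "G187S", "K289Q"]  # subset of the reported positions

    def combine_iteratively(candidates, assay, baseline_score):
        """Add one substitution per round for as long as the assay score improves."""
        combination, best = [], baseline_score
        remaining = list(candidates)
        while remaining:
            scored = [(assay(combination + [s]), s) for s in remaining]
            score, sub = max(scored)
            if score <= best:          # no single addition helps any more
                break
            combination.append(sub)
            best = score
            remaining.remove(sub)
        return combination, best
    ```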

  6. Plasma facing materials performance under ITER-relevant mitigated disruption photonic heat loads

    NASA Astrophysics Data System (ADS)

    Klimov, N. S.; Putrik, A. B.; Linke, J.; Pitts, R. A.; Zhitlukhin, A. M.; Kuprianov, I. B.; Spitsyn, A. V.; Ogorodnikova, O. V.; Podkovyrov, V. L.; Muzichenko, A. D.; Ivanov, B. V.; Sergeecheva, Ya. V.; Lesina, I. G.; Kovalenko, D. V.; Barsuk, V. A.; Danilina, N. A.; Bazylev, B. N.; Giniyatulin, R. N.

    2015-08-01

    PFMs (Plasma-facing materials: ITER grade stainless steel, beryllium, and ferritic-martensitic steels) as well as deposited erosion products of PFCs (Be-like, tungsten, and carbon based) were tested in QSPA under photonic heat loads relevant to those expected from photon radiation during disruptions mitigated by massive gas injection in ITER. Repeated pulses slightly above the melting threshold on the bulk materials eventually lead to a regular, "corrugated" surface, with hills and valleys spaced by 0.2-2 mm. The results indicate that hill growth (growth rate of ∼1 μm per pulse) and sample thinning in the valleys is a result of melt-layer redistribution. The measurements on the 316L(N)-IG indicate that the amount of tritium absorbed by the sample from the gas phase significantly increases with pulse number as well as the modified layer thickness. Repeated pulses significantly below the melting threshold on the deposited erosion products lead to a decrease of hydrogen isotopes trapped during the deposition of the eroded material.

  7. Definition of acceptance criteria for the ITER divertor plasma-facing components through systematic experimental analysis

    NASA Astrophysics Data System (ADS)

    Escourbiac, F.; Richou, M.; Guigon, R.; Constans, S.; Durocher, A.; Merola, M.; Schlosser, J.; Riccardi, B.; Grosman, A.

    2009-12-01

    Experience has shown that a critical part of the high-heat flux (HHF) plasma-facing component (PFC) is the armour to heat sink bond. An experimental study was performed in order to define acceptance criteria with regards to thermal hydraulics and fatigue performance of the International Thermonuclear Experimental Reactor (ITER) divertor PFCs. This study, which includes the manufacturing of samples with calibrated artificial defects relevant to the divertor design, is reported in this paper. In particular, it was concluded that defects detectable with non-destructive examination (NDE) techniques appeared to be acceptable during HHF experiments relevant to heat fluxes expected in the ITER divertor. On the basis of these results, a set of acceptance criteria was proposed and applied to the European vertical target medium-size qualification prototype: 98% of the inspected carbon fibre composite (CFC) monoblocks and 100% of tungsten (W) monoblock and flat tiles elements (i.e. 80% of the full units) were declared acceptable.

  8. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    PubMed

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at several levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require search results of high quality and completeness.

  9. Nonperturbative measurement of the local magnetic field using pulsed polarimetry for fusion reactor conditions (invited).

    PubMed

    Smith, Roger J

    2008-10-01

    A novel diagnostic technique for the remote and nonperturbative sensing of the local magnetic field in reactor-relevant plasmas is presented. Pulsed polarimetry [Patent No. 12/150,169 (pending)] combines optical scattering with the Faraday effect. The polarimetric light detection and ranging (LIDAR)-like diagnostic has the potential to be a local B(pol) diagnostic on ITER and can achieve spatial resolutions of millimeters on high energy density (HED) plasmas using existing lasers. The pulsed polarimetry method is based on nonlocal measurements, and subtle effects are introduced that are not present in either cw polarimetry or Thomson scattering LIDAR. Important features include the capability of simultaneously measuring local T(e), n(e), and B(parallel) along the line of sight; resiliency to refractive effects; a short measurement duration providing near-instantaneous data in time and location for real-time feedback and control of magnetohydrodynamic (MHD) instabilities; and the realization of a widely applicable internal magnetic field diagnostic for the magnetic fusion energy program. The technique improves for a higher n(e)B(parallel) product and higher n(e), and is well suited for diagnosing the transient plasmas in the HED program. Larger devices such as ITER and DEMO are also better suited to the technique, allowing longer pulse lengths and thereby relaxing key technology constraints, making pulsed polarimetry a valuable asset for next-step devices. The pulsed polarimetry technique is clarified by way of illustration on the ITER tokamak and on plasmas within the magnetized target fusion program, within present technological means.
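
    The underlying relation is the cumulative Faraday rotation of the backscattered light, which in its commonly quoted SI form is θ(z) ≈ 2.62×10⁻¹³ λ² ∫ n(e) B(parallel) dz; differentiating the measured rotation along the LIDAR range and dividing by the locally measured n(e) returns the local parallel field. The sketch below is a toy forward model and inversion under assumed profiles and wavelength, not the instrument's actual analysis chain.

    ```python
    import numpy as np

    # Toy forward model and inversion for a pulsed-polarimetry-like measurement:
    # cumulative Faraday rotation along a chord, then local B_parallel recovered
    # from its range derivative, assuming n_e(z) is known from Thomson scattering.

    C_FARADAY = 2.62e-13      # rad m^2 / T, commonly quoted SI Faraday constant
    LAMBDA = 10.6e-6          # probing wavelength in m (assumed, CO2-laser value)

    z = np.linspace(0.0, 2.0, 400)                    # range along the chord [m]
    dz = z[1] - z[0]
    n_e = 1e20 * np.exp(-((z - 1.0) / 0.5) ** 2)      # assumed density profile [m^-3]
    b_par = 0.5 * np.ones_like(z)                     # assumed parallel field [T]

    # Forward model: theta(z) = C * lambda^2 * cumulative integral of n_e * B_par
    theta = C_FARADAY * LAMBDA**2 * np.cumsum(n_e * b_par) * dz

    # Inversion: local B_par from the range derivative of the rotation angle
    b_recovered = np.gradient(theta, z) / (C_FARADAY * LAMBDA**2 * n_e)
    ```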

  10. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant, DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of D-T operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of these materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  11. Formation and termination of runaway beams in ITER disruptions

    NASA Astrophysics Data System (ADS)

    Martín-Solís, J. R.; Loarte, A.; Lehnen, M.

    2017-06-01

    A self-consistent analysis of the relevant physics regarding the formation and termination of runaway beams during disruptions mitigated by Ar and Ne injection is presented for selected ITER scenarios, with the aim of improving our understanding of the physics underlying the runaway heat loads onto the plasma facing components (PFCs) and identifying open issues for developing and assessing disruption mitigation schemes for ITER. This is carried out by means of simplified models which still retain sufficient detail of the key physical processes, including: (a) the expected dominant runaway generation mechanisms (avalanche and primary runaway seeds: Dreicer and hot tail runaway generation, tritium decay and Compton scattering of γ rays emitted by the activated wall), (b) effects associated with the plasma and runaway current density profile shape, and (c) corrections to the runaway dynamics to account for the collisions of the runaways with the partially stripped impurity ions, which are found to have strong effects leading to low runaway current generation and low energy conversion during current termination for mitigated disruptions by noble gas injection (particularly for Ne injection) for the shortest current quench times compatible with acceptable forces on the ITER vessel and in-vessel components (τ_res ~ 22 ms). For the case of long current quench times (τ_res ~ 66 ms), runaway beams of up to ~10 MA can be generated during the disruption current quench and, if the termination of the runaway current is slow enough, the generation of runaways by the avalanche mechanism can play an important role, substantially increasing the energy deposited by the runaways onto the PFCs to a few hundred MJ. Mixed impurity (Ar or Ne) plus deuterium injection proves to be effective in controlling the formation of the runaway current during the current quench, even for the longest current quench times, as well as in decreasing the energy deposited by the runaway electrons during current termination.
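
    The sensitivity to the current quench duration can be illustrated with a zero-dimensional toy model in which a small runaway seed is amplified exponentially by the avalanche; the growth rate, seed rate, step size and units below are assumed purely for illustration and are not taken from the paper.

    ```python
    # Toy 0-D illustration (not the paper's model): seed generation plus
    # exponential avalanche multiplication integrated over the current quench.

    def runaway_current(tau_cq, gamma_av=150.0, seed_rate=1e3, dt=1e-4):
        """Integrate dI_RE/dt = seed_rate + gamma_av * I_RE for a quench of
        duration tau_cq (all parameters illustrative, arbitrary units)."""
        i_re = 0.0
        for _ in range(int(tau_cq / dt)):
            i_re += dt * (seed_rate + gamma_av * i_re)
        return i_re

    # A longer quench allows many more avalanche e-foldings, hence a far larger beam
    print(runaway_current(0.022), runaway_current(0.066))
    ```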

  12. High power millimeter wave experiment of ITER relevant electron cyclotron heating and current drive system.

    PubMed

    Takahashi, K; Kajiwara, K; Oda, Y; Kasugai, A; Kobayashi, N; Sakamoto, K; Doane, J; Olstad, R; Henderson, M

    2011-06-01

    High-power, long-pulse millimeter (mm) wave experiments were performed on the RF test stand (RFTS) of the Japan Atomic Energy Agency (JAEA). The RFTS has an ITER-relevant configuration, consisting of a 1 MW/170 GHz gyrotron, a long- and short-distance mm wave transmission line (TL), and an equatorial launcher (EL) mock-up. The TL is composed of a matching optics unit, evacuated circular corrugated waveguides, six miter bends, an in-line waveguide switch, and an isolation valve. The EL mock-up is fabricated according to the current design of the ITER launcher. Gaussian-like beam radiation from the EL mock-up, with a steering capability of 20°-40°, was also successfully demonstrated. The high-power, long-pulse transmission test was conducted with the metallic load replaced by the EL mock-up, and transmission of 1 MW/800 s and 0.5 MW/1000 s was successfully demonstrated with no arcing and no damage. The transmission efficiency of the TL was 96%. The results prove the feasibility of the ITER electron cyclotron heating and current drive system. © 2011 American Institute of Physics.

  13. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins.

    PubMed

    Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S

    2017-02-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially-MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.
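
    A minimal sketch of the agglomerative/divisive loop with the self-identification criterion is given below; dasp2_search, grow_group and split_group are hypothetical placeholders for the profile search and clustering steps described above, and convergence is assumed rather than guaranteed.

    ```python
    # Hedged sketch of a MISST-like iteration: a group is kept when a profile
    # search returns exactly its own members (self-identification); otherwise it
    # is grown (hits found outside the group) or split (no outside hits, but the
    # search does not recover the whole group).

    def misst_like(initial_groups, dasp2_search, grow_group, split_group):
        final, work = [], list(initial_groups)
        while work:
            group = work.pop()
            hits = set(dasp2_search(group))          # e.g. a DASP2-style GenBank search
            if hits == set(group):
                final.append(group)                  # isofunctional: identifies itself and nothing else
            elif hits - set(group):
                work.append(grow_group(group, hits)) # agglomerative step: add similar sequences
            else:
                work.extend(split_group(group))      # divisive step: split into candidate clusters
        return final
    ```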

  14. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins

    PubMed Central

    Babbitt, Patricia C.; Ferrin, Thomas E.

    2017-01-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133

  15. Overview of the JET results in support to ITER

    DOE PAGES

    Litaudon, X.; Abduallev, S.; Abhangi, M.; ...

    2017-06-15

    Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  16. Overview of the JET results in support to ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X.; Abduallev, S.; Abhangi, M.

    Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  17. A machine learning heuristic to identify biologically relevant and minimal biomarker panels from omics data

    PubMed Central

    2015-01-01

    Background: Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data to a tractable number of features. Feature selection needs to balance the objective of using as few features as possible, while maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion: Our RGIFE heuristic increased the classification accuracies achieved on all datasets relative to using no feature selection, and performed well in a comparison with other feature selection methods. Using this method the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results have shown the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large ‘omics’ datasets are increasingly being used in the area of rheumatology. Conclusions: Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the application of such techniques is likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
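
    As a rough analogue of such an iterative elimination loop, the sketch below repeatedly drops the block of least useful features for as long as cross-validated accuracy holds up. A decision tree stands in for the rule-based learner, and the block size, tolerance and classifier are assumptions rather than the settings used by RGIFE.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier   # stand-in for the rule-based learner

    def iterative_elimination(X, y, block=10, tol=0.0, cv=5):
        """Drop the `block` least important features per round while the
        cross-validated accuracy does not fall by more than `tol`."""
        features = np.arange(X.shape[1])
        clf = DecisionTreeClassifier(random_state=0)
        best = cross_val_score(clf, X[:, features], y, cv=cv).mean()
        while len(features) > block:
            clf.fit(X[:, features], y)
            order = np.argsort(clf.feature_importances_)   # least important first
            trial = features[order[block:]]                # candidate reduced feature set
            score = cross_val_score(clf, X[:, trial], y, cv=cv).mean()
            if score + tol < best:
                break                                      # accuracy dropped: stop eliminating
            features, best = trial, max(best, score)
        return features, best
    ```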

  18. An approach to functionally relevant clustering of the protein universe: Active site profile‐based clustering of protein structures and sequences

    PubMed Central

    Knutson, Stacy T.; Westwood, Brian M.; Leuthaeuser, Janelle B.; Turner, Brandon E.; Nguyendac, Don; Shea, Gabrielle; Kumar, Kiran; Hayden, Julia D.; Harper, Angela F.; Brown, Shoshana D.; Morris, John H.; Ferrin, Thomas E.; Babbitt, Patricia C.

    2017-01-01

    Protein function identification remains a significant problem. Solving this problem at the molecular functional level would allow mechanistic determinant identification—amino acids that distinguish details between functional families within a superfamily. Active site profiling was developed to identify mechanistic determinants. DASP and DASP2 were developed as tools to search sequence databases using active site profiling. Here, TuLIP (Two-Level Iterative clustering Process) is introduced as an iterative, divisive clustering process that utilizes active site profiling to separate structurally characterized superfamily members into functionally relevant clusters. Underlying TuLIP is the observation that functionally relevant families (curated by Structure-Function Linkage Database, SFLD) self-identify in DASP2 searches; clusters containing multiple functional families do not. Each TuLIP iteration produces candidate clusters, each evaluated to determine if it self-identifies using DASP2. If so, it is deemed a functionally relevant group. Divisive clustering continues until each structure is either a functionally relevant group member or a singlet. TuLIP is validated on enolase and glutathione transferase structures, superfamilies well-curated by SFLD. Correlation is strong; small numbers of structures prevent statistically significant analysis. TuLIP-identified enolase clusters are used in DASP2 GenBank searches to identify sequences sharing functional site features. Analysis shows a true positive rate of 96%, false negative rate of 4%, and maximum false positive rate of 4%. F-measure and performance analysis on the enolase search results and comparison to GEMMA and SCI-PHY demonstrate that TuLIP avoids the over-division problem of these methods. Mechanistic determinants for enolase families are evaluated and shown to correlate well with literature results. PMID:28054422

  19. An approach to functionally relevant clustering of the protein universe: Active site profile-based clustering of protein structures and sequences.

    PubMed

    Knutson, Stacy T; Westwood, Brian M; Leuthaeuser, Janelle B; Turner, Brandon E; Nguyendac, Don; Shea, Gabrielle; Kumar, Kiran; Hayden, Julia D; Harper, Angela F; Brown, Shoshana D; Morris, John H; Ferrin, Thomas E; Babbitt, Patricia C; Fetrow, Jacquelyn S

    2017-04-01

    Protein function identification remains a significant problem. Solving this problem at the molecular functional level would allow mechanistic determinant identification-amino acids that distinguish details between functional families within a superfamily. Active site profiling was developed to identify mechanistic determinants. DASP and DASP2 were developed as tools to search sequence databases using active site profiling. Here, TuLIP (Two-Level Iterative clustering Process) is introduced as an iterative, divisive clustering process that utilizes active site profiling to separate structurally characterized superfamily members into functionally relevant clusters. Underlying TuLIP is the observation that functionally relevant families (curated by Structure-Function Linkage Database, SFLD) self-identify in DASP2 searches; clusters containing multiple functional families do not. Each TuLIP iteration produces candidate clusters, each evaluated to determine if it self-identifies using DASP2. If so, it is deemed a functionally relevant group. Divisive clustering continues until each structure is either a functionally relevant group member or a singlet. TuLIP is validated on enolase and glutathione transferase structures, superfamilies well-curated by SFLD. Correlation is strong; small numbers of structures prevent statistically significant analysis. TuLIP-identified enolase clusters are used in DASP2 GenBank searches to identify sequences sharing functional site features. Analysis shows a true positive rate of 96%, false negative rate of 4%, and maximum false positive rate of 4%. F-measure and performance analysis on the enolase search results and comparison to GEMMA and SCI-PHY demonstrate that TuLIP avoids the over-division problem of these methods. Mechanistic determinants for enolase families are evaluated and shown to correlate well with literature results. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
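
    The divisive side of this process can be sketched as a short recursion: split a candidate cluster, keep any part that self-identifies in a profile search, and recurse on the remainder until only self-identifying groups or singlets are left. split_in_two and profile_search below are hypothetical placeholders for the active-site-profile clustering and the DASP2 search, not the published implementation, and split_in_two is assumed to always return two non-empty proper subsets.

    ```python
    # Hedged sketch of a TuLIP-style divisive clustering pass.

    def tulip_like(structures, split_in_two, profile_search):
        if len(structures) <= 1:
            return [structures]                  # singlets terminate the recursion
        if set(profile_search(structures)) == set(structures):
            return [structures]                  # cluster self-identifies: functionally relevant group
        groups = []
        for part in split_in_two(structures):    # assumed to yield two non-empty proper subsets
            groups.extend(tulip_like(part, split_in_two, profile_search))
        return groups
    ```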

  20. The Role of Combined ICRF and NBI Heating in JET Hybrid Plasmas in Quest for High D-T Fusion Yield

    NASA Astrophysics Data System (ADS)

    Mantsinen, Mervi; Challis, Clive; Frigione, Domenico; Graves, Jonathan; Hobirk, Joerg; Belonohy, Eva; Czarnecka, Agata; Eriksson, Jacob; Gallart, Dani; Goniche, Marc; Hellesen, Carl; Jacquet, Philippe; Joffrin, Emmanuel; King, Damian; Krawczyk, Natalia; Lennholm, Morten; Lerche, Ernesto; Pawelec, Ewa; Sips, George; Solano, Emilia R.; Tsalas, Maximos; Valisa, Marco

    2017-10-01

    Combined ICRF and NBI heating played a key role in achieving the world-record fusion yield in the first deuterium-tritium campaign at the JET tokamak in 1997. The current plans for JET include new experiments with deuterium-tritium (D-T) plasmas under more ITER-like conditions given the recently installed ITER-like wall (ILW). In the 2015-2016 campaigns, significant efforts have been devoted to the development of high-performance plasma scenarios compatible with the ILW in preparation for the forthcoming D-T campaign. Good progress was made in both the inductive (baseline) and the hybrid scenario: a new record JET ILW fusion yield with a significantly extended duration of the high-performance phase was achieved. This paper reports on the progress with the hybrid scenario, which is a candidate for ITER long-pulse operation (~1000 s) thanks to its improved normalized confinement, reduced plasma current and higher plasma beta with respect to the ITER reference baseline scenario. The combined NBI+ICRF power in the hybrid scenario was increased to 33 MW and the record fusion yield, averaged over 100 ms, to 2.9×10¹⁶ neutrons/s from the 2014 ILW fusion record of 2.3×10¹⁶ neutrons/s. Impurity control with ICRF waves was one of the key means for extending the duration of the high-performance phase. The main results are reviewed, covering both key core and edge plasma issues.

  1. Event-driven simulation in SELMON: An overview of EDSE

    NASA Technical Reports Server (NTRS)

    Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.

    1992-01-01

    EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
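
    The event-driven paradigm outlined above (consume the earliest pending event, possibly create follow-on events) can be illustrated with a generic priority-queue loop; this is a minimal sketch of the paradigm, not the EDSE implementation.

        # Generic event-driven simulation loop: events are consumed in time order and may
        # create follow-on events. Illustrative only; not the EDSE code.
        import heapq

        def simulate(initial_events, handlers, t_end):
            """initial_events: iterable of (time, event_name).
            handlers: dict mapping event_name -> function(t) returning (delay, new_event) pairs."""
            queue = list(initial_events)
            heapq.heapify(queue)                       # pending events ordered by scheduled time
            trace = []
            while queue:
                t, event = heapq.heappop(queue)        # event consumption
                if t > t_end:
                    break
                trace.append((t, event))
                for delay, new_event in handlers.get(event, lambda _t: [])(t):
                    heapq.heappush(queue, (t + delay, new_event))   # event creation
            return trace

        # e.g. handlers = {"valve_open": lambda t: [(2.0, "pressure_rise")]}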

  2. Military Standard Common APSE (Ada Programming Support Environment) Interface Set (CAIS).

    DTIC Science & Technology

    1985-01-01

    [Abstract not recoverable from the source scan; the surviving fragments appear to be garbled Ada declarations from the CAIS node and queue interfaces, including an ITERATE procedure with NODE_ITERATOR, NAME_STRING, NODE_KIND and RELATIONSHIP_KEY pattern parameters.]

  3. Whole-body Magnetic Resonance Imaging in Inflammatory Arthritis: Systematic Literature Review and First Steps Toward Standardization and an OMERACT Scoring System.

    PubMed

    Østergaard, Mikkel; Eshed, Iris; Althoff, Christian E; Poggenborg, Rene P; Diekhoff, Torsten; Krabbe, Simon; Weckbach, Sabine; Lambert, Robert G W; Pedersen, Susanne J; Maksymowych, Walter P; Peterfy, Charles G; Freeston, Jane; Bird, Paul; Conaghan, Philip G; Hermann, Kay-Geert A

    2017-11-01

    Whole-body magnetic resonance imaging (WB-MRI) is a relatively new technique that can enable assessment of the overall inflammatory status of people with arthritis, but standards for image acquisition, definitions of key pathologies, and a quantification system are required. Our aim was to perform a systematic literature review (SLR) and to develop consensus definitions of key pathologies, anatomical locations for assessment, a set of MRI sequences and imaging planes for the different body regions, and a preliminary scoring system for WB-MRI in inflammatory arthritis. An SLR was initially performed, searching for WB-MRI studies in arthritis, osteoarthritis, spondyloarthritis, or enthesitis. These results were presented to a meeting of the MRI in Arthritis Working Group together with an MR image review. Following this, preliminary standards for WB-MRI in inflammatory arthritides were developed with further iteration at the Working Group meetings at the Outcome Measures in Rheumatology (OMERACT) 2016. The SLR identified 10 relevant original articles (7 cross-sectional and 3 longitudinal, mostly focusing on synovitis and/or enthesitis in spondyloarthritis, 4 with reproducibility data). The Working Group decided on inflammation in peripheral joints and entheses as primary focus areas, and then developed consensus MRI definitions for these pathologies, selected anatomical locations for assessment, agreed on a core set of MRI sequences and imaging planes for the different regions, and proposed a preliminary scoring system. It was decided to test and further develop the system by iterative multireader exercises. These first steps in developing an OMERACT WB-MRI scoring system for use in inflammatory arthritides offer a framework for further testing and refinement.

  4. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions: (1) defining key intervention targets; (2) delineating intervention activity-target pairings; (3) identifying experts; (4) using a survey tool to gather expert ratings of each activity's relevance to its intended target, its likely effectiveness in achieving that target, and its appropriateness for the specific intended audience; and (5) using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. In the evaluation of Coping Coach content validity, 15 experts from five countries rated each of 15 intervention activity-target pairings. Based on quantitative indices, content validity was excellent for relevance and good for likely effectiveness and age-appropriateness. Two intervention activities had item-level indicators that suggested the need for further review and potential revision by the development team. This project demonstrated that assessment of content validity can be straightforward and feasible to implement and that results of this assessment provide useful information for ongoing development and iterations of new eHealth interventions, complementing other sources of information (eg, user feedback, effectiveness evaluations). This approach can be utilized at one or more points during the development process to guide ongoing optimization of eHealth interventions.
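
    For concreteness, one common way to turn such expert ratings into a quantitative index is the item-level content validity index (the proportion of experts rating an item 3 or 4 on a 4-point scale); the sketch below uses that convention as an assumption and is not necessarily the exact index the authors computed.

        # Item-level content validity index (I-CVI) from expert ratings: the proportion of
        # experts rating an activity-target pairing 3 or 4 on a 4-point scale. This is a
        # common convention, assumed here for illustration; not necessarily the study's index.

        def item_cvi(ratings, cutoff=3):
            """ratings: list of integer ratings (1-4) from the expert panel for one pairing."""
            return sum(1 for r in ratings if r >= cutoff) / len(ratings)

        def flag_for_revision(all_ratings, threshold=0.78):
            """Return indices of pairings whose I-CVI falls below a review threshold
            (0.78 is a frequently cited benchmark; the exact cut-off here is an assumption)."""
            return [i for i, ratings in enumerate(all_ratings) if item_cvi(ratings) < threshold]

        # Example: 13 of 15 experts rate a pairing 3 or 4  ->  I-CVI = 13/15 = 0.87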

  5. Cultural adaptation of evidence-based practice utilizing an iterative stakeholder process and theoretical framework: problem solving therapy for Chinese older adults

    PubMed Central

    Chu, Joyce P.; Huynh, Loanie; Areán, Patricia

    2011-01-01

    Objectives Main objectives were to familiarize the reader with a theoretical framework for modifying evidence-based interventions for cultural groups, and to provide an example of one method, the Formative Method for Adapting Psychotherapies (FMAP), in the adaptation of an evidence-based intervention for a cultural group notorious for refusing mental health treatment. Methods Provider and client stakeholder input combined with an iterative testing process within the FMAP framework was utilized to create the Problem Solving Therapy—Chinese Older Adult (PST-COA) manual for depression. Data from pilot-testing the intervention with a clinically depressed Chinese elderly woman are reported. Results PST-COA is categorized as a ‘culturally-adapted’ treatment, in which the core mediating mechanisms of PST were preserved, but cultural themes of measurement methodology, stigma, hierarchical provider-client relationship expectations, and acculturation enhanced core components to make PST more understandable and relevant for Chinese elderly. Modifications also encompassed the therapeutic framework and peripheral elements affecting engagement and retention. PST-COA applied with a depressed Chinese older adult indicated remission of clinical depression and improvement in mood. Fidelity with and acceptability of the treatment were sufficient, as the client completed and reported high satisfaction with PST-COA. Conclusions PST, as a non-emotion-focused, evidence-based intervention, is a good fit for depressed Chinese elderly. Through an iterative stakeholder process of cultural adaptation, several culturally-specific modifications were applied to PST to create the PST-COA manual. PST-COA preserves core therapeutic PST elements but includes cultural adaptations in the therapeutic framework and key administration and content areas that ensure greater applicability and effectiveness for the Chinese elderly community. PMID:21500283

  6. Studies on the behaviour of tritium in components and structure materials of tritium confinement and detritiation systems of ITER

    NASA Astrophysics Data System (ADS)

    Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.

    2007-12-01

    Confinement and the removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and is removed as water. The special gas SF6 is used in ITER and is expected to be released in an accident such as a fire. Although SF6 has potential as a catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address these issues in global tritium confinement, a series of experimental studies has been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure. The penetration rate of tritium in the concrete was thus appreciable. The isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coatings on the penetration rate needs to be evaluated quantitatively in actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced by careful arrangement of the components in the buildings. (3) The electrolysis cell of the ITER-WDS is expected to endure 3 years' operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring damage to the electrolysis cell.

  7. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
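
    The trial-to-trial update idea can be illustrated with the simplest (P-type) learning law, u_{k+1}(t) = u_k(t) + L e_k(t+1); the sketch below is that generic law under assumed plant and gain choices, not the LMI-designed controllers of the article.

        # Generic P-type iterative learning control update, u_{k+1}(t) = u_k(t) + L*e_k(t+1).
        # Illustrative only; the article's controllers are designed via LMIs and are not reproduced here.
        import numpy as np

        def run_trial(u, plant_step, x0, r):
            """Apply the input sequence u to a discrete plant and return the tracking error r - y.
            plant_step(x, u_t) must return (x_next, y_t)."""
            x, y = x0, np.zeros(len(r), dtype=float)
            for t in range(len(u)):
                x, y[t] = plant_step(x, u[t])
            return np.asarray(r, dtype=float) - y

        def p_type_ilc(plant_step, x0, r, L=0.5, trials=50):
            u = np.zeros(len(r), dtype=float)
            for _ in range(trials):
                e = run_trial(u, plant_step, x0, r)
                shifted = np.roll(e, -1)          # e_k(t+1) drives the update of u_{k+1}(t)
                shifted[-1] = 0.0                 # no error sample beyond the trial end
                u = u + L * shifted
            return u, e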

  8. Validation of the model for ELM suppression with 3D magnetic fields using low torque ITER baseline scenario discharges in DIII-D

    DOE PAGES

    Moyer, Richard A.; Paz-Soldan, Carlos; Nazikian, Raffi; ...

    2017-09-18

    Here, experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width.

  9. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    PubMed Central

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-01-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with their lower sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968

  10. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    NASA Astrophysics Data System (ADS)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their lower sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  11. Experiments on water detritiation and cryogenic distillation at TLK; Impact on ITER fuel cycle subsystems interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cristescu, I.; Cristescu, I. R.; Doerr, L.

    2008-07-15

    The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performance of various components of the WDS and ISS processes under the various working conditions and configurations needed for the ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data for the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on the ITER fuel cycle subsystems, with consequences for the design integration. Preliminary experimental data from the TRENTA facility are presented.

  12. Image security based on iterative random phase encoding in expanded fractional Fourier transform domains

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian

    2018-06-01

    A novel image encryption method is proposed using the expanded fractional Fourier transform, which is implemented with a pair of lenses whose centers are laterally separated in the plane transverse to the optical axis of the system. The encryption system is modeled with Fresnel diffraction and phase modulation to calculate the information transmission. An iterative process based on the transform unit is utilized for hiding the secret image. The structural parameters of the battery of lenses can be used as additional keys. The performance of the encryption method is analyzed theoretically and numerically. The results show that the security of the algorithm is markedly enhanced by the added keys.
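
    The iterative hiding step is in the spirit of Gerchberg-Saxton-type phase retrieval; the sketch below uses a plain 2-D FFT as a stand-in for the expanded fractional Fourier transform (an assumption, since the paper's transform depends on the lens parameters) and is not the authors' algorithm.

        # Gerchberg-Saxton-style loop that iteratively encodes a secret image into a phase-only
        # mask. A plain FFT stands in for the paper's expanded fractional Fourier transform
        # (an assumption); the recovered intensity matches the secret only up to an overall scale.
        import numpy as np

        def iterative_phase_encode(secret, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            target_amp = np.sqrt(np.clip(secret, 0, None))          # amplitude constraint in the output plane
            mask = np.exp(2j * np.pi * rng.random(secret.shape))    # random initial phase key
            for _ in range(n_iter):
                out = np.fft.fft2(mask)
                out = target_amp * np.exp(1j * np.angle(out))       # impose the secret's amplitude
                mask = np.exp(1j * np.angle(np.fft.ifft2(out)))     # keep a phase-only mask (the ciphertext)
            return mask

        def decode(mask):
            return np.abs(np.fft.fft2(mask)) ** 2                   # recovered intensity (up to scale)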

  13. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited).

    PubMed

    Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C

    2012-10-01

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  14. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, A.; Brezinsek, S.; Mertens, Ph.

    2012-10-15

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥30% in the designed wavelength range) as well as high spatial resolution that is ≤2 mm at the object plane and ≤3 mm for the full depth of field (±0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  15. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region near the current iterate. This is the key to guaranteeing the convergence of certain subsequences of iterates to points that satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
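
    For orientation, a bare-bones pattern search with coordinate poll directions and mesh contraction is sketched below; the article's contribution, conforming the poll directions to the known hyperplanes of nondifferentiability and to nearby constraint boundaries, is deliberately not reproduced.

        # Bare-bones generalized pattern search: poll a positive spanning set of directions,
        # keep the step size on success, halve it on failure. The paper's key refinement
        # (conforming the poll set to the nondifferentiability hyperplanes and constraints) is omitted.
        import numpy as np

        def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10_000):
            x = np.asarray(x0, dtype=float)
            n = x.size
            directions = np.vstack([np.eye(n), -np.eye(n)])   # D = {+e_i, -e_i}
            fx = f(x)
            for _ in range(max_iter):
                if step < tol:
                    break
                for d in directions:                          # poll step around the current iterate
                    trial = x + step * d
                    ft = f(trial)
                    if ft < fx:
                        x, fx = trial, ft                     # successful poll: move, keep the mesh
                        break
                else:
                    step *= 0.5                               # unsuccessful poll: contract the mesh
            return x, fx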

  16. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to the behaviour of another individual, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
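
    A toy version of the agent-based simulations mentioned above is sketched below; the "learner" is a deliberately crude frequency estimator chosen only to show the transmission chain, not a model from the paper.

        # Toy iterated-learning chain: each generation learns from noisy observations of the
        # previous generation's output and then becomes the teacher for the next.
        # The frequency-matching "learner" is an illustrative assumption, not a model from the paper.
        import random

        def produce(p_regular, n_utterances, noise=0.05):
            out = []
            for _ in range(n_utterances):
                form = "regular" if random.random() < p_regular else "irregular"
                if random.random() < noise:                      # transmission noise
                    form = "irregular" if form == "regular" else "regular"
                out.append(form)
            return out

        def learn(utterances):
            return utterances.count("regular") / len(utterances)  # induced probability of the regular form

        def iterated_learning(generations=20, n_utterances=10, p0=0.5):
            p, history = p0, []
            for _ in range(generations):
                p = learn(produce(p, n_utterances))              # the learner becomes the next teacher
                history.append(p)
            return history                                       # the behaviour drifts as it is transmitted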

  17. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
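
    The (t, n) threshold sharing step is commonly instantiated with Shamir's scheme over a prime field; the sketch below shows that standard construction as an assumption, not necessarily the exact algorithm used in the paper.

        # Standard Shamir (t, n) threshold secret sharing over a prime field, shown as one
        # common way to decompose and reconstruct a key; the paper's exact construction may differ.
        import random

        P = 2**127 - 1   # a large prime modulus (assumption; any prime larger than the secret works)

        def share(secret, t, n):
            """Split an integer secret into n shares; any t of them reconstruct it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 using exactly t shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        # Example: any 3 of 5 shares recover the key
        # shares = share(123456789, t=3, n=5); assert reconstruct(shares[:3]) == 123456789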

  18. Inductive flux usage and its optimization in tokamak operation

    DOE PAGES

    Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...

    2014-07-30

    The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the desired conditions to meet the physics objectives (500 MW fusion power with energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.
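
    For reference, flux-usage analyses of this kind are often summarized with a simple poloidal-flux balance in which the resistive contribution is parametrized by the Ejima coefficient; the relation below is that standard parametrization, given here as an assumption and not necessarily the paper's exact formulation.

        % Standard poloidal-flux balance with the resistive part written via the Ejima coefficient C_E
        % (a common parametrization, assumed here for illustration; not necessarily the paper's formulation):
        \Delta\psi_{\mathrm{coils}} \;=\; L_p\, I_p \;+\; \Delta\psi_{\mathrm{res}},
        \qquad \Delta\psi_{\mathrm{res}} \;\simeq\; C_E\, \mu_0\, R_0\, I_p .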

  19. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

    The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still occasionally occur and must be quickly detected and suppressed by shutting down the RF power. To design a reliable and efficient detector, the arcing mechanism must be analysed in order to find a unique arc signature. Numerous systems have been conceived to address arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating fulfillment of all the requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR) and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.

  20. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review.

    PubMed

    Hatala, Rose; Sawatsky, Adam P; Dudek, Nancy; Ginsburg, Shiphra; Cook, David A

    2017-06-01

    In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.

  1. Irradiation tests of ITER candidate Hall sensors using two types of neutron spectra.

    PubMed

    Ďuran, I; Bolshakova, I; Viererbl, L; Sentkerestiová, J; Holyaka, R; Lahodová, Z; Bém, P

    2010-10-01

    We report on irradiation tests of InSb-based Hall sensors at two irradiation facilities with two distinct types of neutron spectra. One was a fission reactor neutron spectrum with a significant presence of thermal neutrons, while the other was a purely fast neutron field. A total neutron fluence of the order of 10^16 cm^-2 was accumulated in both cases, leading to a significant drop in Hall sensor sensitivity in the case of the fission reactor spectrum, while stable performance was observed in the purely fast neutron spectrum. This finding suggests that the performance of this particular type of Hall sensor is governed dominantly by transmutation. Additionally, it further stresses the need to test ITER candidate Hall sensors under a neutron flux with an ITER-relevant spectrum.

  2. First results of the ITER-relevant negative ion beam test facility ELISE (invited).

    PubMed

    Fantz, U; Franzen, P; Heinemann, B; Wünderlich, D

    2014-02-01

    An important step in the European R&D roadmap towards the neutral beam heating systems of ITER is the new test facility ELISE (Extraction from a Large Ion Source Experiment) for large-scale extraction from a half-size ITER RF source. The test facility was constructed over the last few years at Max-Planck-Institut für Plasmaphysik, Garching, and is now operational. ELISE is gaining early experience of the performance and operation of large RF-driven negative hydrogen ion sources, with plasma illumination of a source area of 1 × 0.9 m² and an extraction area of 0.1 m² using 640 apertures. First results in volume operation, i.e., without caesium seeding, are presented.

  3. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. One classic approach regularizes the image by total variation (TV), wavelet, or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key idea of the model is that the adaptive filter first removes the spatial correlation among pixels, and only the high-frequency (HF) part, which is sparser in the TV and transform domains, is then used in the regularization term. Concretely, by transforming the original model, the SR problem can be solved via two alternating sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is used to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.
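
    A much-simplified sketch of this alternating idea follows: the degradation model (box blur plus subsampling), the "adaptive" filter (a local mean, so that HF = x - mean(x)), and the step sizes are all assumptions for illustration; it is not the authors' algorithm.

        # Much-simplified sketch of the alternating idea above: blur+downsample degradation,
        # a local-mean stand-in for the adaptive filter (so HF = x - mean(x)), and a TV-style
        # penalty applied only to the HF part. All model choices are assumptions; not the authors' method.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def degrade(x, s):                        # forward model: box blur then subsample by s
            return uniform_filter(x, size=s)[::s, ::s]

        def degrade_adjoint(r, s, shape):         # adjoint: zero-insertion upsample then blur
            up = np.zeros(shape)
            up[::s, ::s] = r
            return uniform_filter(up, size=s)

        def tv_grad(z):                           # subgradient of an anisotropic TV penalty
            gx = np.sign(np.diff(z, axis=0, append=z[-1:, :]))
            gy = np.sign(np.diff(z, axis=1, append=z[:, -1:]))
            return (np.roll(gx, 1, axis=0) - gx) + (np.roll(gy, 1, axis=1) - gy)

        def super_resolve(y, s, lam=0.02, step=0.5, iters=200):
            x = np.kron(y, np.ones((s, s)))       # crude initial HR estimate
            for _ in range(iters):
                hf = x - uniform_filter(x, size=2 * s + 1)          # adaptive-filter stand-in
                data_grad = degrade_adjoint(degrade(x, s) - y, s, x.shape)
                x = x - step * (data_grad + lam * tv_grad(hf))      # regularize only the HF part
            return x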

  4. Executing SPARQL Queries over the Web of Linked Data

    NASA Astrophysics Data System (ADS)

    Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph

    The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
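
    A toy sketch of the link-traversal idea follows: it dereferences URIs discovered in retrieved triples and keeps the triples that match the query. The helpers fetch_rdf() (HTTP lookup plus RDF parsing) and matches_query() are assumptions; the paper's iterator-based pipeline and its non-blocking extension are not reproduced.

        # Toy link-traversal query execution: follow RDF links found in partial results and
        # add the retrieved triples to the queried dataset. fetch_rdf() and matches_query()
        # are assumed helpers; the paper's iterator pipeline and non-blocking extension are omitted.

        def link_traversal_query(seed_uri, fetch_rdf, matches_query, max_lookups=50):
            dataset, results = set(), []
            frontier, seen = [seed_uri], set()
            while frontier and len(seen) < max_lookups:
                uri = frontier.pop()
                if uri in seen:
                    continue
                seen.add(uri)
                triples = fetch_rdf(uri)                     # HTTP dereference + RDF parse (assumed helper)
                dataset |= triples                           # the queried dataset grows during execution
                for s, p, o in triples:
                    if matches_query(s, p, o):
                        results.append((s, p, o))
                    if isinstance(o, str) and o.startswith("http"):
                        frontier.append(o)                   # discovered link: schedule another lookup
            return results, dataset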

  5. Long-term fuel retention and release in JET ITER-Like Wall at ITER-relevant baking temperatures

    NASA Astrophysics Data System (ADS)

    Heinola, K.; Likonen, J.; Ahlgren, T.; Brezinsek, S.; De Temmerman, G.; Jepu, I.; Matthews, G. F.; Pitts, R. A.; Widdowson, A.; Contributors, JET

    2017-08-01

    The fuel outgassing efficiency from plasma-facing components exposed in JET-ILW has been studied at ITER-relevant baking temperatures. Samples retrieved from the W divertor and the Be main chamber were annealed at 350 and 240 °C, respectively. Annealing was performed with thermal desorption spectrometry (TDS) for 0, 5 and 15 h to study the deuterium removal effectiveness at the nominal baking temperatures. The remaining fraction was determined by fully emptying the samples of deuterium, heating the W and Be samples up to 1000 and 775 °C, respectively. Results showed that deposits in the divertor increase the fraction of fuel that remains retained at temperatures above the baking temperature. The highest remaining fractions, 54% and 87%, were observed for deposit thicknesses of 10 and 40 μm, respectively. Substantially high fractions were also obtained in the main chamber samples from the deposit-free erosion zone of the limiter midplane, in which the dominant fuel retention mechanism is implantation: 15 h of annealing left more than 90% of the deuterium retained. TDS results from the divertor were simulated with TMAP7 calculations. The spectra were modelled with three deuterium activation energies, resulting in good agreement with the experiments.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekedahl, Annika, E-mail: annika.ekedahl@cea.fr; Bourdelle, Clarisse; Artaud, Jean-François

    The longstanding expertise of the Tore Supra team in long pulse heating and current drive with radiofrequency (RF) systems will now be exploited in the WEST device (tungsten-W Environment in Steady-state Tokamak) [1]. WEST will allow an integrated long pulse tokamak programme for testing W-divertor components at ITER-relevant heat flux (10-20 MW/m²), while treating crucial aspects for ITER-operation, such as avoidance of W-accumulation in long discharges, monitoring and control of heat fluxes on the metallic plasma facing components (PFCs) and coupling of RF waves in H-mode plasmas. Scenario modelling using the METIS-code shows that ITER-relevant heat fluxes are compatible with the sustainment of long pulse H-mode discharges, at high power (up to 15 MW / 30 s at I_P = 0.8 MA) or high fluence (up to 10 MW / 1000 s at I_P = 0.6 MA) [2], all based on RF heating and current drive using Ion Cyclotron Resonance Heating (ICRH) and Lower Hybrid Current Drive (LHCD). This paper gives a description of the ICRH and LHCD systems in WEST, together with the modelling of the power deposition of the RF waves in the WEST-scenarios.

  7. Routes to the past: neural substrates of direct and generative autobiographical memory retrieval.

    PubMed

    Addis, Donna Rose; Knapp, Katie; Roberts, Reece P; Schacter, Daniel L

    2012-02-01

    Models of autobiographical memory propose two routes to retrieval depending on cue specificity. When available cues are specific and personally-relevant, a memory can be directly accessed. However, when available cues are generic, one must engage a generative retrieval process to produce more specific cues to successfully access a relevant memory. The current study sought to characterize the neural bases of these retrieval processes. During functional magnetic resonance imaging (fMRI), participants were shown personally-relevant cues to elicit direct retrieval, or generic cues (nouns) to elicit generative retrieval. We used spatiotemporal partial least squares to characterize the spatial and temporal characteristics of the networks associated with direct and generative retrieval. Both retrieval tasks engaged regions comprising the autobiographical retrieval network, including hippocampus, and medial prefrontal and parietal cortices. However, some key neural differences emerged. Generative retrieval differentially recruited lateral prefrontal and temporal regions early on during the retrieval process, likely supporting the strategic search operations and initial recovery of generic autobiographical information. However, many regions were activated more strongly during direct versus generative retrieval, even when we time-locked the analysis to the successful recovery of events in both conditions. This result suggests that there may be fundamental differences between memories that are accessed directly and those that are recovered via the iterative search and retrieval process that characterizes generative retrieval. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Routes to the past: Neural substrates of direct and generative autobiographical memory retrieval

    PubMed Central

    Addis, Donna Rose; Knapp, Katie; Roberts, Reece P.; Schacter, Daniel L.

    2011-01-01

    Models of autobiographical memory propose two routes to retrieval depending on cue specificity. When available cues are specific and personally-relevant, a memory can be directly accessed. However, when available cues are generic, one must engage a generative retrieval process to produce more specific cues to successfully access a relevant memory. The current study sought to characterize the neural bases of these retrieval processes. During functional magnetic resonance imaging (fMRI), participants were shown personally-relevant cues to elicit direct retrieval, or generic cues (nouns) to elicit generative retrieval. We used spatiotemporal partial least squares to characterize the spatial and temporal characteristics of the networks associated with direct and generative retrieval. Both retrieval tasks engaged regions comprising the autobiographical retrieval network, including hippocampus, and medial prefrontal and parietal cortices. However, some key neural differences emerged. Generative retrieval differentially recruited lateral prefrontal and temporal regions early on during the retrieval process, likely supporting the strategic search operations and initial recovery of generic autobiographical information. However, many regions were activated more strongly during direct versus generative retrieval, even when we time-locked the analysis to the successful recovery of events in both conditions. This result suggests that there may be fundamental differences between memories that are accessed directly and those that are recovered via the iterative search and retrieval process that characterizes generative retrieval. PMID:22001264

  9. A Consensus-Driven Agenda for Emergency Medicine Firearm Injury Prevention Research

    PubMed Central

    Ranney, Megan L.; Fletcher, Jonathan; Alter, Harrison; Barsotti, Christopher; Bebarta, Vikhyat S.; Betz, Marian E.; Carter, Patrick M.; Cerdá, Magdalena; Cunningham, Rebecca M.; Crane, Peter; Fahimi, Jahan; Miller, Matthew J.; Rowhani-Rahbar, Ali; Vogel, Jody A.; Wintemute, Garen J.; Shah, Manish N.; Waseem, Muhammad

    2016-01-01

    Objective To identify critical Emergency Medicine (EM)-focused firearm injury research questions and to develop an evidence-based research agenda. Methods National content experts were recruited to a technical advisory group for the American College of Emergency Physicians Research Committee. Nominal Group Technique (NGT) was used to identify research questions by consensus. The technical advisory group decided to focus on five widely accepted categorizations of firearm injury. Subgroups conducted literature reviews on each topic and developed preliminary lists of EM-relevant research questions. In-person meetings and conference calls were held to iteratively refine the extensive list of research questions, following NGT guidelines. Feedback from external stakeholders was reviewed and integrated. Results Fifty-nine final EM-relevant research questions were identified, including questions that cut across all firearm injury topics and questions specific to self-directed violence (suicide and attempted suicide); intimate partner violence; peer (non-partner) violence; mass violence; and unintentional (“accidental”) injury. Some questions could be addressed through research conducted in emergency departments (EDs); others would require work in other settings. Conclusions The technical advisory group identified key EM-relevant firearm injury research questions. EM-specific data is limited for most of these questions. Funders and researchers should consider increasing their attention to firearm injury prevention and control, particularly to the questions identified here and in other recently developed research agendas. PMID:27998625

  10. A Consensus-Driven Agenda for Emergency Medicine Firearm Injury Prevention Research.

    PubMed

    Ranney, Megan L; Fletcher, Jonathan; Alter, Harrison; Barsotti, Christopher; Bebarta, Vikhyat S; Betz, Marian E; Carter, Patrick M; Cerdá, Magdalena; Cunningham, Rebecca M; Crane, Peter; Fahimi, Jahan; Miller, Matthew J; Rowhani-Rahbar, Ali; Vogel, Jody A; Wintemute, Garen J; Waseem, Muhammad; Shah, Manish N

    2017-02-01

    To identify critical emergency medicine-focused firearm injury research questions and develop an evidence-based research agenda. National content experts were recruited to a technical advisory group for the American College of Emergency Physicians Research Committee. Nominal group technique was used to identify research questions by consensus. The technical advisory group decided to focus on 5 widely accepted categorizations of firearm injury. Subgroups conducted literature reviews on each topic and developed preliminary lists of emergency medicine-relevant research questions. In-person meetings and conference calls were held to iteratively refine the extensive list of research questions, following nominal group technique guidelines. Feedback from external stakeholders was reviewed and integrated. Fifty-nine final emergency medicine-relevant research questions were identified, including questions that cut across all firearm injury topics and questions specific to self-directed violence (suicide and attempted suicide), intimate partner violence, peer (nonpartner) violence, mass violence, and unintentional ("accidental") injury. Some questions could be addressed through research conducted in emergency departments; others would require work in other settings. The technical advisory group identified key emergency medicine-relevant firearm injury research questions. Emergency medicine-specific data are limited for most of these questions. Funders and researchers should consider increasing their attention to firearm injury prevention and control, particularly to the questions identified here and in other recently developed research agendas. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  11. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
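
    The reported test characteristics follow from a simple confusion-matrix calculation against the blinded chart-review reference standard; the snippet below is illustrative only and uses no study data or code.

        # Sensitivity and specificity of a detector against a chart-review reference standard
        # (illustrative only; no study data or code is reproduced here).

        def test_characteristics(predicted, reference):
            """predicted, reference: parallel lists of booleans
            (True = report contains a recommendation for additional imaging)."""
            tp = sum(p and r for p, r in zip(predicted, reference))
            tn = sum(not p and not r for p, r in zip(predicted, reference))
            fp = sum(p and not r for p, r in zip(predicted, reference))
            fn = sum(not p and r for p, r in zip(predicted, reference))
            sensitivity = tp / (tp + fn) if tp + fn else float("nan")
            specificity = tn / (tn + fp) if tn + fp else float("nan")
            return sensitivity, specificity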

  12. Microwave beam broadening due to turbulent plasma density fluctuations within the limit of the Born approximation and beyond

    NASA Astrophysics Data System (ADS)

    Köhn, A.; Guidi, L.; Holzhauer, E.; Maj, O.; Poli, E.; Snicker, A.; Weber, H.

    2018-07-01

    Plasma turbulence, and edge density fluctuations in particular, can under certain conditions broaden the cross-section of injected microwave beams significantly. This can be a severe problem for applications relying on well-localized deposition of the microwave power, like the control of MHD instabilities. Here we investigate this broadening mechanism as a function of fluctuation level, background density and propagation length in a fusion-relevant scenario using two numerical codes, the full-wave code IPF-FDMC and the novel wave kinetic equation solver WKBeam. The latter treats the effects of fluctuations using a statistical approach, based on an iterative solution of the scattering problem (Born approximation). The full-wave simulations are used to benchmark this approach. The Born approximation is shown to be valid over a large parameter range, including ITER-relevant scenarios.

  13. Thermal fatigue testing of a diffusion-bonded beryllium divertor mock-up under ITER-relevant conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youchison, D.L.; Watson, R.D.; McDonald, J.M.

    Thermal response and thermal fatigue tests of four 5-mm-thick beryllium tiles on a Russian Federation International Thermonuclear Experimental Reactor (ITER)-relevant divertor mock-up were completed on the electron beam test system at Sandia National Laboratories. Thermal response tests were performed on the tiles to an absorbed heat flux of 5 MW/m² and surface temperatures near 300°C using 1.4 MPa water at 5 m/s flow velocity and an inlet temperature of 8 to 15°C. One tile was exposed to incrementally increasing heat fluxes up to 9.5 MW/m² and surface temperatures up to 690°C before debonding at 10 MW/m². A second tile debonded in 25 to 30 cycles at <0.5 MW/m². However, a third tile debonded after 9200 thermal fatigue cycles at 5 MW/m², while another debonded after 6800 cycles. Posttest surface analysis indicated that fatigue failure occurred in the intermetallic layers between the beryllium and copper. No fatigue cracking of the bulk beryllium was observed. It appears that microcracks growing at the diffusion bond produced the observed gradual temperature increases during thermal cycling. These experiments indicate that diffusion-bonded beryllium tiles can survive several thousand thermal cycles under ITER-relevant conditions. However, the reliability of the diffusion-bonded joint remains a serious issue. 17 refs., 25 figs., 6 tabs.

  14. A Review of Supplementary Medical Aspects of Post-Cold War UN Peacekeeping Operations: Trends, Lessons Learned, Courses of Action, and Recommendations.

    PubMed

    Johnson, Ralph J

    2015-01-01

    Post-Cold War United Nations Peace Keeping Operations (UN PKOs) have been increasingly involved in dangerous areas with ill-defined boundaries, harsh and remote geographies, simmering internecine armed conflict, and disregard on the part of some local parties for peacekeepers' security and role. In the interest of force protection and optimizing operations, a key component of UN PKOs is healthcare and medical treatment. The expectation is that UN PKO medical support will adjust to the general intent and structure of UN PKOs. To do so requires effective policies and planning informed by a review of all medical aspects of UN PKO operations, including those considered supplementary, that is, less crucial but contributing nonetheless. Medical aspects considered paramount and key to UN PKOs have received relatively thorough treatment elsewhere. The intent of this article is to report on ancillary and supplemental medical aspects practical to post-Cold War UN PKO operations assembled through an iterative inquiry of open-source articles. Recommendations are made about possible courses of action in terms of addressing trends found in such medical aspects of PKOs and relevance of US/NATO/European Union models and research.

  15. Translating evidence into practice: the role of health research funders

    PubMed Central

    2012-01-01

    Background A growing body of work on knowledge translation (KT) reveals significant gaps between what is known to improve health, and what is done to improve health. The literature and practice also suggest that KT has the potential to narrow those gaps, leading to more evidence-informed healthcare. In response, Canadian health research funders and agencies have made KT a priority. This article describes how one funding agency determined its KT role and in the process developed a model that other agencies could use when considering KT programs. Discussion While ‘excellence’ is an important criterion by which to evaluate and fund health research, it alone does not ensure relevance to societal health priorities. There is increased demand for return on investments in health research in the form of societal and health system benefits. Canadian health research funding agencies are responding to these demands by emphasizing relevance as a funding criterion and supporting researchers and research users to use the evidence generated. Based on recommendations from the literature, an environmental scan, broad circulation of an iterative discussion paper, and an expert working group process, our agency developed a plan to maximize our role in KT. Key to the process was development of a model comprising five key functional areas that together create the conditions for effective KT: advancing KT science; building KT capacity; managing KT projects; funding KT activities; and advocating for KT. Observations made during the planning process of relevance to the KT enterprise are: the importance of delineating KT and communications, and information and knowledge; determining responsibility for KT; supporting implementation and evaluation; and promoting the message that both research and KT take time to realize results. Summary Challenges exist in fulfilling expectations that research evidence results in beneficial impacts for society. However, health agencies are well placed to help maximize the use of evidence in health practice and policy. We propose five key functional areas of KT for health agencies, and encourage partnerships and discussion to advance the field. PMID:22531033

  16. Elliptic polylogarithms and iterated integrals on elliptic curves. II. An application to the sunrise integral

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-06-01

    We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question. These kernels allow us to express all iterated integrals on an elliptic curve in terms of them. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.
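
    For orientation, the generic (Chen-type) iterated-integral structure underlying such constructions is recalled below; in the elliptic case the kernels f_i are replaced by curve-specific integration kernels determined by the branch points, which are not reproduced here.

        % Generic iterated integrals (Chen form); the elliptic generalization replaces the
        % kernels f_i by integration kernels determined by the branch points of the curve.
        I(f_1, f_2, \ldots, f_n; x) \;=\; \int_0^x \mathrm{d}t\, f_1(t)\, I(f_2, \ldots, f_n; t),
        \qquad I(\,;x) \;=\; 1 .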

  17. Learning to Teach Elementary Science Through Iterative Cycles of Enactment in Culturally and Linguistically Diverse Contexts

    NASA Astrophysics Data System (ADS)

    Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian

    2015-12-01

    Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs' development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.

  18. Development of a text messaging system to improve receipt of survivorship care in adolescent and young adult survivors of childhood cancer.

    PubMed

    Casillas, Jacqueline; Goyal, Anju; Bryman, Jason; Alquaddoomi, Faisal; Ganz, Patricia A; Lidington, Emma; Macadangdang, Joshua; Estrin, Deborah

    2017-08-01

    This study aimed to develop and examine the acceptability, feasibility, and usability of a text messaging, or Short Message Service (SMS), system for improving the receipt of survivorship care for adolescent and young adult (AYA) survivors of childhood cancer. Researchers developed and refined the text messaging system based on qualitative data from AYA survivors in an iterative three-stage process. In stage 1, a focus group (n = 4) addressed acceptability; in stage 2, key informant interviews (n = 10) following a 6-week trial addressed feasibility; and in stage 3, key informant interviews (n = 23) following a 6-week trial addressed usability. Qualitative data were analyzed using a constant comparative analytic approach exploring in-depth themes. The final system includes programmed reminders to schedule and attend late effect screening appointments, tailored suggestions for community resources for cancer survivors, and messages prompting participant feedback regarding the appointments and resources. Participants found the text messaging system an acceptable form of communication, the screening reminders and feedback prompts feasible for improving the receipt of survivorship care, and the tailored suggestions for community resources usable for connecting survivors to relevant services. Participants suggested supplementing survivorship care visits and forming AYA survivor social networks as future implementations for the text messaging system. The text messaging system may assist AYA survivors by coordinating late effect screening appointments, facilitating a partnership with the survivorship care team, and connecting survivors with relevant community resources. The text messaging system has the potential to improve the receipt of survivorship care.

  19. Impacts of Permafrost on Infrastructure and Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Trochim, E.; Schuur, E.; Schaedel, C.; Kelly, B. P.

    2017-12-01

    The Study of Environmental Arctic Change (SEARCH) program developed knowledge pyramids as a tool for advancing scientific understanding and making this information accessible for decision makers. Knowledge pyramids are being used to synthesize, curate and disseminate knowledge of changing land ice, sea ice, and permafrost in the Arctic. Each pyramid consists of a one- to two-page summary brief in broadly accessible language and literature organized by levels of detail, including syntheses and scientific building blocks. Three knowledge pyramids have been produced related to permafrost, covering carbon, infrastructure, and ecosystem services. Each brief answers key questions with high societal relevance framed in policy-relevant terms. The knowledge pyramids concerning infrastructure and ecosystem services were developed in collaboration with researchers specializing in the specific topic areas in order to identify the most pertinent issues and accurately communicate information for integration into policy and planning. For infrastructure, the main issue was the need to build consensus in the engineering and science communities for developing improved methods for incorporating data applicable to building infrastructure on permafrost. In ecosystem services, permafrost provides critical landscape properties which affect basic human needs including fuel and drinking water availability, access to hunting and harvest, and fish and wildlife habitat. Translating these broad and complex topics necessitated a systematic and iterative approach to identifying key issues and relating them succinctly to the best state-of-the-art research. The development of the knowledge pyramids prompted collaboration and synthesis across distinct research and engineering communities. The knowledge pyramids also provide a solid basis for policy development and the format allows the content to be regularly updated as the research community advances.

  20. Novel aspects of plasma control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, D.; Jackson, G.; Walker, M.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  1. Novel aspects of plasma control in ITER

    DOE PAGES

    Humphreys, David; Ambrosino, G.; de Vries, Peter; ...

    2015-02-12

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g. current profile regulation, tearing mode (TM) suppression), control mathematics (e.g. algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g. methods for management of highly-subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  2. The Laboratory Course Assessment Survey: A Tool to Measure Three Dimensions of Research-Course Design

    PubMed Central

    Corwin, Lisa A.; Runyon, Christopher; Robinson, Aspen; Dolan, Erin L.

    2015-01-01

    Course-based undergraduate research experiences (CUREs) are increasingly being offered as scalable ways to involve undergraduates in research. Yet few if any design features that make CUREs effective have been identified. We developed a 17-item survey instrument, the Laboratory Course Assessment Survey (LCAS), that measures students’ perceptions of three design features of biology lab courses: 1) collaboration, 2) discovery and relevance, and 3) iteration. We assessed the psychometric properties of the LCAS using established methods for instrument design and validation. We also assessed the ability of the LCAS to differentiate between CUREs and traditional laboratory courses, and found that the discovery and relevance and iteration scales differentiated between these groups. Our results indicate that the LCAS is suited for characterizing and comparing undergraduate biology lab courses and should be useful for determining the relative importance of the three design features for achieving student outcomes. PMID:26466990

  3. High density operation for reactor-relevant power exhaust

    NASA Astrophysics Data System (ADS)

    Wischmeier, M.; ASDEX Upgrade Team; Jet Efda Contributors

    2015-08-01

    With increasing tokamak size and the associated fusion power gain, an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for safe and economic operation. Using purely geometric arguments, in an ITER-like divertor this power flux can be reduced by approximately a factor of 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to removing the power deposited on plasma-facing components, a further reduction of the power flux density by up to a factor of 50 via volumetric processes in the plasma is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed, and an operational scenario is presented as a potential solution for ITER-like divertors under high-density, highly radiating, reactor-relevant conditions. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.

  4. Proposed Performance Measures and Strategies for Implementation of the Fatigue Risk Management Guidelines for Emergency Medical Services.

    PubMed

    Martin-Gill, Christian; Higgins, J Stephen; Van Dongen, Hans P A; Buysse, Daniel J; Thackery, Ronald W; Kupas, Douglas F; Becker, David S; Dean, Bradley E; Lindbeck, George H; Guyette, Francis X; Penner, Josef H; Violanti, John M; Lang, Eddy S; Patterson, P Daniel

    2018-02-15

    Performance measures are a key component of implementation, dissemination, and evaluation of evidence-based guidelines (EBGs). We developed performance measures for Emergency Medical Services (EMS) stakeholders to enable the implementation of guidelines for fatigue risk management in the EMS setting. Panelists associated with the Fatigue in EMS Project, which was supported by the National Highway Traffic Safety Administration (NHTSA), used an iterative process to develop a draft set of performance measures linked to 5 recommendations for fatigue risk management in EMS. We used a cross-sectional survey design and the Content Validity Index (CVI) to quantify agreement among panelists on the wording and content of draft measures. An anonymous web-based tool was used to solicit the panelists' perceptions of clarity and relevance of draft measures. Panelists rated the clarity and relevance separately for each draft measure on a 4-point scale. CVI scores ≥0.78 for clarity and relevance were specified a priori to signify agreement and completion of measurement development. Panelists judged 5 performance measures for fatigue risk management as clear and relevant. These measures address use of fatigue and/or sleepiness survey instruments, optimal duration of shifts, access to caffeine as a fatigue countermeasure, use of napping during shift work, and the delivery of education and training on fatigue risk management for EMS personnel. Panelists complemented performance measures with suggestions for implementation by EMS agencies. Performance measures for fatigue risk management in the EMS setting will facilitate the implementation and evaluation of the EBG for Fatigue in EMS.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourham, Mohamed A.; Gilligan, John G.

    Safety considerations in large future fusion reactors like ITER are important before licensing the reactor. Several scenarios are considered hazardous, which include safety of plasma-facing components during hard disruptions, high heat fluxes and thermal stresses during normal operation, accidental energy release, and aerosol formation and transport. Disruption events, in large tokamaks like ITER, are expected to produce local heat fluxes on plasma-facing components, which may exceed 100 GW/m^2 over a period of about 0.1 ms. As a result, the surface temperature dramatically increases, which results in surface melting and vaporization, and produces thermal stresses and surface erosion. Plasma-facing component safety issues extend to cover a wide range of possible scenarios, including disruption severity and the impact of plasma-facing components on disruption parameters, accidental energy release and short/long term LOCA's, and formation of airborne particles by convective current transport during a LOVA (water/air ingress disruption) accident scenario. Study and evaluation of disruption-induced aerosol generation and mobilization are essential to characterize the database on particulate formation and distribution for a large future fusion tokamak reactor like ITER. In order to provide a database relevant to ITER, the SIRENS electrothermal plasma facility at NCSU has been modified to closely simulate heat fluxes expected in ITER.

  6. Advanced density profile reflectometry; the state-of-the-art and measurement prospects for ITER

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.

    2006-10-01

    Dramatic progress in millimeter-wave technology has allowed the realization of a key goal for ITER diagnostics, the routine measurement of the plasma density profile from millimeter-wave radar (reflectometry) measurements. In reflectometry, the measured round-trip group delay of a probe beam reflected from a plasma cutoff is used to infer the density distribution in the plasma. Reflectometer systems implemented by UCLA on a number of devices employ frequency-modulated continuous-wave (FM-CW), ultrawide-bandwidth, high-resolution radar systems. One such system on DIII-D has routinely demonstrated measurements of the density profile over a range of electron density of 0–6.4 × 10^19 m^-3, with ~25 μs time and ~4 mm radial resolution, meeting key ITER requirements. This progress in performance was made possible by multiple advances in the areas of millimeter-wave technology, novel measurement techniques, and improved understanding, including: (i) fast sweep, solid-state, wide bandwidth sources and power amplifiers, (ii) dual polarization measurements to expand the density range, (iii) adaptive radar-based data analysis with parallel processing on a Unix cluster, (iv) high memory depth data acquisition, and (v) advances in full wave code modeling. The benefits of advanced system performance will be illustrated using measurements from a wide range of phenomena, including ELM and fast-ion driven mode dynamics, L-H transition studies and plasma-wall interaction. The measurement capabilities demonstrated by these systems provide a design basis for the development of the main ITER profile reflectometer system. This talk will explore the extent to which these reflectometer system designs, results and experience can be translated to ITER, and will identify what new studies and experimental tests are essential.
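
    The mapping from radar measurement to density profile rests on standard relations (quoted here for context, not taken from the abstract): an O-mode probe wave of frequency f reflects where f equals the local plasma frequency, so each sweep frequency tags one cutoff density, and the measured group delay is inverted to obtain the profile.

```latex
% O-mode cutoff condition and group delay used in FM-CW profile reflectometry
% (textbook relations; notation chosen here, not taken from the source record).
\[
f_{pe}(r) \;=\; \frac{1}{2\pi}\sqrt{\frac{n_e(r)\,e^2}{\varepsilon_0 m_e}},
\qquad
n_{\mathrm{cutoff}}(f) \;=\; \frac{4\pi^2 \varepsilon_0 m_e}{e^2}\,f^2,
\qquad
\tau_g(f) \;=\; \frac{2}{c}\int_{r_c(f)}^{r_{\mathrm{edge}}}
\frac{\mathrm{d}r}{\sqrt{1 - f_{pe}^2(r)/f^2}} .
\]
```

    In the O-mode case the density profile n_e(r) follows from the measured group delay τ_g(f) by an Abel-type inversion.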

  7. A suite of diagnostics to validate and optimize the prototype ITER neutral beam injector

    NASA Astrophysics Data System (ADS)

    Pasqualotto, R.; Agostini, M.; Barbisan, M.; Brombin, M.; Cavazzana, R.; Croci, G.; Dalla Palma, M.; Delogu, R. S.; De Muri, M.; Muraro, A.; Peruzzo, S.; Pimazzoni, A.; Pomaro, N.; Rebai, M.; Rizzolo, A.; Sartori, E.; Serianni, G.; Spagnolo, S.; Spolaore, M.; Tardocchi, M.; Zaniol, B.; Zaupa, M.

    2017-10-01

    The ITER project requires additional heating provided by two neutral beam injectors using 40 A negative deuterium ions accelerated at 1 MV. As the beam requirements have never been experimentally met, a test facility is under construction at Consorzio RFX, which hosts two experiments: SPIDER, full-size 100 kV ion source prototype, and MITICA, 1 MeV full-size ITER injector prototype. Since diagnostics in ITER injectors will be mainly limited to thermocouples, due to neutron and gamma radiation and to limited access, it is crucial to thoroughly investigate and characterize in more accessible experiments the key parameters of source plasma and beam, using several complementary diagnostics assisted by modelling. In SPIDER and MITICA the ion source parameters will be measured by optical emission spectroscopy, electrostatic probes, cavity ring down spectroscopy for H^- density and laser absorption spectroscopy for cesium density. Measurements over multiple lines-of-sight will provide the spatial distribution of the parameters over the source extension. The beam profile uniformity and its divergence are studied with beam emission spectroscopy, complemented by visible tomography and neutron imaging, which are novel techniques, while an instrumented calorimeter based on custom unidirectional carbon fiber composite tiles observed by infrared cameras will measure the beam footprint on short pulses with the highest spatial resolution. All heated components will be monitored with thermocouples: as these will likely be the only measurements available in ITER injectors, their capabilities will be investigated by comparison with other techniques. SPIDER and MITICA diagnostics are described in the present paper with a focus on their rationale, key solutions and most original and effective implementations.

  8. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols

    DTIC Science & Technology

    2010-08-01

    arXiv:1008.3196v1 [cs.IT] 19 Aug 2010. Coded DS-CDMA Systems with Iterative Channel Estimation and no Pilot Symbols. Don... sequence code-division multiple-access (DS-CDMA) systems with quadriphase-shift keying in which channel estimation, coherent demodulation, and decoding... amplitude, phase, and the interference power spectral density (PSD) due to the combined interference and thermal noise is proposed for DS-CDMA systems

  9. Validity and reliability of an in-training evaluation report to measure the CanMEDS roles in emergency medicine residents.

    PubMed

    Kassam, Aliya; Donnon, Tyrone; Rigby, Ian

    2014-03-01

    There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. The combined 24-item ITER yielded a five-factor solution measuring the Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate, and Manager subscales of the CanMEDS roles. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale and α = 0.97 overall. The combined 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.

  10. Learning by Restorying

    ERIC Educational Resources Information Center

    Slabon, Wayne A.; Richards, Randy L.; Dennen, Vanessa P.

    2014-01-01

    In this paper, we introduce restorying, a pedagogical approach based on social constructivism that employs successive iterations of rewriting and discussing personal, student-generated, domain-relevant stories to promote conceptual application, critical thinking, and ill-structured problem solving skills. Using a naturalistic, qualitative case…

  11. Negotiating Tensions Between Theory and Design in the Development of Mailings for People Recovering From Acute Coronary Syndrome

    PubMed Central

    Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, JD; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M

    2017-01-01

    Background: Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely and many do not complete a formal cardiac rehabilitation program.

    Objective: The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments.

    Methods: Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users’ needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly.

    Results: Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users’ feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users’ desires or preferences. We addressed these by (1) identifying root causes for users’ feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients’ self-images.

    Conclusions: Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users’ desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. PMID:28249831

  12. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  13. Status of the 1 MeV Accelerator Design for ITER NBI

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.

    2011-09-01

    The beam source of the neutral beam heating/current drive system for ITER must accelerate a 40 A D- negative ion beam to 1 MeV for 3600 s. To realize this beam source, design and R&D work is under way at many institutions under the coordination of the ITER Organization. Work on the key ion source issues, including source plasma uniformity and suppression of co-extracted electrons in D beam operation, also after long beam durations of over a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. Accelerator development is progressing mainly at JAEA with the MeV test facility, and computer simulation of the beam optics is being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.

  14. Implementation of an integrated preoperative care pathway and regional electronic clinical portal for preoperative assessment.

    PubMed

    Bouamrane, Matt-Mouley; Mair, Frances S

    2014-11-19

    Effective surgical pre-assessment will depend upon the collection of relevant medical information, good data management and communication between the members of the preoperative multi-disciplinary team. NHS Greater Glasgow and Clyde has implemented an electronic preoperative integrated care pathway (eForm) allowing all hospitals to access a comprehensive patient medical history via a clinical portal on the health-board intranet. We conducted six face-to-face semi-structured interviews and participated in one focus group and two workshops with key stakeholders involved in the Planned Care Improvement (PCIP) and Electronic Patient Record programmes. We used qualitative methods and Normalisation Process Theory in order to identify the key factors which led to the successful deployment of the preoperative eForm in the health-board. In January 2013, more than 90,000 patient preoperative assessments had been completed via the electronic portal. Two complementary strategic efforts were instrumental in the successful deployment of the preoperative eForm. At the local health-board level: the PCIP led to the rationalisation of surgical pre-assessment clinics and the standardisation of preoperative processes. At the national level: the eHealth programme selected portal technology as an iterative strategic technology solution towards a virtual electronic patient record. Our study has highlighted clear synergies between these two standardisation efforts. The adoption of the eForm into routine preoperative work practices can be attributed to: (i) a policy context - including performance targets - promoting the rationalisation of surgical pre-assessment pathways, (ii) financial and organisational resources to support service redesign and the use of information technology for operationalising the standardisation of preoperative processes, (iii) a sustained engagement with stakeholders throughout the iterative phases of the preoperative clinics redesign, guidelines standardisation and the eForm development, (iv) the use of a pragmatic and domain-agnostic technology solution and finally: (v) a consensual and contextualised implementation.

  15. Solidification and Acceleration of Large Cryogenic Pellets Relevant for Plasma Disruption Mitigation

    DOE PAGES

    Combs, Stephen Kirk; Meitner, S. J.; Gebhart, T. E.; ...

    2016-01-01

    The technology for producing, accelerating, and shattering large pellets (before injection into plasmas) for disruption mitigation has been under development at the Oak Ridge National Laboratory for several years, including a system on DIII-D that has been used to provide some significant experimental results. The original proof-of-principle testing was carried out using a pipe gun injector cooled by a cryogenic refrigerator (temperatures ~8-20 K) and equipped with a stainless steel tube to produce 16.5-mm pellets composed of either pure D2, pure Ne, or a dual layer with a thin outer shell of D2 and core of Ne. Recently, significant progress has been made in the laboratory using that same pipe gun and a new injector that is an ITER test apparatus cooled with liquid helium. The new injector operates at ~5-8 K, which is similar to temperatures expected with cooling provided by the flow of supercritical helium on ITER. An alternative technique for producing/solidifying large pellets directly from a premixed gas has now been successfully tested in the laboratory. Also, two additional pellet sizes have been tested recently (nominal 24.4 and 34.0 mm diameters). With larger pellets, the number of injectors required for ITER disruption mitigation can be reduced, resulting in less cost and a smaller footprint for the hardware. An attractive option is longer pellets, and 24.4-mm pellets with a length/diameter ratio of ~3 have been successfully tested. Since pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event, recent tests have concentrated on documenting the speeds with different hardware configurations and operating parameters; speeds of ~100-800 m/s have been recorded. The data and results from laboratory testing are presented and discussed, and a simple model for the pellet solidification process is described.

  16. Beryllium R&D for blanket application

    NASA Astrophysics Data System (ADS)

    Donne, M. Dalle; Longhurst, G. R.; Kawamura, H.; Scaffidi-Argentina, F.

    1998-10-01

    The paper describes the main problems and the R&D for the beryllium to be used as neutron multiplier in blankets. As the four ITER partners propose to use beryllium in the form of pebbles for their DEMO relevant blankets (only the Russians consider the porous beryllium option as an alternative) and the ITER breeding blanket will use beryllium pebbles as well, the paper is mainly based on beryllium pebbles. Also the work on the chemical reactivity of fully dense and porous beryllium in contact with water steam is described, due to the safety importance of this point.

  17. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  18. Designing Indigenous Language Revitalization

    ERIC Educational Resources Information Center

    Hermes, Mary; Bang, Megan; Marin, Ananda

    2012-01-01

    Endangered Indigenous languages have received little attention within the American educational research community. However, within Native American communities, language revitalization is pushing education beyond former iterations of culturally relevant curriculum and has the potential to radically alter how we understand culture and language in…

  19. Clinical effort against secondhand smoke exposure: development of framework and intervention.

    PubMed

    Winickoff, Jonathan P; Park, Elyse R; Hipple, Bethany J; Berkowitz, Anna; Vieira, Cecilia; Friebely, Joan; Healey, Erica A; Rigotti, Nancy A

    2008-08-01

    The purpose of this work was to describe a novel process and present results of formative research to develop a pediatric office intervention that uses available systems of care for addressing parental smoking. The scientific development of the intervention occurred in 3 stages. In stage 1, we designed an office system for parental tobacco control in the pediatric outpatient setting on the basis of complementary conceptual frameworks of preventive services delivery, conceptualized for the child health care setting through a process of key interviews with leaders in the field of implementing practice change; existing Public Health Service guidelines that had been shown effective in adult practices; and adaptation of an evidence-based adult office system for tobacco control. This was an iterative process that yielded a theoretically framed intervention prototype. In stage 2, we performed focus-group testing in pediatric practices with pediatricians, nurses, clinical assistants, and key office staff. Using qualitative methods, we adapted the intervention prototype on the basis of this feedback to include 5 key implementation steps for the child health care setting. In stage 3, we presented the intervention to breakout groups at 2 national meetings of pediatric practitioners for additional refinements. The main result was a theoretically grounded intervention that was responsive to the barriers and suggestions raised in the focus groups and at the national meetings. The Clinical Effort Against Secondhand Smoke Exposure intervention was designed to be flexible and adaptable to the particular practices' staffing, resources, and physical configuration. Practice staff can choose materials relevant to their own particular systems of care (www.ceasetobacco.org). Conceptually grounded and focus-group-tested strategies for parental tobacco control are now available for implementation in the pediatric outpatient setting. The tobacco-control intervention-development process might have particular relevance for other chronic pediatric conditions that have a strong evidence base and have available treatments or resources that are underused.

  20. Collaborative human-machine analysis using a controlled natural language

    NASA Astrophysics Data System (ADS)

    Mott, David H.; Shemanski, Donald R.; Giammanco, Cheryl; Braines, Dave

    2015-05-01

    A key aspect of an analyst's task in providing relevant information from data is the reasoning about the implications of that data, in order to build a picture of the real world situation. This requires human cognition, based upon domain knowledge about individuals, events and environmental conditions. For a computer system to collaborate with an analyst, it must be capable of following a similar reasoning process to that of the analyst. We describe ITA Controlled English (CE), a subset of English to represent analyst's domain knowledge and reasoning, in a form that it is understandable by both analyst and machine. CE can be used to express domain rules, background data, assumptions and inferred conclusions, thus supporting human-machine interaction. A CE reasoning and modeling system can perform inferences from the data and provide the user with conclusions together with their rationale. We present a logical problem called the "Analysis Game", used for training analysts, which presents "analytic pitfalls" inherent in many problems. We explore an iterative approach to its representation in CE, where a person can develop an understanding of the problem solution by incremental construction of relevant concepts and rules. We discuss how such interactions might occur, and propose that such techniques could lead to better collaborative tools to assist the analyst and avoid the "pitfalls".

  1. Monte Carlo simulation of ion-material interactions in nuclear fusion devices

    NASA Astrophysics Data System (ADS)

    Nieto Perez, M.; Avalos-Zuñiga, R.; Ramos, G.

    2017-06-01

    One of the key aspects regarding the technological development of nuclear fusion reactors is the understanding of the interaction between high-energy ions coming from the confined plasma and the materials that the plasma-facing components are made of. Among the multiple issues important to plasma-wall interactions in fusion devices, physical erosion and composition changes induced by energetic particle bombardment are considered critical due to possible material flaking, changes to surface roughness, impurity transport and the alteration of physicochemical properties of the near surface region due to phenomena such as redeposition or implantation. A Monte Carlo code named MATILDA (Modeling of Atomic Transport in Layered Dynamic Arrays) has been developed over the years to study phenomena related to ion beam bombardment such as erosion rate, composition changes, interphase mixing and material redeposition, which are issues relevant to plasma-aided manufacturing of microelectronics, components on objects exposed to intense solar wind, fusion reactor technology and other important industrial fields. In the present work, the code is applied to study three cases of plasma-material interactions relevant to fusion devices in order to highlight the code's capabilities: (1) the Be redeposition process on the ITER divertor, (2) physical erosion enhancement in castellated surfaces and (3) damage to multilayer mirrors used on EUV diagnostics in fusion devices due to particle bombardment.

  2. A power information user (PIU) model to promote information integration in Tennessee's public health community.

    PubMed

    Sathe, Nila A; Lee, Patricia; Giuse, Nunzia Bettinsoli

    2004-10-01

    Observation and immersion in the user community are critical factors in designing and implementing informatics solutions; such practices ensure relevant interventions and promote user acceptance. Libraries can adapt these strategies to developing instruction and outreach. While needs assessment is typically a core facet of library instruction, sustained, iterative assessment underlying the development of user-centered instruction is key to integrating resource use into the workflow. This paper describes the Eskind Biomedical Library's (EBL's) recent work with the Tennessee public health community to articulate a training model centered around developing power information users (PIUs). PIUs are community-based individuals with an advanced understanding of information seeking and resource use and are committed to championing information integration. As model development was informed by observation of PIU workflow and information needs, it also allowed for informal testing of the applicability of assessment via domain immersion in library outreach. Though the number of PIUs involved in the project was small, evaluation indicated that the model was useful for promoting information use in PIU workgroups and that the concept of domain immersion was relevant to library-related projects. Moreover, EBL continues to employ principles of domain understanding inherent in the PIU model to develop further interventions for the public health community and library users.

  3. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
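
    To make the piecewise idea concrete, the sketch below sums per-segment Fourier integrals of a piecewise-defined periodic current to obtain its harmonics (a minimal illustration only; the segment boundaries, the toy waveform and all names are hypothetical and do not come from the paper, which derives the ITER PF converter waveforms analytically).

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch of a piecewise harmonic calculation (illustrative only).
# A periodic current i(t) with period T is described segment by segment; the
# n-th complex Fourier coefficient is the sum of the contributions of the
# individual segments, so each simple sub-function is integrated on its own.

T = 0.02  # period in seconds (50 Hz grid assumed for this example)

# (t_start, t_end, i(t)) for each segment -- a crude stand-in for the
# quasi-square current blocks drawn by a thyristor converter.
segments = [
    (0.000, 0.006, lambda t: 1.0),
    (0.006, 0.010, lambda t: 0.0),
    (0.010, 0.016, lambda t: -1.0),
    (0.016, 0.020, lambda t: 0.0),
]

def harmonic(n):
    """n-th complex Fourier coefficient of the piecewise current."""
    c = 0.0 + 0.0j
    for t0, t1, f in segments:
        re, _ = quad(lambda t: f(t) * np.cos(2 * np.pi * n * t / T), t0, t1)
        im, _ = quad(lambda t: f(t) * np.sin(2 * np.pi * n * t / T), t0, t1)
        c += (re - 1j * im) / T
    return c

for n in (1, 3, 5, 7, 11, 13):
    print(f"|c_{n}| = {abs(harmonic(n)):.4f}")
```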

  4. Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses

    NASA Astrophysics Data System (ADS)

    Novak, Roman; Vencelj, Matjaž

    2009-12-01

    The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
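
    For reference, the core of the Gauss-Seidel iteration the abstract builds on is shown below as a plain dense-matrix sketch (not the authors' front-end hardware implementation; the matrix, right-hand side and iteration count are arbitrary examples).

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iterations=20):
    """Solve A x = b with Gauss-Seidel sweeps.

    Each sweep updates the unknowns in place and immediately reuses the
    freshly computed components, which keeps per-iteration cost and storage
    low -- the property exploited for real-time pile-up correction.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iterations):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Small diagonally dominant system (Gauss-Seidel is guaranteed to converge here).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b, iterations=15))   # iterative solution
print(np.linalg.solve(A, b))               # direct reference
```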

  5. An overview of ITER diagnostics (invited)

    NASA Astrophysics Data System (ADS)

    Young, Kenneth M.; Costley, A. E.; ITER-JCT Home Team; ITER Diagnostics Expert Group

    1997-01-01

    The requirements for plasma measurements for operating and controlling the ITER device have now been determined. Initial criteria for the measurement quality have been set, and the diagnostics that might be expected to achieve these criteria have been chosen. The design of the first set of diagnostics to achieve these goals is now well under way. The design effort is concentrating on the components that interact most strongly with the other ITER systems, particularly the vacuum vessel, blankets, divertor modules, cryostat, and shield wall. The relevant details of the ITER device and facility design and specific examples of diagnostic design to provide the necessary measurements are described. These designs have to take account of the issues associated with very high 14 MeV neutron fluxes and fluences, nuclear heating, high heat loads, and high mechanical forces that can arise during disruptions. The design work is supported by an extensive research and development program, which to date has concentrated on the effects these levels of radiation might cause on diagnostic components. A brief outline of the organization of the diagnostic development program is given.

  6. In-vessel tritium retention and removal in ITER

    NASA Astrophysics Data System (ADS)

    Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.

    Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.

  7. Modifying Photovoice for community-based participatory Indigenous research.

    PubMed

    Castleden, Heather; Garvin, Theresa

    2008-03-01

    Scientific research occurs within a set of socio-political conditions, and in Canada research involving Indigenous communities has a historical association with colonialism. Consequently, Indigenous peoples have been justifiably sceptical and reluctant to become the subjects of academic research. Community-Based Participatory Research (CBPR) is an attempt to develop culturally relevant research models that address issues of injustice, inequality, and exploitation. The work reported here evaluates the use of Photovoice, a CBPR method that uses participant-employed photography and dialogue to create social change, which was employed in a research partnership with a First Nation in Western Canada. Content analysis of semi-structured interviews (n=45) evaluated participants' perspectives of the Photovoice process as part of a larger study on health and environment issues. The analysis revealed that Photovoice effectively balanced power, created a sense of ownership, fostered trust, built capacity, and responded to cultural preferences. The authors discuss the necessity of modifying Photovoice, by building in an iterative process, as being key to the methodological success of the project.

  8. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks

    PubMed Central

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-01-01

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement. PMID:28677636
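
    For context, a commonly used power-splitting receiver model is sketched below (a generic textbook form with notation chosen here; it is not necessarily the exact formulation of the paper): a fraction rho_k of the received power feeds the energy harvester and the remainder feeds information decoding.

```latex
% Generic power-splitting SWIPT receiver model (illustrative notation only).
\[
E_k \;=\; \eta\,\rho_k\,P_k\,|h_k|^2\,T ,
\qquad
\mathrm{SNR}_k \;=\; \frac{(1-\rho_k)\,P_k\,|h_k|^2}{\sigma_k^2},
\qquad \rho_k \in [0,1] ,
\]
```

    where η is the energy-conversion efficiency, P_k the power allocated to receive node k, h_k its channel gain, σ_k² its noise power and T the transmission time; in the paper's setting the P_k, the ρ_k and the MUD iteration number are chosen jointly to minimize the total transmit power subject to the per-node harvested-energy and BER constraints.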

  9. Joint Transmit Power Allocation and Splitting for SWIPT Aided OFDM-IDMA in Wireless Sensor Networks.

    PubMed

    Li, Shanshan; Zhou, Xiaotian; Wang, Cheng-Xiang; Yuan, Dongfeng; Zhang, Wensheng

    2017-07-04

    In this paper, we propose to combine Orthogonal Frequency Division Multiplexing-Interleave Division Multiple Access (OFDM-IDMA) with Simultaneous Wireless Information and Power Transfer (SWIPT), resulting in SWIPT aided OFDM-IDMA scheme for power-limited sensor networks. In the proposed system, the Receive Node (RN) applies Power Splitting (PS) to coordinate the Energy Harvesting (EH) and Information Decoding (ID) process, where the harvested energy is utilized to guarantee the iterative Multi-User Detection (MUD) of IDMA to work under sufficient number of iterations. Our objective is to minimize the total transmit power of Source Node (SN), while satisfying the requirements of both minimum harvested energy and Bit Error Rate (BER) performance from individual receive nodes. We formulate such a problem as a joint power allocation and splitting one, where the iteration number of MUD is also taken into consideration as the key parameter to affect both EH and ID constraints. To solve it, a sub-optimal algorithm is proposed to determine the power profile, PS ratio and iteration number of MUD in an iterative manner. Simulation results verify that the proposed algorithm can provide significant performance improvement.

  10. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large area (Asource,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system the 3D PIC MCC code ONIX is exploited. Direct cross checked analysis of the simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (Asource,BATMAN ≈ 0.32 × 0.59 m2) has been conducted for a low perveance beam, but for a full set of plasma parameters available. ONIX has been partially benchmarked by comparison to the results obtained using the commercial particle tracing code for positive ion extraction KOBRA3D. Very good agreement has been found in terms of meniscus position and its shape for simulations of different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only does the extraction potential play a crucial role on the meniscus formation, but also the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus locates a few mm downstream of the plasma grid aperture provoking a direct NI extraction. Most of the surface produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma the extracted NI current from this region is low. This observation indicates a high relevance of the direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction from surface produced ions should be present in order to obtain sufficiently high extracted NI current density. The calculated extracted currents, both ions and electrons, agree rather well with the experiment.
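
    For reference, the Child-Langmuir scaling invoked in the abstract is the space-charge-limited current density of a planar diode (standard textbook form, quoted for context; the real extraction-aperture geometry and the plasma-supplied meniscus are what the ONIX simulations resolve):

```latex
% Space-charge-limited (Child-Langmuir) current density for a planar gap.
\[
j_{\mathrm{CL}} \;=\; \frac{4\,\varepsilon_0}{9}\,
\sqrt{\frac{2 q}{m}}\;\frac{V_{\mathrm{ext}}^{3/2}}{d^{2}} ,
\]
```

    where q and m are the charge and mass of the extracted ion, V_ext the extraction voltage and d the gap length; the V^{3/2} dependence accounts for the role of the extraction potential, while the abstract's point is that the achievable current is further limited by the plasma density and electronegativity feeding the meniscus.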

  11. On Green's function retrieval by iterative substitution of the coupled Marchenko equations

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Vasconcelos, Ivan; Wapenaar, Kees

    2015-11-01

    Iterative substitution of the coupled Marchenko equations is a novel methodology to retrieve the Green's functions from a source or receiver array at an acquisition surface to an arbitrary location in an acoustic medium. The methodology requires as input the single-sided reflection response at the acquisition surface and an initial focusing function, being the time-reversed direct wavefield from the acquisition surface to a specified location in the subsurface. We express the iterative scheme that is applied by this methodology explicitly as the successive actions of various linear operators, acting on an initial focusing function. These operators involve multidimensional crosscorrelations with the reflection data and truncations in time. We offer physical interpretations of the multidimensional crosscorrelations by subtracting traveltimes along common ray paths at the stationary points of the underlying integrals. This provides a clear understanding of how individual events are retrieved by the scheme. Our interpretation also exposes some of the scheme's limitations in terms of what can be retrieved in case of a finite recording aperture. Green's function retrieval is only successful if the relevant stationary points are sampled. As a consequence, internal multiples can only be retrieved at a subsurface location with a particular ray parameter if this location is illuminated by the direct wavefield with this specific ray parameter. Several assumptions are required to solve the Marchenko equations. We show that these assumptions are not always satisfied in arbitrary heterogeneous media, which can result in incomplete Green's function retrieval and the emergence of artefacts. Despite these limitations, accurate Green's functions can often be retrieved by the iterative scheme, which is highly relevant for seismic imaging and inversion of internal multiple reflections.
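
    The iterative substitution described above can be pictured with a deliberately simplified, single-trace sketch. The snippet below is not the authors' scheme: it collapses the multidimensional crosscorrelations to 1-D frequency-domain products, glosses over the sign and time-reversal conventions of the actual coupled Marchenko equations, and the names (crosscorr, window, marchenko_sketch) are invented for illustration.

```python
import numpy as np

def crosscorr(reflection, field):
    # Frequency-domain product standing in for the multidimensional
    # crosscorrelation with the reflection data (single trace only).
    return np.fft.irfft(np.fft.rfft(reflection) * np.fft.rfft(field),
                        n=len(field))

def window(field, n_direct):
    # Truncation in time (the theta operator): keep samples before the
    # direct arrival, mute the rest.
    out = np.zeros_like(field)
    out[:n_direct] = field[:n_direct]
    return out

def marchenko_sketch(reflection, f_plus_init, n_direct, n_iter=10):
    """Schematic iterative substitution: the focusing functions are
    updated by repeated crosscorrelation with the reflection response
    followed by a time truncation. Signs and time-reversal conventions
    of the actual coupled Marchenko equations are simplified."""
    f_plus = f_plus_init.copy()
    for _ in range(n_iter):
        f_minus = window(crosscorr(reflection, f_plus), n_direct)
        f_plus = f_plus_init + window(crosscorr(reflection, f_minus[::-1]),
                                      n_direct)
    # The Green's functions would follow from the converged focusing
    # functions and the reflection data (omitted here).
    return f_plus, f_minus

if __name__ == "__main__":
    nt = 512
    refl = np.zeros(nt); refl[60] = 0.4; refl[150] = -0.25  # toy reflection response
    f0 = np.zeros(nt); f0[40] = 1.0                         # toy initial focusing function
    fp, fm = marchenko_sketch(refl, f0, n_direct=40)
    print(fp.shape, fm.shape)
```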

  12. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both the “component-by-component” and the “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and on measurements of continuum radiation emitted from helium plasmas, which offer excellent conditions for absolute photon flux calibration owing to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigation.

  13. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT) based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption and scattering coefficients of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed owing to its low computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of the optical coefficients involves solutions of the original and adjoint RTEs in an optimization framework, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
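
    As a reminder of how the Barzilai-Borwein step size is formed from successive iterates and gradients, here is a generic, hedged sketch; in actual QPAT the gradient would come from forward and adjoint RTE solves, which are not reproduced here, and the toy quadratic misfit below is purely illustrative.

```python
import numpy as np

def barzilai_borwein(grad, x0, n_iter=50, alpha0=1e-3):
    """Generic Barzilai-Borwein gradient iteration (illustration only):
    the step size is rebuilt each iteration from the last displacement s
    and gradient change y via alpha = (s.s) / (s.y)."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g              # gradient step with BB step size
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        denom = np.dot(s, y)
        alpha = np.dot(s, s) / denom if denom != 0 else alpha0
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Toy quadratic misfit standing in for the two optical coefficients.
    target = np.array([0.3, 1.2])
    grad = lambda x: 2.0 * (x - target)
    print(barzilai_borwein(grad, np.zeros(2)))   # converges to the target
```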

  14. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Edwards, J.; Haan, S. W.; Robey, H. F.; Milovich, J.; Spears, B. K.; Weber, S. V.; Clark, D. S.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.; Atherton, J.; Amendt, P. A.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Frenje, J. A.; Glenzer, S. H.; Hamza, A.; Hammel, B. A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kilkenny, J. D.; Kirkwood, R. K.; Kline, J. L.; Kyrala, G. A.; Marinak, M. M.; Meezan, N.; Meyerhofer, D. D.; Michel, P.; Munro, D. H.; Olson, R. E.; Nikroo, A.; Regan, S. P.; Suter, L. J.; Thomas, C. A.; Wilson, D. C.

    2011-05-01

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals, technique options and down-selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  15. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source, including the expansion, extraction, and part of the acceleration regions, while keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection fields have been included, and a negative ion current density of j(H⁻) = 660 A/m² from the plasma grid (PG) is used as a parameter for the neutral conversion. The driver is not yet included and a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.

  16. Relations between elliptic multiple zeta values and a special derivation algebra

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver

    2016-04-01

    We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow the indecomposable representatives to be counted. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.

  17. Iterative learning control with applications in energy generation, lasers and health care.

    PubMed

    Rogers, E; Tutty, O R

    2016-09-01

    Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor and then return for the next one and so on. Iterative learning control was especially developed for systems with this mode of operation and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care, as exemplars to demonstrate its applicability.
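
    As an illustration of the basic idea only (not drawn from the paper), a minimal first-order iterative learning controller updates the input for the next trial with a scaled copy of the tracking error recorded on the completed trial; the learning gain and the toy plant below are arbitrary assumptions.

```python
import numpy as np

def ilc_first_order(plant, reference, learning_gain=0.8, n_trials=50):
    """Minimal first-order iterative learning control sketch (not taken
    from the paper): after each completed trial of the finite-duration
    task, the input is corrected with that trial's tracking error,
    u_{k+1} = u_k + L * e_k."""
    u = np.zeros_like(reference)
    errors = []
    for _ in range(n_trials):
        y = plant(u)
        e = reference - y              # error recorded over the whole trial
        u = u + learning_gain * e      # learn from the completed trial
        errors.append(np.max(np.abs(e)))
    return u, errors

if __name__ == "__main__":
    # Toy discrete first-order plant with direct feedthrough (assumed).
    def plant(u, a=0.8):
        y = np.zeros_like(u)
        for t in range(len(u)):
            y[t] = a * (y[t - 1] if t else 0.0) + (1 - a) * u[t]
        return y

    ref = np.ones(50)
    _, errs = ilc_first_order(plant, ref)
    print(errs[0], errs[-1])   # the tracking error shrinks over the trials
```

    In practice the scalar gain is replaced by a learning filter designed from a model of the repeated task, which is where the design methods surveyed in the paper come in.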

  18. Fostering Earth Observation Regional Networks - Integrative and iterative approaches to capacity building

    NASA Astrophysics Data System (ADS)

    Habtezion, S.

    2015-12-01

    As part of the Global Observation of Forest and Land Cover Dynamics (GOFC-GOLD) project partnership effort to promote use of earth observations in advancing scientific knowledge, START works to bridge capacity needs related to earth observations (EOs) and their applications in the developing world. GOFC-GOLD regional networks, fostered through the support of regional and thematic workshops, have been successful in (1) enabling scientists from developing countries and from the US to collaborate on key GOFC-GOLD and Land Cover and Land Use Change (LCLUC) issues, including NASA Global Data Set validation, and (2) training young developing-country scientists to gain key skills in EOs data management and analysis. Members of the regional networks are also engaged and re-engaged in other EOs programs (e.g. the visiting scientists program and data initiative fellowship programs at the USGS EROS Center and Boston University), which has helped strengthen these networks. The presentation draws from these experiences in advocating for integrative and iterative approaches to capacity building through the lens of the GOFC-GOLD partnership effort. Specifically, this presentation describes the role of the GOFC-GOLD partnership in nurturing organic networks of scientists and EOs practitioners in Asia, Africa, Eastern Europe and Latin America.

  19. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors and a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months, followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built, with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  20. Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-05-01

    In most of the existing image encryption algorithms the generated keys are in the form of a noise-like distribution with a uniformly distributed histogram. However, the noise-like distribution is an apparent sign indicating the presence of the keys. If the keys are to be transferred through some communication channel, this may lead to a security problem, because the noise-like features may easily catch people's attention and attract more attacks. To address this problem, the keys need to be transferred into other meaningful images to mislead the attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the role of the keys needed to decrypt the meaningful data. In this article, we have transferred the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This has been done by developing a modified fusion technique in the wavelet transform domain. The second issue is how to resolve copyright protection in case the meaningful images are intercepted by an attacker. To resolve this issue, watermark detection plays a crucial role, and it is necessary to recover the original image using the retrieved watermarks/support constraints. To this end, four asymmetric keys have been generated corresponding to each watermarked image to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.
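
    The iterative phase retrieval step mentioned at the end of the abstract can be pictured with a generic Gerchberg-Saxton-style loop. This is only a schematic stand-in: the paper's amplitude- and phase-truncation approach with four asymmetric keys is not reproduced, and the array sizes and names below are arbitrary assumptions.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
    """Generic Gerchberg-Saxton-style iterative phase retrieval
    (illustration only): alternately enforce the known amplitudes in the
    object and Fourier domains while keeping the evolving phase."""
    rng = np.random.default_rng(seed)
    field = source_amp * np.exp(2j * np.pi * rng.random(source_amp.shape))
    for _ in range(n_iter):
        spectrum = np.fft.fft2(field)
        spectrum = target_amp * np.exp(1j * np.angle(spectrum))
        field = np.fft.ifft2(spectrum)
        field = source_amp * np.exp(1j * np.angle(field))
    return np.angle(field)    # the retrieved phase plays the role of a key

if __name__ == "__main__":
    img = np.random.default_rng(1).random((64, 64))
    key_phase = gerchberg_saxton(np.ones((64, 64)), np.abs(np.fft.fft2(img)))
    print(key_phase.shape)
```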

  1. Advances in the high bootstrap fraction regime on DIII-D towards the Q = 5 mission of ITER steady state

    DOE PAGES

    Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.; ...

    2017-03-20

    Recent EAST/DIII-D joint experiments on the high poloidal beta (βP) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high-βP discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that, with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at βN ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, results reported in this paper suggest that the DIII-D high-βP scenario could be a candidate for ITER steady-state operation.

  2. Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.

    PubMed

    Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu

    2015-10-01

    Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.

  3. Development of a video-based education and process change intervention to improve advance cardiopulmonary resuscitation decision-making.

    PubMed

    Waldron, Nicholas; Johnson, Claire E; Saul, Peter; Waldron, Heidi; Chong, Jeffrey C; Hill, Anne-Marie; Hayes, Barbara

    2016-10-06

    Advance cardiopulmonary resuscitation (CPR) decision-making and escalation of care discussions are variable in routine clinical practice. We aimed to explore physician barriers to advance CPR decision-making in an inpatient hospital setting and develop a pragmatic intervention to support clinicians to undertake and document routine advance care planning discussions. Two focus groups, which involved eight consultants and ten junior doctors, were conducted following a review of the current literature. A subsequent iterative consensus process developed two intervention elements: (i) an updated 'Goals of Patient Care' (GOPC) form and process; (ii) an education video and resources for teaching advance CPR decision-making and communication. A multidisciplinary group of health professionals and policy-makers with experience in systems development, education and research provided critical feedback. Three key themes emerged from the focus groups and the literature, which identified a structure for the intervention: (i) knowing what to say; (ii) knowing how to say it; (iii) wanting to say it. The themes informed the development of a video to provide education about advance CPR decision-making framework, improving communication and contextualising relevant clinical issues. Critical feedback assisted in refining the video and further guided development and evolution of a medical GOPC approach to discussing and recording medical treatment and advance care plans. Through an iterative process of consultation and review, video-based education and an expanded GOPC form and approach were developed to address physician and systemic barriers to advance CPR decision-making and documentation. Implementation and evaluation across hospital settings is required to examine utility and determine effect on quality of care.

  4. Evaluation of power transfer efficiency for a high power inductively coupled radio-frequency hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.

    2018-04-01

    Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Due to its various advantages, a radio frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices whose operational working frequency has been chosen to be 1 MHz; they are characterized by a high RF power density (~9.4 W cm-3) and a low operational pressure (around 0.3 Pa). The RF field is produced by a coil in a cylindrical chamber, leading to plasma generation followed by expansion inside the chamber. This paper recalls the concepts on which a methodology is developed to evaluate the efficiency of the RF power transfer to the hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and of other source and plasma operating parameters. The study is applied to a high power IC RF hydrogen ion source similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).

  5. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.
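
    A rough sketch of the two-step iteration (mean-field assignments followed by a superposition weighted by those assignments) might look as follows. The patent's cost function, annealing schedule and solver details are not reproduced; the softmax assignments, the Kabsch superposition and the toy backbone in the example are assumptions made purely for illustration.

```python
import numpy as np

def weighted_kabsch(P, targets, w):
    """Weighted optimal rotation and translation mapping P onto the
    target coordinates (standard Kabsch algorithm; an assumption here,
    the patent text does not prescribe this particular solver)."""
    w = w / w.sum()
    p0 = (w[:, None] * P).sum(axis=0)
    q0 = (w[:, None] * targets).sum(axis=0)
    H = (P - p0).T @ (w[:, None] * (targets - q0))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (P - p0) @ R.T + q0

def mean_field_alignment(P, Q, n_iter=30, beta=3.0, anneal=1.3):
    """Sketch of the two-step iteration described in the abstract:
    (1) mean-field (soft) assignment variables between the atoms of the
    two structures, (2) rotation/translation of the coordinates weighted
    by those assignments. Simple stand-ins replace the patent's details."""
    Pa = P.copy()
    for _ in range(n_iter):
        d2 = ((Pa[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
        d2 -= d2.min(axis=1, keepdims=True)          # numerical safety
        V = np.exp(-beta * d2)
        V /= V.sum(axis=1, keepdims=True)            # soft assignment matrix
        targets = V @ Q                              # assignment-weighted partners
        confidence = V.max(axis=1)
        Pa = weighted_kabsch(Pa, targets, confidence)
        beta *= anneal                               # sharpen assignments gradually
    return Pa, V

if __name__ == "__main__":
    t = 0.6 * np.arange(30)
    Q = np.stack([np.cos(t), np.sin(t), 0.1 * np.arange(30)], axis=1)  # toy backbone
    c, s = np.cos(0.2), np.sin(0.2)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    P = Q @ R.T + 0.01 * rng.normal(size=Q.shape)
    aligned, _ = mean_field_alignment(P, Q)
    print(np.linalg.norm(P - Q, axis=1).mean(),
          np.linalg.norm(aligned - Q, axis=1).mean())  # deviation should shrink
```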

  6. Carbon charge exchange analysis in the ITER-like wall environment.

    PubMed

    Menmuir, S; Giroud, C; Biewer, T M; Coffey, I H; Delabie, E; Hawkes, N C; Sertoli, M

    2014-11-01

    Charge exchange spectroscopy has long been a key diagnostic tool for fusion plasmas and is well developed in devices with Carbon Plasma-Facing Components. Operation with the ITER-like wall at JET has resulted in changes to the spectrum in the region of the Carbon charge exchange line at 529.06 nm and demonstrates the need to revise the core charge exchange analysis for this line. An investigation has been made of this spectral region in different plasma conditions and the revised description of the spectral lines to be included in the analysis is presented.

  7. Development of the prototype pneumatic transfer system for ITER neutron activation system.

    PubMed

    Cheon, M S; Seon, C R; Pak, S; Lee, H G; Bertalot, L

    2012-10-01

    The neutron activation system (NAS) measures neutron fluence at the first wall and the total neutron flux from the ITER plasma, providing evaluation of the fusion power for all operational phases. The pneumatic transfer system (PTS) is one of the key components of the NAS for the proper operation of the system, playing a role of transferring encapsulated samples between the capsule loading machine, irradiation stations, counting stations, and disposal bin. For the validation and the optimization of the design, a prototype of the PTS was developed and capsule transfer tests were performed with the developed system.

  8. Augmenting the one-shot framework by additional constraints

    DOE PAGES

    Bosse, Torsten

    2016-05-12

    The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.
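
    The structure of the augmented iteration (state update, adjoint update and preconditioned design update advanced together) can be sketched as below. This is a schematic under strong assumptions: a contractive fixed-point map for the state equation, user-supplied derivatives, and a scaled identity in place of the problem-specific preconditioner; the additional equality constraints introduced in the paper are not included, and the toy problem is invented for illustration.

```python
import numpy as np

def one_shot_sketch(G, dGdy, dGdu, dJdy, dJdu, y0, u0, step=0.1, n_iter=200):
    """Schematic (multistep) one-shot iteration: instead of converging
    the state first, one step each of the primal fixed point, the
    adjoint iteration and the (preconditioned) design update is taken
    per outer iteration."""
    y, u = np.array(y0, dtype=float), np.array(u0, dtype=float)
    ybar = np.zeros_like(y)
    for _ in range(n_iter):
        y = G(y, u)                                        # primal fixed-point step
        ybar = dGdy(y, u).T @ ybar + dJdy(y, u)            # adjoint step
        u = u - step * (dJdu(y, u) + dGdu(y, u).T @ ybar)  # design update
    return y, u

if __name__ == "__main__":
    # Toy problem: state y = 0.5*y + u (so y = 2u), objective
    # J = 0.5*(y - 1)**2 + 0.5*u**2, whose minimizer is u = 0.4, y = 0.8.
    G = lambda y, u: 0.5 * y + u
    dGdy = lambda y, u: np.array([[0.5]])
    dGdu = lambda y, u: np.array([[1.0]])
    dJdy = lambda y, u: y - 1.0
    dJdu = lambda y, u: u
    print(one_shot_sketch(G, dGdy, dGdu, dJdy, dJdu, [0.0], [0.0]))
```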

  9. Augmenting the one-shot framework by additional constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosse, Torsten

    The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.

  10. ICRH system performance during ITER-Like Wall operations at JET and the outlook for DT campaign

    NASA Astrophysics Data System (ADS)

    Monakhov, Igor; Blackman, Trevor; Dumortier, Pierre; Durodié, Frederic; Jacquet, Philippe; Lerche, Ernesto; Noble, Craig

    2017-10-01

    The performance of the JET ICRH system since the installation of the metal ITER-Like Wall (ILW) has been assessed statistically. The data demonstrate a steady increase of the RF power coupled to plasmas over recent years, with the maximum pulse-average and peak values exceeding 6 MW and 8 MW, respectively, in 2016. An analysis and extrapolation of the power capabilities of the conventional JET ICRH antennas is provided and key performance-limiting factors are discussed. The RF plant operational frequency options are presented, highlighting the issues of efficient ICRH application within the foreseeable range of DT plasma scenarios.

  11. A Structured Decision Approach for Integrating and Analyzing Community Perspectives in Re-Use Planning of Vacant Properties in Cleveland, Ohio

    EPA Science Inventory

    An integrated GIS-based, multi-attribute decision model deployed in a web-based platform is presented enabling an iterative, spatially explicit and collaborative analysis of relevant and available information for repurposing vacant land. The process incorporated traditional and ...

  12. Iterative Exploration, Design and Evaluation of Support for Query Reformulation in Interactive Information Retrieval.

    ERIC Educational Resources Information Center

    Belkin, N. J.; Cool, C.; Kelly, D.; Lin, S. -J.; Park, S. Y.; Perez-Carballo, J.; Sikora, C.

    2001-01-01

    Reports on the progressive investigation of techniques for supporting interactive query reformulation in the TREC (Text Retrieval Conference) Interactive Track. Highlights include methods of term suggestion; interface design to support different system functionalities; an overview of each year's TREC investigation; and relevance to the development…

  13. Negotiating Tensions Between Theory and Design in the Development of Mailings for People Recovering From Acute Coronary Syndrome.

    PubMed

    Witteman, Holly O; Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, J D; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M

    2017-03-01

    Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely and many do not complete a formal cardiac rehabilitation program. The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments. Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users' needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly. Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users' feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users' desires or preferences. We addressed these by (1) identifying root causes for users' feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients' self-images. Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users' desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. ©Holly O Witteman, Justin Presseau, Emily Nicholas Angl, Iffat Jokhio, JD Schwalm, Jeremy M Grimshaw, Beth Bosiak, Madhu K Natarajan, Noah M Ivers. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 01.03.2017.

  14. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
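
    A bare-bones version of the non-overlapping (block Jacobi) variant can be sketched as follows. The fuzzy overlapping blocks, K-means clustering and DIIS extrapolation that the paper identifies as key to its performance are deliberately left out, and the matrix in the usage example is a generic symmetric positive definite stand-in rather than a physical dipole interaction matrix.

```python
import numpy as np

def block_jacobi_dipoles(A, E0, blocks, n_iter=100, tol=1e-10):
    """Sketch of a divide-and-conquer (block Jacobi) iteration for the
    linear polarization equations A mu = E0: each non-overlapping block
    is solved directly from its Cholesky factor, while couplings between
    blocks are picked up iteratively."""
    mu = np.zeros_like(E0, dtype=float)
    factors = [np.linalg.cholesky(A[np.ix_(b, b)]) for b in blocks]
    for _ in range(n_iter):
        mu_new = np.empty_like(mu)
        for b, L in zip(blocks, factors):
            # Field on this block: external field minus the contribution
            # of all *other* blocks evaluated at the previous iterate.
            rhs = E0[b] - A[b] @ mu + A[np.ix_(b, b)] @ mu[b]
            mu_new[b] = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 12
    M = rng.normal(size=(n, n))
    A = M @ M.T + 8 * n * np.eye(n)     # made strongly diagonally dominant
    E0 = rng.normal(size=n)
    blocks = [list(range(0, 6)), list(range(6, 12))]
    mu = block_jacobi_dipoles(A, E0, blocks)
    print(np.linalg.norm(A @ mu - E0))  # small residual once converged
```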

  15. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as with experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved a one-order-of-magnitude improvement in iteration number and a two-orders-of-magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
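
    To make the flavour of such an iterative baseline estimation concrete, here is a simplified sketch combining Savitzky-Golay smoothing, clipping under the measured spectrum, and a relaxation factor. It is a stand-in rather than the RIA-SG-SR algorithm itself: the Gauss-Seidel update and the range-independent formulation are not reproduced, and the window length, polynomial order and relaxation value are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, poly=3,
                          relax=1.2, n_iter=100, tol=1e-6):
    """Illustrative iterative Savitzky-Golay baseline estimation with a
    modest over-relaxation: the estimate is smoothed, clipped to stay
    below the measured spectrum (so peaks are excluded), and relaxed."""
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, poly)
        candidate = np.minimum(spectrum, smoothed)
        new_baseline = baseline + relax * (candidate - baseline)
        if np.max(np.abs(new_baseline - baseline)) < tol:
            return new_baseline
        baseline = new_baseline
    return baseline

if __name__ == "__main__":
    x = np.linspace(0, 1, 2000)
    fluorescence = 5.0 * np.exp(-2.0 * x)                    # broad background
    raman = 0.5 * np.exp(-0.5 * ((x - 0.4) / 0.004) ** 2)    # narrow peak
    y = fluorescence + raman + 0.01 * np.random.default_rng(0).normal(size=x.size)
    corrected = y - iterative_sg_baseline(y)
    print(corrected.max())   # should be close to the 0.5 peak height
```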

  16. Message from the Editor

    NASA Astrophysics Data System (ADS)

    Stambaugh, Ronald D.

    2013-01-01

    The journal Nuclear Fusion has played a key role in the development of the physics basis for fusion energy. That physics basis has been sufficiently advanced to enable construction of such major facilities as ITER along the tokamak line in magnetic fusion and the National Ignition Facility (NIF) in laser-driven fusion. In the coming decade, while ITER is being constructed and brought into deuterium-tritium (DT) operation, this physics basis will be significantly deepened and extended, with particular key remaining issues addressed. Indeed such a focus was already evident with about 19% of the papers submitted to the 24th IAEA Fusion Energy Conference in San Diego, USA appearing in the directly labelled ITER and IFE categories. Of course many of the papers in the other research categories were aimed at issues relevant to these major fusion directions. About 17% of the papers submitted in the 'Experiment and Theory' categories dealt with the highly ITER relevant and inter-related issues of edge-localized modes, non-axisymmetric fields and plasma rotation. It is gratifying indeed to see how the international community is able to make such a concerted effort, facilitated by the ITPA and the ITER-IO, around such a major issue for ITER. In addition to deepening and extending the physics bases for the mainline approaches to fusion energy, the coming decade should see significant progress in the physics basis for additional fusion concepts. The stellarator concept should reach a high level of maturity with such facilities as LHD operating in Japan and already producing significant results and the W7-X in the EU coming online soon. Physics issues that require pulses of hundreds of seconds to investigate can be confronted in the new superconducting tokamaks coming online in Asia and in the major stellarators. The basis for steady-state operation of a tokamak may be further developed in the upper half of the tokamak operating space—the wall stabilized regime. New divertor geometries are already being investigated. Progress should continue on additional driver approaches in inertial fusion. Nuclear Fusion will continue to play a major role in documenting the significant advances in fusion plasma science on the way to fusion energy. Successful outcomes in projects like ITER and NIF will bring sharply into focus the remaining significant issues in fusion materials science and fusion nuclear science and technology needed to move from the scientific feasibility of fusion to the actual realization of fusion power production. These issues are largely common to magnetic and inertial fusion. Progress in these areas has been limited by the lack of suitable major research facilities. Hopefully the coming decade will see progress along these lines. Nuclear Fusion will play its part with increased papers reporting significant advances in fusion materials and nuclear science and technology. The reputation and status of the journal remains high; paper submissions are increasing and the Impact Factor for the journal remains high at 4.09 for 2011. We look forward in the coming months to publishing expanded versions of many of the outstanding papers presented at the IAEA FEC in San Diego. We congratulate Dr Patrick Diamond of the University of California at San Diego for winning the 2012 Nuclear Fusion Prize for his paper [1] and Dr Hajime Urano of the Japan Atomic Energy Agency for winning the 2011 Nuclear Fusion Prize for his paper [2]. 
Papers of such quality by our many authors enable the high standard of the journal to be maintained. The Nuclear Fusion editorial office understands how much effort is required by our referees. The Editorial Board decided that an expression of thanks to our most loyal referees is appropriate and so, since January 2005, we have been offering ten of the most active referees over the past year a personal subscription to Nuclear Fusion with electronic access for one year, free of charge. This year, three of the top referees have reviewed five manuscripts in the period November 2011 to December 2012 and provided excellent advice to the authors. We have excluded our Board Members, Guest Editors of special editions and those referees who were already listed in recent years. The following people have been selected: Marina Becoulet, CEA-Cadarache, France Jiaqui Dong, Southwestern Institute of Physics, China Emiliano Fable, Max-Planck-Institut für Plasmaphysik, Germany Ambrogio Fasoli, Ecole Polytechnique Federale de Lausanne, Switzerland Eric Fredrickson, Princeton Plasma Physics Laboratory, USA Manuel Garcia-Munoz, Max-Planck-Institut fuer Plasmaphysik, Germany William Heidbrink, California University, USA Katsumi Ida, National Inst. For Fusion Science, Japan Peter Stangeby, Toronto University, Canada James Strachan, Princeton Plasma Physics Laboratory, USA Victor Yavorskij, Ukraine National Academy of Sciences, Ukraine In addition, there is a group of several hundred referees who have helped us in the past year to maintain the high scientific standard of Nuclear Fusion. At the end of this issue we give the full list of all referees for 2012. Our thanks to them!

  17. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R. J., E-mail: lahaye@fusion.gat.com

    2015-12-10

    ITER is an international project to design and build an experimental fusion reactor based on the “tokamak” concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of “H-mode” and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the “missing” current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM “seeding” instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included. However, a “wild card” may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  18. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.

    2015-12-01

    ITER is an international project to design and build an experimental fusion reactor based on the "tokamak" concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of "H-mode" and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the "missing" current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM "seeding" instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included. However, a "wild card" may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  19. Fast-ion stabilization of tokamak plasma turbulence

    NASA Astrophysics Data System (ADS)

    Di Siena, A.; Görler, T.; Doerk, H.; Poli, E.; Bilato, R.

    2018-05-01

    A significant reduction of the turbulence-induced anomalous heat transport has been observed in recent studies of magnetically confined plasmas in the presence of a significant fast-ion fraction. Therefore, the control of fast-ion populations with external heating might open the way to more optimistic scenarios for future fusion devices. However, little is known about the parameter range in which these fast-ion effects are relevant, as they are often only highlighted in correlation with substantial electromagnetic fluctuations. Here, a significant fast-ion-induced stabilization is also found in both linear and nonlinear electrostatic gyrokinetic simulations, which cannot be explained with the conventional assumptions based on pressure profile and dilution effects. Strong wave-fast particle resonant interactions are observed for realistic parameters where the fast particle trace approximation clearly fails, and are explained with the help of a reduced Vlasov model. In contrast to previous interpretations, fast particles can actively modify the Poisson field equation, even at low fast particle densities where dilution tends to be negligible and at relatively high temperatures, i.e. T < 30 Te. Further key parameters controlling the role of the fast ions are identified and various ways of further optimizing their beneficial impact are explored. Finally, possible extensions into the electromagnetic regime are briefly discussed and the relevance of these findings for ITER standard scenarios is highlighted.

  20. Computation of Electron Impact Ionization Cross sections of Iron Hydrogen Clusters - Relevance in Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Patel, Umang; Joshipura, K. N.

    2017-04-01

    Plasma-wall interaction (PWI) is one of the key issues in nuclear fusion research. In nuclear fusion devices, such as the JET tokamak or ITER, first-wall materials will be directly exposed to plasma components. Erosion of first-wall materials is a consequence of the impact of hydrogen and its isotopes, the main constituents of the hot plasma. Besides the formation of gas-phase atomic species in various charge states, di- and polyatomic molecular species are expected to be formed via PWI processes. These compounds may profoundly disturb the fusion plasma and may lead to unfavorable re-deposition of materials and composites in other areas of the vessel. Interactions between atoms and molecules, as well as the transport of impurities, are of interest for the modelling of fusion plasmas. Electron impact ionization cross sections (Qion) are also important in low-temperature plasma processing, astrophysics, etc. We report electron impact Qion for iron hydrogen clusters, FeHn (n = 1 to 10), from the ionization threshold to 2000 eV. A semi-empirical approach called Complex Scattering Potential - Ionization Contribution (CSP-ic) has been employed for the reported calculations. In the context of fusion-relevant species, Qion have been reported for beryllium and its hydrides, tungsten and its oxides, and beryllium-tungsten clusters by Huber et al. Iron hydrogen clusters are another such species; their Qion, calculated through the DM and BEB formalisms, are compared with the present calculations.

  1. Efficient entanglement distillation without quantum memory.

    PubMed

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman

    2016-05-31

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.

  2. Efficient entanglement distillation without quantum memory

    PubMed Central

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman

    2016-01-01

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946

  3. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key element of the IDR algorithms is the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and rely on orthogonalization with respect to a fixed subspace. Other independent approaches to analysing and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures together with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces provides an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  4. DIII-D accomplishments and plans in support of fusion next steps

    DOE PAGES

    Buttery, R. J; Eidietis, N.; Holcomb, C.; ...

    2013-06-01

    DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. Operating scenarios for ITER have been adapted to low torque and are now being optimized for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high-βN-capable steady-state regimes. Divertor research is identifying the challenge, physics and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping optimize design choices and prepare the scientific tools for operation in ITER, and to resolve key elements of the plasma configuration and divertor solution for an FNSF.

  5. Helping System Engineers Bridge the Peaks

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen

    2014-01-01

    In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.

  6. Iterative learning control with applications in energy generation, lasers and health care

    PubMed Central

    Tutty, O. R.

    2016-01-01

    Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor and then return for the next one and so on. Iterative learning control was especially developed for systems with this mode of operation and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care, as exemplars to demonstrate its applicability. PMID:27713654

  7. Key achievements in elementary R&D on water-cooled solid breeder blanket for ITER test blanket module in JAERI

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.

    2006-02-01

    This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.

  8. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

    A new application of the VENUS code is described, which computes alpha particle orbits in perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered the most important and relevant case for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE modes.

  9. EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granucci, G.; Ricci, D.; Farina, D.

    The breakdown and plasma start-up in ITER are well-known issues studied over the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron (EC) power to assist plasma formation and current ramp-up has been foreseen. This has drawn attention to the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, owing to the complexity of the problem. A simplified approach, extended from that proposed in ref [1], has been developed, including an impurity multispecies distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependence on the impurity content and the neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1 - 2 MW) seems to be enough to extend significantly the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.

  10. Small-Scale Design Experiments as Working Space for Larger Mobile Communication Challenges

    ERIC Educational Resources Information Center

    Lowe, Sarah; Stuedahl, Dagny

    2014-01-01

    In this paper, a design experiment using Instagram as a cultural probe is submitted as a method for analyzing the challenges that arise when considering the implementation of social media within a distributed communication space. It outlines how small, iterative investigations can reveal deeper research questions relevant to the education of…

  11. UserTesting.com: A Tool for Usability Testing of Online Resources

    ERIC Educational Resources Information Center

    Koundinya, Vikram; Klink, Jenna; Widhalm, Melissa

    2017-01-01

    Extension educators are increasingly using online resources in their program design and delivery. Usability testing is essential for ensuring that these resources are relevant and useful to learners. On the basis of our experiences with iteratively developing products using a testing service called UserTesting, we promote the use of fee-based…

  12. Development of a coping intervention to improve traumatic stress and HIV care engagement among South African women with sexual trauma histories.

    PubMed

    Sikkema, Kathleen J; Choi, Karmel W; Robertson, Corne; Knettel, Brandon A; Ciya, Nonceba; Knippler, Elizabeth T; Watt, Melissa H; Joska, John A

    2018-06-01

    This paper describes the development and preliminary trial run of ImpACT (Improving AIDS Care after Trauma), a brief coping intervention to address traumatic stress and HIV care engagement among South African women with sexual trauma histories. We engaged in an iterative process to culturally adapt a cognitive-behavioral intervention for delivery within a South African primary care clinic. This process involved three phases: (a) preliminary intervention development, drawing on content from a prior evidence-based intervention; (b) contextual adaptation of the curriculum through formative data collection using a multi-method qualitative approach; and (c) pre-testing of trauma screening procedures and a subsequent trial run of the intervention. Feedback from key informant interviews and patient in-depth interviews guided the refinement of session content and adaptation of key intervention elements, including culturally relevant visuals, metaphors, and interactive exercises. The trial run curriculum consisted of four individual sessions and two group sessions. Strong session attendance during the trial run supported the feasibility of ImpACT. Participants responded positively to the logistics of the intervention delivery and the majority of session content. Trial run feedback helped to further refine intervention content and delivery towards a pilot randomized clinical trial to assess the feasibility and potential efficacy of this intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landen, O. L.; Edwards, J.; Haan, S. W.

    2011-05-15

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  14. Factors promoting sustainability of education innovations: a comparison of faculty perceptions and existing frameworks.

    PubMed

    Loh, Lawrence C; Friedman, Stacey R; Burdick, William P

    2013-01-01

    Health professions education uses innovative projects to promote faculty development and institution change. Faculty perceptions of the factors that promote project sustainability affect how faculty conceptualize and implement their innovations, which influences whether and how they plan for sustainability. This paper compares educators' perceptions of factors that influence sustainability in innovative projects with factors identified in project sustainability literature, to identify areas of convergence and divergence. Using questionnaires, faculty development fellowship participants from Brazil and India shared their perceptions on factors influencing their project's sustainability. An analysis framework was developed from existing project sustainability literature; faculty responses were then coded through an iterative process. Key sustainability themes identified by faculty included project-level factors related to project design, stakeholder support, monitoring and evaluation, and project outcomes. Identified context level factors were related to institutional and governmental support as well as self-motivation and peer support. Availability of resources and funding were identified as relevant at both the project and context levels. Project-level factors were more often cited than context-level factors as key to ensuring sustainability. Faculty development efforts in health professions education should employ strategies to target these themes in promoting innovation sustainability. These include preengagement with institutional leaders, alignment with public sector goals, strategic diffusion of information, project expansion and transferability, capacity building in monitoring and evaluation, and creation of a community of educators for information exchange and support.

  15. Defining competency-based evaluation objectives in family medicine

    PubMed Central

    Lawrence, Kathrine; Allen, Tim; Brailovsky, Carlos; Crichton, Tom; Bethune, Cheri; Donoff, Michel; Laughlin, Tom; Wetmore, Stephen; Carpentier, Marie-Pierre; Visser, Shaun

    2011-01-01

    Abstract Objective To develop key features for priority topics previously identified by the College of Family Physicians of Canada that, together with skill dimensions and phases of the clinical encounter, broadly describe competence in family medicine. Design Modified nominal group methodology, which was used to develop key features for each priority topic through an iterative process. Setting The College of Family Physicians of Canada. Participants An expert group of 7 family physicians and 1 educational consultant, all of whom had experience in assessing competence in family medicine. Group members represented the Canadian family medicine context with respect to region, sex, language, community type, and experience. Methods The group used a modified Delphi process to derive a detailed operational definition of competence, using multiple iterations until consensus was achieved for the items under discussion. The group met 3 to 4 times a year from 2000 to 2007. Main findings The group analyzed 99 topics and generated 773 key features. There were 2 to 20 (average 7.8) key features per topic; 63% of the key features focused on the diagnostic phase of the clinical encounter. Conclusion This project expands previous descriptions of the process of generating key features for assessment, and removes this process from the context of written examinations. A key-features analysis of topics focuses on higher-order cognitive processes of clinical competence. The project did not define all the skill dimensions of competence to the same degree, but it clearly identified those requiring further definition. This work generates part of a discipline-specific, competency-based definition of family medicine for assessment purposes. It limits the domain for assessment purposes, which is an advantage for the teaching and assessment of learners. A validation study on the content of this work would ensure that it truly reflects competence in family medicine. PMID:21998245

  16. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-03-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions, the code has been improved. It now includes the bias potential applied to the first grid (plasma grid) of the extraction system and the presence of Cs+ ions in the plasma. The simulation results show that these aspects play an important role in the formation of an ion-ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid, which limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NIs and their emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but, as in the 10 kV case, the deepest point of the meniscus bend is still outside the aperture. Finally, a plasma containing the same amount of NIs and electrons (nH- = ne = 10^17 m^-3), representing good source conditioning, was simulated. It is shown that under such conditions the extracted NI current can reach values of ~32 mA cm^-2 using the ITER-relevant extraction potential of 10 kV and ~19 mA cm^-2 at 5 kV. These results are in good agreement with experimental measurements performed at the small-scale ITER prototype source at the test facility BATMAN.

  17. Size scaling of negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛-scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½-scale ITER source went into operation at the IPP test facility ELISE, with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects and the source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low-pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a distinct plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that in the prototype source despite the larger size.

  18. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    PubMed

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.

  19. Development of laser-based techniques for in situ characterization of the first wall in ITER and future fusion devices

    NASA Astrophysics Data System (ADS)

    Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.

    2013-09-01

    Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.

  20. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  1. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  2. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression for the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and statistical measurement accuracies are hence provided. The following sources of error are considered: effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance for the operation of the ITER machine.
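    For reference, the "classical expression" mentioned above can be written in the standard textbook form (supplied here as background, not quoted from the paper): for an ideal, uniformly wound Rogowski coil with turn density n and turn cross-section A, the flux linkage and output voltage are

        \Lambda = \mu_0\, n\, A\, I_{\mathrm{enc}}, \qquad
        V(t) = -\frac{d\Lambda}{dt} = -\mu_0\, n\, A\, \frac{dI_{\mathrm{enc}}(t)}{dt},

    where I_enc is the current enclosed by the coil path; integrating V(t) recovers the enclosed (here, total toroidal) current.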

  3. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
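    The "globally best merges first" ordering can be sketched in a few lines (a one-dimensional toy with made-up data, not the MPP implementation described above): at every step the cheapest merge over all adjacent region pairs is performed before any other, as shown below.

        # Toy "globally best merge first" region growing on a 1-D signal.
        import numpy as np

        signal = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 9.0, 9.1])
        regions = [[i] for i in range(len(signal))]       # start: one region per sample

        def merge_cost(r1, r2):
            return abs(signal[r1].mean() - signal[r2].mean())

        while len(regions) > 3:                           # illustrative stopping rule
            costs = [(merge_cost(regions[k], regions[k + 1]), k)
                     for k in range(len(regions) - 1)]
            _, k = min(costs)                             # globally cheapest adjacent merge
            regions[k:k + 2] = [regions[k] + regions[k + 1]]
        print(regions)                                    # one region per plateau of the signal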

  4. Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components

    NASA Astrophysics Data System (ADS)

    Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.

    2017-12-01

    In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended for withstanding the heat flux from the plasma up to 4.7 MW/m2. Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.

  5. Increasing High School Student Interest in Science: An Action Research Study

    NASA Astrophysics Data System (ADS)

    Vartuli, Cindy A.

    An action research study was conducted to determine how to increase student interest in learning science and pursuing a STEM career. The study began by exploring 10th-grade student and teacher perceptions of student interest in science in order to design an instructional strategy for stimulating student interest in learning and pursuing science. Data for this study included responses from 270 students to an on-line science survey and interviews with 11 students and eight science teachers. The action research intervention included two iterations of the STEM Career Project. The first iteration introduced four chemistry classes to the intervention. The researcher used student reflections and a post-project survey to determine if the intervention had influence on the students' interest in pursuing science. The second iteration was completed by three science teachers who had implemented the intervention with their chemistry classes, using student reflections and post-project surveys, as a way to make further procedural refinements and improvements to the intervention and measures. Findings from the exploratory phase of the study suggested students generally had interest in learning science but increasing that interest required including personally relevant applications and laboratory experiences. The intervention included a student-directed learning module in which students investigated three STEM careers and presented information on one of their chosen careers. The STEM Career Project enabled students to explore career possibilities in order to increase their awareness of STEM careers. Findings from the first iteration of the intervention suggested a positive influence on student interest in learning and pursuing science. The second iteration included modifications to the intervention resulting in support for the findings of the first iteration. Results of the second iteration provided modifications that would allow the project to be used for different academic levels. Insights from conducting the action research study provided the researcher with effective ways to make positive changes in her own teaching praxis and the tools used to improve student awareness of STEM career options.

  6. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
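    The accelerated first-order scheme described above (the search direction combines the current gradient with historical gradients, and the Lipschitz constant sets the step size) can be sketched on a toy smooth objective as follows. This is an illustration only: the data are random, and the actual RCRF objective with its l1 term would require a proximal variant of the same acceleration.

        # Nesterov-style accelerated gradient descent on a toy smooth convex objective.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 20))
        b = rng.standard_normal(100)
        lam = 0.1

        def grad(w):                              # gradient of 0.5*||Aw - b||^2 + 0.5*lam*||w||^2
            return A.T @ (A @ w - b) + lam * w

        Lip = np.linalg.norm(A, 2) ** 2 + lam     # Lipschitz constant of the gradient
        w = np.zeros(20)
        y = w.copy()
        t = 1.0
        for k in range(200):
            w_next = y - grad(y) / Lip            # gradient step with step size 1/Lip
            t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = w_next + (t - 1) / t_next * (w_next - w)   # momentum from historical iterates
            w, t = w_next, t_next
        print("objective:", 0.5 * np.linalg.norm(A @ w - b) ** 2 + 0.5 * lam * w @ w)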

  7. Developing a medication communication framework across continuums of care using the Circle of Care Modeling approach.

    PubMed

    Kitson, Nicole A; Price, Morgan; Lau, Francis Y; Showler, Grey

    2013-10-17

    Medication errors are a common type of preventable errors in health care causing unnecessary patient harm, hospitalization, and even fatality. Improving communication between providers and between providers and patients is a key aspect of decreasing medication errors and improving patient safety. Medication management requires extensive collaboration and communication across roles and care settings, which can reduce (or contribute to) medication-related errors. Medication management involves key recurrent activities (determine need, prescribe, dispense, administer, and monitor/evaluate) with information communicated within and between each. Despite its importance, there is a lack of conceptual models that explore medication communication specifically across roles and settings. This research seeks to address that gap. The Circle of Care Modeling (CCM) approach was used to build a model of medication communication activities across the circle of care. CCM positions the patient in the centre of his or her own healthcare system; providers and other roles are then modeled around the patient as a web of relationships. Recurrent medication communication activities were mapped to the medication management framework. The research occurred in three iterations, to test and revise the model: Iteration 1 consisted of a literature review and internal team discussion, Iteration 2 consisted of interviews, observation, and a discussion group at a Community Health Centre, and Iteration 3 consisted of interviews and a discussion group in the larger community. Each iteration provided further detail to the Circle of Care medication communication model. Specific medication communication activities were mapped along each communication pathway between roles and to the medication management framework. We could not map all medication communication activities to the medication management framework; we added Coordinate as a separate and distinct recurrent activity. We saw many examples of coordination activities, for instance, Medical Office Assistants acting as a liaison between pharmacists and family physicians to clarify prescription details. Through the use of CCM we were able to unearth tacitly held knowledge to expand our understanding of medication communication. Drawing out the coordination activities could be a missing piece for us to better understand how to streamline and improve multi-step communication processes with a goal of improving patient safety.

  8. Developing a medication communication framework across continuums of care using the Circle of Care Modeling approach

    PubMed Central

    2013-01-01

    Background Medication errors are a common type of preventable errors in health care causing unnecessary patient harm, hospitalization, and even fatality. Improving communication between providers and between providers and patients is a key aspect of decreasing medication errors and improving patient safety. Medication management requires extensive collaboration and communication across roles and care settings, which can reduce (or contribute to) medication-related errors. Medication management involves key recurrent activities (determine need, prescribe, dispense, administer, and monitor/evaluate) with information communicated within and between each. Despite its importance, there is a lack of conceptual models that explore medication communication specifically across roles and settings. This research seeks to address that gap. Methods The Circle of Care Modeling (CCM) approach was used to build a model of medication communication activities across the circle of care. CCM positions the patient in the centre of his or her own healthcare system; providers and other roles are then modeled around the patient as a web of relationships. Recurrent medication communication activities were mapped to the medication management framework. The research occurred in three iterations, to test and revise the model: Iteration 1 consisted of a literature review and internal team discussion, Iteration 2 consisted of interviews, observation, and a discussion group at a Community Health Centre, and Iteration 3 consisted of interviews and a discussion group in the larger community. Results Each iteration provided further detail to the Circle of Care medication communication model. Specific medication communication activities were mapped along each communication pathway between roles and to the medication management framework. We could not map all medication communication activities to the medication management framework; we added Coordinate as a separate and distinct recurrent activity. We saw many examples of coordination activities, for instance, Medical Office Assistants acting as a liaison between pharmacists and family physicians to clarify prescription details. Conclusions Through the use of CCM we were able to unearth tacitly held knowledge to expand our understanding of medication communication. Drawing out the coordination activities could be a missing piece for us to better understand how to streamline and improve multi-step communication processes with a goal of improving patient safety. PMID:24134454

  9. Parallel programming of gradient-based iterative image reconstruction schemes for optical tomography.

    PubMed

    Hielscher, Andreas H; Bartel, Sebastian

    2004-02-01

    Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.

  10. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.
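    The Richardson extrapolation step mentioned above can be shown in isolation with a generic textbook example (not the FDMM code itself): two second-order approximations at step sizes h and h/2 are combined so that the leading error term cancels.

        # Richardson extrapolation of a second-order central-difference derivative.
        import numpy as np

        f, x0 = np.sin, 1.0

        def d_central(h):
            return (f(x0 + h) - f(x0 - h)) / (2 * h)   # error ~ O(h^2)

        h = 0.1
        D_h, D_h2 = d_central(h), d_central(h / 2)
        D_rich = (4 * D_h2 - D_h) / 3                  # cancels the O(h^2) term -> O(h^4)
        exact = np.cos(x0)
        print(abs(D_h - exact), abs(D_h2 - exact), abs(D_rich - exact))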

  11. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.

  12. Framework and Implementation for Improving Physics Essential Skills via Computer-Based Practice: Vector Math

    ERIC Educational Resources Information Center

    Mikula, Brendon D.; Heckler, Andrew F.

    2017-01-01

    We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with…

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moyer, Richard A.; Paz-Soldan, Carlos; Nazikian, Raffi

    Here, experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width.

  14. An integrated operational definition and conceptual model of asthma self-management in teens.

    PubMed

    Mammen, Jennifer; Rhee, Hyekyun; Norton, Sally A; Butz, Arlene M; Halterman, Jill S; Arcoleo, Kimberly

    2018-01-19

    A previous definition of adolescent asthma self-management was derived from interviews with clinicians/researchers and published literature; however, it did not incorporate perspectives of teens or parents. Therefore, we conducted in-depth interviews with teens and parents and synthesized present findings with the prior analysis to develop a more encompassing definition and model. Focal concepts were qualitatively extracted from 14-day self-management voice-diaries (n = 14) and 1-hour interviews (n = 42) with teens and parents (28 individuals) along with concepts found in the previous clinical/research oriented analysis. Conceptual structure and relationships were identified and key findings synthesized to develop a revised definition and model of adolescent asthma self-management. There were two primary self-management constructs: processes of self-management and tasks of self-management. Self-management was defined as the iterative process of assessing, deciding, and responding to specific situations in order to achieve personally important outcomes. Clinically relevant asthma self-management tasks included monitoring asthma, managing active issues through pharmacologic and non-pharmacologic strategies, preventing future issues, and communicating with others as needed. Self-management processes were reciprocally influenced by intrapersonal factors (both cognitive and physical), interpersonal factors (family, social and physical environments), and personally relevant asthma and non-asthma outcomes. This is the first definition of asthma self-management incorporating teen, parent, clinician, and researcher perspectives, which suggests that self-management processes and behaviors are influenced by individually variable personal and interpersonal factors, and are driven by personally important outcomes. Clinicians and researchers should investigate teens' symptom perceptions, medication beliefs, current approaches to symptom management, relevant outcomes, and personal priorities.

  15. Prospects for Advanced Tokamak Operation of ITER

    NASA Astrophysics Data System (ADS)

    Neilson, George H.

    1996-11-01

    Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design), or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allows operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure. This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles--a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. The inclusion of modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these, and other, operating regimes.

  16. Feasibility of a low-dose orbital CT protocol with a knowledge-based iterative model reconstruction algorithm for evaluating Graves' orbitopathy.

    PubMed

    Lee, Ho-Joon; Kim, Jinna; Kim, Ki Wook; Lee, Seung-Koo; Yoon, Jin Sook

    2018-06-23

    To evaluate the clinical feasibility of low-dose orbital CT with a knowledge-based iterative model reconstruction (IMR) algorithm for evaluating Graves' orbitopathy. Low-dose orbital CT was performed with a CTDIvol of 4.4 mGy. In 12 patients for whom prior or subsequent non-low-dose orbital CT data obtained within 12 months were available, background noise, SNR, and CNR were compared for images generated using filtered back projection (FBP), hybrid iterative reconstruction (iDose4), and IMR and non-low-dose CT images. Comparison of clinically relevant measurements for Graves' orbitopathy, such as rectus muscle thickness and retrobulbar fat area, was performed in a subset of 6 patients who underwent CT for causes other than Graves' orbitopathy, by using the Wilcoxon signed-rank test. The lens dose estimated from skin dosimetry on a phantom was 4.13 mGy, which was on average 59.34% lower than that of the non-low-dose protocols. Image quality in terms of background noise, SNR, and CNR was the best for IMR, followed by non-low-dose CT, iDose4, and FBP, in descending order. A comparison of clinically relevant measurements revealed no significant difference in the retrobulbar fat area and the inferior and medial rectus muscle thicknesses between the low-dose and non-low-dose CT images. Low-dose CT with IMR may be performed without significantly affecting the measurement of prognostic parameters for Graves' orbitopathy while lowering the lens dose and image noise. Copyright © 2018 Elsevier Inc. All rights reserved.
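    For readers unfamiliar with the image-quality metrics compared above, commonly used definitions are (the paper's exact region-of-interest conventions are not reproduced here):

        \mathrm{SNR} = \frac{\overline{HU}_{\mathrm{ROI}}}{\sigma_{\mathrm{background}}}, \qquad
        \mathrm{CNR} = \frac{\left| \overline{HU}_{\mathrm{ROI}} - \overline{HU}_{\mathrm{background}} \right|}{\sigma_{\mathrm{background}}},

    where the overbars denote mean CT numbers in the regions of interest and σ_background is the standard deviation of the background noise.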

  17. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    PubMed

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were > or = 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
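    For reference, the OSEM update evaluated above has the standard form (standard notation, not reproduced from the paper): with subsets S_b of projections, system matrix elements a_ij, measured counts y_i, and current estimate x^(k,b),

        x_j^{(k,\,b+1)} = \frac{x_j^{(k,\,b)}}{\sum_{i \in S_b} a_{ij}}
        \sum_{i \in S_b} a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k,\,b)}} ,

    so that one pass over all subsets corresponds to one OSEM iteration; the "equivalent MLEM iterations" quoted in the abstract are commonly taken as the number of OSEM iterations multiplied by the number of subsets.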

  18. Optimizing the atom types of proteins through iterative knowledge-based potentials

    NASA Astrophysics Data System (ADS)

    Wang, Xin-Xiang; Huang, Sheng-You

    2018-02-01

    Not Available Project supported by the National Natural Science Foundation of China (Grant No. 31670724), the National Key Research and Development Program of China (Grant Nos. 2016YFC1305800 and 2016YFC1305805), and the Startup Grant of Huazhong University of Science and Technology, China.

  19. Developing an Action Concept Inventory

    ERIC Educational Resources Information Center

    McGinness, Lachlan P.; Savage, C. M.

    2016-01-01

    We report on progress towards the development of an Action Concept Inventory (ACI), a test that measures student understanding of action principles in introductory mechanics and optics. The ACI also covers key concepts of many-paths quantum mechanics, from which classical action physics arises. We used a multistage iterative development cycle for…

  20. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use.
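    A simplified, single-pass version of the projection-histogram idea is sketched below in Python (the published algorithm is iterative and pivoting, and its reference implementation is in C++; the threshold and test image here are illustrative assumptions).

        # Single-pass projection-histogram text-band detection on a binarized image.
        import numpy as np

        def text_bands(binary_img, axis=1, min_frac=0.02):
            """Return (start, end) pairs of rows (axis=1) or columns (axis=0) whose
            projection histogram exceeds a fraction of the image extent."""
            hist = binary_img.sum(axis=axis)               # projection histogram
            thresh = min_frac * binary_img.shape[axis]
            mask = hist > thresh
            bands, start = [], None
            for i, on in enumerate(mask):
                if on and start is None:
                    start = i
                elif not on and start is not None:
                    bands.append((start, i))
                    start = None
            if start is not None:
                bands.append((start, len(mask)))
            return bands

        img = np.zeros((60, 120), dtype=int)
        img[10:18, 5:100] = 1                              # synthetic "text line"
        img[30:38, 5:80] = 1
        print(text_bands(img, axis=1))                     # row bands with candidate text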

  1. Investigation of iterative image reconstruction in low-dose breast CT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan

    2014-06-01

    There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found that suggests that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto the convex set (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
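    The constrained program referred to above is commonly written as follows (a standard form from the ASD-POCS literature; the precise constraints used in this particular study may differ):

        \min_{\mathbf{x}}\ \|\mathbf{x}\|_{\mathrm{TV}}
        \quad \text{subject to} \quad
        \|\mathcal{A}\mathbf{x} - \mathbf{b}\|_2 \le \varepsilon,\quad \mathbf{x} \ge 0,
        \qquad \text{with} \quad
        \|\mathbf{x}\|_{\mathrm{TV}} = \sum_{k} \big| (\nabla \mathbf{x})_k \big| ,

    where A is the cone-beam projection operator, b the measured data, and ε the data-fidelity tolerance; ASD-POCS alternates projection-onto-convex-sets steps that enforce the constraints with adaptive steepest-descent steps that reduce the TV term.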

  2. Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration

    NASA Astrophysics Data System (ADS)

    Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.

    2017-09-01

    One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper, we examine the performance of a robust searching algorithm that relies on the use of harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized using the Laplacian operator to form a system of algebraic approximation equations. This algebraic linear system is then computed via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is compared and analyzed against existing algorithms in terms of number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
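
    A hedged sketch of the underlying harmonic-potential idea is given below: Laplace's equation is relaxed on a grid with plain point SOR (the paper's 4-EGAOR is a 4-point block accelerated over-relaxation variant of this kind of scheme), and the robot path is then obtained by steepest descent on the converged potential. Grid size, obstacle layout and the relaxation factor are illustrative assumptions.

```python
# Hedged sketch: harmonic potential by point SOR, then steepest-descent path following.
import numpy as np

n = 30
phi = np.ones((n, n))                  # walls/obstacles held at the high potential 1.0
obstacle = np.zeros((n, n), dtype=bool)
obstacle[10:20, 15] = True             # a short wall as the obstacle
goal = (25, 25)
phi[goal] = 0.0                        # goal fixed at the potential minimum

omega = 1.8                            # over-relaxation factor
for sweep in range(300):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            if obstacle[i, j] or (i, j) == goal:
                continue
            gs = 0.25 * (phi[i - 1, j] + phi[i + 1, j] + phi[i, j - 1] + phi[i, j + 1])
            phi[i, j] += omega * (gs - phi[i, j])

pos, path = (2, 2), [(2, 2)]           # follow the discrete steepest descent to the goal
for _ in range(900):
    if pos == goal:
        break
    i, j = pos
    pos = min([(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)], key=lambda p: phi[p])
    path.append(pos)
print(len(path), "steps; reached goal:", path[-1] == goal)
```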

  3. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
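
    For reference, a compact implementation of plain 2D fast sweeping (four Gauss-Seidel sweep orderings with the standard two-point local solver) is sketched below; the factored variant of the paper applies the same sweeping to the correction factor after dividing out the known distance-like factor. The large error near the point source produced by this unfactored version is precisely what motivates the factorization. Grid size and slowness are arbitrary assumptions.

```python
# Hedged sketch of plain 2D fast sweeping for |grad T| = s with a point source.
import numpy as np

n, h = 101, 1.0 / 100
BIG = 1e10
s = np.ones((n, n))                       # slowness field (unit speed)
T = np.full((n, n), BIG)
src = (n // 2, n // 2)
T[src] = 0.0

def local_update(i, j):
    """Standard two-point upwind solver for the eikonal update at cell (i, j)."""
    a = min(T[i - 1, j] if i > 0 else BIG, T[i + 1, j] if i < n - 1 else BIG)
    b = min(T[i, j - 1] if j > 0 else BIG, T[i, j + 1] if j < n - 1 else BIG)
    f = s[i, j] * h
    if abs(a - b) >= f:
        t_new = min(a, b) + f
    else:
        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
    return min(T[i, j], t_new)

orderings = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
for rounds in range(2):                   # iteration count independent of mesh size
    for di, dj in orderings:              # the four Gauss-Seidel sweep directions
        for i in range(n)[::di]:
            for j in range(n)[::dj]:
                if (i, j) != src:
                    T[i, j] = local_update(i, j)

ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
exact = h * np.hypot(ii - src[0], jj - src[1])
# the error concentrates near the source, which is what the factored form removes
print("max error vs. exact distance:", np.abs(T - exact).max())
```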

  4. Plasma-surface interaction in the context of ITER.

    PubMed

    Kleyn, A W; Lopes Cardozo, N J; Samm, U

    2006-04-21

    The decreasing availability of energy and concern about climate change necessitate the development of novel sustainable energy sources. Fusion energy is such a source. Although it will take several decades to develop it into routinely operated power sources, the ultimate potential of fusion energy is very high and badly needed. A major step forward in the development of fusion energy is the decision to construct the experimental test reactor ITER. ITER will stimulate research in many areas of science. This article serves as an introduction to some of those areas. In particular, we discuss research opportunities in the context of plasma-surface interactions. The fusion plasma, with a typical temperature of 10 keV, has to be brought into contact with a physical wall in order to remove the helium produced and drain the excess energy in the fusion plasma. The fusion plasma is far too hot to be brought into direct contact with a physical wall. It would degrade the wall and the debris from the wall would extinguish the plasma. Therefore, schemes are developed to cool down the plasma locally before it impacts on a physical surface. The resulting plasma-surface interaction in ITER is facing several challenges including surface erosion, material redeposition and tritium retention. In this article we introduce how the plasma-surface interaction relevant for ITER can be studied in small scale experiments. The various requirements for such experiments are introduced and examples of present and future experiments will be given. The emphasis in this article will be on the experimental studies of plasma-surface interactions.

  5. A parametric study of helium retention in beryllium and its effect on deuterium retention

    NASA Astrophysics Data System (ADS)

    Alegre, D.; Baldwin, M. J.; Simmonds, M.; Nishijima, D.; Hollmann, E. M.; Brezinsek, S.; Doerner, R. P.

    2017-12-01

    Beryllium samples have been exposed in the PISCES-B linear plasma device to conditions relevant to the International Thermonuclear Experimental Reactor (ITER) in pure He, D, and D/He mixed plasmas. Except at intermediate sample exposure temperatures (573-673 K), He addition to a D plasma is found to have a beneficial effect as it reduces the D retention in Be (by up to ˜55%), although the mechanism is unclear. Retention of He is typically around 10²⁰-10²¹ He m⁻², and is affected primarily by the Be surface temperature during exposure and by the ion fluence at <500 K exposure, but not by the ion impact energy at 573 K. Contamination of the Be surface with high-Z elements from the mask of the sample holder in pure He plasmas is also observed under certain conditions, and leads to unexpectedly large He retention values, as well as changes in the surface morphology. An estimate of the tritium retention in the Be first wall of ITER is provided; it is sufficiently low to allow safe operation of ITER.

  6. Finite element analysis of heat load of tungsten relevant to ITER conditions

    NASA Astrophysics Data System (ADS)

    Zinovev, A.; Terentyev, D.; Delannay, L.

    2017-12-01

    A computational procedure is proposed to predict the initiation of intergranular cracks in tungsten with the ITER specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of crack initiation. The simulated heat load cycle is representative of the edge-localized modes anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the orientation of the grains with respect to the heat flux direction and to attain higher values when the flux is perpendicular to the elongated grains, which apparently promotes crack initiation.

  7. Static and Dynamic Performance of Newly Developed ITER Relevant Insulation Systems after Neutron Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2006-03-01

    Fiber reinforced plastics will be used as insulation systems for the superconducting magnet coils of ITER. The fast neutron and gamma radiation environment present at the magnet location will lead to serious material degradation, particularly of the insulation. For this reason, advanced radiation-hard resin systems are of special interest. In this study various R-glass fiber / Kapton reinforced DGEBA epoxy and cyanate ester composites fabricated by the vacuum pressure impregnation method were investigated. All systems were irradiated at ambient temperature (340 K) in the TRIGA reactor (Vienna) to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV). Short-beam shear and static tensile tests were carried out at 77 K prior to and after irradiation. In addition, tension-tension fatigue measurements were used in order to assess the mechanical performance of the insulation systems under the pulsed operation conditions of ITER. For the cyanate ester based system the influence of interleaving Kapton layers on the static and dynamic material behavior was investigated as well.

  8. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited).

    PubMed

    Mirnov, V V; Brower, D L; Den Hartog, D J; Ding, W X; Duff, J; Parke, E

    2014-11-01

    At anticipated high electron temperatures in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = Te/(me c²), may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically exactly without any approximations for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.

  9. A Novel Real-Time Reference Key Frame Scan Matching Method.

    PubMed

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-05-07

    Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping (SLAM), using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a novel low-cost method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm depends on the iterative closest point algorithm during the lack of linear features, which is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, which indicates its potential for use in real-time systems.
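
    The point-to-point fallback mentioned in the abstract is essentially 2D ICP; a minimal, hedged sketch is given below (nearest-neighbour matching plus a closed-form SVD rigid-transform estimate, iterated). The feature-to-feature branch and the reference-key-frame bookkeeping of the proposed method are not reproduced, and the toy scans and iteration count are assumptions.

```python
# Hedged sketch of the point-to-point 2D ICP building block.
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (2xN arrays)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    R = U @ np.diag([1.0, np.linalg.det(U @ Vt)]) @ Vt   # guard against reflections
    return R, cq - R @ cp

def icp(P, Q, n_iter=20):
    """Iteratively align scan P to reference scan Q with nearest-neighbour matching."""
    R_tot, t_tot = np.eye(2), np.zeros((2, 1))
    for _ in range(n_iter):
        d2 = ((P[:, :, None] - Q[:, None, :]) ** 2).sum(axis=0)
        matches = Q[:, d2.argmin(axis=1)]                # nearest neighbour for each point
        R, t = best_rigid_transform(P, matches)
        P = R @ P + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# toy test: a reference scan and a rotated/translated copy of it
rng = np.random.default_rng(1)
Q = rng.uniform(-1, 1, size=(2, 200))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = R_true.T @ (Q - np.array([[0.05], [0.02]]))          # the misaligned "new" scan
R_est, t_est = icp(P, Q)
print(np.round(R_est, 3), np.round(t_est.ravel(), 3))
```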

  10. Design concept of a cryogenic distillation column cascade for an ITER-scale fusion reactor

    NASA Astrophysics Data System (ADS)

    Yamanishi, Toshihiko; Enoeda, Mikio; Okuno, Kenji

    1994-07-01

    A column cascade has been proposed for the fuel cycle of an ITER-scale fusion reactor. The proposed cascade consists of three columns and has significant features: for each column, either the top or the bottom product takes priority over the other; withdrawal of side streams as products or feeds of downstream columns is avoided; and there is no recycle stream between the columns. In addition, the product purity of the cascade can be maintained against changes in the flow rates and compositions of the feed streams simply by adjusting the top and bottom flow rates. A control system has been designed for each column in the cascade. A key component in the priority product stream was selected, and an analysis method for this key component was proposed. The designed control system does not introduce instability as long as the concentration of the key component is measured with negligible time lag. The time lag of the measurement considerably affects the stability of the control system. A significant conclusion from the simulations in this work is that the permissible measurement time is about 0.5 hour for stable control. Hence, an analysis system using gas chromatography is adequate for control of the columns.

  11. The Genetic Drift Inventory: A Tool for Measuring What Advanced Undergraduates Have Mastered about Genetic Drift

    PubMed Central

    Price, Rebecca M.; Andrews, Tessa C.; McElhinny, Teresa L.; Mead, Louise S.; Abraham, Joel K.; Thanukos, Anna; Perez, Kathryn E.

    2014-01-01

    Understanding genetic drift is crucial for a comprehensive understanding of biology, yet it is difficult to learn because it combines the conceptual challenges of both evolution and randomness. To help assess strategies for teaching genetic drift, we have developed and evaluated the Genetic Drift Inventory (GeDI), a concept inventory that measures upper-division students’ understanding of this concept. We used an iterative approach that included extensive interviews and field tests involving 1723 students across five different undergraduate campuses. The GeDI consists of 22 agree–disagree statements that assess four key concepts and six misconceptions. Student scores ranged from 4/22 to 22/22. Statements ranged in mean difficulty from 0.29 to 0.80 and in discrimination from 0.09 to 0.46. The internal consistency, as measured with Cronbach's alpha, ranged from 0.58 to 0.88 across five iterations. Test–retest analysis resulted in a coefficient of stability of 0.82. The true–false format means that the GeDI can test how well students grasp key concepts central to understanding genetic drift, while simultaneously testing for the presence of misconceptions that indicate an incomplete understanding of genetic drift. The insights gained from this testing will, over time, allow us to improve instruction about this key component of evolution. PMID:24591505

  12. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile", we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving large full eigenproblems (N ≈ 10³), as well as large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by an iterative conjugate gradient method, which can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector", discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
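
    The Rayleigh quotient iteration that forms the second phase can be sketched in a few lines; the hedged version below uses a dense direct solver for the shifted system, whereas the paper's point is precisely to replace that solve with a (preconditioned) conjugate gradient iteration so that no factorization is needed. The random test matrix is only an illustration, not one of the paper's benchmark problems.

```python
# Hedged sketch of Rayleigh quotient iteration for the symmetric problem A x = lambda B x.
import numpy as np

def rayleigh_quotient_iteration(A, B, x0, n_iter=10):
    x = x0 / np.sqrt(x0 @ B @ x0)                 # B-normalize the start vector
    for _ in range(n_iter):
        sigma = x @ A @ x                         # Rayleigh quotient of the current iterate
        y = np.linalg.solve(A - sigma * B, B @ x) # shifted solve (CG in the paper)
        x = y / np.sqrt(y @ B @ y)
    return x @ A @ x, x

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M + M.T                                       # symmetric test matrix
B = np.eye(n)
lam, x = rayleigh_quotient_iteration(A, B, rng.standard_normal(n))
print("eigenvalue:", lam, "residual:", np.linalg.norm(A @ x - lam * (B @ x)))
```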

  13. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
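
    A hedged ISTA sketch on a generic compressed-sensing problem is shown below: a gradient step on the data term is followed by a thresholding step that promotes sparsity. Soft thresholding (the convex l1 case) is used here; the paper's contribution is to replace it with a generalized p-thresholding function for non-convex penalties and to work with an MRI sampling operator rather than the random matrix and synthetic sparse signal assumed in this toy example.

```python
# Hedged ISTA sketch on a toy compressed-sensing recovery problem.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)      # under-sampled measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                     # measured data

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
lam = 0.01                                         # regularization weight (assumed)
x = np.zeros(n)
for it in range(500):
    x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)

print("nonzeros:", int((np.abs(x) > 1e-3).sum()),
      "relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```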

  14. Plasma cleaning of ITER edge Thomson scattering mock-up mirror in the EAST tokamak

    NASA Astrophysics Data System (ADS)

    Yan, Rong; Moser, Lucas; Wang, Baoguo; Peng, Jiao; Vorpahl, Christian; Leipold, Frank; Reichle, Roger; Ding, Rui; Chen, Junling; Mu, Lei; Steiner, Roland; Meyer, Ernst; Zhao, Mingzhong; Wu, Jinhua; Marot, Laurent

    2018-02-01

    First mirrors are the key element of all optical and laser diagnostics in ITER. Facing the plasma directly, the surface of the first mirrors could be sputtered by energetic particles or deposited with contaminants eroded from the first wall (tungsten and beryllium), which would result in the degradation of the reflectivity. The impurity deposits emphasize the necessity of the first mirror in situ cleaning for ITER. The mock-up first mirror system for ITER edge Thomson scattering diagnostics has been cleaned in EAST for the first time in a tokamak using radio frequency capacitively coupled plasma. The cleaning properties, namely the removal of contaminants and homogeneity of cleaning were investigated with molybdenum mirror insets (25 mm diameter) located at five positions over the mock-up plate (center to edge) on which 10 nm of aluminum oxide, used as beryllium proxy, were deposited. The cleaning efficiency was evaluated using energy dispersive x-ray spectroscopy, reflectivity measurements and x-ray photoelectron spectroscopy. Using argon or neon plasma without magnetic field in the laboratory and with a 1.7 T magnetic field in the EAST tokamak, the aluminum oxide films were homogeneously removed. The full recovery of the mirrors’ reflectivity was attained after cleaning in EAST with the magnetic field, and the cleaning efficiency was about 40 times higher than that without the magnetic field. All these results are promising for the plasma cleaning baseline scenario of ITER.

  15. Iteration in Early-Elementary Engineering Design

    NASA Astrophysics Data System (ADS)

    McFarland Kendall, Amber Leigh

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, and because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third-grade as they engage in classroom engineering design challenges which integrate with traditional curricula standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.

  16. Advances in the high bootstrap fraction regime on DIII-D towards the Q  =  5 mission of ITER steady state

    NASA Astrophysics Data System (ADS)

    Qian, J. P.; Garofalo, A. M.; Gong, X. Z.; Ren, Q. L.; Ding, S. Y.; Solomon, W. M.; Xu, G. S.; Grierson, B. A.; Guo, W. F.; Holcomb, C. T.; McClenaghan, J.; McKee, G. R.; Pan, C. K.; Huang, J.; Staebler, G. M.; Wan, B. N.

    2017-05-01

    Recent EAST/DIII-D joint experiments on the high poloidal beta (βP) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ⩽ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results (Garofalo et al 2014 IAEA; Gong et al 2014 IAEA Int. Conf. on Fusion Energy). Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high-βP discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at βN ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Results reported in this paper suggest that the DIII-D high-βP scenario could be a candidate for ITER steady-state operation.

  17. Health literacy and the social determinants of health: a qualitative model from adult learners.

    PubMed

    Rowlands, Gillian; Shaw, Adrienne; Jaswal, Sabrena; Smith, Sian; Harpham, Trudy

    2017-02-01

    Health literacy, ‘the personal characteristics and social resources needed for individuals and communities to access, understand, appraise and use information and services to make decisions about health’, is key to improving peoples’ control over modifiable social determinants of health (SDH). This study listened to adult learners to understand their perspectives on gathering, understanding and using information for health. This qualitative project recruited participants from community skills courses to identify relevant ‘health information’ factors. Subsequently different learners put these together to develop a model of their ‘Journey to health’. Twenty-seven participants were recruited; twenty from community health literacy courses and seven from an adult basic literacy and numeracy course. Participants described health as a ‘journey’ starting from an individual's family, ethnicity and culture. Basic (functional) health literacy skills were needed to gather and understand information. More complex interactive health literacy skills were needed to evaluate the importance and relevance of information in context, and make health decisions. Critical health literacy skills could be used to adapt negative external factors that might inhibit health-promotion. Our model is an iterative linear one moving from ethnicity, community and culture, through lifestyle, to health, with learning revisited in the context of different sources of support. It builds on existing models by highlighting the importance of SDH in the translation of new health knowledge into healthy behaviours, and the importance of health literacy in enabling people to overcome barriers to health.

  18. Fostering Self-Regulated Learning in a Blended Environment Using Group Awareness and Peer Assistance as External Scaffolds

    ERIC Educational Resources Information Center

    Lin, J-W.; Lai, Y-C.; Lai, Y-C.; Chang, L-C.

    2016-01-01

    Most systems for training self-regulated learning (SRL) behaviour focus on the provision of a learner-centred environment. Such systems repeat the training process and place learners alone to experience that process iteratively. According to the relevant literature, external scaffolds are more promising for effective SRL training. In this work,…

  19. Iterated Hamiltonian type systems and applications

    NASA Astrophysics Data System (ADS)

    Tiba, Dan

    2018-04-01

    We discuss, in arbitrary dimension, certain Hamiltonian type systems and prove existence, uniqueness and regularity properties, under the independence condition. We also investigate the critical case, define a class of generalized solutions and prove existence and basic properties. Relevant examples and counterexamples are also indicated. The applications concern representations of implicitly defined manifolds and their perturbations, motivated by differential systems involving unknown geometries.

  20. The 113 GHz ECRH system for JET

    NASA Astrophysics Data System (ADS)

    Verhoeven, A. G. A.; Bongers, W. A.; Elzendoorn, B. S. Q.; Graswinckel, M.; Hellingman, P.; Kamp, J. J.; Kooijman, W.; Kruijt, O. G.; Maagdenberg, J.; Ronden, D.; Stakenborg, J.; Sterk, A. B.; Tichler, J.; Alberti, S.; Goodman, T.; Henderson, M.; Hoekzema, J. A.; Oosterbeek, J. W.; Fernandez, A.; Likin, K.; Bruschi, A.; Cirant, S.; Novak, S.; Piosczyk, B.; Thumm, M.; Bindslev, H.; Kaye, A.; Fleming, C.; Zohm, H.

    2003-02-01

    An ECRH (Electron Cyclotron Resonance Heating) system has been designed for JET in the framework of the JET Enhanced-Performance project (JET-EP) under the European Fusion Development Agreement (EFDA). Due to financial constraints it has recently been decided not to implement this project. Nevertheless, the design work conducted from April 2000 to January 2002 shows a number of features that can be relevant in preparation of future ECRH systems, e.g., for ITER. The ECRH system was foreseen to comprise 6 gyrotrons, 1 MW each, in order to deliver 5 MW into the plasma [1]. The main aim was to enable the control of neo-classical tearing modes (NTM). The paper will concentrate on: • The power-supply and modulation system, including series IGBT switches, to enable independent control of each gyrotron and an all-solid-state body power supply to stabilise the gyrotron output power and to enable fast modulations up to 10 kHz. • A plug-in launcher, that is steerable in both toroidal and poloidal angle, and able to handle 8 separate mm-wave beams. Four steerable launching mirrors were foreseen to handle two mm-wave beams each. Water cooling of all the mirrors was a particularly ITER relevant feature.

  1. Effective Social Media Practices for Communicating Climate Change Science to Community Leaders

    NASA Astrophysics Data System (ADS)

    Estrada, M.; DeBenedict, C.; Bruce, L.

    2016-12-01

    Climate Education Partners (CEP) uses an action research approach to increase climate knowledge and informed decision-making among key influential (KI) leaders in San Diego County. Social media has been one method for disseminating knowledge. During CEP's project years, social media use has proliferated. To capitalize on this trend, CEP iteratively developed a strategic method to engage KIs. First, as with all climate education, CEP identified the audience. The three primary Facebook and Twitter audiences were CEP's internal team, local KIs, and strategic partner organizations. Second, post contents were chosen based on interest to CEP's key audiences and followed CEP's communications message triangle, which incorporates the Tripartite Integration Model of Social Influence (TIMSI). This message triangle focuses on San Diegans' valued quality of life, future challenges we face due to the changing climate, and ways in which we are working together to protect our quality of life for future generations. Third, an editorial calendar was created to carefully time posts, to capitalize on when target audiences were using social media most and to maintain consistency. The results of these three actions were significant. Results were obtained using Facebook and Twitter analytics, which track post reach, total followers/likes, and engagement (likes, comments, mentions, shares). For example, we found that specifically mentioning KIs resulted in more re-tweets and a broader reach. Overall, the data show that CEP's reach to audiences of like-minded individuals and organizations now extends beyond CEP's original local network and reached more than 20,000 accounts on Twitter this year (compared with 460 on Twitter the year before). In summary, by posting and participating in the online conversation strategically, CEP disseminated key educational climate resources and relevant climate change news to educate and engage target audiences and amplify our work.

  2. Soil and land use research in Europe: Lessons learned from INSPIRATION bottom-up strategic research agenda setting.

    PubMed

    Bartke, Stephan; Boekhold, Alexandra E; Brils, Jos; Grimski, Detlef; Ferber, Uwe; Gorgon, Justyna; Guérin, Valérie; Makeschin, Franz; Maring, Linda; Nathanail, C Paul; Villeneuve, Jacques; Zeyer, Josef; Schröter-Schlaack, Christoph

    2018-05-01

    We introduce the INSPIRATION bottom-up approach for the development of a strategic research agenda for spatial planning, land use and soil-sediment-water-system management in Europe. Research and innovation needs were identified by more than 500 European funders, end-users, scientists, policy makers, public administrators and consultants. We report both on the concept and on the implementation of the bottom-up approach, provide a critique of the process and draw key lessons for the development of research agendas in the future. Based on the identified strengths and weaknesses, we identified the following key opportunities and threats: 1) a high ranking and attentiveness for the research topics on the political agenda, in press and media or in public awareness, 2) availability of funding for research, 3) the resources available for creating the agenda itself, 4) the role of the sponsor of the agenda development, and 5) the continuity of stakeholder engagement as a basis for identifying windows of opportunity, creating ownership of the agenda and facilitating its implementation. Our key recommendations are: 1) a clear definition of the area for which the agenda is to be developed and of the targeted user, 2) a conceptual model to structure the agenda, 3) making clear the expected roles, tasks and input formats for the involvement of and communication with stakeholders and project partners, 4) a sufficient number of iterations and checks of the agenda with stakeholders to ensure completeness, relevance and creation of co-ownership for the agenda, and 5) preparing, from the beginning, the infrastructure for the network that will implement the agenda. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Extending helium partial pressure measurement technology to JET DTE2 and ITER.

    PubMed

    Klepper, C C; Biewer, T M; Kruezi, U; Vartanian, S; Douai, D; Hillis, D L; Marcus, C

    2016-11-01

    Helium (He) partial pressure monitoring via the Penning discharge optical emission diagnostic, mainly used for tokamak divertor effluent gas analysis, is shown here to be possible for He concentrations down to 0.1% in predominantly deuterium effluents. This result from a dedicated laboratory study means that the technique can now be extended to intrinsic (non-injected) He produced as fusion reaction ash in deuterium-tritium experiments. The paper also examines threshold ionization mass spectroscopy as a potential backup to the optical technique, but finds that further development is needed to attain plasma pulse-relevant response times. Both these studies are presented in the context of continuing development of plasma pulse-resolving residual gas analysis for the upcoming JET deuterium-tritium campaign (DTE2) and for ITER.

  4. Nonlinear Fourier transform—towards the construction of nonlinear Fourier modes

    NASA Astrophysics Data System (ADS)

    Saksida, Pavle

    2018-01-01

    We study a version of the nonlinear Fourier transform associated with ZS-AKNS systems. This version is suitable for the construction of nonlinear analogues of Fourier modes, and for the perturbation-theoretic study of their superposition. We provide an iterative scheme for computing the inverse of our transform. The relevant formulae are expressed in terms of Bell polynomials and functions related to them. In order to prove the validity of our iterative scheme, we show that our transform has the necessary analytic properties. We show that up to order three of the perturbation parameter, the nonlinear Fourier mode is a complex sinusoid modulated by the second Bernoulli polynomial. We describe an application of the nonlinear superposition of two modes to a problem of transmission through a nonlinear medium.

  5. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm allows relevant sample features to be successfully revealed, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at considerably lower dose.
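
    For illustration, a standard explicit Perona-Malik diffusion step (exponential edge-stopping conductance, four-neighbour stencil) is sketched below on a noisy step edge; the parameters and the toy image are assumptions, and the regularized iterative integration step of the paper is not reproduced.

```python
# Hedged sketch of explicit Perona-Malik non-linear diffusion on a noisy step edge.
import numpy as np

def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion with exponential edge-stopping conductance."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # conductance: ~1 in flat areas, ~0 at edges
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")            # replicated borders give zero flux there
        dN = p[:-2, 1:-1] - u
        dS = p[2:, 1:-1] - u
        dW = p[1:-1, :-2] - u
        dE = p[1:-1, 2:] - u
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0              # a step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy)
print("std in the flat region before/after:",
      round(float(noisy[:, :30].std()), 3), round(float(smoothed[:, :30].std()), 3))
```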

  6. RMP ELM Suppression in DIII-D Plasmas with ITER Similar Shapes and Collisionalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T.E.; Fenstermacher, M. E.; Moyer, R.A.

    2008-01-01

    Large Type-I edge localized modes (ELMs) are completely eliminated with small n = 3 resonant magnetic perturbations (RMP) in low average triangularity plasmas (average triangularity = 0.26) and in ITER similar shaped (ISS) plasmas (0.53), with ITER-relevant collisionalities νe ≈ 0.2. Significant differences in the RMP requirements and in the properties of the ELM-suppressed plasmas are found when comparing the two triangularities. In ISS plasmas, the current required to suppress ELMs is approximately 25% higher than in low average triangularity plasmas. It is also found that the width of the resonant q95 window required for ELM suppression is smaller in ISS plasmas than in low average triangularity plasmas. An analysis of the positions and widths of resonant magnetic islands across the pedestal region, in the absence of resonant field screening or a self-consistent plasma response, indicates that differences in the shape of the q profile may explain the need for higher RMP coil currents during ELM suppression in ISS plasmas. Changes in the pedestal profiles are compared for each plasma shape as well as with changes in the injected neutral beam power and the RMP amplitude. Implications of these results are discussed in terms of requirements for optimal ELM control coil designs and for establishing the physics basis needed in order to scale this approach to future burning plasma devices such as ITER.

  7. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  8. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
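
    The structural idea of the TCG, stopping the conjugate gradient recurrence at a fixed, predetermined order rather than at a convergence tolerance, can be illustrated on a generic symmetric positive-definite system, as in the hedged sketch below; the stand-in matrix is not the actual dipole-dipole interaction tensor, and the analytical-force machinery of the paper is not shown.

```python
# Hedged sketch of a truncated conjugate gradient: fixed iteration count, no tolerance test.
import numpy as np

def truncated_cg(T, E, order=3):
    mu = np.zeros_like(E)
    r = E - T @ mu
    p = r.copy()
    for _ in range(order):                 # fixed truncation order, so fixed cost
        Tp = T @ p
        alpha = (r @ r) / (p @ Tp)
        mu = mu + alpha * p
        r_new = r - alpha * Tp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return mu

# toy SPD "polarization matrix" standing in for the real interaction tensor
rng = np.random.default_rng(0)
n = 300
M = rng.standard_normal((n, n)) * 0.01
T = np.eye(n) + M @ M.T                    # diagonally dominant, symmetric positive definite
E = rng.standard_normal(n)
for k in (1, 2, 3):
    mu_k = truncated_cg(T, E, order=k)
    print(k, np.linalg.norm(T @ mu_k - E) / np.linalg.norm(E))
```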

  9. Playing Modeling Games in the Science Classroom: The Case for Disciplinary Integration

    ERIC Educational Resources Information Center

    Sengupta, Pratim; Clark, Doug

    2016-01-01

    The authors extend the theory of "disciplinary integration" of games for science education beyond the virtual world of games, and identify two key themes of a practice-based theoretical commitment to science learning: (1) materiality in the classroom, and (2) iterative design of multiple, complementary, symbolic inscriptions (e.g.,…

  10. The Evolution of Frequency Distributions: Relating Regularization to Inductive Biases through Iterated Learning

    ERIC Educational Resources Information Center

    Reali, Florencia; Griffiths, Thomas L.

    2009-01-01

    The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this…

  11. To Rubric or Not to Rubric: That Is the Question

    ERIC Educational Resources Information Center

    Kenworthy, Amy L.; Hrivnak, George A.

    2014-01-01

    Amy Kenworthy and George A. Hrivnak share their thoughts in this commentary, which was both stimulated by and written in response to Riebe and Jackson's article "Assurance of Graduate Employability Skill Outcomes Through the Use of Rubrics." Having read two iterations of that article, they highlight three key messages…

  12. AFEII Analog Front End Board Design Specifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubinov, Paul; /Fermilab

    2005-04-01

    This document describes the design of the 2nd iteration of the Analog Front End Board (AFEII), which has the function of receiving charge signals from the Central Fiber Tracker (CFT) and providing digital hit pattern and charge amplitude information from those charge signals. This second iteration is intended to address limitations of the current AFE (referred to as AFEI in this document). These limitations become increasingly deleterious to the performance of the Central Fiber Tracker as instantaneous luminosity increases. The limitations are inherent in the design of the key front end chips on the AFEI board (the SVXIIe and the SIFT) and the architecture of the board itself. The key limitations of the AFEI are: (1) SVX saturation; (2) Discriminator to analog readout cross talk; (3) Tick to tick pedestal variation; and (4) Channel to channel pedestal variation. The new version of the AFE board, AFEII, addresses these limitations by use of a new chip, the TriP-t, and by architectural changes, while retaining the well understood and desirable features of the AFEI board.

  13. Designing and Undertaking a Health Economics Study of Digital Health Interventions.

    PubMed

    McNamee, Paul; Murray, Elizabeth; Kelly, Michael P; Bojke, Laura; Chilcott, Jim; Fischer, Alastair; West, Robert; Yardley, Lucy

    2016-11-01

    This paper introduces and discusses key issues in the economic evaluation of digital health interventions. The purpose is to stimulate debate so that existing economic techniques may be refined or new methods developed. The paper does not seek to provide definitive guidance on appropriate methods of economic analysis for digital health interventions. This paper describes existing guides and analytic frameworks that have been suggested for the economic evaluation of healthcare interventions. Using selected examples of digital health interventions, it assesses how well existing guides and frameworks align to digital health interventions. It shows that digital health interventions may be best characterized as complex interventions in complex systems. Key features of complexity relate to intervention complexity, outcome complexity, and causal pathway complexity, with much of this driven by iterative intervention development over time and uncertainty regarding likely reach of the interventions among the relevant population. These characteristics imply that more-complex methods of economic evaluation are likely to be better able to capture fully the impact of the intervention on costs and benefits over the appropriate time horizon. This complexity includes wider measurement of costs and benefits, and a modeling framework that is able to capture dynamic interactions among the intervention, the population of interest, and the environment. The authors recommend that future research should develop and apply more-flexible modeling techniques to allow better prediction of the interdependency between interventions and important environmental influences. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  14. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

    Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures, based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
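
    The quoted infinite-medium result can be checked with a few lines of arithmetic: for an infinite homogeneous medium with isotropic scattering (and no magnetic field), source iteration contracts the error by exactly c = σs/σt per iteration, so c is the spectral radius. The cross-section values below are arbitrary illustrative numbers, not data from the study.

```python
# Hedged numerical check: source iteration in an infinite homogeneous medium.
sigma_t, sigma_s, q = 1.0, 0.9, 1.0
phi_exact = q / (sigma_t - sigma_s)        # exact infinite-medium scalar flux

phi = 0.0
prev_err = abs(phi - phi_exact)
for k in range(1, 11):
    phi = (sigma_s * phi + q) / sigma_t    # one source iteration
    err = abs(phi - phi_exact)
    print(k, round(err / prev_err, 6))     # error ratio -> sigma_s / sigma_t = 0.9
    prev_err = err
```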

  15. INVESTIGATING ALTERNATIVES TO THE FISH EARLY-LIFE STAGE TEST: A STRATEGY FOR DISCOVERING AND ANNOTATING ADVERSE OUTCOME PATHWAYS FOR EARLY FISH DEVELOPMENT

    PubMed Central

    Villeneuve, Daniel; Volz, David C; Embry, Michelle R; Ankley, Gerald T; Belanger, Scott E; Léonard, Marc; Schirmer, Kristin; Tanguay, Robert; Truong, Lisa; Wehmas, Leah

    2014-01-01

    The fish early-life stage (FELS) test (Organisation for Economic Co-operation and Development [OECD] test guideline 210) is the primary test used internationally to estimate chronic fish toxicity in support of ecological risk assessments and chemical management programs. As part of an ongoing effort to develop efficient and cost-effective alternatives to the FELS test, there is a need to identify and describe potential adverse outcome pathways (AOPs) relevant to FELS toxicity. To support this endeavor, the authors outline and illustrate an overall strategy for the discovery and annotation of FELS AOPs. Key events represented by major developmental landmarks were organized into a preliminary conceptual model of fish development. Using swim bladder inflation as an example, a weight-of-evidence–based approach was used to support linkage of key molecular initiating events to adverse phenotypic outcomes and reduced young-of-year survival. Based on an iterative approach, the feasibility of using key events as the foundation for expanding a network of plausible linkages and AOP knowledge was explored and, in the process, important knowledge gaps were identified. Given the scope and scale of the task, prioritization of AOP development was recommended and key research objectives were defined relative to factors such as current animal-use restrictions in the European Union and increased demands for fish toxicity data in chemical management programs globally. The example and strategy described are intended to guide collective efforts to define FELS-related AOPs and develop resource-efficient predictive assays that address the toxicological domain of the OECD 210 test. Environ Toxicol Chem 2014;33:158–169. © 2013 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited. PMID:24115264

  16. astroABC: An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
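
    The core likelihood-free idea can be illustrated with a plain ABC rejection sampler, as sketched below; this is not the astroABC API or its SMC machinery (adaptive tolerances, perturbation kernels, MPI parallelism), just the basic accept/reject loop on a toy Gaussian-mean problem with an assumed prior, tolerance and summary statistic.

```python
# Hedged sketch of ABC rejection sampling on a toy Gaussian-mean problem.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=200)      # "observed" data
obs_stat = data.mean()                               # summary statistic

def simulate(mu):
    """Forward model: simulate a dataset for parameter mu and summarize it."""
    return rng.normal(loc=mu, scale=1.0, size=200).mean()

accepted, tol = [], 0.05
for _ in range(200_000):                             # cap on the number of prior draws
    if len(accepted) >= 500:
        break
    mu = rng.uniform(-5.0, 5.0)                      # draw from a flat prior
    if abs(simulate(mu) - obs_stat) < tol:           # accept if the summaries agree
        accepted.append(mu)

post = np.array(accepted)
print("approximate posterior mean/std:", post.mean().round(3), post.std().round(3))
```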

  17. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

    Some model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The condition for an optimum relationship between the quality of the reconstructed images, the coefficient of hologram data reduction, and the number of iterations in the hologram reconstruction process has been estimated through computer modelling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128), in the form of pseudorandom amplitude and phase encoding matrices, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.

  18. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
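
    As a representative of the iterative family reviewed here, a minimal ART (Kaczmarz) sketch is shown below: the estimate is corrected row by row so that each projection equation is satisfied in turn, and the sweep over rows is repeated. A random matrix stands in for the real projection geometry, so this is only a structural illustration of the iteration, not an EM reconstruction.

```python
# Hedged sketch of ART (Kaczmarz) row-action iteration on a toy linear system.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                    # toy 16 x 16 "volume", flattened to a vector
x_true = np.zeros((n, n)); x_true[4:12, 6:10] = 1.0
x_true = x_true.ravel()
A = rng.standard_normal((600, n * n))     # stand-in for the projection geometry
b = A @ x_true                            # simulated projection data

x = np.zeros(n * n)
for sweep in range(50):                   # repeated sweeps over all projection rows
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # enforce one equation at a time

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```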

  19. Preliminary Neutronics Analysis of the ITER Toroidal Interferometer and Polarimeter Diagnostic Corner Cube Retroreflectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tresemer, K. R.

    2015-07-01

    ITER is an international project under construction in France that will demonstrate nuclear fusion at a power plant-relevant scale. The Toroidal Interferometer and Polarimeter (TIP) Diagnostic will be used to measure the plasma electron line density along 5 laser-beam chords. This line-averaged density measurement will be input to the ITER feedback-control system. The TIP is considered the primary diagnostic for these measurements, which are needed for basic ITER machine control. Therefore, system reliability & accuracy is a critical element in TIP’s design. There are two major challenges to the reliability of the TIP system. First is the survivability and performance of in-vessel optics and second is maintaining optical alignment over long optical paths and large vessel movements. Both of these issues greatly depend on minimizing the overall distortion due to neutron & gamma heating of the Corner Cube Retroreflectors (CCRs). These are small optical mirrors embedded in five first wall locations around the vacuum vessel, corresponding to certain plasma tangency radii. During the development of the design and location of these CCRs, several iterations of neutronics analyses were performed to determine and minimize the total distortion due to nuclear heating of the CCRs. The CCR corresponding to TIP Channel 2 was chosen for analysis as a good middle-road case, being an average distance from the plasma (of the five channels) and having moderate neutron shielding from its blanket shield housing. Results show that Channel 2 meets the requirements of the TIP Diagnostic, but barely. These results suggest other CCRs might be at risk of excessive thermal deformation due to nuclear heating.

  20. Numerical analysis of modified Central Solenoid insert design

    DOE PAGES

    Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...

    2015-06-21

    The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations by providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current; (2) temperature 4 K, no current; (3) temperature 4 K, current 60 kA direct charge; and (4) temperature 4 K, current 60 kA reverse charge. A fatigue life assessment analysis was performed for the alternating conditions of temperature 4 K, no current, and temperature 4 K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from the numerical results using a parameterization of the critical surface in a form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS allowing a one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.

  1. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. 
Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)

  2. The Activities of the European Consortium on Nuclear Data Development and Analysis for Fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, U., E-mail: ulrich.fischer@kit.edu; Avrigeanu, M.; Avrigeanu, V.

    This paper presents an overview of the activities of the European Consortium on Nuclear Data Development and Analysis for Fusion. The Consortium combines available European expertise to provide services for the generation, maintenance, and validation of nuclear data evaluations and data files relevant for ITER, IFMIF and DEMO, as well as codes and software tools required for related nuclear calculations.

  3. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the various wavefront control algorithms used in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread. This algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, and offers excellent real-time performance and stability. However, as the numbers of wavefront sensor sub-apertures and deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltage of each actuator is obtained through iteration, giving a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n²) ~ O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) ~ O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage of the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
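
    The contrast between the two approaches can be sketched as a least-squares problem D v = s, where D is the pre-measured influence (relational) matrix: the direct method applies a stored pseudo-inverse each frame, while an iterative scheme (here conjugate gradients on the normal equations, used as a generic stand-in for the authors' iteration) avoids forming or storing the full reconstruction matrix. The matrix sizes and random test data are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n_slopes, n_act = 2000, 1000                       # assumed sensor/actuator counts
D = rng.standard_normal((n_slopes, n_act)) / np.sqrt(n_slopes)  # influence matrix
slopes = rng.standard_normal(n_slopes)             # measured wavefront slopes

# Direct gradient method: one stored pseudo-inverse, one matrix-vector product per frame.
R = np.linalg.pinv(D)
v_direct = R @ slopes

# Iterative method: solve the normal equations D^T D v = D^T s with conjugate gradients,
# never forming or storing the full reconstruction matrix.
normal = LinearOperator((n_act, n_act), matvec=lambda v: D.T @ (D @ v))
v_iter, info = cg(normal, D.T @ slopes, maxiter=200)

print(info, np.linalg.norm(v_direct - v_iter) / np.linalg.norm(v_direct))
```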

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Bruno; Carvalho, Paulo F.; Rodrigues, A.P.

    The ATCA standard specifies a mandatory Shelf Manager (ShM) unit which is a key element for system operation. It includes the Intelligent Platform Management Controller (IPMC), which monitors system health, retrieves inventory information and controls the Field Replaceable Units (FRUs). These elements enable intelligent health monitoring, providing high availability and safe operation and ensuring correct system operation. For critical systems like those of the ITER tokamak these features are mandatory to support long pulse operation. The Nominal Device Support (NDS) was designed and developed for the ITER CODAC Core System (CCS), which will be responsible for plant Instrumentation and Control (I&C), supervision and monitoring on ITER. It generalizes the Experimental Physics and Industrial Control System (EPICS) device support interface for Data Acquisition (DAQ) and timing devices. However, support for health management features and the ATCA ShM is not yet provided. This paper presents the implementation and test of an NDS for the ATCA ShM, using the ITER Fast Plant System Controller (FPSC) prototype environment. This prototype is fully compatible with the ITER CCS and uses the EPICS Channel Access (CA) protocol as the interface with the Plant Operation Network (PON). The implemented solution, running in an EPICS Input/Output Controller (IOC), provides Process Variables (PVs) with the system information to the PON network. These PVs can be used for control and monitoring by all CA clients, such as EPICS user interface clients and alarm systems. The results are presented, demonstrating the full integration and usability of this solution. (authors)
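
    A minimal sketch of how a Channel Access client might read and monitor the health PVs exposed by such an IOC, using the pyepics bindings; the PV names below are hypothetical placeholders and do not follow the actual ITER/NDS naming convention.

```python
# PV names below are hypothetical placeholders, not actual ITER/NDS names.
import time
from epics import caget, camonitor

fan_speed = caget("ATCA:SHM:FAN1:SPEED")           # one-shot Channel Access read
print("fan speed:", fan_speed)

def on_temp_change(pvname=None, value=None, **kw):
    # Callback fired by Channel Access whenever the monitored PV updates.
    print(f"{pvname} -> {value}")

camonitor("ATCA:SHM:BOARD3:TEMP", callback=on_temp_change)
time.sleep(10)                                     # keep the client alive to receive updates
```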

  5. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    NASA Astrophysics Data System (ADS)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.
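
    A heavily simplified numerical sketch of the random-modulus-decomposition step: the complex spectrum of the plaintext is split into two parts, one of which is assigned a pseudorandom modulus, and both parts are needed to recover the image. Using a plain FFT in place of the Fresnel transform, and treating one part as the private key, are simplifying assumptions rather than the authors' exact phase-only construction.

```python
import numpy as np

def rmd_split(spectrum, rng):
    """Split a complex spectrum C into C = F1 + F2, giving F1 a pseudorandom
    modulus (and, for simplicity here, a pseudorandom phase)."""
    r = rng.uniform(0.0, np.abs(spectrum).max(), spectrum.shape)
    f1 = r * np.exp(2j * np.pi * rng.random(spectrum.shape))   # random-modulus component
    f2 = spectrum - f1                                         # complementary component
    return f1, f2

rng = np.random.default_rng(7)
img = np.random.default_rng(1).random((32, 32))
C = np.fft.fft2(img)                    # plain FFT standing in for the Fresnel spectrum
F1, F2 = rmd_split(C, rng)
recovered = np.real(np.fft.ifft2(F1 + F2))
print(np.allclose(recovered, img))      # True: both parts are needed for decryption
```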

  6. Methodological standards and patient-centeredness in comparative effectiveness research: the PCORI perspective.

    PubMed

    2012-04-18

    Rigorous methodological standards help to ensure that medical research produces information that is valid and generalizable, and are essential in patient-centered outcomes research (PCOR). Patient-centeredness refers to the extent to which the preferences, decision-making needs, and characteristics of patients are addressed, and is the key characteristic differentiating PCOR from comparative effectiveness research. The Patient Protection and Affordable Care Act signed into law in 2010 created the Patient-Centered Outcomes Research Institute (PCORI), which includes an independent, federally appointed Methodology Committee. The Methodology Committee is charged to develop methodological standards for PCOR. The 4 general areas identified by the committee in which standards will be developed are (1) prioritizing research questions, (2) using appropriate study designs and analyses, (3) incorporating patient perspectives throughout the research continuum, and (4) fostering efficient dissemination and implementation of results. A Congressionally mandated PCORI methodology report (to be issued in its first iteration in May 2012) will begin to provide standards in each of these areas, and will inform future PCORI funding announcements and review criteria. The work of the Methodology Committee is intended to enable generation of information that is relevant and trustworthy for patients, and to enable decisions that improve patient-centered outcomes.

  7. To adopt is to adapt: the process of implementing the ICF with an acute stroke multidisciplinary team in England.

    PubMed

    Tempest, Stephanie; Harries, Priscilla; Kilbride, Cherry; De Souza, Lorraine

    2012-01-01

    The success of the International Classification of Functioning, Disability and Health (ICF) depends on its uptake in clinical practice. This project aimed to explore ways the ICF could be used with an acute stroke multidisciplinary team and identify key learning from the implementation process. Using an action research approach, iterative cycles of observe, plan, act and evaluate were used within three phases: exploratory; innovatory and reflective. Thematic analysis was undertaken, using a model of immersion and crystallisation, on data collected via interview and focus groups, e-mail communications, minutes from relevant meetings, field notes and a reflective diary. Two overall themes were determined from the data analysis which enabled implementation. There is a need to: (1) adopt the ICF in ways that meet local service needs; and (2) adapt the ICF language and format. The empirical findings demonstrate how to make the ICF classification a clinical reality. First, we need to adopt the ICF as a vehicle to implement local service priorities e.g. to structure a multidisciplinary team report, thus enabling ownership of the implementation process. Second, we need to adapt the ICF terminology and format to make it acceptable for use by clinicians.

  8. Cumulative risk assessment lessons learned: a review of case studies and issue papers.

    PubMed

    Gallagher, Sarah S; Rice, Glenn E; Scarano, Louis J; Teuschler, Linda K; Bollweg, George; Martin, Lawrence

    2015-02-01

    Cumulative risk assessments (CRAs) examine potential risks posed by exposure to multiple and sometimes disparate environmental stressors. CRAs are more resource intensive than single chemical assessments, and pose additional challenges and sources of uncertainty. CRAs may examine the impact of several factors on risk, including exposure magnitude and timing, chemical mixture composition, as well as physical, biological, or psychosocial stressors. CRAs are meant to increase the relevance of risk assessments, providing decision makers with information based on real world exposure scenarios that improve the characterization of actual risks and hazards. The U.S. Environmental Protection Agency has evaluated a number of CRAs, performed by or commissioned for the Agency, to seek insight into CRA concepts, methods, and lessons learned. In this article, ten case studies and five issue papers on key CRA topics are examined and a set of lessons learned are identified for CRA implementation. The lessons address the iterative nature of CRAs, importance of considering vulnerability, need for stakeholder engagement, value of a tiered approach, new methods to assess multiroute exposures to chemical mixtures, and the impact of geographical scale on approach and purpose. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. In-pile testing of ITER first wall mock-ups at relevant thermal loading conditions

    NASA Astrophysics Data System (ADS)

    Litunovsky, N.; Gervash, A.; Lorenzetto, P.; Mazul, I.; Melder, R.

    2009-04-01

    The paper describes the experimental technique and preliminary results of thermal fatigue testing of ITER first wall (FW) water-cooled mock-ups inside the core of the RBT-6 experimental fission reactor (RIAR, Dimitrovgrad, Russia). This experiment provided the simultaneous effects of neutron fluence and thermal cycling damage on the mock-ups. A PC-controlled high-temperature graphite ohmic heater was used to apply a cyclic thermal load to the mock-up surfaces. The experiment lasted for 309 effective irradiation days, reaching a final damage level of 1 dpa (CuCrZr) in the mock-ups. About 3700 thermal cycles with a heat flux of 0.4-0.5 MW/m² onto the mock-ups were realized before the heater failed. Irradiation was then continued in a non-cycling mode.

  10. Macromolecular Crystallization in Microfluidics for the International Space Station

    NASA Technical Reports Server (NTRS)

    Monaco, Lisa A.; Spearing, Scott

    2003-01-01

    At NASA's Marshall Space Flight Center, the Iterative Biological Crystallization (IBC) project has begun development of scientific hardware for macromolecular crystallization on the International Space Station (ISS). Currently, ISS crystallization research is limited to solution recipes that were prepared on the ground prior to launch. The proposed hardware will conduct solution mixing and dispensing on board the ISS, be fully automated, and have imaging functions via remote commanding from the ground. Utilizing microfluidic technology, IBC will allow for on-orbit iterations. The microfluidic LabChip(R) devices, developed together with Caliper Technologies, will greatly benefit researchers by allowing precise handling of nanoliter- to picoliter-sized fluid volumes. IBC will maximize the science return by utilizing the microfluidic approach and will be a valuable tool for structural biologists investigating medically relevant projects.

  11. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving the optimization problem. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme, which can affect the convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
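
    The sensitivity to initialization can be demonstrated with a generic multiplicative-update NMF, used here only as a stand-in for the constrained PMF (which adds constraints and penalty terms): the same update rule is run from a purely random start and from a start seeded with the longest-norm pixels, and the final residuals are compared. Data sizes and iteration counts are arbitrary assumptions.

```python
import numpy as np

def nmf_multiplicative(Y, W0, H0, n_iter=200, eps=1e-9):
    """Plain multiplicative-update NMF, Y ~ W H (a stand-in for the constrained PMF)."""
    W, H = W0.copy(), H0.copy()
    for _ in range(n_iter):
        H *= (W.T @ Y) / (W.T @ W @ H + eps)
        W *= (Y @ H.T) / (W @ H @ H.T + eps)
    return W, H, np.linalg.norm(Y - W @ H)

rng = np.random.default_rng(0)
Y = np.abs(rng.standard_normal((50, 400)))         # bands x pixels, toy hyperspectral cube
k = 4

# Initialization A: purely random factors.
Wa, Ha = np.abs(rng.standard_normal((50, k))), np.abs(rng.standard_normal((k, 400)))
# Initialization B: endmember matrix seeded with the k longest-norm pixels.
idx = np.argsort(np.linalg.norm(Y, axis=0))[-k:]
Wb, Hb = Y[:, idx].copy(), np.abs(rng.standard_normal((k, 400)))

for name, (W0, H0) in {"random": (Wa, Ha), "longest-norm": (Wb, Hb)}.items():
    *_, resid = nmf_multiplicative(Y, W0, H0)
    print(name, resid)                              # compare final reconstruction errors
```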

  12. Assessing Children's Understanding of Length Measurement: A Focus on Three Key Concepts

    ERIC Educational Resources Information Center

    Bush, Heidi

    2009-01-01

    In this article, the author presents three different tasks that can be used to assess students' understanding of the concept of length. Three important measurement concepts for students to understand are transitive reasoning, use of identical units, and iteration. In any teaching and learning process it is important to acknowledge students'…

  13. The Iterative Design of a Virtual Design Studio

    ERIC Educational Resources Information Center

    Blevis, Eli; Lim, Youn-kyung; Stolterman, Erik; Makice, Kevin

    2008-01-01

    In this article, the authors explain how they implemented Design eXchange as a shared collaborative online and physical space for design for their students. Their notion for Design eXchange favors a complex mix of key elements namely: (1) a virtual online studio; (2) a forum for review of all things related to design, especially design with the…

  14. Introducing 12 Year-Olds to Elementary Particles

    ERIC Educational Resources Information Center

    Wiener, Gerfried J.; Schmeling, Sascha M.; Hopf, Martin

    2017-01-01

    We present a new learning unit, which introduces 12 year-olds to the subatomic structure of matter. The learning unit was iteratively developed as a design-based research project using the technique of probing acceptance. We give a brief overview of the unit's final version, discuss its key ideas and main concepts, and conclude by highlighting the…

  15. Complex Adaptive Systems and the Origins of Adaptive Structure: What Experiments Can Tell Us

    ERIC Educational Resources Information Center

    Cornish, Hannah; Tamariz, Monica; Kirby, Simon

    2009-01-01

    Language is a product of both biological and cultural evolution. Clues to the origins of key structural properties of language can be found in the process of cultural transmission between learners. Recent experiments have shown that iterated learning by human participants in the laboratory transforms an initially unstructured artificial language…

  16. A Novel Real-Time Reference Key Frame Scan Matching Method

    PubMed Central

    Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu

    2017-01-01

    Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping, using either local or global approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typically the case in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, which indicates the potential for use of the new algorithm in real-time systems. PMID:28481285
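
    The hybrid dispatch logic can be sketched as follows: match against the reference key frame when enough line features are extracted, and otherwise fall back to a plain point-to-point ICP step. The line-feature extractor, thresholds and the minimal 2D ICP below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, ref, n_iter=20):
    """Minimal 2D point-to-point ICP: nearest-neighbour matching + SVD pose update."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)                    # closest reference point for each scan point
        matched = ref[idx]
        mu_s, mu_r = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_r))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                   # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_r - dR @ mu_s
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt                  # accumulate the total pose
    return R, t

def match_scan(scan, key_frame, extract_lines, min_lines=2):
    """Hybrid dispatch: feature matching against the reference key frame when
    enough line features exist, plain ICP otherwise. `extract_lines` is a
    placeholder for a line-feature extractor."""
    lines = extract_lines(scan)
    if len(lines) >= min_lines:
        return "feature-to-feature vs. reference key frame", lines
    return "point-to-point ICP fallback", icp_point_to_point(scan, key_frame)
```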

  17. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  18. A new pivoting and iterative text detection algorithm for biomedical images.

    PubMed

    Xu, Songhua; Krauthammer, Michael

    2010-12-01

    There is interest to expand the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of .60. We provide a C++ implementation of our algorithm freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.
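
    The core of a projection-histogram detector can be sketched as a recursive XY-cut: project a binarized image onto one axis, cut at low-density valleys, then pivot to the other axis inside each block. The thresholds and recursion depth below are illustrative choices, not the tuned values of the published algorithm.

```python
import numpy as np

def split_by_projection(mask, axis, min_density=0.01):
    """Cut a binary mask into bands along `axis` wherever the projection
    histogram falls below `min_density`."""
    hist = mask.mean(axis=1 - axis)                 # projection histogram along the chosen axis
    low = hist < min_density
    bands, start = [], None
    for i, is_low in enumerate(np.append(low, True)):
        if not is_low and start is None:
            start = i
        elif is_low and start is not None:
            bands.append((start, i))
            start = None
    return bands

def detect_text_blocks(mask, depth=0, max_depth=4):
    """Alternate ("pivot") between row-wise and column-wise cuts, recursing into
    each band until no further splits are found or max_depth is reached."""
    if depth >= max_depth:
        return [((0, mask.shape[0]), (0, mask.shape[1]))]
    axis = depth % 2
    blocks = []
    for a, b in split_by_projection(mask, axis):
        sub = mask[a:b, :] if axis == 0 else mask[:, a:b]
        for (r0, r1), (c0, c1) in detect_text_blocks(sub, depth + 1):
            if axis == 0:
                blocks.append(((a + r0, a + r1), (c0, c1)))
            else:
                blocks.append(((r0, r1), (a + c0, a + c1)))
    return blocks if blocks else [((0, mask.shape[0]), (0, mask.shape[1]))]
```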

  19. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.

  20. Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Jerden, James L.; Ebert, William L.

    The primary purpose of this report is to describe the strategy for coupling three process-level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included, but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations, and then represent the steady-state [H2O2] in terms of an "effective instantaneous" or "conditional" generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of the fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate that significant changes in the G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM). The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report. Different options have advantages and disadvantages based on the extent of coding that would be required and the ease of use of the final product.
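
    The coupling strategy can be summarized as a loop in which the MPM advances with a fixed conditional G-value and the RM is re-invoked only periodically or when a key concentration drifts past a threshold. The function names, state representation and threshold below are placeholders for the actual RM/MPM codes, not their interfaces.

```python
# Schematic only: radiolysis_model(), mixed_potential_step(), the state dictionary
# and the threshold value are placeholders for the actual RM and MPM codes.
def coupled_fuel_degradation(state, n_steps, recompute_every=5, h2o2_threshold=1e-6):
    g_cond = radiolysis_model(state)                  # initial conditional G-value for H2O2
    last_h2o2 = state["H2O2"]
    for step in range(1, n_steps + 1):
        state = mixed_potential_step(state, g_cond)   # one MPM iteration with fixed Gcond
        drifted = abs(state["H2O2"] - last_h2o2) > h2o2_threshold
        if step % recompute_every == 0 or drifted:
            g_cond = radiolysis_model(state)          # refresh Gcond from the RM
            last_h2o2 = state["H2O2"]
    return state
```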

  1. Sorting Five Human Tumor Types Reveals Specific Biomarkers and Background Classification Genes.

    PubMed

    Roche, Kimberly E; Weinstein, Marvin; Dunwoodie, Leland J; Poehlman, William L; Feltus, Frank A

    2018-05-25

    We applied two state-of-the-art, knowledge independent data-mining methods - Dynamic Quantum Clustering (DQC) and t-Distributed Stochastic Neighbor Embedding (t-SNE) - to data from The Cancer Genome Atlas (TCGA). We showed that the RNA expression patterns for a mixture of 2,016 samples from five tumor types can sort the tumors into groups enriched for relevant annotations including tumor type, gender, tumor stage, and ethnicity. DQC feature selection analysis discovered 48 core biomarker transcripts that clustered tumors by tumor type. When these transcripts were removed, the geometry of tumor relationships changed, but it was still possible to classify the tumors using the RNA expression profiles of the remaining transcripts. We continued to remove the top biomarkers for several iterations and performed cluster analysis. Even though the most informative transcripts were removed from the cluster analysis, the sorting ability of remaining transcripts remained strong after each iteration. Further, in some iterations we detected a repeating pattern of biological function that wasn't detectable with the core biomarker transcripts present. This suggests the existence of a "background classification" potential in which the pattern of gene expression after continued removal of "biomarker" transcripts could still classify tumors in agreement with the tumor type.
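
    The peel-and-recluster procedure can be sketched with a generic feature ranking (here an ANOVA F score standing in for DQC feature selection) and k-means clustering: remove the current top transcripts, re-cluster, and check how well the clusters still recover the tumor-type labels. The library choices, the number of removed features per round and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif
from sklearn.metrics import adjusted_rand_score

def peel_and_recluster(X, labels, n_remove=48, n_rounds=5):
    """Iteratively remove the top-ranked transcripts and re-cluster the samples.
    The ANOVA F ranking stands in for DQC feature selection."""
    keep = np.arange(X.shape[1])
    n_types = len(set(labels))
    for r in range(n_rounds):
        F, _ = f_classif(X[:, keep], labels)
        order = np.argsort(np.nan_to_num(F))[::-1]
        keep = np.delete(keep, order[:n_remove])      # peel off the current "biomarkers"
        pred = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(X[:, keep])
        print(f"round {r + 1}: {len(keep)} transcripts left, "
              f"ARI vs tumour type = {adjusted_rand_score(labels, pred):.2f}")
    return keep

# Toy usage with synthetic expression data (300 samples x 500 transcripts, 5 types).
rng = np.random.default_rng(0)
peel_and_recluster(rng.standard_normal((300, 500)), rng.integers(0, 5, 300))
```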

  2. DiMES PMI research at DIII-D in support of ITER and beyond

    DOE PAGES

    Rudakov, Dimitry L.; Abrams, Tyler; Ding, Rui; ...

    2017-03-27

    An overview of recent Plasma-Material Interactions (PMI) research at the DIII-D tokamak using the Divertor Material Evaluation System (DiMES) is presented. The DiMES manipulator allows for exposure of material samples in the lower divertor of DIII-D under well-diagnosed ITER-relevant plasma conditions. Plasma parameters during the exposures are characterized by an extensive diagnostic suite including a number of spectroscopic diagnostics, Langmuir probes, IR imaging, and Divertor Thomson Scattering. Post-mortem measurements of net erosion/deposition on the samples are done by Ion Beam Analysis, and results are modelled by the ERO and REDEP/WBC codes with the plasma background reproduced by OEDGE/DIVIMP modelling based on experimental inputs. This article highlights experiments studying sputtering erosion, re-deposition and migration of high-Z elements, mostly tungsten and molybdenum, as well as some alternative materials. Results are generally encouraging for the use of high-Z PFCs in ITER and beyond, showing high redeposition and reduced net sputter erosion. Two methods of high-Z PFC surface erosion control, with (i) external electrical biasing and (ii) local gas injection, are also discussed. These techniques may find applications in future devices.

  3. Error Field Assessment from Driven Mode Rotation: Results from Extrap-T2R Reversed-Field-Pinch and Perspectives for ITER

    NASA Astrophysics Data System (ADS)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2012-10-01

    A new ITER-relevant non-disruptive error field (EF) assessment technique, not restricted to low density and thus low beta, was demonstrated at the Extrap-T2R reversed field pinch. Resistive Wall Modes (RWMs) were generated and their rotation sustained by rotating magnetic perturbations. In particular, stable modes of toroidal mode number n=8 and 10 and unstable modes of n=1 were used in this experiment. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the RWMs were observed to rotate non-uniformly and be modulated in amplitude (in the case of unstable modes, the observed oscillation was superimposed on the mode growth). This behavior was used to infer the amplitude and toroidal phase of the n=1, 8 and 10 EFs. The method was first tested against known, deliberately applied EFs, and then against actual intrinsic EFs. Applying equal and opposite corrections resulted in longer discharges and more uniform mode rotation, indicating good EF compensation. The results agree with a simple theoretical model. Extensions to tearing modes, to the non-uniform plasma response to rotating perturbations, and to tokamaks, including ITER, will be discussed.

  4. Optimization of OSEM parameters in myocardial perfusion imaging reconstruction as a function of body mass index: a clinical approach*

    PubMed Central

    de Barros, Pietro Paolo; Metello, Luis F.; Camozzato, Tatiane Sabriela Cagol; Vieira, Domingos Manuel da Silva

    2015-01-01

    Objective The present study aims to contribute to identifying the most appropriate OSEM parameters for generating myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of iteration and subset numbers. The images were analyzed by nuclear cardiology specialists, taking their diagnostic value into consideration and indicating the most appropriate images in terms of diagnostic quality. Results An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all classes of body mass index; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, ensuring the diagnosis and a consequent appropriate and effective treatment for the patient. PMID:26543282
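
    For reference, the iteration/subset trade-off discussed above refers to the OSEM update sketched below, in which each ordered subset of projections applies one multiplicative EM step, so "4 iterations x 4 subsets" corresponds to 16 sub-updates. The dense system matrix and toy data are stand-ins for a real SPECT projector.

```python
import numpy as np

def osem(A, y, n_iter=4, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM: for each subset s, x <- x * A_s^T(y_s / A_s x) / A_s^T 1."""
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)
    subsets = [np.arange(s, n_proj, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / (As @ x + eps)
            x *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + eps)
    return x

# Toy usage: "4 iterations x 4 subsets" applies 16 multiplicative sub-updates.
rng = np.random.default_rng(1)
A = rng.random((80, 36))                    # dense stand-in for the SPECT system matrix
x_true = rng.random(36) + 0.1
x_rec = osem(A, A @ x_true, n_iter=4, n_subsets=4)
```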

  5. Electron kinetic effects on interferometry, polarimetry and Thomson scattering measurements in burning plasmas (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V.; Hartog, D. J. Den; Duff, J.

    2014-11-15

    At the high electron temperatures anticipated in ITER, the effects of electron thermal motion on Thomson scattering (TS), toroidal interferometer/polarimeter (TIP), and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. The precision of the previous lowest-order model, linear in τ = T_e/m_e c², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH, and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly, without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of T_e measurement relevant to ITER operational scenarios.

  6. A Path to Planetary Protection Requirements for Human Exploration: A Literature Review and Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Johnson, James E.; Conley, Cassie; Siegel, Bette

    2015-01-01

    As systems, technologies, and plans for the human exploration of Mars and other destinations beyond low Earth orbit begin to coalesce, it is imperative that frequent and early consideration is given to how planetary protection practices and policy will be upheld. While the development of formal planetary protection requirements for future human space systems and operations may still be a few years from fruition, guidance to appropriately influence mission and system design will be needed soon to avoid costly design and operational changes. The path to constructing such requirements is a journey that espouses key systems engineering practices of understanding shared goals, objectives and concerns, identifying key stakeholders, and iterating a draft requirement set to gain community consensus. This paper traces through each of these practices, beginning with a literature review of nearly three decades of publications addressing planetary protection concerns with respect to human exploration. Key goals, objectives and concerns, particularly with respect to notional requirements, required studies and research, and technology development needs have been compiled and categorized to provide a current 'state of knowledge'. This information, combined with the identification of key stakeholders in upholding planetary protection concerns for human missions, has yielded a draft requirement set that might feed future iteration among space system designers, exploration scientists, and the mission operations community. Combining the information collected with a proposed forward path will hopefully yield a mutually agreeable set of timely, verifiable, and practical requirements for human space exploration that will uphold international commitment to planetary protection.

  7. Survival and in-vessel redistribution of beryllium droplets after ITER disruptions

    NASA Astrophysics Data System (ADS)

    Vignitchouk, L.; Ratynskaia, S.; Tolias, P.; Pitts, R. A.; De Temmerman, G.; Lehnen, M.; Kiramov, D.

    2018-07-01

    The motion and temperature evolution of beryllium droplets produced by first wall surface melting after ITER major disruptions and vertical displacement events mitigated during the current quench are simulated by the MIGRAINe dust dynamics code. These simulations employ an updated physical model which addresses droplet-plasma interaction in ITER-relevant regimes characterized by magnetized electron collection and thin-sheath ion collection, as well as electron emission processes induced by electron and high-Z ion impacts. The disruption scenarios have been implemented from DINA simulations of the time-evolving plasma parameters, while the droplet injection points are set to the first-wall locations expected to receive the highest thermal quench heat flux according to field line tracing studies. The droplet size, speed and ejection angle are varied within the range of currently available experimental and theoretical constraints, and the final quantities of interest are obtained by weighting single-trajectory output with different size and speed distributions. Detailed estimates of droplet solidification into dust grains and their subsequent deposition in the vessel are obtained. For representative distributions of the droplet injection parameters, the results indicate that at most a few percent of the beryllium mass initially injected is converted into solid dust, while the remaining mass either vaporizes or forms liquid splashes on the wall. Simulated in-vessel spatial distributions are also provided for the surviving dust, with the aim of providing guidance for planned dust diagnostic, retrieval and clean-up systems on ITER.

  8. Advances in the steady-state hybrid regime in DIII-D – a fully non-inductive, ELM-suppressed scenario for ITER

    DOE PAGES

    Petty, Craig C.; Nazikian, Raffi; Park, Jin Myung; ...

    2017-07-19

    Here, the hybrid regime with beta, collisionality, safety factor and plasma shape relevant to the ITER steady-state mission has been successfully integrated with ELM suppression by applying an odd parity n=3 resonant magnetic perturbation (RMP). Fully non-inductive hybrids in the DIII-D tokamak with high beta (β ≤ 2.8%) and high confinement (H98y2 ≤ 1.4) in the ITER similar shape have achieved zero surface loop voltage for up to two current relaxation times using efficient central current drive from ECCD and NBCD. The n=3 RMP causes surprisingly little increase in thermal transport during ELM suppression. Poloidal magnetic flux pumping in hybrid plasmas maintains q above 1 without loss of current drive efficiency, except that experiments show that extremely peaked ECCD profiles can create sawteeth. During ECCD, Alfvén eigenmode (AE) activity is replaced by a more benign fishbone-like mode, reducing anomalous beam ion diffusion by a factor of 2. While the electron and ion thermal diffusivities substantially increase with higher ECCD power, the loss of confinement can be offset by the decreased fast ion transport resulting from AE suppression. Extrapolations from DIII-D along a dimensionless parameter scaling path, as well as those using self-consistent theory-based modeling, show that these ELM-suppressed, fully non-inductive hybrids can achieve the Q = 5 ITER steady-state mission.

  9. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
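
    The contrast between the two generative processes can be made concrete with a toy sketch: iteration appends another tone at the same level, while recursion embeds a scaled copy of the whole pattern one level deeper. The frequencies and the 0.5 ratio are arbitrary illustrative choices, not the stimuli used in the experiment.

```python
def iterate_step(seq, new_tone):
    """Iteration: append one more tone at the same hierarchical level."""
    return seq + [new_tone]

def recurse_step(pattern, ratio=0.5):
    """Recursion: replace every terminal tone by [tone, tone * ratio],
    embedding a scaled copy of the structure one level deeper."""
    if isinstance(pattern, list):
        return [recurse_step(p, ratio) for p in pattern]
    return [pattern, pattern * ratio]

seq_iter = [440.0]
tree_rec = 440.0
for new_tone in [550.0, 660.0, 770.0]:
    seq_iter = iterate_step(seq_iter, new_tone)   # length grows, depth stays flat
    tree_rec = recurse_step(tree_rec)             # nesting depth grows at every step

print(seq_iter)   # [440.0, 550.0, 660.0, 770.0]
print(tree_rec)   # [[[440.0, 220.0], [220.0, 110.0]], [[220.0, 110.0], [110.0, 55.0]]]
```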

  10. Indicators and measurement tools for health system integration: a knowledge synthesis protocol.

    PubMed

    Oelke, Nelly D; Suter, Esther; da Silva Lima, Maria Alice Dias; Van Vliet-Brown, Cheryl

    2015-07-29

    Health system integration is a key component of health system reform with the goal of improving outcomes for patients, providers, and the health system. Although health systems continue to strive for better integration, current delivery of health services continues to be fragmented. A key gap in the literature is the lack of information on what successful integration looks like and how to measure achievement towards an integrated system. This multi-site study protocol builds on a prior knowledge synthesis completed by two of the primary investigators which identified 10 key principles that collectively support health system integration. The aim is to answer two research questions: What are appropriate indicators for each of the 10 key integration principles developed in our previous knowledge synthesis and what measurement tools are used to measure these indicators? To enhance generalizability of the findings, a partnership between Canada and Brazil was created as health system integration is a priority in both countries and they share similar contexts. This knowledge synthesis will follow an iterative scoping review process with emerging information from knowledge-user engagement leading to the refinement of research questions and study selection. This paper describes the methods for each phase of the study. Research questions were developed with stakeholder input. Indicator identification and prioritization will utilize a modified Delphi method and patient/user focus groups. Based on priority indicators, a search of the literature will be completed and studies screened for inclusion. Quality appraisal of relevant studies will be completed prior to data extraction. Results will be used to develop recommendations and key messages to be presented through integrated and end-of-grant knowledge translation strategies with researchers and knowledge-users from the three jurisdictions. This project will directly benefit policy and decision-makers by providing an easy accessible set of indicators and tools to measure health system integration across different contexts and cultures. Being able to evaluate the success of integration strategies and initiatives will lead to better health system design and improved health outcomes for patients.

  11. The Laboratory Course Assessment Survey: A Tool to Measure Three Dimensions of Research-Course Design.

    PubMed

    Corwin, Lisa A; Runyon, Christopher; Robinson, Aspen; Dolan, Erin L

    2015-01-01

    Course-based undergraduate research experiences (CUREs) are increasingly being offered as scalable ways to involve undergraduates in research. Yet few if any design features that make CUREs effective have been identified. We developed a 17-item survey instrument, the Laboratory Course Assessment Survey (LCAS), that measures students' perceptions of three design features of biology lab courses: 1) collaboration, 2) discovery and relevance, and 3) iteration. We assessed the psychometric properties of the LCAS using established methods for instrument design and validation. We also assessed the ability of the LCAS to differentiate between CUREs and traditional laboratory courses, and found that the discovery and relevance and iteration scales differentiated between these groups. Our results indicate that the LCAS is suited for characterizing and comparing undergraduate biology lab courses and should be useful for determining the relative importance of the three design features for achieving student outcomes. © 2015 L. A. Corwin et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  12. High-resolution tungsten spectroscopy relevant to the diagnostic of high-temperature tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Rzadkiewicz, J.; Yang, Y.; Kozioł, K.; O'Mullane, M. G.; Patel, A.; Xiao, J.; Yao, K.; Shen, Y.; Lu, D.; Hutton, R.; Zou, Y.; JET Contributors

    2018-05-01

    The x-ray transitions in Cu- and Ni-like tungsten ions in the 5.19-5.26 Å wavelength range that are relevant as a high-temperature tokamak diagnostic, in particular for JET in the ITER-like wall configuration, have been studied. Tungsten spectra were measured at the upgraded Shanghai Electron Beam Ion Trap operated with electron-beam energies from 3.16 to 4.55 keV. High-resolution measurements were performed by means of a flat Si 111 crystal spectrometer equipped with a CCD camera. The experimental wavelengths were determined with an accuracy of 0.3-0.4 mÅ. The wavelength of the ground-state transition in Cu-like tungsten from the 3p⁵3d¹⁰4s4d [(3/2,(1/2,5/2)₂]₁/₂ level was measured. All measured wavelengths were compared with those measured from JET ITER-like wall plasmas and with other experiments and various theoretical predictions, including COWAN, RELAC, multiconfigurational Dirac-Fock (MCDF), and FAC calculations. To obtain higher accuracy from the theoretical predictions, the MCDF calculations were extended by taking into account correlation effects (configuration-interaction approach). It was found that such an extension brings the calculations closer to the experimental values in comparison with other calculations.

  13. Investigation of key parameters for the development of reliable ITER baseline operation scenarios using CORSICA

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Casper, T. A.; Snipes, J. A.

    2018-05-01

    ITER will demonstrate the feasibility of burning plasma operation by operating DT plasmas in the ELMy H-mode regime with a high fusion power gain Q ~ 10. The 15 MA ITER baseline operation scenario has been studied using CORSICA, focusing on the entry to burn, flat-top burning plasma operation and exit from burn. Burning plasma operation for about 400 s of the current flat-top was achieved in H-mode within the various engineering constraints imposed by the poloidal field coil and power supply systems. The target fusion gain (Q ~ 10) was achievable in the 15 MA ITER baseline operation with a moderate amount of total auxiliary heating power (~50 MW). It has been observed that the tungsten (W) concentration needs to be maintained at a low level (n_W/n_e up to the order of 1.0 × 10⁻⁵) to avoid radiative collapse and uncontrolled early termination of the discharge. The dynamic evolution of the density can modify the H-mode access unless the applied auxiliary heating power is significantly higher than the H-mode threshold power. Several qualitative sensitivity studies have been performed to provide guidance for further optimizing the plasma operation and performance. Increasing the density profile peaking factor was quite effective in increasing the alpha particle self-heating power and fusion power multiplication factor. Varying the combination of auxiliary heating power has shown that the fusion power multiplication factor can be reduced along with the increase in the total auxiliary heating power. As the 15 MA ITER baseline operation scenario requires the full capacity of the coil and power supply systems, the operation window for H-mode access and shape modification was narrow. The updated ITER baseline operation scenarios developed in this work will become a basis for further optimization studies necessary along with the improvement in understanding of burning plasma physics.

  14. Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; JET-EFDA contributors

    2015-08-01

    The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes: (a) material erosion and migration, (b) fuel recycling and retention, (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and the predictions for ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. The main observations are: (a) a low primary erosion source in H-mode plasmas and a reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30-50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10⁻⁵ and is determined by beryllium ions. (b) A reduction of the long-term fuel retention (factor 10-20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW. Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirms its advantageous PSI behaviour, and gives strong support to the ITER material selection.

  15. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. We illustrate the approach by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
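
    The warm-started Picard idea can be sketched in a simplified form: sample Chebyshev-Gauss-Lobatto nodes over one segment, start from a supplied approximate trajectory, and iterate the Picard integral to convergence. Cumulative trapezoidal quadrature is used here instead of the Chebyshev-polynomial machinery of the full MCPI, so this is only a schematic of the fixed-point structure, not the published algorithm.

```python
import numpy as np

def picard_iterate(f, t0, t1, x0, warm_start, n_nodes=64, n_iter=30, tol=1e-10):
    """Fixed-point Picard iteration x(t) = x0 + integral of f(s, x(s)) from t0 to t,
    sampled at Chebyshev-Gauss-Lobatto nodes and refined from `warm_start(t)`."""
    tau = -np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1))   # nodes on [-1, 1]
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0                      # mapped to [t0, t1]
    x = np.array([warm_start(ti) for ti in t], dtype=float)     # warm-start trajectory
    for _ in range(n_iter):
        g = np.array([f(ti, xi) for ti, xi in zip(t, x)])
        dt = np.diff(t)
        # cumulative trapezoid integral of g along the non-uniform node grid
        integral = np.concatenate(([np.zeros_like(x[0])],
                                   np.cumsum(0.5 * (g[1:] + g[:-1]) * dt[:, None], axis=0)))
        x_new = x0 + integral
        if np.max(np.abs(x_new - x)) < tol:
            return t, x_new
        x = x_new
    return t, x

# Example: harmonic oscillator x'' = -x as a first-order system, warm-started from a constant.
f = lambda ti, xi: np.array([xi[1], -xi[0]])
t, x = picard_iterate(f, 0.0, 1.0, np.array([1.0, 0.0]), lambda ti: np.array([1.0, 0.0]))
print(x[-1], [np.cos(1.0), -np.sin(1.0)])   # final state vs. the analytic solution
```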

  16. Addressing physical inactivity in Omani adults: perceptions of public health managers.

    PubMed

    Mabry, Ruth M; Al-Busaidi, Zakiya Q; Reeves, Marina M; Owen, Neville; Eakin, Elizabeth G

    2014-03-01

    To explore barriers and solutions to addressing physical inactivity and prolonged sitting in the adult population of Oman. Qualitative study involving semi-structured interviews that took place from October 2011 to January 2012. Participants were recruited through purposive sampling. Data collection and analysis was an iterative process; later interviews explored emerging themes. Interviews were audio-recorded and transcribed and continued until data saturation; this occurred by the tenth interviewee. Thematic content analysis was carried out, guided by an ecological model of health behaviour. Muscat, Oman. Ten mid-level public health managers. Barriers for physical inactivity were grouped around four themes: (i) intrapersonal (lack of motivation, awareness and time); (ii) social (norms restricting women's participation in outdoor activity, low value of physical activity); (iii) environment (lack of places to be active, weather); and (iv) policy (ineffective health communication, limited resources). Solutions focused on culturally sensitive interventions at the environment (building sidewalks and exercise facilities) and policy levels (strengthening existing interventions and coordinating actions with relevant sectors). Participants' responses regarding sitting time were similar to, but much more limited than those related to physical inactivity, except for community participation and voluntarism, which were given greater emphasis as possible solutions to reduce sitting time. Given the increasing prevalence of chronic disease in Oman and the Arabian Gulf, urgent action is required to implement gender-relevant public health policies and programmes to address physical inactivity, a key modifiable risk factor. Additionally, research on the determinants of physical inactivity and prolonged sitting time is required to guide policy makers.

  17. Advancing human health risk assessment: Integrating recent advisory committee recommendations

    PubMed Central

    Becker, Richard A.; Haber, Lynne T.; Pottenger, Lynn H.; Bredfeldt, Tiffany; Fenner-Crisp, Penelope A.

    2013-01-01

    Over the last dozen years, many national and international expert groups have considered specific improvements to risk assessment. Many of their stated recommendations are mutually supportive, but others appear conflicting, at least in an initial assessment. This review identifies areas of consensus and difference and recommends a practical, biology-centric course forward, which includes: (1) incorporating a clear problem formulation at the outset of the assessment with a level of complexity that is appropriate for informing the relevant risk management decision; (2) using toxicokinetics and toxicodynamic information to develop Chemical Specific Adjustment Factors (CSAF); (3) using mode of action (MOA) information and an understanding of the relevant biology as the key, central organizing principle for the risk assessment; (4) integrating MOA information into dose–response assessments using existing guidelines for non-cancer and cancer assessments; (5) using a tiered, iterative approach developed by the World Health Organization/International Programme on Chemical Safety (WHO/IPCS) as a scientifically robust, fit-for-purpose approach for risk assessment of combined exposures (chemical mixtures); and (6) applying all of this knowledge to enable interpretation of human biomonitoring data in a risk context. While scientifically based defaults will remain important and useful when data on CSAF or MOA to refine an assessment are absent or insufficient, assessments should always strive to use these data. The use of available 21st century knowledge of biological processes, clinical findings, chemical interactions, and dose–response at the molecular, cellular, organ and organism levels will minimize the need for extrapolation and reliance on default approaches. PMID:23844697

  18. An analytical and numerical study of Galton-Watson branching processes relevant to population dynamics

    NASA Astrophysics Data System (ADS)

    Jang, Sa-Han

    Galton-Watson branching processes of relevance to human population dynamics are the subject of this thesis. We begin with a historical survey of the invention of this model in the middle of the 19th century, for the purpose of modelling the extinction of unusual surnames in France and Britain. We then review the principal developments and refinements of this model, and their applications to a wide variety of problems in biology and physics. Next, we discuss in detail the case where the probability generating function for a Galton-Watson branching process is a geometric series, which can be summed in closed form to yield a fractional linear generating function that can be iterated indefinitely in closed form. We then describe the matrix method of Keyfitz and Tyree, and use it to determine how large a matrix must be chosen to model accurately a Galton-Watson branching process for a very large number of generations, of the order of hundreds or even thousands. Finally, we show that any attempt to explain the recent evidence for the existence, thousands of generations ago, of a 'mitochondrial Eve' and a 'Y-chromosomal Adam' in terms of the standard Galton-Watson branching process, or indeed any statistical model that assumes equality of probabilities of passing one's genes to one's descendants in later generations, is unlikely to be successful. We explain that such models take no account of the advantages that the descendants of the most successful individuals in earlier generations enjoy over their contemporaries, which must play a key role in human evolution.
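
    As a purely illustrative aside (not drawn from the thesis), a Galton-Watson process with geometric offspring can be simulated directly; the sketch below estimates a lineage's survival probability by Monte Carlo, with the offspring parameter and generation count chosen arbitrarily.

```python
# Illustrative Monte Carlo sketch: simulate a Galton-Watson branching process with
# a geometric offspring distribution and estimate the probability that a lineage
# survives a given number of generations. Parameters are arbitrary demo values.
import numpy as np

rng = np.random.default_rng(0)

def survives(p, generations, max_pop=10_000):
    """Offspring per individual ~ number of failures before the first success,
    i.e. P(k) = p * (1 - p)**k with mean (1 - p) / p."""
    pop = 1
    for _ in range(generations):
        if pop == 0:
            return False
        if pop > max_pop:                       # essentially certain long-term survival
            return True
        pop = rng.geometric(p, size=pop).sum() - pop   # numpy's geometric is >= 1
    return pop > 0

p = 0.45                                        # mean offspring (1-p)/p ~ 1.22: supercritical
trials = 2000
frac = sum(survives(p, 50) for _ in range(trials)) / trials
print(f"estimated 50-generation survival probability: {frac:.3f}")
```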

  19. Relationship between brainstem, cortical and behavioral measures relevant to pitch salience in humans.

    PubMed

    Krishnan, Ananthanarayan; Bidelman, Gavin M; Smalt, Christopher J; Ananthakrishnan, Saradha; Gandour, Jackson T

    2012-10-01

    Neural representation of pitch-relevant information at both the brainstem and cortical levels of processing is influenced by language or music experience. However, the functional roles of brainstem and cortical neural mechanisms in the hierarchical network for language processing, and how they drive and maintain experience-dependent reorganization, are not known. In an effort to evaluate the possible interplay between these two levels of pitch processing, we introduce a novel electrophysiological approach to evaluate pitch-relevant neural activity at the brainstem and auditory cortex concurrently. Brainstem frequency-following responses and cortical pitch responses were recorded from participants in response to iterated rippled noise stimuli that varied in stimulus periodicity (pitch salience). A control condition using iterated rippled noise devoid of pitch was employed to ensure pitch specificity of the cortical pitch response. Neural data were compared with behavioral pitch discrimination thresholds. Results showed that magnitudes of neural responses increase systematically and that behavioral pitch discrimination improves with increasing stimulus periodicity, indicating more robust encoding for salient pitch. Absence of the cortical pitch response in the control condition confirms that the cortical pitch response is specific to pitch. Behavioral pitch discrimination was better predicted by brainstem and cortical responses together than by each separately. The close correspondence between neural and behavioral data suggests that neural correlates of pitch salience that emerge in early, preattentive stages of processing in the brainstem may drive and maintain with high fidelity the early cortical representations of pitch. These neural representations together contain adequate information for the development of perceptual pitch salience. Copyright © 2012 Elsevier Ltd. All rights reserved.
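
    For readers unfamiliar with the stimulus, iterated rippled noise is generated by an iterative delay-and-add network; the sketch below shows one common 'add-same' construction with illustrative sample rate, delay and gain values, not the parameters used in the study.

```python
# Minimal sketch of iterated rippled noise (IRN) generation with an "add-same"
# delay-and-add network: each iteration delays the waveform by d samples and adds
# it back with gain g, building up temporal regularity (pitch ~ 1/delay) whose
# salience grows with the number of iterations. All parameters are illustrative.
import numpy as np

def iterated_rippled_noise(duration_s=1.0, fs=16_000, delay_ms=8.0, gain=1.0,
                           n_iter=8, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_ms * 1e-3 * fs))        # delay in samples (pitch ~ 125 Hz here)
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + gain * delayed                  # delay-and-add ("add-same") stage
    return x / np.max(np.abs(x))                # normalise to avoid clipping

irn = iterated_rippled_noise(n_iter=8)          # larger n_iter -> stronger pitch salience
```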

  20. Modelling the physics in iterative reconstruction for transmission computed tomography

    PubMed Central

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

    There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and it allows the inclusion of detailed models of photon transport and detection physics to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and the modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
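
    As a generic illustration of the update loop that such IR methods share (not a method from the review itself), the sketch below implements a basic SIRT iteration on a tiny random stand-in for a CT system matrix.

```python
# Generic SIRT-style iterative reconstruction sketch, included only to illustrate
# the basic update loop behind IR methods; the tiny random system matrix stands in
# for a real CT projector and is not taken from the paper.
import numpy as np

def sirt(A, b, n_iter=200):
    """x_{k+1} = x_k + C A^T R (b - A x_k), with R = 1/row sums, C = 1/column sums."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)      # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)      # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(1)
A = rng.random((60, 30))            # stand-in projection operator (nonnegative)
x_true = rng.random(30)
b = A @ x_true                      # noiseless "sinogram"
x_rec = sirt(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error shrinks with iterations
```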

  1. Transformation and diversification in early mammal evolution.

    PubMed

    Luo, Zhe-Xi

    2007-12-13

    Evolution of the earliest mammals shows successive episodes of diversification. Lineage-splitting in Mesozoic mammals is coupled with many independent evolutionary experiments and ecological specializations. Classic scenarios of mammalian morphological evolution tend to posit an orderly acquisition of key evolutionary innovations leading to adaptive diversification, but newly discovered fossils show that evolution of such key characters as the middle ear and the tribosphenic teeth is far more labile among Mesozoic mammals. Successive diversifications of Mesozoic mammal groups multiplied the opportunities for many dead-end lineages to iteratively evolve developmental homoplasies and convergent ecological specializations, parallel to those in modern mammal groups.

  2. Staying on the Journey: Maintaining a Change Momentum with PB4L "School-Wide"

    ERIC Educational Resources Information Center

    Boyd, Sally

    2016-01-01

    How do schools maintain momentum with change and enter new cycles of growth when they are attempting to do things differently? This article draws on a two-year evaluation of the "Positive Behaviour for Learning School-Wide" initiative to identify key factors that enabled schools to engage in a long-term and iterative change process.…

  3. Mentoring for Innovation: Key Factors Affecting Participant Satisfaction in the Process of Collaborative Knowledge Construction in Teacher Training

    ERIC Educational Resources Information Center

    Dorner, Helga; Karpati, Andrea

    2010-01-01

    This paper presents data about the successful use of the Mentored Innovation Model for professional development for a group of Hungarian teachers (n = 23, n = 20 in two iterations), which was employed in the CALIBRATE project in order to enhance their ICT skills and pedagogical competences needed for participation in a multicultural, multilingual…

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.

    Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, results reported in this paper suggest that the DIII-D high β_P scenario could be a candidate for ITER steady state operation.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  6. Neoclassical toroidal viscosity in perturbed equilibria with general tokamak geometry

    NASA Astrophysics Data System (ADS)

    Logan, Nikolas C.; Park, Jong-Kyu; Kim, Kimin; Wang, Zhirui; Berkery, John W.

    2013-12-01

    This paper presents a calculation of neoclassical toroidal viscous torque independent of large-aspect-ratio expansions across kinetic regimes. The Perturbed Equilibrium Nonambipolar Transport (PENT) code was developed for this purpose, and is compared to previous combined regime models as well as regime specific limits and a drift kinetic δf guiding center code. It is shown that retaining general expressions, without circular large-aspect-ratio or other orbit approximations, can be important at experimentally relevant aspect ratio and shaping. The superbanana plateau, a kinetic resonance effect recently recognized for its relevance to ITER, is recovered by the PENT calculations and shown to require highly accurate treatment of geometric effects.

  7. Prospects for measuring the fuel ion ratio in burning ITER plasmas using a DT neutron emission spectrometer.

    PubMed

    Hellesen, C; Skiba, M; Dzysiuk, N; Weiszflog, M; Hjalmarsson, A; Ericsson, G; Conroy, S; Andersson-Sundén, E; Eriksson, J; Binda, F

    2014-11-01

    The fuel ion ratio nt/nd is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., nt/nd = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands that a DT spectrometer has to fulfill to be able to determine nt/nd with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.

  8. Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

    Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of relevant matrices, and the associated program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be most significantly economical in comparison to similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems, involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
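
    A rough sketch of the same two-stage idea, written with dense SciPy routines instead of the banded FORTRAN implementation described above: an eigenvalue count from the inertia of A - σB plays the role of the Sturm sequence check, and shifted inverse iteration refines the isolated root and its vector. The test problem and sizes are illustrative.

```python
# Sketch of Sturm-count isolation followed by inverse iteration for A q = lambda B q
# with symmetric A and positive-definite B, using dense SciPy routines; a banded
# solver would exploit the matrix structure as in the paper.
import numpy as np
from scipy.linalg import ldl, lu_factor, lu_solve, eigh

def count_below(A, B, sigma):
    """Number of eigenvalues below sigma = number of negative eigenvalues of
    A - sigma*B (Sylvester's law of inertia, since B is positive definite)."""
    _, d, _ = ldl(A - sigma * B)
    return int(np.sum(np.linalg.eigvalsh(d) < 0.0))

def isolate_and_refine(A, B, k, lo, hi, bisect_steps=40, inv_steps=30):
    # Bisection on the count until the k-th eigenvalue (0-based) is bracketed
    for _ in range(bisect_steps):
        mid = 0.5 * (lo + hi)
        if count_below(A, B, mid) > k:
            hi = mid
        else:
            lo = mid
    sigma = 0.5 * (lo + hi)
    # Inverse iteration with the isolated shift: (A - sigma*B) q_new = B q
    lu = lu_factor(A - sigma * B)
    q = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(inv_steps):
        q = lu_solve(lu, B @ q)
        q /= np.sqrt(q @ (B @ q))               # B-normalisation
    lam = (q @ (A @ q)) / (q @ (B @ q))         # Rayleigh quotient
    return lam, q

# Small symmetric test problem with B positive definite
rng = np.random.default_rng(3)
M = rng.standard_normal((8, 8))
A = M + M.T
B = np.eye(8) + 0.1 * (M @ M.T)
lam, q = isolate_and_refine(A, B, k=0, lo=-20.0, hi=20.0)
print(lam, eigh(A, B, eigvals_only=True).min())  # the two values should agree
```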

  9. Prospects for measuring the fuel ion ratio in burning ITER plasmas using a DT neutron emission spectrometer

    NASA Astrophysics Data System (ADS)

    Hellesen, C.; Skiba, M.; Dzysiuk, N.; Weiszflog, M.; Hjalmarsson, A.; Ericsson, G.; Conroy, S.; Andersson-Sundén, E.; Eriksson, J.; Binda, F.

    2014-11-01

    The fuel ion ratio nt/nd is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., nt/nd = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands that a DT spectrometer has to fulfill to be able to determine nt/nd with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.

  10. A new approach to the human muscle model.

    PubMed

    Baildon, R W; Chapman, A E

    1983-01-01

    Hill's (1938) two-component muscle model is used as the basis for digital computer simulation of human muscular contraction by means of an iterative process. The contractile (CC) and series elastic (SEC) components are lumped components of structures which produce and transmit torque to the external environment. The CC is described in angular terms along four dimensions as a series of non-planar torque-angle-angular velocity surfaces stacked on top of each other, each surface being appropriate to a given level of muscular activation. The SEC is described similarly along dimensions of torque, angular stretch, overall muscle angular displacement and activation. The iterative process introduces negligible error and allows the mechanical outcome of a variety of normal muscular contractions to be evaluated parsimoniously. The model allows analysis of many aspects of muscle behaviour as well as optimization studies. Definition of the relevant relations should also allow reproduction and prediction of the outcome of contractions in individuals.

  11. Knobology in use: an experimental evaluation of ergonomics recommendations.

    PubMed

    Overgård, Kjell Ivar; Fostervold, Knut Inge; Bjelland, Hans Vanhauwaert; Hoff, Thomas

    2007-05-01

    The scientific basis for ergonomics recommendations for controls has usually not been related to active goal-directed use. The present experiment tests how different knob sizes and torques affect operator performance. The task employed is to control a pointer by the use of a control knob, and is as such an experimentally defined goal-directed task relevant to machine systems in general. Duration of use, error associated with use (overshooting of the goal area) and movement reproduction were used as performance measures. Significant differences between knob sizes were found for movement reproduction. High torques led to less overshooting than low torques. The results for duration of use showed a tendency for the differences between knob sizes to diminish from the first iteration to the second. The present results indicate that the ergonomically recommended ranges of knob sizes might affect operator performance differently.

  12. Integrated prototyping environment for programmable automation

    NASA Astrophysics Data System (ADS)

    da Costa, Francis; Hwang, Vincent S. S.; Khosla, Pradeep K.; Lumia, Ronald

    1992-11-01

    We propose a rapid prototyping environment for robotic systems, based on tenets of modularity, reconfigurability and extendibility that may help build robot systems 'faster, better, and cheaper.' Given a task specification (e.g., repair brake assembly), the user browses through a library of building blocks that include both hardware and software components. Software advisors or critics recommend how blocks may be 'snapped' together to speedily construct alternative ways to satisfy task requirements. Mechanisms to allow 'swapping' competing modules for comparative test and evaluation studies are also included in the prototyping environment. After some iterations, a stable configuration or 'wiring diagram' emerges. This customized version of the general prototyping environment still contains all the hooks needed to incorporate future improvements in component technologies and to obviate unplanned obsolescence. The prototyping environment so described is relevant for both interactive robot programming (telerobotics) and iterative robot system development (prototyping).

  13. On a multigrid method for the coupled Stokes and porous media flow problem

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of a Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported, and, moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, that are relevant for applications.
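
    A minimal sketch of the decoupled Uzawa-type smoother described above, applied to a generic dense saddle-point system; the relaxation parameter is hand-picked here, whereas the paper determines it by Local Fourier Analysis, and the staggered-grid finite volume discretisation is not reproduced.

```python
# Decoupled Uzawa-type smoother for a saddle-point system [[A, Bt], [B, 0]] [u; p] = [f; g]:
# one symmetric Gauss-Seidel sweep on the velocity block followed by a Richardson
# step on the pressure. The toy matrices and omega are illustrative choices only.
import numpy as np

def sgs_sweep(A, u, rhs):
    """One symmetric Gauss-Seidel sweep (forward then backward) on A u = rhs."""
    n = A.shape[0]
    for i in list(range(n)) + list(range(n - 1, -1, -1)):
        u[i] += (rhs[i] - A[i] @ u) / A[i, i]
    return u

def uzawa_smoother(A, B, f, g, u, p, omega=0.5, sweeps=1):
    for _ in range(sweeps):
        u = sgs_sweep(A, u, f - B.T @ p)        # velocity relaxation
        p = p + omega * (B @ u - g)             # Richardson step on the pressure
    return u, p

# Toy saddle-point problem with an SPD velocity block and full-rank B
rng = np.random.default_rng(0)
n, m = 20, 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
u_true, p_true = rng.standard_normal(n), rng.standard_normal(m)
f, g = A @ u_true + B.T @ p_true, B @ u_true

u, p = np.zeros(n), np.zeros(m)
for _ in range(400):                            # smoother used here as a stand-alone iteration
    u, p = uzawa_smoother(A, B, f, g, u, p, omega=0.5)
# residual norms of the momentum and continuity equations (should shrink toward zero)
print(np.linalg.norm(f - A @ u - B.T @ p), np.linalg.norm(B @ u - g))
```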

  14. Community, intervention and provider support influences on implementation: reflections from a South African illustration of safety, peace and health promotion.

    PubMed

    van Niekerk, Ashley; Seedat, Mohamed; Kramer, Sherianne; Suffla, Shahnaaz; Bulbulia, Samed; Ismail, Ghouwa

    2014-01-01

    The development, implementation and evaluation of community interventions are important for reducing child violence and injuries in low- to middle-income contexts, with successful implementation critical to effective intervention outcomes. The assessment of implementation processes is required to identify the factors that influence effective implementation. This article draws on a child safety, peace and health initiative to examine key factors that enabled or hindered its implementation, in a context characterised by limited resources. A case study approach was employed. The research team was made up of six researchers and intervention coordinators, who led the development and implementation of the Ukuphepha Child Study in South Africa, and who are also the authors of this article. The study used author observations, reflections and discussions of the factors perceived to influence the implementation of the intervention. The authors engaged in an in-depth and iterative dialogic process aimed at abstracting the experiences of the intervention, with a recursive cycle of reflection and dialogue. Data were analysed utilising inductive content analysis, and categorised using classification frameworks for understanding implementation. The study highlights key factors that enabled or hindered implementation. These included the community context and concomitant community engagement processes; intervention compatibility and adaptability issues; community service provider perceptions of intervention relevance and expectations; and the intervention support system, characterised by training and mentorship support. This evaluation illustrated the complexity of intervention implementation. The study approach sought to support intervention fidelity by fostering and maintaining community endorsement and support, a prerequisite for the unfolding implementation of the intervention.

  15. A three-talk model for shared decision making: multistage consultation process

    PubMed Central

    Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy

    2017-01-01

    Objectives To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design Multistage consultation process. Setting Key informant group, communities of interest, and survey of clinical specialties. Participants 19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on “team talk,” “option talk,” and “decision talk,” to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. PMID:29109079

  16. Community, intervention and provider support influences on implementation: reflections from a South African illustration of safety, peace and health promotion

    PubMed Central

    2014-01-01

    Background The development, implementation and evaluation of community interventions are important for reducing child violence and injuries in low- to middle-income contexts, with successful implementation critical to effective intervention outcomes. The assessment of implementation processes is required to identify the factors that influence effective implementation. This article draws on a child safety, peace and health initiative to examine key factors that enabled or hindered its implementation, in a context characterised by limited resources. Methods A case study approach was employed. The research team was made up of six researchers and intervention coordinators, who led the development and implementation of the Ukuphepha Child Study in South Africa, and who are also the authors of this article. The study used author observations, reflections and discussions of the factors perceived to influence the implementation of the intervention. The authors engaged in an in-depth and iterative dialogic process aimed at abstracting the experiences of the intervention, with a recursive cycle of reflection and dialogue. Data were analysed utilising inductive content analysis, and categorised using classification frameworks for understanding implementation. Results The study highlights key factors that enabled or hindered implementation. These included the community context and concomitant community engagement processes; intervention compatibility and adaptability issues; community service provider perceptions of intervention relevance and expectations; and the intervention support system, characterised by training and mentorship support. Conclusions This evaluation illustrated the complexity of intervention implementation. The study approach sought to support intervention fidelity by fostering and maintaining community endorsement and support, a prerequisite for the unfolding implementation of the intervention. PMID:25081088

  17. Advances in the physics basis for the European DEMO design

    NASA Astrophysics Data System (ADS)

    Wenninger, R.; Arbeiter, F.; Aubert, J.; Aho-Mantila, L.; Albanese, R.; Ambrosino, R.; Angioni, C.; Artaud, J.-F.; Bernert, M.; Fable, E.; Fasoli, A.; Federici, G.; Garcia, J.; Giruzzi, G.; Jenko, F.; Maget, P.; Mattei, M.; Maviglia, F.; Poli, E.; Ramogida, G.; Reux, C.; Schneider, M.; Sieglin, B.; Villone, F.; Wischmeier, M.; Zohm, H.

    2015-06-01

    In the European fusion roadmap, ITER is followed by a demonstration fusion power reactor (DEMO), for which a conceptual design is under development. This paper reports the first results of a coherent effort to develop the relevant physics knowledge for that (DEMO Physics Basis), carried out by European experts. The program currently includes investigations in the areas of scenario modeling, transport, MHD, heating & current drive, fast particles, plasma wall interaction and disruptions.

  18. Substrate mass transfer: analytical approach for immobilized enzyme reactions

    NASA Astrophysics Data System (ADS)

    Senthamarai, R.; Saibavani, T. N.

    2018-04-01

    In this paper, the boundary value problem in immobilized enzyme reactions is formulated and an approximate expression for the substrate concentration without external mass transfer resistance is presented. He's variational iteration method is used to give approximate analytical solutions of the nonlinear differential equation containing a nonlinear term related to the enzymatic reaction. The relevant analytical solution for the dimensionless substrate concentration profile is discussed in terms of the dimensionless reaction parameters α and β.
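
    For reference, the generic correction functional used in He's variational iteration method has the form below, written for an equation L u + N u = g(x); the specific functional and Lagrange multiplier for the enzyme-kinetics equation of the paper are not reproduced here.

```latex
% Generic correction functional of He's variational iteration method for
% L u + N u = g(x); \lambda(s) is the general Lagrange multiplier identified via
% variational theory and \tilde{u}_n denotes the restricted variation.
u_{n+1}(x) = u_n(x) + \int_{0}^{x} \lambda(s)\,
    \bigl( L\,u_n(s) + N\,\tilde{u}_n(s) - g(s) \bigr)\, ds
```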

  19. Development of a set of community-informed Ebola messages for Sierra Leone

    PubMed Central

    de Bruijne, Kars; Jalloh, Alpha M.; Harris, Muriel; Abdullah, Hussainatu; Boye-Thompson, Titus; Sankoh, Osman; Jalloh, Abdul K.; Jalloh-Vos, Heidi

    2017-01-01

    The West African Ebola epidemic of 2013–2016 was by far the largest outbreak of the disease on record. Sierra Leone suffered nearly half of the 28,646 reported cases. This paper presents a set of culturally contextualized Ebola messages that are based on the findings of qualitative interviews and focus group discussions conducted in 'hotspot' areas of rural Bombali District and urban Freetown in Sierra Leone, between January and March 2015. An iterative approach was taken in the message development process, whereby (i) data from formative research was subjected to thematic analysis to identify areas of community concern about Ebola and the national response; (ii) draft messages to address these concerns were produced; (iii) the messages were field tested; (iv) the messages were refined; and (v) a final set of messages on 14 topics was disseminated to relevant national and international stakeholders. Each message included details of its rationale, audience, dissemination channels, messengers, and associated operational issues that need to be taken into account. While developing the 14 messages, a set of recommendations emerged that could be adopted in future public health emergencies. These included the importance of embedding systematic, iterative qualitative research fully into the message development process; communication of the subsequent messages through a two-way dialogue with communities, using trusted messengers, and not only through a one-way, top-down communication process; provision of good, parallel operational services; and engagement with senior policy makers and managers as well as people in key operational positions to ensure national ownership of the messages, and to maximize the chance of their being utilised. The methodological approach that we used to develop our messages along with our suggested recommendations constitute a set of tools that could be incorporated into international and national public health emergency preparedness and response plans. PMID:28787444

  20. What makes a sustainability tool valuable, practical and useful in real-world healthcare practice? A mixed-methods study on the development of the Long Term Success Tool in Northwest London

    PubMed Central

    Lennox, Laura; Doyle, Cathal; Reed, Julie E

    2017-01-01

    Objectives Although improvement initiatives show benefits to patient care, they often fail to sustain. Models and frameworks exist to address this challenge, but issues with design, clarity and usability have been barriers to use in healthcare settings. This work aimed to collaborate with stakeholders to develop a sustainability tool relevant to people in healthcare settings and practical for use in improvement initiatives. Design Tool development was conducted in six stages. A scoping literature review, group discussions and a stakeholder engagement event explored literature findings and their resonance with stakeholders in healthcare settings. Interviews, small-scale trialling and piloting explored the design and tested the practicality of the tool in improvement initiatives. Setting National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). Participants CLAHRC NWL improvement initiative teams and staff. Results The iterative design process and engagement of stakeholders informed the articulation of the sustainability factors identified from the literature and guided tool design for practical application. Key iterations of factors and tool design are discussed. From the development process, the Long Term Success Tool (LTST) has been designed. The Tool supports those implementing improvements to reflect on 12 sustainability factors to identify risks to increase chances of achieving sustainability over time. The Tool is designed to provide a platform for improvement teams to share their own views on sustainability as well as learn about the different views held within their team to prompt discussion and actions. Conclusion The development of the LTST has reinforced the importance of working with stakeholders to design strategies which respond to their needs and preferences and can practically be implemented in real-world settings. Further research is required to study the use and effectiveness of the tool in practice and assess engagement with the method over time. PMID:28947436

  1. Linear tearing mode stability equations for a low collisionality toroidal plasma

    NASA Astrophysics Data System (ADS)

    Connor, J. W.; Hastie, R. J.; Helander, P.

    2009-01-01

    Tearing mode stability is normally analysed using MHD or two-fluid Braginskii plasma models. However for present, or future, large hot tokamaks like JET or ITER the collisionality is such as to place them in the banana regime. Here we develop a linear stability theory for the resonant layer physics appropriate to such a regime. The outcome is a set of 'fluid' equations whose coefficients encapsulate all neoclassical physics: the neoclassical Ohm's law, enhanced ion inertia, cross-field transport of particles, heat and momentum all play a role. While earlier treatments have also addressed this type of neoclassical physics we differ in incorporating the more physically relevant 'semi-collisional fluid' regime previously considered in cylindrical geometry; semi-collisional effects tend to screen the resonant surface from the perturbed magnetic field, preventing reconnection. Furthermore we also include thermal physics, which may modify the results. While this electron description is of wide relevance and validity, the fluid treatment of the ions requires the ion banana orbit width to be less than the semi-collisional electron layer. This limits the application of the present theory to low magnetic shear—however, this is highly relevant to the sawtooth instability—or to colder ions. The outcome of the calculation is a set of one-dimensional radial differential equations of rather high order. However, various simplifications that reduce the computational task of solving these are discussed. In the collisional regime, when the set reduces to a single second-order differential equation, the theory extends previous work by Hahm et al (1988 Phys. Fluids 31 3709) to include diamagnetic-type effects arising from plasma gradients, both in Ohm's law and the ion inertia term of the vorticity equation. The more relevant semi-collisional regime pertaining to JET or ITER, is described by a pair of second-order differential equations, extending the cylindrical equations of Drake et al (1983 Phys. Fluids 26 2509) to toroidal geometry.

  2. Advances in the steady-state hybrid regime in DIII-D—a fully non-inductive, ELM-suppressed scenario for ITER

    NASA Astrophysics Data System (ADS)

    Petty, C. C.; Nazikian, R.; Park, J. M.; Turco, F.; Chen, Xi; Cui, L.; Evans, T. E.; Ferraro, N. M.; Ferron, J. R.; Garofalo, A. M.; Grierson, B. A.; Holcomb, C. T.; Hyatt, A. W.; Kolemen, E.; La Haye, R. J.; Lasnier, C.; Logan, N.; Luce, T. C.; McKee, G. R.; Orlov, D.; Osborne, T. H.; Pace, D. C.; Paz-Soldan, C.; Petrie, T. W.; Snyder, P. B.; Solomon, W. M.; Taylor, N. Z.; Thome, K. E.; Van Zeeland, M. A.; Zhu, Y.

    2017-11-01

    The hybrid regime with beta, collisionality, safety factor and plasma shape relevant to the ITER steady-state mission has been successfully integrated with ELM suppression by applying an odd parity n = 3 resonant magnetic perturbation (RMP). Fully non-inductive hybrids in the DIII-D tokamak with high beta (⟨β⟩ ⩽ 2.8%) and high confinement (H98y2 ⩽ 1.4) in the ITER similar shape have achieved zero surface loop voltage for up to two current relaxation times using efficient central current drive from ECCD and NBCD. The n = 3 RMP causes surprisingly little increase in thermal transport during ELM suppression. Poloidal magnetic flux pumping in hybrid plasmas maintains q above 1 without loss of current drive efficiency, except that experiments show that extremely peaked ECCD profiles can create sawteeth. During ECCD, Alfvén eigenmode (AE) activity is replaced by a more benign fishbone-like mode, reducing anomalous beam ion diffusion by a factor of 2. While the electron and ion thermal diffusivities substantially increase with higher ECCD power, the loss of confinement can be offset by the decreased fast ion transport resulting from AE suppression. Extrapolations from DIII-D along a dimensionless parameter scaling path as well as those using self-consistent theory-based modeling show that these ELM-suppressed, fully non-inductive hybrids can achieve the Q_fus = 5 ITER steady-state mission.

  3. User-oriented evaluation of a medical image retrieval system for radiologists.

    PubMed

    Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning

    2015-10-01

    This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goal of the tests is to assess the usability of the system and identify system and interface aspects that need improvement as well as useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides an insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner, where each iteration had the participants perform radiology information-seeking tasks, after which the system and the user study design itself were refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users quickly became comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and searching for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists quickly become familiar with the functionalities but have several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented as well as the discussion on the limitations and challenges of such studies can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.

  4. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors such as the multiplicity, the measurement precision and the distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited because only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  5. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    PubMed Central

    Xu, Songhua; Krauthammer, Michael

    2010-01-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating the performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. In this paper, we demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of 0.60. The approach performs better than comparable approaches for text detection. Further, we show that the iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
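
    A toy sketch of projection-histogram text segmentation in the spirit of the approach described above (not the authors' C++ implementation): row and column projections of a binarised image are scanned for empty gaps and the resulting blocks are split recursively, with illustrative thresholds and stopping rule.

```python
# Recursive projection-histogram splitting on a boolean "ink" mask; all parameters
# and the tiny synthetic image are illustrative.
import numpy as np

def split_by_projection(mask, axis):
    """Split a boolean mask along `axis` wherever the projection histogram is empty."""
    proj = mask.sum(axis=1 - axis)              # projection histogram along the other axis
    filled = proj > 0
    blocks, start = [], None
    for i, f in enumerate(filled):
        if f and start is None:
            start = i
        elif not f and start is not None:
            blocks.append((start, i))
            start = None
    if start is not None:
        blocks.append((start, len(filled)))
    return blocks

def detect_regions(mask, depth=0, max_depth=4):
    """Alternate row/column projection splits and return bounding boxes (r0, r1, c0, c1)."""
    axis = depth % 2
    blocks = split_by_projection(mask, axis)
    if len(blocks) <= 1 or depth >= max_depth:
        rows = np.flatnonzero(mask.any(axis=1))
        cols = np.flatnonzero(mask.any(axis=0))
        if rows.size == 0:
            return []
        return [(rows[0], rows[-1] + 1, cols[0], cols[-1] + 1)]
    boxes = []
    for a, b in blocks:
        sub = mask[a:b, :] if axis == 0 else mask[:, a:b]
        for r0, r1, c0, c1 in detect_regions(sub, depth + 1, max_depth):
            boxes.append((a + r0, a + r1, c0, c1) if axis == 0 else (r0, r1, a + c0, a + c1))
    return boxes

# Tiny synthetic "image": two text-like blobs separated by empty rows
img = np.zeros((20, 30), dtype=bool)
img[2:5, 3:25] = True
img[10:14, 5:20] = True
print(detect_regions(img))                      # expect one box around each blob
```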

  6. Closed-loop control of artificial pancreatic Beta -cell in type 1 diabetes mellitus using model predictive iterative learning control.

    PubMed

    Wang, Youqing; Dassau, Eyal; Doyle, Francis J

    2010-02-01

    A novel combination of iterative learning control (ILC) and model predictive control (MPC), referred to here as model predictive iterative learning control (MPILC), is proposed for glycemic control in type 1 diabetes mellitus. MPILC exploits two key factors: frequent glucose readings made possible by continuous glucose monitoring technology; and the repetitive nature of glucose-meal-insulin dynamics with a 24-h cycle. The proposed algorithm can learn from an individual's lifestyle, allowing the control performance to be improved from day to day. After less than 10 days, the blood glucose concentrations can be kept within a range of 90-170 mg/dL. Generally, control performance under MPILC is better than that under MPC. The proposed methodology is robust to random variations in meal timings within +/-60 min or meal amounts within +/-75% of the nominal value, which validates MPILC's superior robustness compared to run-to-run control. Moreover, to further improve the algorithm's robustness, an automatic scheme for setpoint update that ensures safe convergence is proposed. Furthermore, the proposed method does not require user intervention; hence, the algorithm should be of particular interest for glycemic control in children and adolescents.
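
    To make the learning ingredient concrete, the sketch below shows a first-order 'P-type' ILC update in isolation, applied to a generic discrete first-order toy plant rather than a glucose-insulin model; the MPC part of MPILC is omitted and all coefficients are illustrative.

```python
# P-type iterative learning control on a toy plant y[t+1] = a*y[t] + b*u[t]:
# the input profile for the next trial ("day") is corrected by the previous
# trial's tracking error. Plant coefficients, gain and reference are illustrative.
import numpy as np

a, b = 0.5, 0.5                  # toy first-order plant
N = 50                           # trial length
L = 1.0                          # learning gain
ref = np.where(np.arange(N) < 25, 1.0, 0.3)     # reference profile to track

def run_trial(u):
    y = np.zeros(N + 1)
    for t in range(N):
        y[t + 1] = a * y[t] + b * u[t]
    return y[1:]                 # outputs y[1..N]

u = np.zeros(N)
for trial in range(10):
    y = run_trial(u)
    e = ref - y
    # P-type ILC: the error one step ahead corrects the input applied at time t
    u = u + L * e
    print(f"trial {trial}: max tracking error = {np.abs(e).max():.4f}")
```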

  7. Studies on absorption of EC waves in assisted startup experiment on FTU

    NASA Astrophysics Data System (ADS)

    Granucci, G.; Ricci, D.; Farina, D.; Figini, L.; Iraji, D.; Tudisco, O.; Ramponi, G.; Bin, W.

    2012-09-01

    Assistance by EC waves for plasma breakdown and current ramp-up is the proposed scenario for the ITER case, which is characterized by a low toroidal electric field. The experimental results on many tokamaks clearly indicate the capability of the proposed scheme to provide a robust breakdown in ITER. The key aspect of this technique is the EC power required, which is strongly related to the absorption of the wave in the initial stage of plasma formation. This aspect is generally neglected due to the diagnostic difficulties in the plasma formation phase. As a consequence, a multi-pass absorption scheme is usually considered reasonable, leading to strong absorption after many reflections on the walls. The present study exploits the high temporal and spatial resolution of the fast scanning interferometer of FTU together with the measurement of residual power obtained by a sniffer probe. The absorbed EC power is calculated considering also the polarization rotation and the subsequent mode conversion after incidence on the internal wall, and compared with that derived from experimental data. The resulting EC power distribution can explain the differences observed between perpendicular and oblique injection results, motivating future investigations to define the ITER power requirements.

  8. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt2, Au2, TlF, and Bi2Se3 indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
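
    For context, the sketch below shows generic LOBPCG usage with a simple preconditioner via SciPy; the paper's filtering step F is specific to the Dirac-Kohn-Sham operator and is only mimicked here by a crude Jacobi (diagonal) preconditioner on a toy sparse symmetric matrix.

```python
# Generic LOBPCG call with a diagonal preconditioner on a toy sparse symmetric
# operator; this illustrates the algorithmic framework only, not the DKS filter F.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, k = 400, 5
# Toy tridiagonal symmetric operator standing in for a discretised Hamiltonian
A = sp.diags([np.arange(1, n + 1, dtype=float), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csr")

# Jacobi (diagonal) preconditioner as a crude stand-in for the filtering step
M = sp.diags(1.0 / A.diagonal())

X = np.random.default_rng(0).standard_normal((n, k))   # random initial block
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
print(np.sort(vals))                                   # lowest few eigenvalues
```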

  9. Iterative expansion microscopy.

    PubMed

    Chang, Jae-Byum; Chen, Fei; Yoon, Young-Gyu; Jung, Erica E; Babcock, Hazen; Kang, Jeong Seuk; Asano, Shoh; Suk, Ho-Jun; Pak, Nikita; Tillberg, Paul W; Wassie, Asmamaw T; Cai, Dawen; Boyden, Edward S

    2017-06-01

    We recently developed a method called expansion microscopy, in which preserved biological specimens are physically magnified by embedding them in a densely crosslinked polyelectrolyte gel, anchoring key labels or biomolecules to the gel, mechanically homogenizing the specimen, and then swelling the gel-specimen composite by ∼4.5× in linear dimension. Here we describe iterative expansion microscopy (iExM), in which a sample is expanded ∼20×. After preliminary expansion a second swellable polymer mesh is formed in the space newly opened up by the first expansion, and the sample is expanded again. iExM expands biological specimens ∼4.5 × 4.5, or ∼20×, and enables ∼25-nm-resolution imaging of cells and tissues on conventional microscopes. We used iExM to visualize synaptic proteins, as well as the detailed architecture of dendritic spines, in mouse brain circuitry.

  10. Iterative expansion microscopy

    PubMed Central

    Chang, Jae-Byum; Chen, Fei; Yoon, Young-Gyu; Jung, Erica E.; Babcock, Hazen; Kang, Jeong Seuk; Asano, Shoh; Suk, Ho-Jun; Pak, Nikita; Tillberg, Paul W.; Wassie, Asmamaw; Cai, Dawen; Boyden, Edward S.

    2017-01-01

    We recently discovered it was possible to physically magnify preserved biological specimens by embedding them in a densely crosslinked polyelectrolyte gel, anchoring key labels or biomolecules to the gel, mechanically homogenizing the specimen, and then swelling the gel-specimen composite by ~4.5x in linear dimension, a process we call expansion microscopy (ExM). Here we describe iterative expansion microscopy (iExM), in which a sample is expanded, then a second swellable polymer mesh is formed in the space newly opened up by the first expansion, and finally the sample is expanded again. iExM expands biological specimens ~4.5 × 4.5 or ~20x, and enables ~25 nm resolution imaging of cells and tissues on conventional microscopes. We used iExM to visualize synaptic proteins, as well as the detailed architecture of dendritic spines, in mouse brain circuitry. PMID:28417997

  11. Definition of optical systems payloads

    NASA Technical Reports Server (NTRS)

    Downey, J. A., III

    1981-01-01

    The various phases in the formulation of a major NASA project include the inception of the project, planning of the concept, and the project definition. A baseline configuration is established during the planning stage, which serves as a basis for engineering trade studies. Basic technological problems should be recognized early, and a technological verification plan prepared before development of a project begins. A progressive series of iterations is required during the definition phase, illustrating the complex interdependence of existing subsystems. A systems error budget should be established to assess the overall systems performance, identify key performance drivers, and guide performance trades and iterations around these drivers, thus decreasing final systems requirements. Unnecessary interfaces should be avoided, and reasonable design and cost margins maintained. Certain aspects of the definition of the Advanced X-ray Astrophysics Facility are used as an example.

  12. Effects of sparse sampling in combination with iterative reconstruction on quantitative bone microstructure assessment

    NASA Astrophysics Data System (ADS)

    Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas

    2017-03-01

    The trabecular bone microstructure is a key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution to reduce exposure to patients is sampling fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.

  13. The Cadarache negative ion experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massmann, P.; Bottereau, J.M.; Belchenko, Y.

    1995-12-31

    Up to energies of 140 keV, neutral beam injection (NBI) based on positive ions has proven to be a reliable and flexible plasma heating method and has provided major contributions to most of the important experiments on virtually all large tokamaks around the world. As a candidate for additional heating and current drive on next step fusion machines (ITER ao) it is hoped that NBI can be equally successful. The ITER NBI parameters of 1 MeV, 50 MW D° demand primary D− beams with current densities of at least 15 mA/cm^2. Although considerable progress has been made in the area of negative ion production and acceleration, the high demands still require substantial and urgent development. Regarding negative ion production, Cs-seeded plasma sources lead the way. Adding a small amount of Cs to the discharge (Cs seeding) not only increases the negative ion yield by a factor 3-5 but also has the advantage that the discharge can be run at lower pressures. This is beneficial for the reduction of stripping losses in the accelerator. Multi-ampere negative ion production in a large plasma source is studied in the MANTIS experiment. Acceleration and neutralization at ITER relevant parameters is the objective of the 1 MV SINGAP experiment.

  14. The development and preliminary testing of a multimedia patient-provider survivorship communication module for breast cancer survivors.

    PubMed

    Wen, Kuang-Yi; Miller, Suzanne M; Stanton, Annette L; Fleisher, Linda; Morra, Marion E; Jorge, Alexandra; Diefenbach, Michael A; Ropka, Mary E; Marcus, Alfred C

    2012-08-01

    This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors' preparedness for effective communication with their health care providers after active treatment. The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Our study demonstrates survivors' openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. Copyright © 2012. Published by Elsevier Ireland Ltd.

  15. Numerical Study of High Heat Flux Performances of Flat-Tile Divertor Mock-ups with Hypervapotron Cooling Concept

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong

    2015-09-01

    The hypervapotron (HV), an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as for the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Prior to fabrication, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS for the thermal-mechanical calculations; the commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width between tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. The optimized design is expected to supply valuable data for the HL-2M tokamak. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004).

  16. Progress of the ELISE test facility: towards one hour pulses in hydrogen

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Fantz, U.; Heinemann, B.; Kraus, W.; Riedl, R.; Wimmer, C.; the NNBI Team

    2016-10-01

    In order to fulfil the ITER requirements, the negative hydrogen ion source used for NBI has to deliver a high source performance, i.e. a high extracted negative ion current and simultaneously a low co-extracted electron current over a pulse length up to 1 h. Negative ions will be generated by the surface process in a low-temperature low-pressure hydrogen or deuterium plasma. Therefore, a certain amount of caesium has to be deposited on the plasma grid in order to obtain a low surface work function and consequently a high negative ion production yield. This caesium is re-distributed by the influence of the plasma, resulting in temporal instabilities of the extracted negative ion current and the co-extracted electrons over long pulses. This paper describes experiments performed in hydrogen operation at the half-ITER-size NNBI test facility ELISE in order to develop a caesium conditioning technique for more stable long pulses at an ITER relevant filling pressure of 0.3 Pa. A significant improvement of the long pulse stability is achieved. Together with different plasma diagnostics it is demonstrated that this improvement is correlated to the interplay of very small variations of parameters like the electrostatic potential and the particle densities close to the extraction system.

  17. Drifts, currents, and power scrape-off width in SOLPS-ITER modeling of DIII-D

    DOE PAGES

    Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; ...

    2016-12-27

    The effects of drifts and associated flows and currents on the width of the parallel heat flux channel (λq) in the tokamak scrape-off layer (SOL) are analyzed using the SOLPS-ITER 2D fluid transport code. Motivation is supplied by Goldston's heuristic drift (HD) model for λq, which yields the same approximately inverse poloidal magnetic field dependence seen in multi-machine regression. The analysis, focusing on a DIII-D H-mode discharge, reveals HD-like features, including comparable density and temperature fall-off lengths in the SOL, and an up-down ion pressure asymmetry that allows the net cross-separatrix ion magnetic drift flux to exceed the net anomalous ion flux. In experimentally relevant high-recycling cases, scans of both the toroidal and poloidal magnetic field (Btor and Bpol) are conducted, showing minimal λq dependence on either component of the field. Insensitivity to Btor is expected, and suggests that SOLPS-ITER is effectively capturing some aspects of HD physics. The absence of λq dependence on Bpol, however, is inconsistent with both the HD model and experimental results. This inconsistency is attributed to strong variation in the parallel Mach number, which violates one of the premises of the HD model.

  18. Investigation of Helicon discharges as RF coupling concept of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Briefi, S.; Fantz, U.

    2013-02-01

    The ITER reference source for H⁻ and D⁻ requires a high RF input power (up to 90 kW per driver). To reduce the demands on the RF circuit, it is highly desirable to reduce the power consumption while retaining the values of the relevant plasma parameters, namely the positive ion density and the atomic hydrogen density. Helicon plasmas are a promising alternative RF coupling concept, but they are typically generated in long, thin discharge tubes using rare gases and an RF frequency of 13.56 MHz. Hence their applicability to the ITER reference source geometry and frequency, and to the use of hydrogen/deuterium, has to be demonstrated. In this paper the strategy for using helicon discharges at ITER reference source parameters is introduced, and the first promising measurements carried out at a small laboratory experiment are presented. With increasing RF power, a mode transition to the helicon regime was observed for argon and argon/hydrogen mixtures. In pure hydrogen/deuterium the mode transition could not yet be achieved, as the available RF power is too low. In deuterium a special feature of helicon discharges, the so-called low-field peak, could be observed at a moderate B-field of 3 mT.

  19. The development and preliminary testing of a multimedia patient–provider survivorship communication module for breast cancer survivors

    PubMed Central

    Wen, Kuang-Yi; Miller, Suzanne M.; Stanton, Annette L.; Fleisher, Linda; Morra, Marion E.; Jorge, Alexandra; Diefenbach, Michael A.; Ropka, Mary E.; Marcus, Alfred C.

    2012-01-01

    Objective This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors’ preparedness for effective communication with their health care providers after active treatment. Methods The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Results Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. Conclusion The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Practice implications Our study demonstrates survivors’ openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. PMID:22770812

  20. Barriers to Retrieving Patient Information from Electronic Health Record Data: Failure Analysis from the TREC Medical Records Track

    PubMed Central

    Edinger, Tracy; Cohen, Aaron M.; Bedrick, Steven; Ambert, Kyle; Hersh, William

    2012-01-01

    Objective: Secondary use of electronic health record (EHR) data relies on the ability to retrieve accurate and complete information about desired patient populations. The Text Retrieval Conference (TREC) 2011 Medical Records Track was a challenge evaluation allowing comparison of systems and algorithms to retrieve patients eligible for clinical studies from a corpus of de-identified medical records, grouped by patient visit. Participants retrieved cohorts of patients relevant to 35 different clinical topics, and visits were judged for relevance to each topic. This study identified the most common barriers to identifying specific clinic populations in the test collection. Methods: Using the runs from track participants and judged visits, we analyzed the five non-relevant visits most often retrieved and the five relevant visits most often overlooked. Categories were developed iteratively to group the reasons for incorrect retrieval for each of the 35 topics. Results: Reasons fell into nine categories for non-relevant visits and five categories for relevant visits. Non-relevant visits were most often retrieved because they contained a non-relevant reference to the topic terms. Relevant visits were most often overlooked because they used a synonym for a topic term. Conclusions: This failure analysis provides insight into areas for future improvement in EHR-based retrieval with techniques such as more widespread and complete use of standardized terminology in retrieval and data entry systems. PMID:23304287

  1. Barriers to retrieving patient information from electronic health record data: failure analysis from the TREC Medical Records Track.

    PubMed

    Edinger, Tracy; Cohen, Aaron M; Bedrick, Steven; Ambert, Kyle; Hersh, William

    2012-01-01

    Secondary use of electronic health record (EHR) data relies on the ability to retrieve accurate and complete information about desired patient populations. The Text Retrieval Conference (TREC) 2011 Medical Records Track was a challenge evaluation allowing comparison of systems and algorithms to retrieve patients eligible for clinical studies from a corpus of de-identified medical records, grouped by patient visit. Participants retrieved cohorts of patients relevant to 35 different clinical topics, and visits were judged for relevance to each topic. This study identified the most common barriers to identifying specific clinic populations in the test collection. Using the runs from track participants and judged visits, we analyzed the five non-relevant visits most often retrieved and the five relevant visits most often overlooked. Categories were developed iteratively to group the reasons for incorrect retrieval for each of the 35 topics. Reasons fell into nine categories for non-relevant visits and five categories for relevant visits. Non-relevant visits were most often retrieved because they contained a non-relevant reference to the topic terms. Relevant visits were most often overlooked because they used a synonym for a topic term. This failure analysis provides insight into areas for future improvement in EHR-based retrieval with techniques such as more widespread and complete use of standardized terminology in retrieval and data entry systems.

  2. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include the perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in the blood pool, the former followed by MTC, and our newly proposed methods, which apply the MTC method iteratively, with the mean values in all regions estimated and updated from the MTC-corrected images at each step of the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods at both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods outperform the other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
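
    A minimal sketch of the iterative, region-based correction idea (re-estimating regional means from the corrected image at each pass) is shown below; the 1D phantom, Gaussian point-spread function and three-region layout are illustrative assumptions rather than the paper's SPECT system model.

```python
# Hedged toy sketch of a region-based iterative partial volume correction: the
# piecewise-constant model built from the current regional means is forward-blurred,
# the modelled partial-volume error is added back, and the means are re-estimated.
import numpy as np
from scipy.ndimage import gaussian_filter1d

FWHM = 8.0
SIGMA = FWHM / 2.355

# 1D phantom: background / "myocardium" / "blood pool" regions with true means.
labels = np.zeros(200, dtype=int)
labels[60:100] = 1
labels[100:140] = 2
true_means = np.array([1.0, 4.0, 2.0])
truth = true_means[labels]
measured = gaussian_filter1d(truth, SIGMA)       # partial-volume-degraded "image"

means = np.array([measured[labels == r].mean() for r in range(3)])   # initial estimates
for _ in range(10):                              # iterative correction loop
    model = gaussian_filter1d(means[labels], SIGMA)    # blur the piecewise-constant model
    corrected = measured + (means[labels] - model)     # add back the modelled PV error
    means = np.array([corrected[labels == r].mean() for r in range(3)])  # update means

print("true means:     ", true_means)
print("recovered means:", np.round(means, 2))
```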

  3. A user-centered model for designing consumer mobile health (mHealth) applications (apps).

    PubMed

    Schnall, Rebecca; Rojas, Marlene; Bakken, Suzanne; Brown, William; Carballo-Dieguez, Alex; Carry, Monique; Gelaude, Deborah; Mosley, Jocelyn Patterson; Travers, Jasmine

    2016-04-01

    Mobile technologies are a useful platform for the delivery of health behavior interventions. Yet little work has been done to create a rigorous and standardized process for the design of mobile health (mHealth) apps. This project sought to explore the use of the Information Systems Research (ISR) framework as a guide for the design of mHealth apps. Our work was guided by the ISR framework, which comprises three cycles: Relevance, Rigor and Design. In the Relevance cycle, we conducted 5 focus groups with 33 targeted end-users. In the Rigor cycle, we performed a review to identify technology-based interventions for meeting the health prevention needs of our target population. In the Design Cycle, we employed usability evaluation methods to iteratively develop and refine mock-ups for an mHealth app. Through an iterative process, we identified barriers and facilitators to the use of mHealth technology for HIV prevention for high-risk MSM, developed 'use cases' and identified relevant functional content and features for inclusion in a design document to guide future app development. Findings from our work support the use of the ISR framework as a guide for designing future mHealth apps. Results from this work provide detailed descriptions of the user-centered design and system development and have heuristic value for those venturing into the area of technology-based intervention work. Findings from this study support the use of the ISR framework as a guide for future mHealth app development. Use of the ISR framework is a potentially useful approach for the design of a mobile app that incorporates end-users' design preferences. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Air Force Maui Optical and Supercomputing Site (AMOS) Application Briefs 2004

    DTIC Science & Technology

    2004-01-01

    ...SOR methods, the work in parallel matrix iterative methods is very relevant to the parallelization of a CA. The Moore neighborhood used in this work...

  5. Quantum chemical study of relative reactivities of a series of amines and nitriles - Relevance to prebiotic chemistry

    NASA Technical Reports Server (NTRS)

    Loew, G. H.; Berkowitz, D.; Chang, S.

    1975-01-01

    Using the Iterative Extended Hückel Theory (IEHT), calculations of the electron distribution and orbital energies of a series of thirteen amines, nitriles and amino-nitriles relevant to prebiotic chemistry and cosmochemistry have been carried out. Ground state properties such as the energy and nature of the highest occupied (HOMO) and lowest empty (LEMO) molecular orbitals, net atomic charges and number of nonbonding electrons have been identified as criteria for correlating the relative nucleophilicity of amine and nitrile nitrogens and the electrophilicity of nitrile and other unsaturated carbon atoms. The results of such correlations can be partially verified against the known chemical behavior of these compounds and are used to predict and understand their role in prebiotic organic synthesis.

  6. Development of steady-state scenarios compatible with ITER-like wall conditions

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Arnoux, G.; Beurskens, M.; Brezinsek, S.; Challis, C. D.; Crisanti, F.; DeVries, P. C.; Giroud, C.; Pitts, R. A.; Rimini, F. G.; Andrew, Y.; Ariola, M.; Baranov, Yu F.; Brix, M.; Buratti, P.; Cesario, R.; Corre, Y.; DeLa Luna, E.; Fundamenski, W.; Giovannozzi, E.; Gryaznevich, M. P.; Hawkes, N. C.; Hobirk, J.; Huber, A.; Jachmich, S.; Joffrin, E.; Koslowski, H. R.; Liang, Y.; Loarer, Th; Lomas, P.; Luce, T.; Mailloux, J.; Matthews, G. F.; Mazon, D.; McCormick, K.; Moreau, D.; Pericoli, V.; Philipps, V.; Rachlew, E.; Reyes-Cortes, S. D. A.; Saibene, G.; Sharapov, S. E.; Voitsekovitch, I.; Zabeo, L.; Zimmermann, O.; Zastrow, K. D.; JET-EFDA Contributors, the

    2007-12-01

    A key issue for steady-state tokamak operation is to determine the edge conditions that are compatible both with good core confinement and with the power handling and plasma exhaust capabilities of the plasma facing components (PFCs) and divertor systems. A quantitative response to this open question will provide a robust scientific basis for reliable extrapolation of present regimes to an ITER compatible steady-state scenario. In this context, the JET programme addressing steady-state operation is focused on the development of non-inductive, high confinement plasmas with the constraints imposed by the PFCs. A new beryllium main chamber wall and tungsten divertor together with an upgrade of the heating/fuelling capability are currently in preparation at JET. Operation at higher power with this ITER-like wall will impose new constraints on non-inductive scenarios. Recent experiments have focused on the preparation for this new phase of JET operation. In this paper, progress in the development of advanced tokamak (AT) scenarios at JET is reviewed keeping this long-term objective in mind. The approach has consisted of addressing various critical issues separately during the 2006-2007 campaigns with a view to full scenario integration when the JET upgrades are complete. Regimes with internal transport barriers (ITBs) have been developed at q95 ~ 5 and high triangularity, δ (relevant to the ITER steady-state demonstration) by applying more than 30 MW of additional heating power reaching βN ~ 2 at Bo ~ 3.1 T. Operating at higher δ has allowed the edge pedestal and core densities to be increased pushing the ion temperature closer to that of the electrons. Although not yet fully integrated into a performance enhancing ITB scenario, Neon seeding has been successfully explored to increase the radiated power fraction (up to 60%), providing significant reduction of target tile power fluxes (and hence temperatures) and mitigation of edge localized mode (ELM) activity. At reduced toroidal magnetic field strength, high βN regimes have been achieved and q-profile optimization investigated for use in steady-state scenarios. Values of βN above the 'no-wall magnetohydrodynamic limit' (βN ~ 3.0) have been sustained for a resistive current diffusion time in high-δ configurations (at 1.2 MA/1.8 T). In this scenario, ELM activity has been mitigated by applying magnetic perturbations using error field correction coils to provide ergodization of the magnetic field at the plasma edge. In a highly shaped, quasi-double null X-point configuration, ITBs have been generated on the ion heat transport channel and combined with 'grassy' ELMs with ~30 MW of applied heating power (at 1.2 MA/2.7 T, q95 ~ 7). Advanced algorithms and system identification procedures have been developed with a view to developing simultaneously temperature and q-profile control in real-time. These techniques have so far been applied to the control of the q-profile evolution in JET AT scenarios.

  7. The Green House Model of Nursing Home Care in Design and Implementation.

    PubMed

    Cohen, Lauren W; Zimmerman, Sheryl; Reed, David; Brown, Patrick; Bowers, Barbara J; Nolet, Kimberly; Hudak, Sandra; Horn, Susan

    2016-02-01

    To describe the Green House (GH) model of nursing home (NH) care, and examine how GH homes vary from the model, one another, and their founding (or legacy) NH. Data include primary quantitative and qualitative data and secondary quantitative data, derived from 12 GH/legacy NH organizations between February 2012 and September 2014. This mixed methods, cross-sectional study used structured interviews to obtain information about presence of, and variation in, GH-relevant structures and processes of care. Qualitative questions explored reasons for variation in model implementation. Interview data were analyzed using related-sample tests, and qualitative data were iteratively analyzed using a directed content approach. GH homes showed substantial variation in practices to support resident choice and decision making; neither GH nor legacy homes provided complete choice, and all GH homes excluded residents from some key decisions. GH homes were most consistent with the model and one another in elements to create a real home, such as private rooms and baths and open kitchens, and in staff-related elements, such as self-managed work teams and consistent, universal workers. Although variation in model implementation complicates evaluation, if expansion is to continue, it is essential to examine GH elements and their outcomes. © Health Research and Educational Trust.

  8. Using design science and artificial intelligence to improve health communication: ChronologyMD case example.

    PubMed

    Neuhauser, Linda; Kreps, Gary L; Morrison, Kathleen; Athanasoulis, Marcos; Kirienko, Nikolai; Van Brunt, Deryk

    2013-08-01

    This paper describes how design science theory and methods and use of artificial intelligence (AI) components can improve the effectiveness of health communication. We identified key weaknesses of traditional health communication and features of more successful eHealth/AI communication. We examined characteristics of the design science paradigm and the value of its user-centered methods to develop eHealth/AI communication. We analyzed a case example of the participatory design of AI components in the ChronologyMD project intended to improve management of Crohn's disease. eHealth/AI communication created with user-centered design shows improved relevance to users' needs for personalized, timely and interactive communication and is associated with better health outcomes than traditional approaches. Participatory design was essential to develop ChronologyMD system architecture and software applications that benefitted patients. AI components can greatly improve eHealth/AI communication, if designed with the intended audiences. Design science theory and its iterative, participatory methods linked with traditional health communication theory and methods can create effective AI health communication. eHealth/AI communication researchers, developers and practitioners can benefit from a holistic approach that draws from theory and methods in both design sciences and also human and social sciences to create successful AI health communication. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Using stakeholder engagement to develop a patient-centered pediatric asthma intervention.

    PubMed

    Shelef, Deborah Q; Rand, Cynthia; Streisand, Randi; Horn, Ivor B; Yadav, Kabir; Stewart, Lisa; Fousheé, Naja; Waters, Damian; Teach, Stephen J

    2016-12-01

    Stakeholder engagement has the potential to develop research interventions that are responsive to patient and provider preferences. This approach contrasts with traditional models of clinical research in which researchers determine the study's design. This article describes the effect of stakeholder engagement on the design of a randomized trial of an intervention designed to improve child asthma outcomes by reducing parental stress. The study team developed and implemented a stakeholder engagement process that provided iterative feedback regarding the study design, patient-centered outcomes, and intervention. Stakeholder engagement incorporated the perspectives of parents of children with asthma; local providers of community-based medical, legal, and social services; and national experts in asthma research methodology and implementation. Through a year-long process of multidimensional stakeholder engagement, the research team successfully refined and implemented a patient-centered study protocol. Key stakeholder contributions included selection of patient-centered outcome measures, refinement of intervention content and format, and language framing the study in a culturally appropriate manner. Stakeholder engagement was a useful framework for developing an intervention that was acceptable and relevant to our target population. This approach might have unique benefits in underserved populations, leading to sustainable improvement in health outcomes and reduced disparities. Copyright © 2016 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  10. If you come from a well-known organisation, I will trust you: Exploring and understanding the community's attitudes towards healthcare research in Cambodia.

    PubMed

    Pol, Sreymom; Fox-Lewis, Shivani; Neou, Leakhena; Parker, Michael; Kingori, Patricia; Turner, Claudia

    2018-01-01

    To explore Cambodian community members' understanding of and attitudes towards healthcare research. This qualitative study generated data from semi-structured interviews and focus group discussions. This study was conducted at a non-governmental paediatric hospital and in nearby villages in Siem Reap province, Cambodia. A total of ten semi-structured interviews and four focus group discussions were conducted, involving 27 participants. Iterative data collection and analysis were performed concurrently. Data were analysed by thematic content analysis and the coding structure was developed using relevant literature. Participants did not have a clear understanding of what activities related to research compared with those for routine healthcare. Key attitudes towards research were responsibility and trust: personal (trust of the researcher directly) and institutional (trust of the institution as a whole). Villagers believe the village headman holds responsibility for community activities, while the village headman believes that this responsibility should be shared across all levels of the government system. It is essential for researchers to understand the structure and relationship within the community they wish to work with in order to develop trust among community participants. This aids effective communication and understanding among all parties, enabling high quality ethical research to be conducted.

  11. Convergent evolution as natural experiment: the tape of life reconsidered

    PubMed Central

    Powell, Russell; Mariscal, Carlos

    2015-01-01

    Stephen Jay Gould argued that replaying the ‘tape of life’ would result in radically different evolutionary outcomes. Recently, biologists and philosophers of science have paid increasing attention to the theoretical importance of convergent evolution—the independent origination of similar biological forms and functions—which many interpret as evidence against Gould's thesis. In this paper, we examine the evidentiary relevance of convergent evolution for the radical contingency debate. We show that under the right conditions, episodes of convergent evolution can constitute valid natural experiments that support inferences regarding the deep counterfactual stability of macroevolutionary outcomes. However, we argue that proponents of convergence have problematically lumped causally heterogeneous phenomena into a single evidentiary basket, in effect treating all convergent events as if they are of equivalent theoretical import. As a result, the ‘critique from convergent evolution’ fails to engage with key claims of the radical contingency thesis. To remedy this, we develop ways to break down the heterogeneous set of convergent events based on the nature of the generalizations they support. Adopting this more nuanced approach to convergent evolution allows us to differentiate iterated evolutionary outcomes that are probably common among alternative evolutionary histories and subject to law-like generalizations, from those that do little to undermine and may even support, the Gouldian view of life. PMID:26640647

  12. Convergent evolution as natural experiment: the tape of life reconsidered.

    PubMed

    Powell, Russell; Mariscal, Carlos

    2015-12-06

    Stephen Jay Gould argued that replaying the 'tape of life' would result in radically different evolutionary outcomes. Recently, biologists and philosophers of science have paid increasing attention to the theoretical importance of convergent evolution-the independent origination of similar biological forms and functions-which many interpret as evidence against Gould's thesis. In this paper, we examine the evidentiary relevance of convergent evolution for the radical contingency debate. We show that under the right conditions, episodes of convergent evolution can constitute valid natural experiments that support inferences regarding the deep counterfactual stability of macroevolutionary outcomes. However, we argue that proponents of convergence have problematically lumped causally heterogeneous phenomena into a single evidentiary basket, in effect treating all convergent events as if they are of equivalent theoretical import. As a result, the 'critique from convergent evolution' fails to engage with key claims of the radical contingency thesis. To remedy this, we develop ways to break down the heterogeneous set of convergent events based on the nature of the generalizations they support. Adopting this more nuanced approach to convergent evolution allows us to differentiate iterated evolutionary outcomes that are probably common among alternative evolutionary histories and subject to law-like generalizations, from those that do little to undermine and may even support, the Gouldian view of life.

  13. Use (and abuse) of expert elicitation in support of decision making for public policy

    PubMed Central

    Morgan, M. Granger

    2014-01-01

    The elicitation of scientific and technical judgments from experts, in the form of subjective probability distributions, can be a valuable addition to other forms of evidence in support of public policy decision making. This paper explores when it is sensible to perform such elicitation and how that can best be done. A number of key issues are discussed, including topics on which there are, and are not, experts who have knowledge that provides a basis for making informed predictive judgments; the inadequacy of only using qualitative uncertainty language; the role of cognitive heuristics and of overconfidence; the choice of experts; the development, refinement, and iterative testing of elicitation protocols that are designed to help experts to consider systematically all relevant knowledge when they make their judgments; the treatment of uncertainty about model functional form; diversity of expert opinion; and when it does or does not make sense to combine judgments from different experts. Although it may be tempting to view expert elicitation as a low-cost, low-effort alternative to conducting serious research and analysis, it is neither. Rather, expert elicitation should build on and use the best available research and analysis and be undertaken only when, given those, the state of knowledge will remain insufficient to support timely informed assessment and decision making. PMID:24821779

  14. To adopt is to adapt: the process of implementing the ICF with an acute stroke multidisciplinary team in England

    PubMed Central

    Tempest, Stephanie; Harries, Priscilla; Kilbride, Cherry; De Souza, Lorraine

    2012-01-01

    Purpose: The success of the International Classification of Functioning, Disability and Health (ICF) depends on its uptake in clinical practice. This project aimed to explore ways the ICF could be used with an acute stroke multidisciplinary team and identify key learning from the implementation process. Method: Using an action research approach, iterative cycles of observe, plan, act and evaluate were used within three phases: exploratory; innovatory and reflective. Thematic analysis was undertaken, using a model of immersion and crystallisation, on data collected via interview and focus groups, e-mail communications, minutes from relevant meetings, field notes and a reflective diary. Results: Two overall themes were determined from the data analysis which enabled implementation. There is a need to: (1) adopt the ICF in ways that meet local service needs; and (2) adapt the ICF language and format. Conclusions: The empirical findings demonstrate how to make the ICF classification a clinical reality. First, we need to adopt the ICF as a vehicle to implement local service priorities e.g. to structure a multidisciplinary team report, thus enabling ownership of the implementation process. Second, we need to adapt the ICF terminology and format to make it acceptable for use by clinicians. PMID:22372376

  15. Quality Management and Key Performance Indicators in Oncologic Esophageal Surgery.

    PubMed

    Gockel, Ines; Ahlbrand, Constantin Johannes; Arras, Michael; Schreiber, Elke Maria; Lang, Hauke

    2015-12-01

    Ranking systems and comparisons of quality and performance indicators will be of increasing relevance for complex "high-risk" procedures such as esophageal cancer surgery. The identification of evidence-based standards relevant for key performance indicators in esophageal surgery is essential for establishing monitoring systems and furthermore a requirement to enhance treatment quality. In the course of this review, we analyze the key performance indicators case volume, radicality of resection, and postoperative morbidity and mortality, leading to continuous quality improvement. Ranking systems established on this basis will gain increased relevance in highly complex procedures within the national and international comparison and furthermore improve the treatment of patients with esophageal carcinoma.

  16. Increasing the scale and adoption of population health interventions: experiences and perspectives of policy makers, practitioners, and researchers

    PubMed Central

    2014-01-01

    Background Deciding to scale up population health interventions from small projects to wider state or national implementation is fundamental to maximising population-wide health improvements. The objectives of this study were to examine: i) how decisions to scale up interventions are currently made in practice; ii) the role that evidence plays in informing decisions to scale up interventions; and iii) the role policy makers, practitioners, and researchers play in this process. Methods Interviews with an expert panel of senior Australian and international public health policy-makers (n = 7), practitioners (n = 7), and researchers (n = 7) were conducted in May 2013 with a participation rate of 84%. Results Scaling up decisions were generally made through iterative processes and led by policy makers and/or practitioners, but ultimately approved by political leaders and/or senior executives of funding agencies. Research evidence formed a component of the overall set of information used in decision-making, but its contribution was limited by the paucity of relevant intervention effectiveness research, and data on costs and cost effectiveness. Policy makers, practitioners/service managers, and researchers had different, but complementary roles to play in the process of scaling up interventions. Conclusions This analysis articulates the processes of how decisions to scale up interventions are made, the roles of evidence, and contribution of different professional groups. More intervention research that includes data on the effectiveness, reach, and costs of operating at scale and key service delivery issues (including acceptability and fit of interventions and delivery models) should be sought as this has the potential to substantially advance the relevance and ultimately usability of research evidence for scaling up population health action. PMID:24735455

  17. Interpretation and use of evidence in state policymaking: a qualitative analysis

    PubMed Central

    Apollonio, Dorie E; Bero, Lisa A

    2017-01-01

    Introduction Researchers advocating for evidence-informed policy have attempted to encourage policymakers to develop a greater understanding of research and researchers to develop a better understanding of the policymaking process. Our aim was to apply findings drawn from studies of the policymaking process, specifically the theory of policy windows, to identify strategies used to integrate evidence into policymaking and points in the policymaking process where evidence was more or less relevant. Methods Our observational study relied on interviews conducted with 24 policymakers from the USA who had been trained to interpret scientific research in multiple iterations of an evidence-based workshop. Participants were asked to describe cases where they had been involved in making health policy and to provide examples in which research was used, either successfully or unsuccessfully. Interviews were transcribed, independently coded by multiple members of the study team and analysed for content using key words, concepts identified by participants and concepts arising from review of the texts. Results Our results suggest that policymakers who focused on health issues used multiple strategies to encourage evidence-informed policymaking. The respondents used a strict definition of what constituted evidence, and relied on their experience with research to discourage the use of less rigorous research. Their experience suggested that evidence was less useful in identifying problems, encouraging political action or ensuring feasibility and more useful in developing policy alternatives. Conclusions Past research has suggested multiple strategies to increase the use of evidence in policymaking, including the development of rapid-response research and policy-oriented summaries of data. Our findings suggest that these strategies may be most relevant to the policymaking stream, which develops policy alternatives. In addition, we identify several strategies that policymakers and researchers can apply to encourage evidence-informed policymaking. PMID:28219958

  18. Designing hydrologic monitoring networks to maximize predictability of hydrologic conditions in a data assimilation system: a case study from South Florida, U.S.A

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Pathak, C. S.; Senarath, S. U.; Bras, R. L.

    2009-12-01

    Robust hydrologic monitoring networks represent a critical element of decision support systems for effective water resource planning and management. Moreover, process representation within hydrologic simulation models is steadily improving, while at the same time computational costs are decreasing due to, for instance, readily available high performance computing resources. The ability to leverage these increasingly complex models together with the data from these monitoring networks to provide accurate and timely estimates of relevant hydrologic variables within a multiple-use, managed water resources system would substantially enhance the information available to resource decision makers. Numerical data assimilation techniques provide mathematical frameworks through which uncertain model predictions can be constrained to observational data to compensate for uncertainties in the model forcings and parameters. In ensemble-based data assimilation techniques such as the ensemble Kalman Filter (EnKF), information in observed variables such as canal, marsh and groundwater stages are propagated back to the model states in a manner related to: (1) the degree of certainty in the model state estimates and observations, and (2) the cross-correlation between the model states and the observable outputs of the model. However, the ultimate degree to which hydrologic conditions can be accurately predicted in an area of interest is controlled, in part, by the configuration of the monitoring network itself. In this proof-of-concept study we developed an approach by which the design of an existing hydrologic monitoring network is adapted to iteratively improve the predictions of hydrologic conditions within an area of the South Florida Water Management District (SFWMD). The objective of the network design is to minimize prediction errors of key hydrologic states and fluxes produced by the spatially distributed Regional Simulation Model (RSM), developed specifically to simulate the hydrologic conditions in several intensively managed and hydrologically complex watersheds within the SFWMD system. In a series of synthetic experiments RSM is used to generate the notionally true hydrologic state and the relevant observational data. The EnKF is then used as the mechanism to fuse RSM hydrologic estimates with data from the candidate network. The performance of the candidate network is measured by the prediction errors of the EnKF estimates of hydrologic states, relative to the notionally true scenario. The candidate network is then adapted by relocating existing observational sites to unobserved areas where predictions of local hydrologic conditions are most uncertain and the EnKF procedure repeated. Iteration of the monitoring network continues until further improvements in EnKF-based predictions of hydrologic conditions are negligible.
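
    The core of such a network-design loop is the EnKF analysis step, in which observations at candidate gauge locations update an ensemble of model states. A minimal sketch is given below; the toy state vector, observation operator and error levels are assumptions, not RSM or SFWMD quantities.

```python
# Hedged minimal sketch of one EnKF analysis step for a synthetic monitoring network.
import numpy as np

rng = np.random.default_rng(7)
n_state, n_obs, n_ens = 50, 5, 40

x_true = np.sin(np.linspace(0, 3 * np.pi, n_state))          # notionally "true" state
ensemble = x_true[:, None] + 0.5 * rng.standard_normal((n_state, n_ens))  # prior ensemble

obs_idx = np.array([5, 15, 25, 35, 45])                      # candidate monitoring sites
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), obs_idx] = 1.0                           # observation operator
R = 0.05 ** 2 * np.eye(n_obs)                                # observation-error covariance
y = H @ x_true + 0.05 * rng.standard_normal(n_obs)           # synthetic observations

# EnKF analysis: Kalman gain from ensemble covariances, perturbed observations.
X = ensemble - ensemble.mean(axis=1, keepdims=True)
P = X @ X.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
y_pert = y[:, None] + 0.05 * rng.standard_normal((n_obs, n_ens))
analysis = ensemble + K @ (y_pert - H @ ensemble)

rmse_prior = np.sqrt(np.mean((ensemble.mean(axis=1) - x_true) ** 2))
rmse_post = np.sqrt(np.mean((analysis.mean(axis=1) - x_true) ** 2))
print(f"prior RMSE {rmse_prior:.3f} -> analysis RMSE {rmse_post:.3f}")
# In the network-design loop, sites would then be relocated to where the analysis
# error (or ensemble spread) remains largest, and the experiment repeated.
```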

  19. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
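
    The co-iteration idea can be illustrated with a plain fixed-point exchange between two toy sub-models, repeated until the coupled values stop changing before the time step is advanced; the sketch below is a generic illustration and deliberately does not use the HELICS API.

```python
# Hedged generic sketch of co-iteration between two coupled federates (toy models).
def power_federate(voltage):
    """Toy power-flow model: load current drawn at a given voltage."""
    return 10.0 / max(voltage, 0.1)

def end_use_federate(current):
    """Toy end-use/feeder model: voltage resulting from a given current draw."""
    return 1.05 - 0.01 * current

def co_iterate(tol=1e-9, max_iters=100):
    voltage = 1.0
    for k in range(max_iters):
        current = power_federate(voltage)
        new_voltage = end_use_federate(current)
        if abs(new_voltage - voltage) < tol:        # converged: advance the time step
            return new_voltage, current, k + 1
        voltage = new_voltage                       # otherwise exchange values again
    raise RuntimeError("co-iteration did not converge")

v, i, iters = co_iterate()
print(f"converged after {iters} co-iterations: V={v:.4f}, I={i:.4f}")
```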

  20. Powerful Design Principles and Processes: Lessons from a Case of Ambitious Civics Education Curriculum Planning. A Response to "Reinventing the High School Government Course: Rigor, Simulations, and Learning from Text"

    ERIC Educational Resources Information Center

    Dinkelman, Todd

    2016-01-01

    In "Reinventing the High School Government Course," the authors presented the latest iteration of an ambitious AP government course developed over a seven-year design-based implementation research project. Chiefly addressed to curriculum developers and civics teachers, the article elaborates key design principles, provides a description…

  1. AutoMap User’s Guide

    DTIC Science & Technology

    2006-10-01

    Excerpt from the guide's contents: Hierarchy of Pre-Processing Techniques; NLP (Natural Language Processing) Utilities; Named-Entity Recognition (with example); Symbol Removal; N-Gram Identification: Bi-Grams; Stemming (with example); Delete List (opening a Delete List). A further fragment notes that the pre-processing workflow is iterative and involves several key processes: Named-Entity Recognition, an AutoMap feature that allows you to...

  2. e-Learning Application for Machine Maintenance Process using Iterative Method in XYZ Company

    NASA Astrophysics Data System (ADS)

    Nurunisa, Suaidah; Kurniawati, Amelia; Pramuditya Soesanto, Rayinda; Yunan Kurnia Septo Hediyanto, Umar

    2016-02-01

    XYZ Company is a manufacturing company producing parts for airplanes; one of the machines categorized as a key facility in the company is the Millac 5H6P. As a key facility, the machine must be kept working well and in peak condition, so periodic maintenance is required. Data gathering revealed a lack of competency among maintenance staff in maintaining machine types not assigned to them by the supervisor, indicating that knowledge is unevenly distributed among the staff. The purpose of this research is to create a knowledge-based e-learning application as a realization of the externalization step in the knowledge transfer process for machine maintenance. The application features are tailored to maintenance using an e-learning framework for the maintenance process, and the content supports multimedia for learning purposes. QFD is used in this research to understand user needs. The application is built using Moodle, with an iterative method for the software development cycle and UML diagrams. The result of this research is an e-learning application serving as a knowledge-sharing medium for the company's maintenance staff. Testing showed that the application makes it easier for maintenance staff to understand the required competencies.

  3. Some Remarks on GMRES for Transport Theory

    NASA Technical Reports Server (NTRS)

    Patton, Bruce W.; Holloway, James Paul

    2003-01-01

    We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right-preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. We finally show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
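
    A minimal sketch of the flux-moment formulation is shown below: the fixed-point (source-iteration-like) problem phi = K*phi + q is handed to GMRES as the linear system (I - K)*phi = q, with K supplied only through matrix-vector products. The toy operator K stands in for the sweep-plus-scatter operator and is an assumption, not the authors' discretization.

```python
# Hedged sketch: GMRES as an accelerator for a generic fixed-point (source-iteration)
# solve. K is a toy stand-in for "apply one transport sweep to the scattering source
# built from the current flux moments".
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 200
rng = np.random.default_rng(0)

M = rng.standard_normal((n, n))
K = 0.4 * M / np.linalg.norm(M, 2)          # scaled so the plain fixed-point iteration converges
q = rng.standard_normal(n)                   # external source

def apply_ImK(x):
    """Action of (I - K); GMRES only needs matrix-vector products."""
    return x - K @ x

A = LinearOperator((n, n), matvec=apply_ImK)

# Plain fixed-point (source) iteration for comparison.
phi_fp = np.zeros(n)
for _ in range(50):
    phi_fp = K @ phi_fp + q

# GMRES applied to the equivalent linear system (I - K) phi = q.
phi_gmres, info = gmres(A, q)
print("GMRES exit flag:", info)
print("difference between the two solutions:", np.linalg.norm(phi_fp - phi_gmres))
```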

  4. Enhancement of runaway production by resonant magnetic perturbation on J-TEXT

    NASA Astrophysics Data System (ADS)

    Chen, Z. Y.; Huang, D. W.; Izzo, V. A.; Tong, R. H.; Jiang, Z. H.; Hu, Q. M.; Wei, Y. N.; Yan, W.; Rao, B.; Wang, S. Y.; Ma, T. K.; Li, S. C.; Yang, Z. J.; Ding, D. H.; Wang, Z. J.; Zhang, M.; Zhuang, G.; Pan, Y.; J-TEXT Team

    2016-07-01

    The suppression of runaways following disruptions is key for the safe operation of ITER. Massive gas injection (MGI) has been developed to mitigate heat loads, electromagnetic forces and runaway electrons (REs) during disruptions. However, MGI may not completely prevent the generation of REs during disruptions on ITER. Resonant magnetic perturbation (RMP) has been applied to suppress runaway generation during disruptions on several machines. On J-TEXT it was found that strong RMP results in an enhancement of runaway production instead of runaway suppression. The runaway current was about 50% of the pre-disruption plasma current in argon-induced reference disruptions. With moderate RMP, the runaway current decreased to below 30% of the pre-disruption plasma current, whereas the runaway current plateaus reached 80% of the pre-disruption current when strong RMP was applied. Strong RMP may induce large magnetic islands that could confine more runaway seed electrons during disruptions. This has important implications for runaway suppression on large machines.

  5. To Demonstrate an Integrated Solution for Plasma-Material Interfaces Compatible with an Optimized Core Plasma

    NASA Astrophysics Data System (ADS)

    Goldston, Robert; Brooks, Jeffrey; Hubbard, Amanda; Leonard, Anthony; Lipschultz, Bruce; Maingi, Rajesh; Ulrickson, Michael; Whyte, Dennis

    2009-11-01

    The plasma facing components in a Demo reactor will face much more extreme boundary plasma conditions and operating requirements than any present or planned experiment. These include 1) Power density a factor of four or more greater than in ITER, 2) Continuous operation resulting in annual energy and particle throughput 100-200 times larger than ITER, 3) Elevated surface operating temperature for efficient electricity production, 4) Tritium fuel cycle control for safety and breeding requirements, and 5) Steady state plasma confinement and control. Consistent with ReNeW Thrust 12, design options are being explored for a new moderate-scale facility to assess core-edge interaction issues and solutions. Key desired features include high power density, sufficient pulse length and duty cycle, elevated wall temperature, steady-state control of an optimized core plasma, and flexibility in changing boundary components as well as access for comprehensive measurements.

  6. Design of a Single Motor Based Leg Structure with the Consideration of Inherent Mechanical Stability

    NASA Astrophysics Data System (ADS)

    Taha Manzoor, Muhammad; Sohail, Umer; Noor-e-Mustafa; Nizami, Muhammad Hamza Asif; Ayaz, Yasar

    2017-07-01

    The fundamental aspect of designing a legged robot is constructing a leg design that is robust and presents a simple control problem. In this paper, we have successfully designed a robotic leg based on a unique four bar mechanism with only one motor per leg. The leg design parameters used in our platform are extracted from design principles used in biological systems, multiple iterations and previous research findings. These principles guide a robotic leg to have minimal mechanical passive impedance, low leg mass and inertia, a suitable foot trajectory utilizing a practical balance between leg kinematics and robot usage, and the resultant inherent mechanical stability. The designed platform also exhibits the key feature of self-locking. Theoretical tools and software iterations were used to derive these practical features and yield an intuitive sense of the required leg design parameters.

  7. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    NASA Astrophysics Data System (ADS)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
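
    A toy sketch of the underlying idea, iteratively correcting an assumed plasma temperature until the concentrations recovered for two boundary standard samples match their certified values, is given below; the simple Boltzmann-like emission model and all numerical values are illustrative assumptions, not the authors' algorithm.

```python
# Hedged toy sketch: iterative temperature correction anchored by two boundary standards.
import numpy as np

K_B = 8.617e-5          # eV/K
E_UPPER = 4.0           # eV, assumed upper-level energy of the analytical line

def line_intensity(conc, temp_k):
    """Toy emission model: intensity proportional to concentration times a Boltzmann factor."""
    return conc * np.exp(-E_UPPER / (K_B * temp_k))

# Two boundary standards with certified concentrations (wt%), "measured" at the
# unknown true plasma temperature.
c_low, c_high = 2.0, 30.0
T_TRUE = 9500.0
I_low, I_high = line_intensity(c_low, T_TRUE), line_intensity(c_high, T_TRUE)

def recovered_conc(intensity, temp_k):
    """Invert the toy model for concentration under an assumed temperature."""
    return intensity / np.exp(-E_UPPER / (K_B * temp_k))

def total_error(temp_k):
    """Mismatch between recovered and certified concentrations of both standards."""
    return (recovered_conc(I_low, temp_k) - c_low) + (recovered_conc(I_high, temp_k) - c_high)

# Secant-style iteration on temperature drives the mismatch to zero.
T0, T1 = 7000.0, 12000.0
for _ in range(30):
    f0, f1 = total_error(T0), total_error(T1)
    T0, T1 = T1, T1 - f1 * (T1 - T0) / (f1 - f0)
    if abs(T1 - T0) < 1e-3:
        break

print(f"corrected temperature ~ {T1:.1f} K (true {T_TRUE} K)")
# With the corrected temperature, a linear intensity rescaling between the two
# standards would then be applied to the unknown sample's spectrum.
```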

  8. Machine learning in motion control

    NASA Technical Reports Server (NTRS)

    Su, Renjeng; Kermiche, Noureddine

    1989-01-01

    The existing methodologies for robot programming originate primarily from robotic applications in manufacturing, where uncertainties in the robots and their task environment may be minimized by repeated off-line modeling and identification. In space applications of robots, however, a higher degree of automation is required for robot programming because of the desire to minimize human intervention. We discuss a new paradigm of robot programming based on the concept of machine learning. The goal is to let robots practice tasks by themselves, with the operational data used to automatically improve their motion performance. The underlying mathematical problem is to solve the dynamical inverse problem by iterative methods. One of the key questions is how to ensure the convergence of the iterative process. A few small steps have been taken towards this important approach to robot programming. We give a representative result on the convergence problem.
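
    One standard way to realize this "practice and improve" loop is iterative learning control, where the command for the next trial is updated from the tracking error of the previous one; the sketch below uses a toy first-order plant and a hand-picked learning gain as assumptions.

```python
# Hedged sketch of iterative learning control (ILC): after each trial the feedforward
# command is updated from that trial's tracking error. The plant and gain are toy
# assumptions; the gain is chosen so the trial-to-trial error map is a contraction
# (|1 - gamma*b| + gamma*b*a/(1 - a) < 1 for this plant).
import numpy as np

N = 100
t = np.arange(N)
y_des = np.sin(2 * np.pi * t / N)                  # desired motion trajectory

def plant(u, a=0.3, b=0.5):
    """Toy plant: y[k+1] = a*y[k] + b*u[k], starting from rest."""
    y = np.zeros_like(u)
    for k in range(len(u) - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(N)
gamma = 1.0                                        # learning gain (assumption)
for trial in range(20):                            # each "practice run" of the robot
    y = plant(u)
    e = y_des - y                                  # tracking error on this trial
    u[:-1] += gamma * e[1:]                        # P-type ILC update, shifted by one step
    print(f"trial {trial:2d}: max |error| = {np.max(np.abs(e)):.4f}")
```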

  9. Fluid Intelligence and Cognitive Reflection in a Strategic Environment: Evidence from Dominance-Solvable Games

    PubMed Central

    Hanaki, Nobuyuki; Jacquemet, Nicolas; Luchini, Stéphane; Zylbersztejn, Adam

    2016-01-01

    Dominance solvability is one of the most straightforward solution concepts in game theory. It is based on two principles: dominance (according to which players always use their dominant strategy) and iterated dominance (according to which players always act as if others apply the principle of dominance). However, existing experimental evidence questions the empirical accuracy of dominance solvability. In this study, we examine the relationships between the key facets of dominance solvability and two cognitive skills: cognitive reflection and fluid intelligence. We provide evidence that behavior in accordance with dominance and one-step iterated dominance is predicted by one's fluid intelligence rather than cognitive reflection. Individual cognitive skills, however, only explain a small fraction of the observed failure of dominance solvability. The accuracy of theoretical predictions on strategic decision making thus not only depends on individual cognitive characteristics, but also, perhaps more importantly, on the decision making environment itself. PMID:27559324
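
    For readers unfamiliar with the two principles, a small sketch of iterated elimination of strictly dominated strategies is given below; the 3×3 payoff matrices are an invented example, not one of the games used in the experiments.

```python
# Hedged illustrative sketch of dominance and iterated elimination of strictly
# dominated strategies on an invented 3x3 game. A[i, j] and B[i, j] are the row
# and column players' payoffs, respectively.
import numpy as np

A = np.array([[3, 4, 1],
              [2, 3, 2],
              [1, 2, 0]])
B = np.array([[1, 2, 0],
              [2, 3, 1],
              [4, 2, 1]])

def strictly_dominated(payoffs, own_axis):
    """Indices of pure strategies strictly dominated by another pure strategy."""
    P = payoffs if own_axis == 0 else payoffs.T      # rows of P = this player's strategies
    dominated = set()
    for i in range(P.shape[0]):
        for j in range(P.shape[0]):
            if i != j and np.all(P[j] > P[i]):
                dominated.add(i)
    return dominated

rows, cols = list(range(A.shape[0])), list(range(A.shape[1]))
while True:                                          # iterated dominance
    dr = strictly_dominated(A[np.ix_(rows, cols)], own_axis=0)
    dc = strictly_dominated(B[np.ix_(rows, cols)], own_axis=1)
    if not dr and not dc:
        break
    rows = [r for k, r in enumerate(rows) if k not in dr]
    cols = [c for k, c in enumerate(cols) if k not in dc]

print("surviving strategies -> row player:", rows, " column player:", cols)
# This particular game is dominance solvable: elimination leaves the single
# outcome (row 0, column 1).
```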

  10. Investigation of MHD instabilities and control in KSTAR preparing for high beta operation

    NASA Astrophysics Data System (ADS)

    Park, Y. S.; Sabbagh, S. A.; Bialek, J. M.; Berkery, J. W.; Lee, S. G.; Ko, W. H.; Bak, J. G.; Jeon, Y. M.; Park, J. K.; Kim, J.; Hahn, S. H.; Ahn, J.-W.; Yoon, S. W.; Lee, K. D.; Choi, M. J.; Yun, G. S.; Park, H. K.; You, K.-I.; Bae, Y. S.; Oh, Y. K.; Kim, W.-C.; Kwak, J. G.

    2013-08-01

    Initial H-mode operation of the Korea Superconducting Tokamak Advanced Research (KSTAR) is expanded to higher normalized beta and lower plasma internal inductance moving towards design target operation. As a key supporting device for ITER, an important goal for KSTAR is to produce physics understanding of MHD instabilities at long pulse with steady-state profiles, at high normalized beta, and over a wide range of plasma rotation profiles. An advance from initial plasma operation is a significant increase in plasma stored energy and normalized beta, with Wtot = 340 kJ, βN = 1.9, which is 75% of the level required to reach the computed ideal n = 1 no-wall stability limit. The internal inductance was lowered to 0.9 at sustained H-mode duration up to 5 s. In ohmically heated plasmas, the plasma current reached 1 MA with prolonged pulse length up to 12 s. Rotating MHD modes are observed in the device with perturbations having tearing rather than ideal parity. Modes with m/n = 3/2 are triggered during the H-mode phase but are relatively weak and do not substantially reduce Wtot. In contrast, 2/1 modes to date only appear when the plasma rotation profiles are lowered after H-L back-transition. Subsequent 2/1 mode locking creates a repetitive collapse of βN by more than 50%. Onset behaviour suggests the 3/2 mode is close to being neoclassically unstable. A correlation between the 2/1 mode amplitude and local rotation shear from an x-ray imaging crystal spectrometer suggests that the rotation shear at the mode rational surface is stabilizing. As a method to access the ITER-relevant low plasma rotation regime, plasma rotation alteration by n = 1, 2 applied fields and associated neoclassical toroidal viscosity (NTV) induced torque is presently investigated. The net rotation profile change measured by a charge exchange recombination diagnostic with proper compensation of plasma boundary movement shows initial evidence of non-resonant rotation damping by the n = 1, 2 applied field configurations. The result addresses perspective on access to low rotation regimes for MHD instability studies applicable to ITER. Computation of active RWM control using the VALEN-3D code examines control performance using midplane locked mode detection sensors. The LM sensors are found to be strongly affected by mode and control coil-induced vessel current, and consequently lead to limited control performance theoretically.

  11. Extending the physics basis of quiescent H-mode toward ITER relevant parameters

    DOE PAGES

    Solomon, W. M.; Burrell, K. H.; Fenstermacher, M. E.; ...

    2015-06-26

    Recent experiments on DIII-D have addressed several long-standing issues needed to establish quiescent H-mode (QH-mode) as a viable operating scenario for ITER. In the past, QH-mode was associated with low density operation, but it has now been extended to high normalized densities compatible with the operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both in absolute terms (n̄e ≈ 7 × 10¹⁹ m⁻³) and as a normalized Greenwald fraction (n̄e/nG > 0.7). In these plasmas, the pedestal can evolve to very high pressure and edge current as the density is increased. High density QH-mode operation with strong shaping has allowed access to a previously predicted regime of very high pedestal dubbed “Super H-mode”. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed density evolution. The confirmation of the shape dependence of the maximum density threshold for QH-mode helps validate the underlying theoretical model of peeling-ballooning modes for ELM stability. In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E×B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for the ITER Q=10 mission.

  12. Analysis of the ITER central solenoid insert (CSI) coil stability tests

    NASA Astrophysics Data System (ADS)

    Savoldi, L.; Bonifetto, R.; Breschi, M.; Isono, T.; Martovetsky, N.; Ozeki, H.; Zanino, R.

    2017-07-01

    At the end of the test campaign of the ITER Central Solenoid Insert (CSI) coil in 2015, after 16,000 electromagnetic (EM) cycles, some tests were devoted to the study of the conductor stability, through the measurement of the Minimum Quench Energy (MQE). The tests were performed by means of an inductive heater (IH), located in the high-field region of the CSI and wrapped around the conductor. The calorimetric calibration of the IH is presented here, aimed at assessing the energy deposited in the conductor for different values of the IH electrical operating conditions. The MQE of the conductor of the ITER CS module 3L can be estimated as ∼200 J ± 20%, deposited on the whole conductor on a length of ∼10 cm (the IH length) in ∼40 ms, at current and magnetic field conditions relevant for the ITER CS operation. The repartition of the energy deposited in the conductor under the IH is computed to be ∼10% in the cable and 90% in the jacket by means of a 3D Finite Elements EM model. It is shown how this repartition implies that the bundle (cable + helium) heat capacity is fully available for stability on the time scale of the tested disturbances. This repartition is used in input to the thermal-hydraulic analysis performed with the 4C code, to assess the capability of the model to accurately reproduce the stability threshold of the conductor. The MQE computed by the code for this disturbance is in good agreement with the measured value, with an underestimation within 15% of the experimental value.

  13. Relativistic electron kinetic effects on laser diagnostics in burning plasmas

    NASA Astrophysics Data System (ADS)

    Mirnov, V. V.; Den Hartog, D. J.

    2018-02-01

    Toroidal interferometry/polarimetry (TIP), poloidal polarimetry (PoPola), and Thomson scattering systems (TS) are major optical diagnostics being designed and developed for ITER. Each of them relies upon a sophisticated quantitative understanding of the electron response to laser light propagating through a burning plasma. Review of the theoretical results for two different applications is presented: interferometry/polarimetry (I/P) and polarization of Thomson scattered light, unified by the importance of relativistic (quadratic in vTe/c) electron kinetic effects. For I/P applications, rigorous analytical results are obtained perturbatively by expansion in powers of the small parameter τ = Te/me c2, where Te is electron temperature and me is electron rest mass. Experimental validation of the analytical models has been made by analyzing data of more than 1200 pulses collected from high-Te JET discharges. Based on this validation the relativistic analytical expressions are included in the error analysis and design projects of the ITER TIP and PoPola systems. The polarization properties of incoherent Thomson scattered light are being examined as a method of Te measurement relevant to ITER operational regimes. The theory is based on Stokes vector transformation and Mueller matrices formalism. The general approach is subdivided into frequency-integrated and frequency-resolved cases. For each of them, the exact analytical relativistic solutions are presented in the form of Mueller matrix elements averaged over the relativistic Maxwellian distribution function. New results related to the detailed verification of the frequency-resolved solutions are reported. The precise analytic expressions provide output much more rapidly than relativistic kinetic numerical codes allowing for direct real-time feedback control of ITER device operation.
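    To give a sense of the size of the expansion parameter, assuming an illustrative ITER-relevant core temperature of Te = 20 keV (this value is not quoted in the abstract):

    ```latex
    \tau = \frac{T_e}{m_e c^2} = \frac{20\ \mathrm{keV}}{511\ \mathrm{keV}} \approx 3.9\times 10^{-2},
    \qquad
    \tau^2 \approx 1.5\times 10^{-3},
    ```

    so the quadratic-in-τ corrections discussed above enter at roughly the 0.1% level relative to the leading term.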

  14. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) across various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. Three tube voltages (120, 100 and 80 kV) and four current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated beyond a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while preserving the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542
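    As a simple illustration of the objective metrics mentioned (image noise and SNR from a uniform region of interest), with no assumptions about the actual measurement protocol:

    ```python
    import numpy as np

    # Sketch of the objective metrics used to compare reconstructions: image noise as
    # the standard deviation inside a uniform region of interest (ROI), and SNR as the
    # mean CT number over that noise. The ROI coordinates and image are placeholders.
    def roi_stats(image, r0, r1, c0, c1):
        roi = image[r0:r1, c0:c1]
        mean_hu = roi.mean()                      # mean CT number in the ROI (HU)
        noise = roi.std(ddof=1)                   # objective image noise
        return mean_hu, noise, mean_hu / noise    # (mean, noise, SNR)

    image = np.random.default_rng(1).normal(60.0, 12.0, size=(256, 256))  # fake slice
    print(roi_stats(image, 100, 140, 100, 140))
    ```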

  15. Bulk ion heating with ICRF waves in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantsinen, M. J., E-mail: mervi.mantsinen@bsc.es; Barcelona Supercomputing Center, Barcelona; Bilato, R.

    2015-12-10

    Heating with ICRF waves is a well-established method on present-day tokamaks and one of the heating systems foreseen for ITER. However, further work is still needed to test and optimize its performance in fusion devices with metallic high-Z plasma facing components (PFCs) in preparation for ITER and DEMO operation. This is of particular importance for the bulk ion heating capabilities of ICRF waves. Efficient bulk ion heating with the standard ITER ICRF scheme, i.e. second harmonic heating of tritium with or without a ³He minority, was demonstrated in experiments carried out in deuterium-tritium plasmas on JET and TFTR and is confirmed by ICRF modelling. This paper focuses on recent experiments with ³He minority heating for bulk ion heating on the ASDEX Upgrade (AUG) tokamak with ITER-relevant all-tungsten PFCs. An increase of 80% in the central ion temperature T_i, from 3 to 5.5 keV, was achieved when 3 MW of ICRF power tuned to the central ³He ion cyclotron resonance was added to 4.5 MW of deuterium NBI. The radial gradient of the T_i profile locally reached values up to about 50 keV/m, and the normalized logarithmic ion temperature gradient R/L_Ti reached about 20, which is unusually large for AUG plasmas. The large changes in the T_i profiles were accompanied by significant changes in the measured plasma toroidal rotation, plasma impurity profiles and MHD activity, which indicate concomitant changes in plasma properties with the application of ICRF waves. When the ³He concentration was increased above the optimum range for bulk ion heating, a weaker peaking of the ion temperature profile was observed, in line with theoretical expectations.

  16. ITER CS Intermodule Support Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myatt, R.; Freudenberg, Kevin D

    2011-01-01

    With five independently driven, bi-polarity power supplies, the modules of the ITER central solenoid (CS) can be energized in aligned or opposing field directions. This sets up the possibility for repelling modules, which indeed occurs, particularly between CS2L and CS3L around the End of Burn (EOB) time point. Light interface compression between these two modules at EOB and wide variations in these coil currents throughout the pulse produce a tendency for relative motion or slip. Ideally, the slip is purely radial as the modules breathe, without any accumulated translational motion. In reality, however, asymmetries such as nonuniformity in intermodule friction, lateral loads from a plasma Vertical Displacement Event (VDE), magnetic forces from manufacturing and assembly tolerances, and earthquakes can all contribute to a combination of radial and lateral module motion. This paper presents 2D and 3D nonlinear ANSYS models which simulate these various asymmetries and determine the lateral forces which must be carried by the intermodule structure. Summing all of these asymmetric force contributions leads to a design-basis lateral load which is used in the design of various support concepts: the CS-CDR centering rings and a variation, the 2001 FDR baseline radial keys, and interlocking castle structures. Radial-key-type intermodule structure interface slip and stresses are tracked through multiple 15 MA scenario current pulses to demonstrate stable motion following the first few cycles. Drawbacks and benefits of each candidate intermodule structure are discussed, leading to the simplest and most robust configuration which meets the design requirements: match-drilled radial holes and pin-shaped keys.

  17. Overview of the recent DiMES and MiMES experiments in DIII-D

    NASA Astrophysics Data System (ADS)

    Rudakov, D. L.; Wong, C. P. C.; Litnovsky, A.; Wampler, W. R.; Boedo, J. A.; Brooks, N. H.; Fenstermacher, M. E.; Groth, M.; Hollmann, E. M.; Jacob, W.; Krasheninnikov, S. I.; Krieger, K.; Lasnier, C. J.; Leonard, A. W.; McLean, A. G.; Marot, M.; Moyer, R. A.; Petrie, T. W.; Philipps, V.; Smirnov, R. D.; Stangeby, P. C.; Watkins, J. G.; West, W. P.; Yu, J. H.

    2009-12-01

    Divertor and midplane material evaluation systems (DiMES and MiMES) in the DIII-D tokamak are used to address a variety of plasma-material interaction (PMI) issues relevant to ITER. Among the topics studied are carbon erosion and re-deposition, hydrogenic retention in the gaps between plasma-facing components (PFCs), deterioration of diagnostic mirrors from carbon deposition and techniques to mitigate that deposition, and dynamics and transport of dust. An overview of the recent experimental results is presented.

  18. NREL Spectrum of Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-02-25

    There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.

  19. Genetic Evolution of Shape-Altering Programs for Supersonic Aerodynamics

    NASA Technical Reports Server (NTRS)

    Kennelly, Robert A., Jr.; Bencze, Daniel P. (Technical Monitor)

    2002-01-01

    Two constrained shape optimization problems relevant to aerodynamics are solved by genetic programming, in which a population of computer programs evolves automatically under pressure of fitness-driven reproduction and genetic crossover. Known optimal solutions are recovered using a small, naive set of elementary operations. Effectiveness is improved through use of automatically defined functions, especially when one of them is capable of a variable number of iterations, even though the test problems lack obvious exploitable regularities. An attempt at evolving new elementary operations was only partially successful.

  20. NREL Spectrum of Innovation

    ScienceCinema

    None

    2018-05-11

    There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.

  1. A Self-Adapting System for the Automated Detection of Inter-Ictal Epileptiform Discharges

    PubMed Central

    Lodder, Shaun S.; van Putten, Michel J. A. M.

    2014-01-01

    Purpose Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Methods Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form “IED nominations”, each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with the highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update the certainty values of the remaining nominations, and another iteration is performed where the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Key Findings Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20–30 min recordings took approximately 5 min. Significance The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents. PMID:24454813
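    The two-phase workflow described above can be summarized in a short sketch; the certainty-update and template-reweighting rules below are hypothetical placeholders, not the ones used in the study.

    ```python
    # Sketch of the iterative review loop: the most certain nominations are shown to
    # the reviewer, confirmations update the reliability of the contributing templates,
    # and the remaining nominations are re-scored before the next batch. The
    # reweighting rule (x1.1 / x0.9) is a made-up placeholder.
    def review_loop(nominations, template_weights, ask_reviewer, batch=10, max_rounds=15):
        confirmed = []
        for _ in range(max_rounds):
            if not nominations:
                break
            nominations.sort(key=lambda n: n["certainty"], reverse=True)
            shown, nominations = nominations[:batch], nominations[batch:]
            for nom in shown:
                is_ied = ask_reviewer(nom)                  # manual confirmation
                if is_ied:
                    confirmed.append(nom)
                for t in nom["templates"]:                  # feedback on template reliability
                    template_weights[t] *= 1.1 if is_ied else 0.9
            for nom in nominations:                         # re-score what is left
                w = [template_weights[t] for t in nom["templates"]]
                nom["certainty"] = sum(w) / len(w)
        return confirmed

    # Tiny demo with fake nominations and an "always yes" reviewer
    noms = [{"templates": ["t1", "t2"], "certainty": 0.8},
            {"templates": ["t3"], "certainty": 0.4}]
    print(review_loop(noms, {"t1": 0.8, "t2": 0.9, "t3": 0.4},
                      lambda n: True, batch=1, max_rounds=2))
    ```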

  2. Capsule Performance Optimization for the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Landen, Otto

    2009-11-01

    The overall goal of the capsule performance optimization campaign is to maximize the probability of ignition by experimentally correcting for likely residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models before proceeding to cryogenic-layered implosions and ignition attempts. This will be accomplished using a variety of targets that will set key laser, hohlraum and capsule parameters to maximize ignition capsule implosion velocity, while minimizing fuel adiabat, core shape asymmetry and ablator-fuel mix. The targets include high-Z re-emission spheres setting foot symmetry through foot cone power balance [1], liquid deuterium-filled “keyhole” targets setting shock speed and timing through the laser power profile [2], symmetry capsules setting peak cone power balance and hohlraum length [3], and streaked x-ray backlit imploding capsules setting ablator thickness [4]. We will show how results from successful tuning technique demonstration shots performed at the Omega facility under scaled hohlraum and capsule conditions relevant to the ignition design meet the required sensitivity and accuracy. We will also present estimates of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors, and show that these are reduced after a number of shots and iterations to an acceptable level of residual uncertainty. Finally, we will present results from upcoming tuning technique validation shots performed at NIF at near full scale. Prepared by LLNL under Contract DE-AC52-07NA27344. [1] E. Dewald et al., Rev. Sci. Instrum. 79 (2008) 10E903. [2] T.R. Boehly et al., Phys. Plasmas 16 (2009) 056302. [3] G. Kyrala et al., BAPS 53 (2008) 247. [4] D. Hicks et al., BAPS 53 (2008) 2.

  3. Saliendo Adelante: Stressors and Coping Strategies Among Immigrant Latino Men Who Have Sex With Men in a Nontraditional Settlement State.

    PubMed

    Gilbert, Paul A; Barrington, Clare; Rhodes, Scott D; Eng, Eugenia

    2016-11-01

    Immigrant Latino men who have sex with men (MSM) are marginalized along multiple dimensions (e.g., ethnicity, sexual orientation, language use), which can negatively affect their health and well-being. As little is known about how this subgroup experiences the stress of marginalization and how, in turn, they cope with such stress, this study investigated stressors and coping strategies to better understand the factors shaping Latino MSM health. Assisted by a community advisory committee, we conducted in-depth interviews with 15 foreign-born Latino MSM in a nontraditional settlement state. Drawing on grounded theory methods, we analyzed transcripts iteratively to identify processes and characterize themes. Results were confirmed in member check interviews (n = 4) and findings were further contextualized through key informant interviews (n = 3). Participants reported ubiquitous, concurrent stressors due to being an immigrant, being a sexual minority, and being working poor. In particular, homophobia within families and local Latino communities was seen as pervasive. Some participants faced additional stressors due to being undocumented and not being Mexican. Participants drew on four types of coping strategies, with no dominant coping response: passive coping (i.e., not reacting to stressors); attempting to change stressors; seeking social support; and seeking distractions. Family ties, especially with mothers, provided key emotional support but could also generate stress related to participants' sexuality. This study lays a foundation for future work and is particularly relevant for Latino MSM in nontraditional settlement states. Findings may inform future interventions to reduce stressors and increase resiliency, which can positively affect multiple health outcomes. © The Author(s) 2016.

  4. Analysis of how the health systems context shapes responses to the control of human immunodeficiency virus: case-studies from the Russian Federation.

    PubMed Central

    Atun, Rifat A.; McKee, Martin; Drobniewski, Francis; Coker, Richard

    2005-01-01

    OBJECTIVE: To develop a methodology and an instrument that allow the simultaneous rapid and systematic examination of the broad public health context, the health care systems, and the features of disease-specific programmes. METHODS: Drawing on methodologies used for rapid situational assessments of vertical programmes for tackling communicable disease, we analysed programmes for the control of human immunodeficiency virus (HIV) and their health systems context in three regions in the Russian Federation. The analysis was conducted in three phases: first, analysis of published literature, documents and routine data from the regions; second, interviews with key informants; and third, further data collection and analysis. Synthesis of findings through exploration of emergent themes, with iteration, resulted in the identification of the key systems issues that influenced programme delivery. FINDINGS: We observed a complex political economy within which efforts to control HIV sit, an intricate legal environment, and a high degree of decentralization of financing and operational responsibility. Although each region displays some commonalities arising from the Soviet traditions of public health control, there are considerable variations in the epidemiological trajectories, cultural responses, the political environment, financing, organization and service delivery, and the extent of multisectoral work in response to HIV epidemics. CONCLUSION: Within a centralized, post-Soviet health system, centrally directed measures to enhance HIV control may have varying degrees of impact at the regional level. Although the central tenets of effective vertical HIV programmes may be present, local imperatives substantially influence their interpretation, operationalization and effectiveness. Systematic analysis of the context within which vertical programmes are embedded is necessary to enhance understanding of how the relevant policies are prioritized and translated into action. PMID:16283049

  5. A three-talk model for shared decision making: multistage consultation process.

    PubMed

    Elwyn, Glyn; Durand, Marie Anne; Song, Julia; Aarts, Johanna; Barr, Paul J; Berger, Zackary; Cochran, Nan; Frosch, Dominick; Galasiński, Dariusz; Gulbrandsen, Pål; Han, Paul K J; Härter, Martin; Kinnersley, Paul; Lloyd, Amy; Mishra, Manish; Perestelo-Perez, Lilisbeth; Scholl, Isabelle; Tomori, Kounosuke; Trevena, Lyndal; Witteman, Holly O; Van der Weijden, Trudy

    2017-11-06

    Objectives  To revise an existing three-talk model for learning how to achieve shared decision making, and to consult with relevant stakeholders to update and obtain wider engagement. Design  Multistage consultation process. Setting  Key informant group, communities of interest, and survey of clinical specialties. Participants  19 key informants, 153 member responses from multiple communities of interest, and 316 responses to an online survey from medically qualified clinicians from six specialties. Results  After extended consultation over three iterations, we revised the three-talk model by making changes to one talk category, adding the need to elicit patient goals, providing a clear set of tasks for each talk category, and adding suggested scripts to illustrate each step. A new three-talk model of shared decision making is proposed, based on "team talk," "option talk," and "decision talk," to depict a process of collaboration and deliberation. Team talk places emphasis on the need to provide support to patients when they are made aware of choices, and to elicit their goals as a means of guiding decision making processes. Option talk refers to the task of comparing alternatives, using risk communication principles. Decision talk refers to the task of arriving at decisions that reflect the informed preferences of patients, guided by the experience and expertise of health professionals. Conclusions  The revised three-talk model of shared decision making depicts conversational steps, initiated by providing support when introducing options, followed by strategies to compare and discuss trade-offs, before deliberation based on informed preferences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Physics and Control of Locked Modes in the DIII-D Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volpe, Francesco

    This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; and the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.

  7. CRISPR/Cas9-coupled recombineering for metabolic engineering of Corynebacterium glutamicum.

    PubMed

    Cho, Jae Sung; Choi, Kyeong Rok; Prabowo, Cindy Pricilia Surya; Shin, Jae Ho; Yang, Dongsoo; Jang, Jaedong; Lee, Sang Yup

    2017-07-01

    Genome engineering of Corynebacterium glutamicum, an important industrial microorganism for amino acid production, currently relies on random mutagenesis and inefficient double-crossover events. Here we report a rapid genome engineering strategy to scarlessly knock out one or more genes in C. glutamicum in a sequential and iterative manner. The recombinase RecT is used to incorporate synthetic single-stranded oligodeoxyribonucleotides into the genome, and CRISPR/Cas9 to counter-select negative mutants. We completed the system by engineering the respective plasmids harboring CRISPR/Cas9 and RecT for efficient curing, so that multiple gene targets can be addressed iteratively and the final strains are free of plasmids. To demonstrate the system, seven different mutants were constructed within two weeks to study the combinatorial deletion effects of three different genes on the production of γ-aminobutyric acid, an industrially relevant chemical of much interest. This genome engineering strategy will expedite metabolic engineering of C. glutamicum. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  8. Enriching regulatory networks by bootstrap learning using optimised GO-based gene similarity and gene links mined from PubMed abstracts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2011-02-18

    Transcriptional regulatory networks are being determined using “reverse engineering” methods that infer connections based on correlations in gene state. Corroboration of such networks through independent means, such as evidence from the biomedical literature, is desirable. Here, we explore a novel approach, a bootstrapping version of our previous Cross-Ontological Analytic method (XOA), that can be used for semi-automated annotation and verification of inferred regulatory connections, as well as for discovery of additional functional relationships between the genes. First, we use our annotation and network expansion method on a biological network learned entirely from the literature. We show how new relevant links between genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. Second, we apply our method to annotation, verification, and expansion of a set of regulatory connections found by the Context Likelihood of Relatedness algorithm.
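    A schematic of the iterative expansion step described above is sketched below; the similarity function, threshold-selection rule and convergence criterion are placeholders and do not reproduce the XOA bootstrap itself.

    ```python
    import networkx as nx

    # Illustrative bootstrap-style expansion: links whose gene-gene similarity exceeds
    # a threshold re-optimized against the current network are added at each iteration.
    # The "median similarity of existing edges" rule is a hypothetical stand-in for the
    # paper's GO-based optimization.
    def expand_network(graph, similarity, candidate_pairs, n_iterations=5):
        for _ in range(n_iterations):
            edge_sims = sorted(similarity(u, v) for u, v in graph.edges)
            if not edge_sims:
                break
            threshold = edge_sims[len(edge_sims) // 2]
            added = 0
            for u, v in candidate_pairs:
                if not graph.has_edge(u, v) and similarity(u, v) >= threshold:
                    graph.add_edge(u, v, source="similarity")
                    added += 1
            if added == 0:          # no new links pass the optimized threshold
                break
        return graph

    # Toy demo with a dictionary-backed similarity measure
    sims = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.95, ("c", "d"): 0.3}
    sim = lambda u, v: sims.get((u, v), sims.get((v, u), 0.0))
    g = nx.Graph([("a", "b"), ("b", "c")])
    print(expand_network(g, sim, [("a", "c"), ("c", "d")]).edges)
    ```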

  9. Constitutive law for thermally-activated plasticity of recrystallized tungsten

    NASA Astrophysics Data System (ADS)

    Zinovev, Aleksandr; Terentyev, Dmitry; Dubinko, Andrii; Delannay, Laurent

    2017-12-01

    A physically-based constitutive law relevant for the ITER-specification tungsten grade in the as-recrystallized state is proposed. The material exhibits stages III and IV of plastic deformation, in which the hardening rate does not drop to zero with increasing applied stress. In contrast to the classical Kocks-Mecking model, valid at stage III, the strain hardening decreases asymptotically, resembling a hyperbolic function. The material parameters are fitted to tensile test data by requiring that the strain and stress at the onset of diffuse necking (uniform elongation and ultimate tensile strength, respectively), as well as the yield stress, be reproduced. The model is then validated in the temperature range 300-600 °C with the help of finite element analysis of tensile tests, which confirms that the experimental engineering curves are reproduced up to the onset of diffuse necking, beyond which the development of ductile damage accelerates material failure. This temperature range represents the low-temperature application window for tungsten as divertor material in the fusion reactor ITER.
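    For context, the classical stage-III behavior mentioned above is often written as the Kocks-Mecking (Voce) relation, in which the hardening rate vanishes at a saturation stress; the paper instead lets the hardening rate decay asymptotically so that stage IV is retained. The second expression below is only a schematic illustration of such a decay, not the fitted law of the paper:

    ```latex
    \theta \equiv \frac{d\sigma}{d\varepsilon}
      = \theta_0\left(1-\frac{\sigma}{\sigma_s}\right)
      \quad\text{(classical Kocks--Mecking, stage III)},
    \qquad
    \theta \sim \frac{\theta_0}{1+\sigma/\sigma_0}
      \quad\text{(schematic hyperbolic decay, stages III--IV)}.
    ```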

  10. Tritium processing for the European test blanket systems: current status of the design and development strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricapito, I.; Calderoni, P.; Poitevin, Y.

    2015-03-15

    Tritium processing technologies of the two European Test Blanket Systems (TBS), HCLL (Helium Cooled Lithium Lead) and HCPB (Helium Cooled Pebble Bed), play an essential role in meeting the main objectives of the TBS experimental campaign in ITER. Compliance with the ITER interface requirements, in terms of space availability, service fluids, limits on tritium release, and constraints on maintenance, is driving the design of the TBS tritium processing systems. Other requirements come from the characteristics of the relevant test blanket module and the scientific programme that has to be developed and implemented. This paper identifies the main requirements for the design of the TBS tritium systems and equipment and, at the same time, provides an updated overview of the current design status, mainly focusing on the tritium extractor from Pb-16Li and TBS tritium accountancy. Considerations are also given on the possible extrapolation to the DEMO breeding blanket. (authors)

  11. First density profile measurements using frequency modulation of the continuous wave reflectometry on JET

    NASA Astrophysics Data System (ADS)

    Meneses, L.; Cupido, L.; Sirinelli, A.; Manso, M. E.; Jet-Efds Contributors

    2008-10-01

    We present the main design options and implementation of an X-mode reflectometer developed and successfully installed at JET using an innovative approach. It aims to prove the viability of measuring density profiles with high spatial and temporal resolution using broadband reflectometry operating in long and complex transmission lines. It probes the plasma with magnetic fields between 2.4 and 3.0 T using the V band [densities of ~(0-1.4)×10¹⁹ m⁻³]. The first experimental results show the high sensitivity of the diagnostic when measuring changes in the plasma density profile occurring in ITER-relevant regimes, such as ELMy H-modes. The successful demonstration of this concept motivated the upgrade of the JET frequency modulation of the continuous wave (FMCW) reflectometry diagnostic to probe both the edge and the core. This new system is essential to prove the viability of using the FMCW reflectometry technique to probe the plasma in next-step devices, such as ITER, since they share the same waveguide complexity.

  12. Assessing Behavioural Manifestations Prior to Clinical Diagnosis of Huntington Disease: "Anger and Irritability" and "Obsessions and Compulsions"

    PubMed Central

    Vaccarino, Anthony L; Anonymous; Anderson, Karen E.; Borowsky, Beth; Coccaro, Emil; Craufurd, David; Endicott, Jean; Giuliano, Joseph; Groves, Mark; Guttman, Mark; Ho, Aileen K; Kupchak, Peter; Paulsen, Jane S.; Stanford, Matthew S.; van Kammen, Daniel P; Watson, David; Wu, Kevin D; Evans, Ken

    2011-01-01

    The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound, rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to make decisions regarding which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed to assess "Anger and Irritability" and "Obsessions and Compulsions" in prHD individuals. PMID:21826116

  13. Key concepts relevant to quality of complex and shared decision-making in health care: a literature review.

    PubMed

    Dy, Sydney M; Purnell, Tanjala S

    2012-02-01

    High-quality provider-patient decision-making is key to quality care for complex conditions. We performed an analysis of key elements relevant to quality and complex, shared medical decision-making. Based on a search of electronic databases, including Medline and the Cochrane Library, as well as relevant articles' reference lists, reviews of tools, and annotated bibliographies, we developed a list of key concepts and applied them to a decision-making example. Key concepts identified included provider competence, trustworthiness, and cultural competence; communication with patients and families; information quality; patient/surrogate competence; and roles and involvement. We applied this concept list to a case example, shared decision-making for live donor kidney transplantation, and identified the likely most important concepts as provider and cultural competence, information quality, and communication with patients and families. This concept list may be useful for conceptualizing the quality of complex shared decision-making and in guiding research in this area. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. From Constructive Field Theory to Fractional Stochastic Calculus. (II) Constructive Proof of Convergence for the Lévy Area of Fractional Brownian Motion with Hurst Index α ∈ (1/8, 1/4)

    NASA Astrophysics Data System (ADS)

    Magnen, Jacques; Unterberger, Jérémie

    2012-03-01

    Let B = (B_1(t), ..., B_d(t)) be a d-dimensional fractional Brownian motion with Hurst index α < 1/4, or more generally a Gaussian process whose paths have the same local regularity. Properly defining iterated integrals of B is a difficult task because of the low Hölder regularity index of its paths. Yet rough path theory shows that this is the key to the construction of a stochastic calculus with respect to B, or to solving differential equations driven by B. We intend to show in a series of papers how to desingularize iterated integrals by a weak, singular non-Gaussian perturbation of the Gaussian measure, defined by a limit-in-law procedure. Convergence is proved by using "standard" tools of constructive field theory, in particular cluster expansions and renormalization. These powerful tools allow optimal estimates and call for an extension of Gaussian tools such as, for instance, the Malliavin calculus. After a first introductory paper [MagUnt1], this one concentrates on the details of the constructive proof of convergence for second-order iterated integrals, also known as the Lévy area.
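    For reference, the Lévy area referred to above is the antisymmetric part of the second-order iterated integral; for components i ≠ j over an interval [s, t] it reads (standard definition, independent of the paper's particular construction):

    ```latex
    \mathcal{A}_{ij}(s,t) \;=\; \frac{1}{2}\int_s^t\!\Big[\big(B_i(u)-B_i(s)\big)\,dB_j(u)
    \;-\;\big(B_j(u)-B_j(s)\big)\,dB_i(u)\Big],
    ```

    an object whose naive approximations diverge for α < 1/4 and which must therefore be constructed by the renormalization procedure described in the abstract.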

  15. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    NASA Astrophysics Data System (ADS)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects turbulence production and decay in the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with increasing numbers of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wakes, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence behind the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.
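    As a purely illustrative sketch of how the number of fractal iterations sets the range of bar scales in such a grid (the geometric ratios and dimensions below are invented, not those of the grids tested):

    ```python
    # Sketch of how a fractal square grid's bar dimensions scale with the number of
    # fractal iterations N: each iteration j contributes 4**j square patterns whose
    # bar length and thickness shrink by constant ratios R_L and R_t. All values are
    # illustrative placeholders.
    def fractal_square_grid(L0, t0, R_L, R_t, N):
        scales = []
        for j in range(N):
            scales.append({
                "iteration": j,
                "n_squares": 4**j,            # number of square patterns at this scale
                "bar_length": L0 * R_L**j,    # metres
                "bar_thickness": t0 * R_t**j, # metres
            })
        return scales

    for s in fractal_square_grid(L0=0.10, t0=0.01, R_L=0.5, R_t=0.4, N=4):
        print(s)
    ```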

  16. Modelling of the test of the JT-60SA HTS current leads

    NASA Astrophysics Data System (ADS)

    Zappatore, A.; Heller, R.; Savoldi, L.; Zanino, R.

    2017-07-01

    The CURLEAD code, which was developed at the Karlsruhe Institute of Technology (KIT), implements an integrated 1D transient model of a high temperature superconducting (HTS) current lead (CL) including the room termination (RT), the meander-flow type heat exchanger (HX), and the HTS module. CURLEAD was successfully used for the design of the 70 kA ITER demonstrator and of the W7-X and JT-60SA CLs. Recently the code was successfully applied to the prediction and analysis of steady state operation of the ITER correction coils (CC) HTS CL. Here the steady state and pulsed operation of the JT-60SA HTS CLs are analysed, which requires also the modelling of the HX shell and of the vacuum shell, which was not present in the ITER CC. The CURLEAD model extension is presented and the capability of the new version of CURLEAD to reproduce the transient experimental data of the JT-60SA HTS CL is shown. The results obtained provide a better understanding of key parameters of the CL, among which the temperature evolution at the HX-HTS interface, the GHe mass flow rate needed in the HX to achieve the target temperature at that location and the heat load at the cold end.

  17. The Depression Inventory Development Workgroup: A Collaborative, Empirically Driven Initiative to Develop a New Assessment Tool for Major Depressive Disorder.

    PubMed

    Vaccarino, Anthony L; Evans, Kenneth R; Kalali, Amir H; Kennedy, Sidney H; Engelhardt, Nina; Frey, Benicio N; Greist, John H; Kobak, Kenneth A; Lam, Raymond W; MacQueen, Glenda; Milev, Roumen; Placenza, Franca M; Ravindran, Arun V; Sheehan, David V; Sills, Terrence; Williams, Janet B W

    2016-01-01

    The Depression Inventory Development project is an initiative of the International Society for CNS Drug Development whose goal is to develop a comprehensive and psychometrically sound measurement tool to be utilized as a primary endpoint in clinical trials for major depressive disorder. Using an iterative process between field testing and psychometric analysis and drawing upon expertise of international researchers in depression, the Depression Inventory Development team has established an empirically driven and collaborative protocol for the creation of items to assess symptoms in major depressive disorder. Depression-relevant symptom clusters were identified based on expert clinical and patient input. In addition, as an aid for symptom identification and item construction, the psychometric properties of existing clinical scales (assessing depression and related indications) were evaluated using blinded datasets from pharmaceutical antidepressant drug trials. A series of field tests in patients with major depressive disorder provided the team with data to inform the iterative process of scale development. We report here an overview of the Depression Inventory Development initiative, including results of the third iteration of items assessing symptoms related to anhedonia, cognition, fatigue, general malaise, motivation, anxiety, negative thinking, pain and appetite. The strategies adopted from the Depression Inventory Development program, as an empirically driven and collaborative process for scale development, have provided the foundation to develop and validate measurement tools in other therapeutic areas as well.

  18. An Efficient Non-iterative Bulk Parametrization of Surface Fluxes for Stable Atmospheric Conditions Over Polar Sea-Ice

    NASA Astrophysics Data System (ADS)

    Gryanik, Vladimir M.; Lüpkes, Christof

    2018-02-01

    In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
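    A minimal sketch of the quantities involved is given below: the bulk Richardson number and a drag coefficient written as the neutral log-law value times a stability correction. The correction function f_m used here is a simple placeholder, not the semi-analytic SHEBA-based parametrization derived in the paper.

    ```python
    import numpy as np

    # Minimal sketch (not the Gryanik & Luepkes fit): bulk Richardson number and a
    # drag coefficient of the form C_D = C_DN * f_m(Rib, z/z0), with a placeholder
    # stability correction f_m for stable conditions.
    KAPPA = 0.4   # von Karman constant
    G = 9.81      # gravitational acceleration, m s^-2

    def bulk_richardson(z, theta_v_sfc, theta_v_air, wind_speed):
        """Rib = g z (theta_v_air - theta_v_sfc) / (theta_v_air * U^2)."""
        return G * z * (theta_v_air - theta_v_sfc) / (theta_v_air * wind_speed**2)

    def drag_coefficient(z, z0, rib):
        c_dn = (KAPPA / np.log(z / z0))**2            # neutral drag coefficient
        f_m = 1.0 / (1.0 + 10.0 * max(rib, 0.0))**2   # hypothetical damping for Rib > 0
        return c_dn * f_m

    rib = bulk_richardson(z=10.0, theta_v_sfc=265.0, theta_v_air=267.0, wind_speed=5.0)
    print(rib, drag_coefficient(z=10.0, z0=1e-3, rib=rib))
    ```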

  19. Non-iterative triple excitations in equation-of-motion coupled-cluster theory for electron attachment with applications to bound and temporary anions.

    PubMed

    Jagau, Thomas-C

    2018-01-14

    The impact of residual electron correlation beyond the equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) approximation on positions and widths of electronic resonances is investigated. To establish a method that accomplishes this task in an economical manner, several approaches proposed for the approximate treatment of triple excitations are reviewed with respect to their performance in the electron attachment (EA) variant of EOM-CC theory. The recently introduced EOM-CCSD(T)(a)* method [D. A. Matthews and J. F. Stanton, J. Chem. Phys. 145, 124102 (2016)], which includes non-iterative corrections to the reference and the target states, reliably reproduces vertical attachment energies from EOM-EA-CC calculations with single, double, and full triple excitations in contrast to schemes in which non-iterative corrections are applied only to the target states. Applications of EOM-EA-CCSD(T)(a)* augmented by a complex absorbing potential (CAP) to several temporary anions illustrate that shape resonances are well described by EOM-EA-CCSD, but that residual electron correlation often makes a non-negligible impact on their positions and widths. The positions of Feshbach resonances, on the other hand, are significantly improved when going from CAP-EOM-EA-CCSD to CAP-EOM-EA-CCSD(T)(a)*, but the correct energetic order of the relevant electronic states is still not achieved.

  20. Non-iterative triple excitations in equation-of-motion coupled-cluster theory for electron attachment with applications to bound and temporary anions

    NASA Astrophysics Data System (ADS)

    Jagau, Thomas-C.

    2018-01-01

    The impact of residual electron correlation beyond the equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) approximation on positions and widths of electronic resonances is investigated. To establish a method that accomplishes this task in an economical manner, several approaches proposed for the approximate treatment of triple excitations are reviewed with respect to their performance in the electron attachment (EA) variant of EOM-CC theory. The recently introduced EOM-CCSD(T)(a)* method [D. A. Matthews and J. F. Stanton, J. Chem. Phys. 145, 124102 (2016)], which includes non-iterative corrections to the reference and the target states, reliably reproduces vertical attachment energies from EOM-EA-CC calculations with single, double, and full triple excitations in contrast to schemes in which non-iterative corrections are applied only to the target states. Applications of EOM-EA-CCSD(T)(a)* augmented by a complex absorbing potential (CAP) to several temporary anions illustrate that shape resonances are well described by EOM-EA-CCSD, but that residual electron correlation often makes a non-negligible impact on their positions and widths. The positions of Feshbach resonances, on the other hand, are significantly improved when going from CAP-EOM-EA-CCSD to CAP-EOM-EA-CCSD(T)(a)*, but the correct energetic order of the relevant electronic states is still not achieved.

  1. Electron kinetic effects on optical diagnostics in fusion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V.; Den Hartog, D. J.; Duff, J.

    At the anticipated high electron temperatures in ITER, the effects of electron thermal motion on the Thomson scattering (TS), toroidal interferometer/polarimeter (TIP) and poloidal polarimeter (PoPola) diagnostics will be significant and must be accurately treated. We calculate electron thermal corrections to the interferometric phase and polarization state of an EM wave propagating along tangential and poloidal chords (Faraday and Cotton-Mouton polarimetry) and perform an analysis of the degree of polarization for incoherent TS. The precision of the previous lowest-order model, linear in τ = Te/me c², may be insufficient; we present a more precise model with τ²-order corrections to satisfy the high accuracy required for the ITER TIP and PoPola diagnostics. The linear model is extended from Maxwellian to a more general class of anisotropic electron distributions that allows us to take into account distortions caused by equilibrium current, ECRH and RF current drive effects. The classical problem of the degree of polarization of incoherent Thomson scattered radiation is solved analytically, exactly and without any approximations, for the full range of incident polarizations, scattering angles, and electron thermal motion from non-relativistic to ultra-relativistic. The results are discussed in the context of the possible use of the polarization properties of Thomson scattered light as a method of Te measurement relevant to ITER operational scenarios.

  2. Gas-driven permeation of deuterium through tungsten and tungsten alloys

    DOE PAGES

    Buchenauer, Dean A.; Karnesky, Richard A.; Fang, Zhigang Zak; ...

    2016-03-25

    Here, to address the transport and trapping of hydrogen isotopes, several permeation experiments are being pursued at both Sandia National Laboratories (deuterium gas-driven permeation) and Idaho National Laboratory (gas- and plasma-driven tritium permeation). These experiments are in part a collaboration between the US and Japan to study the performance of tungsten at divertor-relevant temperatures (PHENIX). Here we report on the development of a high-temperature (≤1150 °C) gas-driven permeation cell and initial measurements of deuterium permeation in several types of tungsten: high-purity tungsten foil, ITER-grade tungsten (grains oriented through the membrane), and the dispersoid-strengthened ultra-fine-grain (UFG) tungsten being developed in the US. Experiments were performed at 500–1000 °C and 0.1–1.0 atm D2 pressure. Permeation through ITER-grade tungsten was similar to that in earlier tungsten experiments by Frauenfelder (1968–69) and Zaharakov (1973). Data from the UFG alloy indicate marginally higher permeability (<10×) at lower temperatures, but the permeability converges to that of the ITER-grade tungsten at 1000 °C. The permeation cell uses only ceramic and graphite materials in the hot zone to reduce the possibility of oxidation of the sample membrane. Sealing pressure is applied externally, thereby allowing the temperature of brittle membranes to be raised above the ductile-to-brittle transition temperature.
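    For orientation, gas-driven permeation of a diatomic gas through a metal membrane in the diffusion-limited regime is commonly described by Richardson's law with an Arrhenius-type permeability; this is quoted here only as background, with no values taken from the experiments:

    ```latex
    J \;=\; \frac{\Phi(T)}{d}\,\sqrt{p_{\mathrm{D}_2}},
    \qquad
    \Phi(T) \;=\; \Phi_0\,\exp\!\left(-\frac{E_\Phi}{k_B T}\right),
    ```

    where J is the permeation flux, d the membrane thickness, p_D2 the upstream pressure, and Φ the permeability.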

  3. Contribution of ASDEX Upgrade to disruption studies for ITER

    NASA Astrophysics Data System (ADS)

    Pautasso, G.; Zhang, Y.; Reiter, B.; Giannone, L.; Gruber, O.; Herrmann, A.; Kardaun, O.; Khayrutdinov, K. K.; Lukash, V. E.; Maraschek, M.; Mlynek, A.; Nakamura, Y.; Schneider, W.; Sias, G.; Sugihara, M.; ASDEX Upgrade Team

    2011-10-01

    This paper describes the most recent contributions of ASDEX Upgrade to ITER in the field of disruption studies. (1) The ITER specifications for the halo current magnitude are based on data collected from several tokamaks and summarized in the plot of the toroidal peaking factor versus the maximum halo current fraction. Even if the maximum halo current in ASDEX Upgrade reaches 50% of the plasma current, this maximum lasts only a fraction of a millisecond. (2) Long-lasting asymmetries of the halo current are rare and do not give rise to a large asymmetric component of the mechanical forces on the machine. Unlike at JET, these asymmetries neither lock nor exhibit a stationary harmonic structure. (3) Recent work on disruption prediction has concentrated on the search for a simple function of the most relevant plasma parameters that is able to discriminate between the safe and pre-disruption phases of a discharge. For this purpose, the disruptions of the last four years have been classified into groups, and discriminant analysis is then used to select the most significant variables and to derive the discriminant function. (4) The attainment of the critical density for the collisional suppression of runaway electrons seems to be technically and physically possible on our medium-size tokamak. The CO2 interferometer and the AXUV diagnostic provide information on the highly 3D impurity transport process during the whole plasma quench.
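    The discriminant-analysis step described in item (3) amounts to fitting a single linear function of a few plasma parameters that separates safe from pre-disruption samples; the sketch below shows the idea on synthetic data (the feature choices and numbers are placeholders, not the ASDEX Upgrade database).

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Illustrative sketch only: a linear discriminant separating "safe" from
    # "pre-disruption" samples built from a few plasma parameters. The features and
    # training data are synthetic placeholders.
    rng = np.random.default_rng(0)
    X_safe = rng.normal([0.5, 1.0, 0.3], 0.1, size=(200, 3))   # e.g. li, 1/q95, n/n_G
    X_pre  = rng.normal([0.9, 1.3, 0.8], 0.1, size=(200, 3))
    X = np.vstack([X_safe, X_pre])
    y = np.array([0] * 200 + [1] * 200)                        # 0 = safe, 1 = pre-disruption

    lda = LinearDiscriminantAnalysis().fit(X, y)
    # The discriminant function is a single linear combination of the inputs;
    # a threshold on it flags the pre-disruption phase.
    print(lda.coef_, lda.intercept_)
    print(lda.predict([[0.85, 1.25, 0.75]]))
    ```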

  4. Stakeholder engagement and public policy evaluation: factors contributing to the development and implementation of a regional network for geriatric care.

    PubMed

    Glover, Catherine; Hillier, Loretta M; Gutmanis, Iris

    2007-01-01

    The development and implementation of a regional network that provides universally accessible and consistent services to the frail elderly living in Southwestern Ontario is described. Through continuous stakeholder engagement, clear network goals were identified and operationalized. Stakeholder commitment to the integration of expertise and specialized services, to evidence-based public policy and to iterative evaluation cycles were key to network success.

  5. Army Program Value Added Analysis 90-97 (VAA 90-97)

    DTIC Science & Technology

    1991-08-01

    affordability or duplication of capability. The AHP process appears to hold the greatest possibilities in this regard. 1-11. OTHER KEY FINDINGS a. The...to provide the logical skeleton in which to build an alternative's effectiveness value. The analytical hierarchy process (AHP) is particularly...likely to be, at first cut, very fuzzy. Thus, the issue clarification step is inherently iterative. As the analyst gathers more and more information in
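    For reference, the core AHP computation alluded to in the report is the principal-eigenvector prioritization of a pairwise-comparison matrix, sketched below with an invented 3×3 example (not data from the study):

    ```python
    import numpy as np

    # Sketch of the analytic hierarchy process (AHP) prioritization: priorities are
    # the normalized principal eigenvector of a pairwise-comparison matrix, and the
    # consistency ratio checks the judgments. The example matrix is made up.
    def ahp_priorities(M):
        vals, vecs = np.linalg.eig(M)
        k = np.argmax(vals.real)                      # principal eigenvalue
        w = np.abs(vecs[:, k].real)
        w = w / w.sum()                               # normalized priority vector
        n = M.shape[0]
        ci = (vals[k].real - n) / (n - 1)             # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index (typical values)
        return w, ci / ri                             # priorities, consistency ratio

    M = np.array([[1, 3, 5],
                  [1/3, 1, 2],
                  [1/5, 1/2, 1]], float)              # pairwise comparisons of 3 criteria
    w, cr = ahp_priorities(M)
    print(w, cr)
    ```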

  6. A multi-machine scaling of halo current rotation

    NASA Astrophysics Data System (ADS)

    Myers, C. E.; Eidietis, N. W.; Gerasimov, S. N.; Gerhardt, S. P.; Granetz, R. S.; Hender, T. C.; Pautasso, G.; Contributors, JET

    2018-01-01

    Halo currents generated during unmitigated tokamak disruptions are known to develop rotating asymmetric features that are of great concern to ITER because they can dynamically amplify the mechanical stresses on the machine. This paper presents a multi-machine analysis of these phenomena. More specifically, data from C-Mod, NSTX, ASDEX Upgrade, DIII-D, and JET are used to develop empirical scalings of three key quantities: (1) the machine-specific minimum current quench time, …
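
    Such empirical scalings are typically obtained by power-law regression in log space; a generic sketch is given below with invented variables and data, purely to illustrate the fitting procedure rather than the paper's actual result.

        import numpy as np

        rng = np.random.default_rng(1)
        Ip = rng.uniform(0.5, 15.0, 40)       # plasma current [MA], synthetic
        Bt = rng.uniform(1.0, 5.3, 40)        # toroidal field [T], synthetic
        y = 3.0 * Ip**0.8 * Bt**-0.3 * rng.lognormal(0.0, 0.1, 40)   # fake quantity to be scaled

        A = np.column_stack([np.ones_like(Ip), np.log(Ip), np.log(Bt)])
        (logC, a, b), *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
        print(f"y ~ {np.exp(logC):.2f} * Ip^{a:.2f} * Bt^{b:.2f}")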

  7. Introducing 12 year-olds to elementary particles

    NASA Astrophysics Data System (ADS)

    Wiener, Gerfried J.; Schmeling, Sascha M.; Hopf, Martin

    2017-07-01

    We present a new learning unit, which introduces 12 year-olds to the subatomic structure of matter. The learning unit was iteratively developed as a design-based research project using the technique of probing acceptance. We give a brief overview of the unit’s final version, discuss its key ideas and main concepts, and conclude by highlighting the main implications of our research, which we consider to be most promising for use in the physics classroom.

  8. A multi-machine scaling of halo current rotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, C. E.; Eidietis, N. W.; Gerasimov, S. N.

    Halo currents generated during unmitigated tokamak disruptions are known to develop rotating asymmetric features that are of great concern to ITER because they can dynamically amplify the mechanical stresses on the machine. This paper presents a multi-machine analysis of these phenomena. More specifically, data from C-Mod, NSTX, ASDEX Upgrade, DIII-D, and JET are used to develop empirical scalings of three key quantities: the machine-specific minimum current quench time, …

  9. A multi-machine scaling of halo current rotation

    DOE PAGES

    Myers, C. E.; Eidietis, N. W.; Gerasimov, S. N.; ...

    2017-12-12

    Halo currents generated during unmitigated tokamak disruptions are known to develop rotating asymmetric features that are of great concern to ITER because they can dynamically amplify the mechanical stresses on the machine. This paper presents a multi-machine analysis of these phenomena. More specifically, data from C-Mod, NSTX, ASDEX Upgrade, DIII-D, and JET are used to develop empirical scalings of three key quantities: the machine-specific minimum current quench time, …

  10. 6th international conference on Mars polar science and exploration: Conference summary and five top questions

    NASA Astrophysics Data System (ADS)

    Smith, Isaac B.; Diniega, Serina; Beaty, David W.; Thorsteinsson, Thorsteinn; Becerra, Patricio; Bramson, Ali M.; Clifford, Stephen M.; Hvidberg, Christine S.; Portyankina, Ganna; Piqueux, Sylvain; Spiga, Aymeric; Titus, Timothy N.

    2018-07-01

    We provide a historical context of the International Conference on Mars Polar Science and Exploration and summarize the proceedings from the 6th iteration of this meeting. In particular, we identify five key Mars polar science questions based primarily on presentations and discussions at the conference and discuss the overlap between some of those questions. We briefly describe the seven scientific field trips that were offered at the conference, which greatly supplemented conference discussion of Mars polar processes and landforms. We end with suggestions for measurements, modeling, and laboratory and field work that were highlighted during conference discussion as necessary steps to address key knowledge gaps.

  11. 6th international conference on Mars polar science and exploration: Conference summary and five top questions

    USGS Publications Warehouse

    Smith, Isaac B.; Diniega, Serina; Beaty, David W.; Thorsteinsson, Thorsteinn; Becerra, Patricio; Bramson, Ali; Clifford, Stephen M.; Hvidberg, Christine S.; Portyankina, Ganna; Piqueux, Sylvain; Spiga, Aymeric; Titus, Timothy N.

    2018-01-01

    We provide a historical context of the International Conference on Mars Polar Science and Exploration and summarize the proceedings from the 6th iteration of this meeting. In particular, we identify five key Mars polar science questions based primarily on presentations and discussions at the conference and discuss the overlap between some of those questions. We briefly describe the seven scientific field trips that were offered at the conference, which greatly supplemented conference discussion of Mars polar processes and landforms. We end with suggestions for measurements, modeling, and laboratory and field work that were highlighted during conference discussion as necessary steps to address key knowledge gaps.

  12. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations.

    PubMed

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-09-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement.

  13. Stakeholder assessment of comparative effectiveness research needs for Medicaid populations

    PubMed Central

    Fischer, Michael A; Allen-Coleman, Cora; Farrell, Stephen F; Schneeweiss, Sebastian

    2015-01-01

    Patients, providers and policy-makers rely heavily on comparative effectiveness research (CER) when making complex, real-world medical decisions. In particular, Medicaid providers and policy-makers face unique challenges in decision-making because their program cares for traditionally underserved populations, especially children, pregnant women and people with mental illness. Because these patient populations have generally been underrepresented in research discussions, CER questions for these groups may be understudied. To address this problem, the Agency for Healthcare Research and Quality commissioned our team to work with Medicaid Medical Directors and other stakeholders to identify relevant CER questions. Through an iterative process of topic identification and refinement, we developed relevant, feasible and actionable questions based on issues affecting Medicaid programs nationwide. We describe challenges and limitations and provide recommendations for future stakeholder engagement. PMID:26388438

  14. Dietary supplement labeling and advertising claims: are clinical studies on the full product required?

    PubMed

    Villafranco, John E; Bond, Katie

    2009-01-01

    Whether labeling and advertising claims for multi-ingredient dietary supplements may be based on the testing of individual, key ingredients--rather than the actual product--has caused a good deal of confusion. This confusion stems from the dearth of case law and the open-endedness of Federal Trade Commission (FTC) and Food and Drug Administration (FDA) guidance on this issue. Nevertheless, the relevant regulatory guidance, case law and self-regulatory case law--when assessed together--indicate that the law allows and even protects "key ingredient claims" (i.e., claims based on efficacy testing of key ingredients in the absence of full product testing). This article provides an overview of the relevant substantiation requirements for dietary supplement claims and then reviews FTC's and FDA's guidance on key ingredient claims; relevant case law; use of key ingredient claims in the advertising of other consumer products; and the National Advertising Division of the Better Business Bureau, Inc.'s (NAD's) approach to evaluating key ingredient claims for dietary supplements. This article concludes that key ingredient claims--provided they are presented in a truthful and non-deceptive manner--are permissible, and should be upheld in litigation and cases subject to industry self-regulation. This article further concludes that the NAD's approach to key ingredient claims provides practical guidance for crafting and substantiating dietary supplement key ingredient claims.

  15. Plasma-wall interaction studies within the EUROfusion consortium: progress on plasma-facing components development and qualification

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; Coenen, J. W.; Schwarz-Selinger, T.; Schmid, K.; Kirschner, A.; Hakola, A.; Tabares, F. L.; van der Meiden, H. J.; Mayoral, M.-L.; Reinhart, M.; Tsitrone, E.; Ahlgren, T.; Aints, M.; Airila, M.; Almaviva, S.; Alves, E.; Angot, T.; Anita, V.; Arredondo Parra, R.; Aumayr, F.; Balden, M.; Bauer, J.; Ben Yaala, M.; Berger, B. M.; Bisson, R.; Björkas, C.; Bogdanovic Radovic, I.; Borodin, D.; Bucalossi, J.; Butikova, J.; Butoi, B.; Čadež, I.; Caniello, R.; Caneve, L.; Cartry, G.; Catarino, N.; Čekada, M.; Ciraolo, G.; Ciupinski, L.; Colao, F.; Corre, Y.; Costin, C.; Craciunescu, T.; Cremona, A.; De Angeli, M.; de Castro, A.; Dejarnac, R.; Dellasega, D.; Dinca, P.; Dittmar, T.; Dobrea, C.; Hansen, P.; Drenik, A.; Eich, T.; Elgeti, S.; Falie, D.; Fedorczak, N.; Ferro, Y.; Fornal, T.; Fortuna-Zalesna, E.; Gao, L.; Gasior, P.; Gherendi, M.; Ghezzi, F.; Gosar, Ž.; Greuner, H.; Grigore, E.; Grisolia, C.; Groth, M.; Gruca, M.; Grzonka, J.; Gunn, J. P.; Hassouni, K.; Heinola, K.; Höschen, T.; Huber, S.; Jacob, W.; Jepu, I.; Jiang, X.; Jogi, I.; Kaiser, A.; Karhunen, J.; Kelemen, M.; Köppen, M.; Koslowski, H. R.; Kreter, A.; Kubkowska, M.; Laan, M.; Laguardia, L.; Lahtinen, A.; Lasa, A.; Lazic, V.; Lemahieu, N.; Likonen, J.; Linke, J.; Litnovsky, A.; Linsmeier, Ch.; Loewenhoff, T.; Lungu, C.; Lungu, M.; Maddaluno, G.; Maier, H.; Makkonen, T.; Manhard, A.; Marandet, Y.; Markelj, S.; Marot, L.; Martin, C.; Martin-Rojo, A. B.; Martynova, Y.; Mateus, R.; Matveev, D.; Mayer, M.; Meisl, G.; Mellet, N.; Michau, A.; Miettunen, J.; Möller, S.; Morgan, T. W.; Mougenot, J.; Mozetič, M.; Nemanič, V.; Neu, R.; Nordlund, K.; Oberkofler, M.; Oyarzabal, E.; Panjan, M.; Pardanaud, C.; Paris, P.; Passoni, M.; Pegourie, B.; Pelicon, P.; Petersson, P.; Piip, K.; Pintsuk, G.; Pompilian, G. O.; Popa, G.; Porosnicu, C.; Primc, G.; Probst, M.; Räisänen, J.; Rasinski, M.; Ratynskaia, S.; Reiser, D.; Ricci, D.; Richou, M.; Riesch, J.; Riva, G.; Rosinski, M.; Roubin, P.; Rubel, M.; Ruset, C.; Safi, E.; Sergienko, G.; Siketic, Z.; Sima, A.; Spilker, B.; Stadlmayr, R.; Steudel, I.; Ström, P.; Tadic, T.; Tafalla, D.; Tale, I.; Terentyev, D.; Terra, A.; Tiron, V.; Tiseanu, I.; Tolias, P.; Tskhakaya, D.; Uccello, A.; Unterberg, B.; Uytdenhoven, I.; Vassallo, E.; Vavpetič, P.; Veis, P.; Velicu, I. L.; Vernimmen, J. W. M.; Voitkans, A.; von Toussaint, U.; Weckmann, A.; Wirtz, M.; Založnik, A.; Zaplotnik, R.; PFC contributors, WP

    2017-11-01

    The provision of a particle and power exhaust solution which is compatible with first-wall components and edge-plasma conditions is a key area of present-day fusion research and mandatory for a successful operation of ITER and DEMO. The work package plasma-facing components (WP PFC) within the European fusion programme complements with laboratory experiments, i.e. in linear plasma devices, electron and ion beam loading facilities, the studies performed in toroidally confined magnetic devices, such as JET, ASDEX Upgrade, WEST etc. The connection of both groups is done via common physics and engineering studies, including the qualification and specification of plasma-facing components, and by modelling codes that simulate edge-plasma conditions and the plasma-material interaction as well as the study of fundamental processes. WP PFC addresses these critical points in order to ensure reliable and efficient use of conventional, solid PFCs in ITER (Be and W) and DEMO (W and steel) with respect to heat-load capabilities (transient and steady-state heat and particle loads), lifetime estimates (erosion, material mixing and surface morphology), and safety aspects (fuel retention, fuel removal, material migration and dust formation) particularly for quasi-steady-state conditions. Alternative scenarios and concepts (liquid Sn or Li as PFCs) for DEMO are developed and tested in the event that the conventional solution turns out to not be functional. Here, we present an overview of the activities with an emphasis on a few key results: (i) the observed synergistic effects in particle and heat loading of ITER-grade W with the available set of exposition devices on material properties such as roughness, ductility and microstructure; (ii) the progress in understanding of fuel retention, diffusion and outgassing in different W-based materials, including the impact of damage and impurities like N; and (iii), the preferential sputtering of Fe in EUROFER steel providing an in situ W surface and a potential first-wall solution for DEMO.

  16. Plasma–wall interaction studies within the EUROfusion consortium: progress on plasma-facing components development and qualification

    DOE PAGES

    Brezinsek, S.; Coenen, J. W.; Schwarz-Selinger, T.; ...

    2017-06-14

    The provision of a particle and power exhaust solution which is compatible with first-wall components and edge-plasma conditions is a key area of present-day fusion research and mandatory for a successful operation of ITER and DEMO. The work package plasma-facing components (WP PFC) within the European fusion programme complements with laboratory experiments, i.e. in linear plasma devices, electron and ion beam loading facilities, the studies performed in toroidally confined magnetic devices, such as JET, ASDEX Upgrade, WEST etc. The connection of both groups is done via common physics and engineering studies, including the qualification and specification of plasma-facing components, and by modelling codes that simulate edge-plasma conditions and the plasma–material interaction as well as the study of fundamental processes. WP PFC addresses these critical points in order to ensure reliable and efficient use of conventional, solid PFCs in ITER (Be and W) and DEMO (W and steel) with respect to heat-load capabilities (transient and steady-state heat and particle loads), lifetime estimates (erosion, material mixing and surface morphology), and safety aspects (fuel retention, fuel removal, material migration and dust formation) particularly for quasi-steady-state conditions. Alternative scenarios and concepts (liquid Sn or Li as PFCs) for DEMO are developed and tested in the event that the conventional solution turns out to not be functional. Here, we present an overview of the activities with an emphasis on a few key results: (i) the observed synergistic effects in particle and heat loading of ITER-grade W with the available set of exposition devices on material properties such as roughness, ductility and microstructure; (ii) the progress in understanding of fuel retention, diffusion and outgassing in different W-based materials, including the impact of damage and impurities like N; and (iii), the preferential sputtering of Fe in EUROFER steel providing an in situ W surface and a potential first-wall solution for DEMO.

  17. Plasma–wall interaction studies within the EUROfusion consortium: progress on plasma-facing components development and qualification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brezinsek, S.; Coenen, J. W.; Schwarz-Selinger, T.

    The provision of a particle and power exhaust solution which is compatible with first-wall components and edge-plasma conditions is a key area of present-day fusion research and mandatory for a successful operation of ITER and DEMO. The work package plasma-facing components (WP PFC) within the European fusion programme complements with laboratory experiments, i.e. in linear plasma devices, electron and ion beam loading facilities, the studies performed in toroidally confined magnetic devices, such as JET, ASDEX Upgrade, WEST etc. The connection of both groups is done via common physics and engineering studies, including the qualification and specification of plasma-facing components, and by modelling codes that simulate edge-plasma conditions and the plasma–material interaction as well as the study of fundamental processes. WP PFC addresses these critical points in order to ensure reliable and efficient use of conventional, solid PFCs in ITER (Be and W) and DEMO (W and steel) with respect to heat-load capabilities (transient and steady-state heat and particle loads), lifetime estimates (erosion, material mixing and surface morphology), and safety aspects (fuel retention, fuel removal, material migration and dust formation) particularly for quasi-steady-state conditions. Alternative scenarios and concepts (liquid Sn or Li as PFCs) for DEMO are developed and tested in the event that the conventional solution turns out to not be functional. Here, we present an overview of the activities with an emphasis on a few key results: (i) the observed synergistic effects in particle and heat loading of ITER-grade W with the available set of exposition devices on material properties such as roughness, ductility and microstructure; (ii) the progress in understanding of fuel retention, diffusion and outgassing in different W-based materials, including the impact of damage and impurities like N; and (iii), the preferential sputtering of Fe in EUROFER steel providing an in situ W surface and a potential first-wall solution for DEMO.

  18. A qualitative study of the perspectives of key stakeholders on the delivery of clinical academic training in the East Midlands.

    PubMed

    Green, Ruth H; Evans, Val; MacLeod, Sheona; Barratt, Jonathan

    2018-02-01

    Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. The East Midlands School of Clinical Academic Training. Integrated clinical academic trainees, and clinical and academic educators involved in clinical academic training. The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and processes of the integrated academic pathways, career pathways, supervision and support, the assessment process and the balance between clinical and academic training. Our findings help inform the future development of integrated academic training programmes.

  19. Radiation induced currents in mineral-insulated cables and in pick-up coils: model calculations and experimental verification in the BR1 reactor

    NASA Astrophysics Data System (ADS)

    Vermeeren, Ludo; Leysen, Willem; Brichard, Benoit

    2018-01-01

    Mineral-insulated (MI) cables and Low-Temperature Co-fired Ceramic (LTCC) magnetic pick-up coils are intended to be installed in various positions in ITER. The severe ITER nuclear radiation field is expected to lead to induced currents that could perturb diagnostic measurements. In order to assess this problem and to find mitigation strategies, models were developed for the calculation of neutron- and gamma-induced currents in MI cables and in LTCC coils. The models are based on calculations with the MCNPX code, combined with a dedicated model for the drift of electrons stopped in the insulator. The gamma-induced currents can be easily calculated with a single coupled photon-electron MCNPX calculation, and the prompt neutron-induced currents require only a single coupled neutron-photon-electron MCNPX run. The various delayed neutron contributions require a careful analysis of all possibly relevant neutron-induced reaction paths and a combination of different types of MCNPX calculations. The models were applied to a specific twin-core copper MI cable, to one quad-core copper cable and to silver-conductor LTCC coils (one with silver ground plates in order to reduce the currents and one without such silver ground plates). Calculations were performed for irradiation conditions (neutron and gamma spectra and fluxes) in relevant positions in ITER and in the Y3 irradiation channel of the BR1 reactor at SCK•CEN, in which an irradiation test of these four test devices was carried out afterwards. We will present the basic elements of the models and show the results of all relevant partial currents (gamma- and neutron-induced, prompt and various delayed currents) under BR1-Y3 conditions. Experimental data will be shown and analysed in terms of the respective contributions. The tests were performed at reactor powers of 350 kW and 1 MW, leading to thermal neutron fluxes of 1E11 n/cm2s and 3E11 n/cm2s, respectively. The corresponding total radiation-induced currents range from only 1 to 7 nA, which puts a challenge on the acquisition system and the data analysis. The detailed experimental results will be compared with the corresponding values predicted by the model. The overall agreement between the experimental data and the model predictions is fairly good, with very consistent data for the main delayed current components, while the lower-amplitude delayed currents and some of the prompt contributions show minor discrepancies.
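
    The decomposition into prompt and delayed components can be pictured with a toy model like the one below; the coefficients and single half-life are placeholders chosen only to give nA-scale numbers, and the model is an assumption for illustration, not the MCNPX-based model of the paper.

        import numpy as np

        def induced_current(t, flux, k_prompt=2.0e-21, delayed=((1.0e-21, 150.0),)):
            """Toy cable current [A]: a prompt term proportional to flux plus
            delayed terms that build up and decay with their half-lives [s]."""
            dt = np.diff(t, prepend=t[0])
            total = k_prompt * flux
            for k_d, t_half in delayed:
                lam = np.log(2.0) / t_half
                activity = np.zeros_like(flux)
                for i in range(1, len(t)):
                    decay = np.exp(-lam * dt[i])
                    activity[i] = activity[i - 1] * decay + flux[i] * (1.0 - decay)
                total = total + k_d * activity
            return total

        t = np.linspace(0.0, 3600.0, 3601)
        flux = np.where(t < 1800.0, 1.0e11, 3.0e11)   # step between the two reported flux levels
        print(induced_current(t, flux)[-1], "A near end of irradiation")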

  20. A geometric multigrid preconditioning strategy for DPG system matrices

    DOE PAGES

    Roberts, Nathan V.; Chan, Jesse

    2017-08-23

    Here, the discontinuous Petrov–Galerkin (DPG) methodology of Demkowicz and Gopalakrishnan (2010, 2011) guarantees the optimality of the solution in an energy norm, and provides several features facilitating adaptive schemes. A key question that has not yet been answered in general (though there are some results, e.g. for the Poisson problem) is how best to precondition the DPG system matrix, so that iterative solvers may be used to solve large-scale problems.
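
    The general pattern, independent of the specific multigrid strategy of the paper, is to supply the iterative Krylov solver with a preconditioner; the sketch below uses SciPy's GMRES with an incomplete-LU stand-in preconditioner on a toy matrix.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, gmres, spilu

        n = 200
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # toy 1D Laplacian
        b = np.ones(n)

        ilu = spilu(A)                                   # stand-in preconditioner (not multigrid)
        M = LinearOperator((n, n), matvec=ilu.solve)

        x, info = gmres(A, b, M=M)
        print("convergence flag:", info, " residual:", np.linalg.norm(A @ x - b))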

  1. Commission 41: History of Astronomy

    NASA Astrophysics Data System (ADS)

    Ruggles, Clive; Kochhar, Rajesh; Il-Seong, Nha; Belmonte, Juan; Corbin, Brenda; de Jong, Teije; Norris, Ray; Pigatto, Luisa; Soma, Mitsuru; Sterken, Chris; Xiaochun, Sun

    2012-04-01

    Commission 41 was created at the VIIth IAU General Assembly in Zürich in 1948. From an inauspicious start (Otto Neugebauer was appointed the first President in his absence, but proceeded to express his conviction that "an international organization in the history of astronomy has no positive function ... my only activity during my term of service consisted in iterated attempts to resign"), the Commission quickly assumed a key role in the international development of the history of astronomy as an academic discipline.

  2. A self-adapting heuristic for automatically constructing terrain appreciation exercises

    NASA Astrophysics Data System (ADS)

    Nanda, S.; Lickteig, C. L.; Schaefer, P. S.

    2008-04-01

    Appreciating terrain is a key to success in both symmetric and asymmetric forms of warfare. Training to enable Soldiers to master this vital skill has traditionally required their translocation to a selected number of areas, each affording a desired set of topographical features, albeit with limited breadth of variety. As a result, the use of such methods has proved to be costly and time consuming. To counter this, new computer-aided training applications permit users to rapidly generate and complete training exercises in geo-specific open and urban environments rendered by high-fidelity image generation engines. The latter method is not only cost-efficient, but allows any given exercise and its conditions to be duplicated or systematically varied over time. However, even such computer-aided applications have shortcomings. One of the principal ones is that they usually require all training exercises to be painstakingly constructed by a subject matter expert. Furthermore, exercise difficulty is usually subjectively assessed and frequently ignored thereafter. As a result, such applications lack the ability to grow and adapt to the skill level and learning curve of each trainee. In this paper, we present a heuristic that automatically constructs exercises for identifying key terrain. Each exercise is created and administered in a unique iteration, with its level of difficulty tailored to the trainee's ability based on the correctness of that trainee's responses in prior iterations.
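
    A self-adapting rule of this kind can be as simple as a staircase update on a difficulty level; the sketch below is illustrative only and is not the heuristic used by the authors.

        def next_difficulty(level, last_response_correct, min_level=1, max_level=10):
            """Raise difficulty after a correct answer, lower it after a miss."""
            step = 1 if last_response_correct else -1
            return max(min_level, min(max_level, level + step))

        level = 5
        for correct in (True, True, False, True):
            level = next_difficulty(level, correct)
            print("difficulty of next generated exercise:", level)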

  3. A visual identification key utilizing both gestalt and analytic approaches to identification of Carices present in North America (Plantae, Cyperaceae)

    PubMed Central

    2013-01-01

    Images are a critical part of the identification process because they enable direct, immediate and relatively unmediated comparisons between a specimen being identified and one or more reference specimens. The Carices Interactive Visual Identification Key (CIVIK) is a novel tool for identification of North American Carex species, the largest vascular plant genus in North America, and two less numerous closely-related genera, Cymophyllus and Kobresia. CIVIK incorporates 1288 high-resolution tiled image sets that allow users to zoom in to view minute structures that are crucial at times for identification in these genera. Morphological data are derived from the earlier Carex Interactive Identification Key (CIIK) which in turn used data from the Flora of North America treatments. In this new iteration, images can be viewed in a grid or histogram format, allowing multiple representations of data. In both formats the images are fully zoomable. PMID:24723777

  4. Branding a School-Based Campaign Combining Healthy Eating and Eco-friendliness.

    PubMed

    Folta, Sara C; Koch-Weser, Susan; Tanskey, Lindsay A; Economos, Christina D; Must, Aviva; Whitney, Claire; Wright, Catherine M; Goldberg, Jeanne P

    2018-02-01

    To develop a branding strategy for a campaign to improve the quality of foods children bring from home to school, using a combined healthy eating and eco-friendly approach and for a control campaign focusing solely on nutrition. Formative research was conducted with third- and fourth-grade students in lower- and middle-income schools in Greater Boston and their parents. Phase I included concept development focus groups. Phase II included concept testing focus groups. A thematic analysis approach was used to identify key themes. In phase I, the combined nutrition and eco-friendly messages resonated; child preference emerged as a key factor affecting food from home. In phase II, key themes included fun with food and an element of mystery. Themes were translated into a concept featuring food face characters. Iterative formative research provided information necessary to create a brand that appealed to a specified target audience. Copyright © 2017. Published by Elsevier Inc.

  5. Progress in preparing scenarios for operation of the International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.

    2015-02-01

    The development of operating scenarios is one of the key issues in the research for ITER which aims to achieve a fusion gain (Q) of ˜10, while producing 500 MW of fusion power for ≥300 s. The ITER Research plan proposes a success oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted maintaining H-mode together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes, consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations and (burn) control simulations in ITER scenarios.
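
    Two of the figures of merit quoted above follow from standard definitions; the short calculation below uses nominal ITER parameters (15 MA plasma current, 2.0 m minor radius) that are assumed here for illustration.

        import math

        # Greenwald density: n_GW [10^20 m^-3] = Ip [MA] / (pi * a[m]^2)
        Ip_MA, a_m = 15.0, 2.0
        n_GW = Ip_MA / (math.pi * a_m**2)
        print("n_GW =", round(n_GW, 2), "x 1e20 m^-3")
        print("ne at 0.85*n_GW =", round(0.85 * n_GW, 2), "x 1e20 m^-3")

        # Fusion gain: Q = P_fusion / P_auxiliary
        print("auxiliary heating implied by Q = 10 at 500 MW fusion power:",
              500.0 / 10.0, "MW")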

  6. Land Operations in the Year 2020 (LO2020) (Operations terrestres a l’horizon 2020 (LO2020)).

    DTIC Science & Technology

    1999-03-01

    [Fragmentary table excerpt from the report: characterisation matrices of short-listed key technologies regarding capabilities (CC) and cost, with a relevance legend ranging from no relevance to strong relevance.]

  7. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  8. Geometric Assortative Growth Model for Small-World Networks

    PubMed Central

    2014-01-01

    It has been shown that both humanly constructed and natural networks are often characterized by small-world phenomenon and assortative mixing. In this paper, we propose a geometrically growing model for small-world networks. The model displays both tunable small-world phenomenon and tunable assortativity. We obtain analytical solutions of relevant topological properties such as order, size, degree distribution, degree correlation, clustering, transitivity, and diameter. It is also worth noting that the model can be viewed as a generalization for an iterative construction of Farey graphs. PMID:24578661
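
    For reference, the usual iterative construction of a Farey graph (assumed here to be the one the model generalizes: every edge created at the previous step spawns one new node joined to both of its endpoints) can be written compactly with networkx.

        import networkx as nx

        def farey_graph(steps):
            G = nx.Graph()
            G.add_edge(0, 1)
            new_edges, next_node = [(0, 1)], 2
            for _ in range(steps):
                created = []
                for u, v in new_edges:
                    G.add_edge(next_node, u)
                    G.add_edge(next_node, v)
                    created += [(next_node, u), (next_node, v)]
                    next_node += 1
                new_edges = created
            return G

        G = farey_graph(4)
        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges,",
              "average clustering:", round(nx.average_clustering(G), 3))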

  9. The Development and Validation of a Human Systems Integration (HSI) Program for the Canadian Department of National Defence (DND)

    DTIC Science & Technology

    2008-09-01

    inputs to interface and workspace design, and iterative user testing is not required. However, an effective HSI Program is just as important on a COTS...this phase is the contract, and the various design reviews, tests, and evaluations that occur to ensure that the system meets its goals. 3.3 DBCM ESM...Report(s). • HSI Approvals of Relevant Design Changes. • HSI Test Plans and Reports. • HSI Review Progress and Evaluation Memos and Reports. • HSI

  10. Targeted exploration and analysis of large cross-platform human transcriptomic compendia

    PubMed Central

    Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.

    2016-01-01

    We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801

  11. Lessons Learned in Developing Research Opportunities for Native American Undergraduate Students: The GEMscholars Project

    NASA Astrophysics Data System (ADS)

    Zurn-Birkhimer, S. M.; Filley, T. R.; Kroeger, T. J.

    2008-12-01

    Interventions for the well-documented national deficiency of underrepresented students in higher education have focused primarily on the undergraduate student population with significantly less attention given to issues of diversity within graduate programs. As a result, we have made little progress in transforming faculty composition to better reflect the nation's diversity resulting in relatively few minority mentors joining faculty ranks and schools falling short of the broader representation to create an enriched, diverse academic environment. The GEMscholars (Geology, Environmental Science and Meteorology scholars) Program began in the summer of 2006 with the goal of increasing the number of Native American students pursuing graduate degrees in the geosciences. We drew on research from Native American student education models to address three key themes of (a) mentoring, (b) culturally relevant valuations of geosciences and possible career paths, and (c) connections to community and family. A collaboration between Purdue University, West Lafayette, IN and three institutions in northern Minnesota; Bemidji State University, Red Lake Nation College and Leech Lake Tribal College, is structured to develop research opportunities and a support network for Native American undergraduate students (called GEMscholars) to participate in summer geoscience research projects in their home communities. Research opportunities were specifically chosen to have cultural relevance and yield locally important findings. The GEMscholars work on projects that directly link to their local ecosystems and permit them to engage in long term monitoring and cohesive interaction among each successive year's participants. For example, the GEMscholars have established and now maintain permanent field monitoring plots to assess the impacts of invasive European earthworm activity on forest ecosystem health. The culmination of the summer project is the GEMscholars Symposium at Purdue University where the GEMscholars present their research findings to the academic community. Initial results from formative evaluations have been promising and allowed for two iterations of program modifications. The research team has turned "lessons learned" into best practices for developing research opportunities for Native American undergraduate students. Best practices include (a) developing and maintaining tribal relations, (b) creating projects that are exciting for the students and relevant to the community, and (c) maintaining constructive and positive student contact.

  12. Passive and active adaptive management: Approaches and an example

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management is a framework for resource conservation that promotes iterative learning-based decision making. Yet there remains considerable confusion about what adaptive management entails, and how to actually make resource decisions adaptively. A key but somewhat ambiguous distinction in adaptive management is between active and passive forms of adaptive decision making. The objective of this paper is to illustrate some approaches to active and passive adaptive management with a simple example involving the drawdown of water impoundments on a wildlife refuge. The approaches are illustrated for the drawdown example, and contrasted in terms of objectives, costs, and potential learning rates. Some key challenges to the actual practice of AM are discussed, and tradeoffs between implementation costs and long-term benefits are highlighted. © 2010 Elsevier Ltd.

  13. Mode of action associated with development of hemangiosarcoma in mice given pregabalin and assessment of human relevance.

    PubMed

    Criswell, Kay A; Cook, Jon C; Wojcinski, Zbigniew; Pegg, David; Herman, James; Wesche, David; Giddings, John; Brady, Joseph T; Anderson, Timothy

    2012-07-01

    Pregabalin increased the incidence of hemangiosarcomas in 2-year mouse carcinogenicity studies but was not tumorigenic in rats. Serum bicarbonate increased within 24 h of pregabalin administration in mice and rats. Rats compensated appropriately, but mice developed metabolic alkalosis and increased blood pH. Local tissue hypoxia and increased endothelial cell proliferation were also confirmed in mice alone. The combination of hypoxia and sustained increases in endothelial cell proliferation, angiogenic growth factors, dysregulated erythropoiesis, and macrophage activation is proposed as the key event in the mode of action (MOA) for hemangiosarcoma formation. Hemangiosarcomas occur spontaneously in untreated control mice but occur only rarely in humans. The International Programme on Chemical Safety and International Life Sciences Institute developed a Human Relevance Framework (HRF) analysis whereby the presence or absence of key events can be used to assess human relevance. The HRF combines the MOA with an assessment of biologic plausibility in humans to assess human relevance. This manuscript compares the proposed MOA with the Hill criteria, a component of the HRF, for strength, consistency, specificity, temporality, and dose response, with an assessment of key biomarkers in humans, species differences in response to disease conditions, and the spontaneous incidence of hemangiosarcoma to evaluate human relevance. The lack of key biomarker events in the MOA in rats, monkeys, and humans supports a species-specific process and demonstrates that the tumor findings in mice are not relevant to humans at the clinical dose of pregabalin. Based on this collective dataset, clinical use of pregabalin would not pose an increased risk for hemangiosarcoma to humans.

  14. Integrating knowledge generation with knowledge diffusion and utilization: a case study analysis of the Consortium for Applied Research and Evaluation in Mental Health.

    PubMed

    Vingilis, Evelyn; Hartford, Kathleen; Schrecker, Ted; Mitchell, Beth; Lent, Barbara; Bishop, Joan

    2003-01-01

    Knowledge diffusion and utilization (KDU) have become a key focus in the health research community because of the limited success to date of research findings to inform health policies, programs and services. Yet, evidence indicates that successful KDU is often predicated on the early involvement of potential knowledge users in the conceptualization and conduct of the research and on the development of a "partnership culture". This study describes the integration of KDU theory with practice via a case study analysis of the Consortium for Applied Research and Evaluation in Mental Health (CAREMH). This qualitative study, using a single-case design, included a number of data sources: proposals, meeting minutes, presentations, publications, reports and curricula vitae of CAREMH members. CAREMH has adopted the following operational strategies to increase KDU capacity: 1) viewing research as a means and not as an end; 2) bringing the university and researcher to the community; 3) using participatory research methods; 4) embracing transdisciplinary research and interactions; and 5) using connectors. Examples of the iterative process between researchers and potential knowledge users in their contribution to knowledge generation, diffusion and utilization are provided. This case study supports the importance of early and ongoing involvement of relevant potential knowledge users in research to enhance its utilization potential. It also highlights the need for re-thinking research funding approaches.

  15. Soft X-ray tomography in support of impurity control in tokamaks

    NASA Astrophysics Data System (ADS)

    Mlynar, J.; Mazon, D.; Imrisek, M.; Loffelmann, V.; Malard, P.; Odstrcil, T.; Tomes, M.; Vezinet, D.; Weinzettl, V.

    2016-10-01

    This contribution reviews an important example of current developments in diagnostic systems and data analysis tools aimed at improved understanding and control of transport processes in magnetically confined high temperature plasmas. The choice of tungsten for the plasma-facing components of ITER, and probably also DEMO, means that impurity control in fusion plasmas is now a crucial challenge. Soft X-ray (SXR) diagnostic systems serve as a key sensor for experimental studies of plasma impurity transport, with a clear prospect of its control via actuators based mainly on plasma heating systems. The SXR diagnostic systems typically feature high temporal resolution but limited spatial resolution due to access restrictions. In order to reconstruct the spatial distribution of the SXR radiation from line-integrated measurements, appropriate tomographic methods have been developed and validated, while novel numerical methods relevant for real-time control have been proposed. Furthermore, in order to identify the main contributors to the SXR plasma radiation, at least partial control over the spectral sensitivity range of the detectors would be beneficial, which motivates the development of novel SXR diagnostic methods. Last, but not least, semiconductor photosensitive elements cannot survive the harsh conditions of future fusion reactors due to radiation damage, which calls for the development of radiation-hard SXR detectors. Present research in this field is exemplified by recent results from the tokamaks COMPASS, TORE SUPRA and the Joint European Torus JET. Further planning is outlined.
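
    At its core the reconstruction problem is the inversion of line-integrated signals for a local emissivity; the sketch below shows a generic Tikhonov-regularised least-squares inversion with a random placeholder geometry matrix, not an actual SXR camera geometry.

        import numpy as np

        rng = np.random.default_rng(2)
        n_chords, n_pixels = 40, 100
        T = rng.random((n_chords, n_pixels))                 # placeholder chord/pixel geometry matrix
        g_true = rng.random(n_pixels)                        # "true" emissivity
        f = T @ g_true + 0.01 * rng.normal(size=n_chords)    # noisy line-integrated signals

        lam = 1.0e-2                                         # regularisation weight
        A = np.vstack([T, np.sqrt(lam) * np.eye(n_pixels)])
        b = np.concatenate([f, np.zeros(n_pixels)])
        g_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("relative reconstruction error:",
              np.linalg.norm(g_rec - g_true) / np.linalg.norm(g_true))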

  16. Assessing research activity and capacity of community-based organizations: development and pilot testing of an instrument.

    PubMed

    Humphries, Debbie L; Carroll-Scott, Amy; Mitchell, Leif; Tian, Terry; Choudhury, Shonali; Fiellin, David A

    2014-01-01

    Although awareness of the importance of the research capacity of community-based organizations (CBOs) is growing, a uniform framework of the research capacity domains within CBOs has not yet been developed. To develop a framework and instrument (the Community REsearch Activity assessment Tool [CREAT]) for assessing the research activity and capacity of CBOs that incorporates awareness of the different data collection and analysis priorities of CBOs. We conducted a review of existing tools for assessing research capacity to identify key capacity domains. Instrument items were developed through an iterative process with CBO representatives and community researchers. The CREAT was then pilot tested with 30 CBOs. The four primary domains of the CREAT framework include 1) organizational support for research, 2) generalizable experiences, 3) research specific experiences, and 4) funding. Organizations reported a high prevalence of activities in the research-specific experiences domain, including conducting literature reviews (70%), use of research terminology (83%), and primary data collection (100%). Respondents see research findings as important to improve program and service delivery, and to seek funds for new programs and services. Funders, board members, and policymakers are the most important dissemination audiences. The work reported herein advances the field of CBO research capacity by developing a systematic framework for assessing research activity and capacity relevant to the work of CBOs, and by developing and piloting an instrument to assess activity in these domains.

  17. The Determination of Relevant Goals and Criteria Used to Select an Automated Patient Care Information System

    PubMed Central

    Chocholik, Joan K.; Bouchard, Susan E.; Tan, Joseph K. H.; Ostrow, David N.

    1999-01-01

    Objectives: To determine the relevant weighted goals and criteria for use in the selection of an automated patient care information system (PCIS) using a modified Delphi technique to achieve consensus. Design: A three-phase, six-round modified Delphi process was implemented by a ten-member PCIS selection task force. The first phase consisted of an exploratory round. It was followed by the second phase, of two rounds, to determine the selection goals and finally the third phase, of three rounds, to finalize the selection criteria. Results: Consensus on the goals and criteria for selecting a PCIS was measured during the Delphi process by reviewing the mean and standard deviation of the previous round's responses. After the study was completed, the results were analyzed using a limits-of-agreement indicator that showed strong agreement of each individual's responses between each of the goal determination rounds. Further analysis for variability in the group's response showed a significant movement to consensus after the first goal-determination iteration, with consensus reached on all goals by the end of the second iteration. Conclusion: The results indicated that the relevant weighted goals and criteria used to make the final decision for an automated PCIS were developed as a result of strong agreement among members of the PCIS selection task force. It is therefore recognized that the use of the Delphi process was beneficial in achieving consensus among clinical and nonclinical members in a relatively short time while avoiding a decision based on political biases and the “groupthink” of traditional committee meetings. The results suggest that improvements could be made in lessening the number of rounds by having information available through side conversations, by having other statistical indicators besides the mean and standard deviation available between rounds, and by having a content expert address questions between rounds. PMID:10332655
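
    The two indicators mentioned (per-round mean and standard deviation as the consensus check, and a limits-of-agreement comparison between rounds) are straightforward to compute; the ratings below are invented for illustration.

        import numpy as np

        round1 = np.array([3.0, 4.0, 2.5, 5.0, 4.0, 3.5])   # one member's ratings, invented
        round2 = np.array([3.5, 4.0, 3.0, 4.5, 4.0, 3.5])

        for name, scores in (("round 1", round1), ("round 2", round2)):
            print(name, "mean:", scores.mean(), "sd:", round(scores.std(ddof=1), 2))

        diff = round2 - round1
        half_width = 1.96 * diff.std(ddof=1)
        print("limits of agreement:", (round(diff.mean() - half_width, 2),
                                       round(diff.mean() + half_width, 2)))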

  18. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    DOE PAGES

    Safi, E.; Valles, G.; Lasa, A.; ...

    2017-03-27

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next-generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor-relevant conditions is vital to predict the ITER plasma-facing component lifetime and viability. To date, a definitive prediction of Be erosion, focusing on the effect of two such parameters (surface temperature and D surface content), has not been achieved. In this paper, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as investigating parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration precisely by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. We finally use MD simulations to investigate the effect of temperature (300–800 K) and impact energy (10–200 eV) on the erosion of Be due to D plasma irradiation. The results reveal a strong dependency of the D surface content on temperature. Increasing the surface temperature leads to a lower D concentration at the surface, because D atoms tend to avoid being accommodated in vacancies and, after de-trapping from impurity sites, diffuse quickly toward the bulk. In the next step, total and molecular Be erosion yields due to D irradiation are analyzed using MD simulations. The results show a strong dependency of the erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules, due to the lower D surface concentrations at higher temperatures. Finally, these findings correlate well with different experiments performed at the JET and PISCES-B devices.
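
    The OKMC step underlying such simulations is the residence-time (Gillespie-type) algorithm; the sketch below shows a single generic step with placeholder events and migration barriers, not the actual Be/D parameterisation of the paper.

        import math, random

        def okmc_step(events, temperature_K, attempt_freq=1.0e13, kB=8.617e-5):
            """events: list of (name, barrier_eV). Returns (chosen event, time step [s])."""
            rates = [attempt_freq * math.exp(-Em / (kB * temperature_K)) for _, Em in events]
            total = sum(rates)
            r = random.random() * total            # pick an event proportionally to its rate
            acc, chosen = 0.0, events[-1][0]
            for (name, _), rate in zip(events, rates):
                acc += rate
                if r <= acc:
                    chosen = name
                    break
            dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
            return chosen, dt

        events = [("D_interstitial_hop", 0.4), ("D_detrap_from_vacancy", 1.0)]   # placeholder barriers
        print(okmc_step(events, 600.0))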

  19. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safi, E.; Valles, G.; Lasa, A.

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next-generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor-relevant conditions is vital to predict the ITER plasma-facing component lifetime and viability. To date, a definitive prediction of Be erosion, focusing on the effect of two such parameters (surface temperature and D surface content), has not been achieved. In this paper, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as investigating parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration precisely by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. We finally use MD simulations to investigate the effect of temperature (300–800 K) and impact energy (10–200 eV) on the erosion of Be due to D plasma irradiation. The results reveal a strong dependency of the D surface content on temperature. Increasing the surface temperature leads to a lower D concentration at the surface, because D atoms tend to avoid being accommodated in vacancies and, after de-trapping from impurity sites, diffuse quickly toward the bulk. In the next step, total and molecular Be erosion yields due to D irradiation are analyzed using MD simulations. The results show a strong dependency of the erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules, due to the lower D surface concentrations at higher temperatures. Finally, these findings correlate well with different experiments performed at the JET and PISCES-B devices.

  20. A research agenda for gastrointestinal and endoscopic surgery.

    PubMed

    Urbach, D R; Horvath, K D; Baxter, N N; Jobe, B A; Madan, A K; Pryor, A D; Khaitan, L; Torquati, A; Brower, S T; Trus, T L; Schwaitzberg, S

    2007-09-01

    Development of a research agenda may help to inform researchers and research-granting agencies about the key research gaps in an area of research and clinical care. The authors sought to develop a list of research questions for which further research was likely to have a major impact on clinical care in the area of gastrointestinal and endoscopic surgery. A formal group process was used to conduct an iterative, anonymous Web-based survey of an expert panel including the general membership of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES). In round 1, research questions were solicited, which were categorized, collapsed, and rewritten in a common format. In round 2, the expert panel rated all the questions using a priority scale ranging from 1 (lowest) to 5 (highest). In round 3, the panel re-rated the 40 questions with the highest mean priority score in round 2. A total of 241 respondents to round 1 submitted 382 questions, which were reduced by a review panel to 106 unique questions encompassing 33 topics in gastrointestinal and endoscopic surgery. In the two successive rounds, respectively, 397 and 385 respondents ranked the questions by priority, then re-ranked the 40 questions with the highest mean priority score. High-priority questions related to antireflux surgery, the oncologic and immune effects of minimally invasive surgery, and morbid obesity. The question with the highest mean priority ranking was: "What is the best treatment (antireflux surgery, endoluminal therapy, or medication) for GERD?" The second highest-ranked question was: "Does minimally invasive surgery improve oncologic outcomes as compared with open surgery?" Other questions covered a broad range of research areas including clinical research, basic science research, education and evaluation, outcomes measurement, and health technology assessment. An iterative, anonymous group survey process was used to develop a research agenda for gastrointestinal and endoscopic surgery consisting of the 40 most important research questions in the field. This research agenda can be used by researchers and research-granting agencies to focus research activity in the areas most likely to have an impact on clinical care, and to appraise the relevance of scientific contributions.

  1. Increasing the frequency of physical activity very brief advice for cancer patients. Development of an intervention using the behaviour change wheel.

    PubMed

    Webb, J; Foster, J; Poulter, E

    2016-04-01

    Being physically active has multiple benefits for cancer patients. Despite this, only 23% are active at the level of the national recommendations and 31% are completely inactive. A cancer diagnosis offers a teachable moment in which patients might be more receptive to lifestyle changes. Nurses are well placed to offer physical activity advice; however, only 9% of UK nurses involved in cancer care talk to all cancer patients about physical activity. A change in the behaviour of nurses is needed to routinely deliver physical activity advice to cancer patients. As recommended by the Medical Research Council, behavioural change interventions should be evidence-based and use a relevant and coherent theoretical framework to stand the best chance of success. This paper presents a case study on the development of an intervention to improve the frequency of delivery of very brief advice (VBA) on physical activity by nurses to cancer patients, using the Behaviour Change Wheel (BCW). The eight composite steps outlined by the BCW guided the intervention development process. An iterative approach was taken involving key stakeholders (n = 45), with four iterations completed in total. This was not defined a priori but emerged during the development process. A 60 min training intervention, delivered in either a face-to-face or online setting, with follow-up at eight weeks, was designed to improve the capability, opportunity and motivation of nurses to deliver VBA on physical activity to people living with cancer. This intervention incorporates seven behaviour change techniques: goal setting coupled with commitment; instructions on how to perform the behaviour; salience of the consequences of delivering VBA; and a demonstration of how to give VBA, all delivered via a credible source, with objects added to the environment to support behavioural change. The BCW is a time-consuming process; however, it provides a useful and comprehensive framework for intervention development and greater control over intervention replication and evaluation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. What makes a sustainability tool valuable, practical and useful in real-world healthcare practice? A mixed-methods study on the development of the Long Term Success Tool in Northwest London.

    PubMed

    Lennox, Laura; Doyle, Cathal; Reed, Julie E; Bell, Derek

    2017-09-24

    Although improvement initiatives show benefits to patient care, they often fail to be sustained. Models and frameworks exist to address this challenge, but issues with design, clarity and usability have been barriers to use in healthcare settings. This work aimed to collaborate with stakeholders to develop a sustainability tool relevant to people in healthcare settings and practical for use in improvement initiatives. Tool development was conducted in six stages. A scoping literature review, group discussions and a stakeholder engagement event explored literature findings and their resonance with stakeholders in healthcare settings. Interviews, small-scale trialling and piloting explored the design and tested the practicality of the tool in improvement initiatives. National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). CLAHRC NWL improvement initiative teams and staff. The iterative design process and engagement of stakeholders informed the articulation of the sustainability factors identified from the literature and guided tool design for practical application. Key iterations of factors and tool design are discussed. From the development process, the Long Term Success Tool (LTST) has been designed. The Tool supports those implementing improvements to reflect on 12 sustainability factors, identify risks and increase the chances of achieving sustainability over time. The Tool is designed to provide a platform for improvement teams to share their own views on sustainability as well as to learn about the different views held within their team, prompting discussion and action. The development of the LTST has reinforced the importance of working with stakeholders to design strategies which respond to their needs and preferences and can practically be implemented in real-world settings. Further research is required to study the use and effectiveness of the tool in practice and assess engagement with the method over time. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    NASA Astrophysics Data System (ADS)

    Safi, E.; Valles, G.; Lasa, A.; Nordlund, K.

    2017-05-01

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor relevant conditions is vital to predict the ITER plasma-facing component lifetime and viability. To date, a reliable prediction of Be erosion, in particular of the effect of two such parameters, surface temperature and D surface content, has not been achieved. In this work, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as to investigate parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration precisely by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. We finally use MD simulations to investigate the effect of temperature (300-800 K) and impact energy (10-200 eV) on the erosion of Be due to D plasma irradiation. The results reveal a strong dependency of the D surface content on temperature. Increasing the surface temperature leads to a lower D concentration at the surface, because D atoms are less likely to be accommodated in vacancies and, after de-trapping from impurity sites, diffuse rapidly toward the bulk. In the next step, total and molecular Be erosion yields due to D irradiation are analyzed using MD simulations. The results show a strong dependency of erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules due to the lower D surface concentrations at higher temperatures. These findings correlate well with different experiments performed at the JET and PISCES-B devices.
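
    A heavily simplified sketch of the OKMC-to-MD hand-off described above: a D surface concentration, tabulated against surface temperature and impact energy as an OKMC run would provide, is interpolated and used to decide how many D atoms to seed into the MD starting configuration. The tabulated values, grid and interpolation scheme are illustrative assumptions, not the published data.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical OKMC output: steady-state D surface concentration (D/Be ratio)
    # on a (temperature, impact-energy) grid; real values would come from the
    # object kinetic Monte Carlo simulations described in the abstract.
    T_grid = np.array([300., 400., 500., 600., 700., 800.])   # K
    E_grid = np.array([10., 20., 50., 100., 200.])            # eV
    c_D = np.array([[0.30, 0.32, 0.35, 0.37, 0.38],
                    [0.22, 0.24, 0.27, 0.29, 0.31],
                    [0.15, 0.17, 0.19, 0.21, 0.23],
                    [0.09, 0.10, 0.12, 0.14, 0.16],
                    [0.05, 0.06, 0.07, 0.09, 0.10],
                    [0.02, 0.03, 0.04, 0.05, 0.06]])          # illustrative numbers only

    conc_of = RegularGridInterpolator((T_grid, E_grid), c_D)

    def seed_deuterium(n_surface_be, temperature, impact_energy):
        """Number of D atoms to place in the MD surface layers so that the
        starting configuration matches the interpolated D/Be ratio."""
        ratio = float(conc_of([[temperature, impact_energy]])[0])
        return int(round(ratio * n_surface_be))

    print(seed_deuterium(n_surface_be=2000, temperature=500.0, impact_energy=50.0))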

  4. Particle-in-cell simulations of the plasma interaction with poloidal gaps in the ITER divertor outer vertical target

    NASA Astrophysics Data System (ADS)

    Komm, M.; Gunn, J. P.; Dejarnac, R.; Pánek, R.; Pitts, R. A.; Podolník, A.

    2017-12-01

    Predictive modelling of the heat flux distribution on ITER tungsten divertor monoblocks is a critical input to the design choice for component front surface shaping and for the understanding of power loading in the case of small-scale exposed edges. This paper presents results of particle-in-cell (PIC) simulations of plasma interaction in the vicinity of poloidal gaps between monoblocks in the high heat flux areas of the ITER outer vertical target. The main objective of the simulations is to assess the role of local electric fields which are accounted for in a related study using the ion orbit approach including only the Lorentz force (Gunn et al 2017 Nucl. Fusion 57 046025). Results of the PIC simulations demonstrate that even if in some cases the electric field plays a distinct role in determining the precise heat flux distribution, when heat diffusion into the bulk material is taken into account, the thermal responses calculated using the PIC or ion orbit approaches are very similar. This is a consequence of the small spatial scales over which the ion orbits distribute the power. The key result of this study is that the computationally much less intensive ion orbit approximation can be used with confidence in monoblock shaping design studies, thus validating the approach used in Gunn et al (2017 Nucl. Fusion 57 046025).
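
    The 'ion orbit' approximation referenced above follows test ions under the Lorentz force alone; a standard way to integrate such orbits is the Boris scheme, sketched below for a deuteron in uniform toy fields. The field values, time step and initial velocity are arbitrary illustrations and none of the monoblock gap geometry is modelled.

    import numpy as np

    def boris_push(x, v, q_over_m, E, B, dt, n_steps):
        """Boris scheme for a charged particle in given E and B fields
        (E and B are callables of position, SI units throughout)."""
        for _ in range(n_steps):
            e, b = E(x), B(x)
            v_minus = v + 0.5 * q_over_m * e * dt        # first half electric kick
            t = 0.5 * q_over_m * b * dt                  # magnetic rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v = v_minus + np.cross(v_prime, s)           # rotated velocity
            v = v + 0.5 * q_over_m * e * dt              # second half electric kick
            x = x + v * dt
        return x, v

    # Toy deuteron orbit: 5 T field along z, no electric field.
    qm = 1.602e-19 / (2 * 1.673e-27)
    x1, v1 = boris_push(np.zeros(3), np.array([1e5, 0.0, 1e4]), qm,
                        E=lambda x: np.zeros(3),
                        B=lambda x: np.array([0.0, 0.0, 5.0]),
                        dt=1e-10, n_steps=1000)
    print(np.round(x1, 5), np.round(v1, 1))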

  5. Robust Decision Making Approach to Managing Water Resource Risks (Invited)

    NASA Astrophysics Data System (ADS)

    Lempert, R.

    2010-12-01

    The IPCC and US National Academies of Science have recommended iterative risk management as the best approach for water management and many other types of climate-related decisions. Such an approach does not rely on a single set of judgments at any one time but rather actively updates and refines strategies as new information emerges. In addition, the approach emphasizes that a portfolio of different types of responses, rather than any single action, often provides the best means to manage uncertainty. Implementing an iterative risk management approach can however prove difficult in actual decision support applications. This talk will suggest that robust decision making (RDM) provides a particularly useful set of quantitative methods for implementing iterative risk management. This RDM approach is currently being used in a wide variety of water management applications. RDM employs three key concepts that differentiate it from most types of probabilistic risk analysis: 1) characterizing uncertainty with multiple views of the future (which can include sets of probability distributions) rather than a single probabilistic best-estimate, 2) employing a robustness rather than an optimality criterion to assess alternative policies, and 3) organizing the analysis with a vulnerability and response option framework, rather than a predict-then-act framework. This talk will summarize the RDM approach, describe its use in several different types of water management applications, and compare the results to those obtained with other methods.
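
    The robustness-over-optimality criterion mentioned in point 2 can be made concrete with a minimax-regret calculation: for each strategy, compute its shortfall relative to the best strategy in every future, then choose the strategy with the smallest worst-case regret. The sketch below uses hypothetical performance numbers; it is only one of several robustness metrics used in RDM practice.

    import numpy as np

    def robust_choice(performance):
        """Pick the strategy with the smallest worst-case regret across futures.

        performance[i, j] = outcome (higher is better) of strategy i in future j.
        Regret in a future = shortfall relative to the best strategy there.
        """
        best_per_future = performance.max(axis=0)
        regret = best_per_future - performance        # (n_strategies, n_futures)
        worst_regret = regret.max(axis=1)             # worst case over futures
        return int(np.argmin(worst_regret)), worst_regret

    # Hypothetical reliability of water supply for 3 strategies in 4 futures.
    perf = np.array([[0.95, 0.60, 0.70, 0.90],
                     [0.85, 0.80, 0.75, 0.80],
                     [0.70, 0.75, 0.85, 0.70]])
    idx, regrets = robust_choice(perf)
    print("robust strategy:", idx, "worst-case regrets:", np.round(regrets, 2))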

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  7. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    NASA Technical Reports Server (NTRS)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Mechanical placement collaboration reduced potential electromagnetic interference (EMI). Through application of newly selected electrical components and thermal analysis data, a total electronic chassis redesign was accomplished. Use of an innovative forced convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  8. Spacecraft Attitude Maneuver Planning Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2004-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
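
    A minimal sketch of the penalty-function formulation described above: a candidate slew is encoded as a handful of intermediate attitude waypoints, violations of a single hypothetical bright-object keep-out cone are folded into the cost, and a simple elitist genetic algorithm searches the space. The two-angle attitude parameterisation, the GA operators and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N_WAY, POP, GEN = 6, 80, 300            # waypoints, population size, generations
    START = np.array([0.0, 0.0])            # (azimuth, elevation) in radians
    GOAL = np.array([1.5, 0.6])
    SUN = np.array([0.8, 0.3])              # hypothetical bright-object direction
    EXCL = 0.3                              # keep-out radius (rad)

    def cost(genome):
        """Slew length plus a heavy penalty for each waypoint inside the keep-out cone."""
        way = genome.reshape(N_WAY, 2)
        path = np.vstack([START, way, GOAL])
        length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
        violations = np.sum(np.linalg.norm(way - SUN, axis=1) < EXCL)
        return length + 10.0 * violations

    pop = rng.uniform(-0.2, 1.7, size=(POP, N_WAY * 2))
    for _ in range(GEN):
        fit = np.array([cost(g) for g in pop])
        parents = pop[np.argsort(fit)[:POP // 2]]                    # elitist selection
        pairs = rng.integers(0, len(parents), size=(POP - len(parents), 2))
        mix = rng.random((len(pairs), N_WAY * 2)) < 0.5              # uniform crossover
        kids = np.where(mix, parents[pairs[:, 0]], parents[pairs[:, 1]])
        kids += rng.normal(0.0, 0.03, kids.shape)                    # Gaussian mutation
        pop = np.vstack([parents, kids])

    print("best cost:", round(min(cost(g) for g in pop), 3))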

  9. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    NASA Astrophysics Data System (ADS)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r, E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
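
    A toy one-dimensional analogue of the reconstruction loop described above: a multiplicative maximum-likelihood (ML-EM) update is alternated with soft-thresholding of one-level Haar detail coefficients, standing in for the framelet sparsification step. The random system matrix, the noise model and the single-level Haar frame are simplifications of the paper's polychromatic, spectral formulation.

    import numpy as np

    def haar(x):
        """One-level Haar analysis: (approximation, detail)."""
        return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

    def ihaar(a, d):
        out = np.empty(2 * a.size)
        out[0::2], out[1::2] = a + d, a - d
        return out

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def mlem_framelet(A, y, n_iter=50, thresh=0.01):
        """ML-EM multiplicative update followed by shrinkage of Haar detail
        coefficients at every iteration (the sparsifying step)."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])                 # sensitivity term A^T 1
        for _ in range(n_iter):
            x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
            a, d = haar(x)
            x = np.maximum(ihaar(a, soft(d, thresh)), 0.0)
        return x

    rng = np.random.default_rng(0)
    x_true = np.repeat([0.2, 1.0, 0.4, 0.8], 8)          # piecewise-constant object
    A = rng.random((64, x_true.size))                    # toy system matrix
    y = rng.poisson(A @ x_true * 50) / 50.0              # noisy measurements
    print(np.round(mlem_framelet(A, y)[:8], 2))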

  10. Understanding and predicting the dynamics of tokamak discharges during startup and rampdown

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, G. L.; Politzer, P. A.; Humphreys, D. A.

    Understanding the dynamics of plasma startup and termination is important for present tokamaks and for predictive modeling of future burning plasma devices such as ITER. We report on experiments in the DIII-D tokamak that explore the plasma startup and rampdown phases and on the benchmarking of transport models. Key issues have been examined such as plasma initiation and burnthrough with limited inductive voltage and achieving flattop and maximum burn within the technical limits of coil systems and their actuators while maintaining the desired q profile. Successful rampdown requires scenarios consistent with technical limits, including controlled H-L transitions, while avoiding vertical instabilities, additional Ohmic transformer flux consumption, and density limit disruptions. Discharges were typically initiated with an inductive electric field typical of ITER, 0.3 V/m, most with second harmonic electron cyclotron assist. A fast framing camera was used during breakdown and burnthrough of low Z impurity charge states to study the formation physics. An improved 'large aperture' ITER startup scenario was developed, and aperture reduction in rampdown was found to be essential to avoid instabilities. Current evolution using neoclassical conductivity in the CORSICA code agrees with rampup experiments, but the prediction of the temperature and internal inductance evolution using the Coppi-Tang model for electron energy transport is not yet accurate enough to allow extrapolation to future devices.

  11. RACLETTE: a model for evaluating the thermal response of plasma facing components to slow high power plasma transients. Part II: Analysis of ITER plasma facing components

    NASA Astrophysics Data System (ADS)

    Federici, Gianfranco; Raffray, A. René

    1997-04-01

    The transient thermal model RACLETTE (acronym of Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, Be, W, and CFCs for the divertor plates) and including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications on the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
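
    The kind of slow surface-heating transient analysed by RACLETTE can be illustrated with a one-dimensional explicit conduction sketch: a constant heat flux is applied to the plasma-facing side of an actively cooled tile for the duration of the transient. The material properties (roughly beryllium-like), tile thickness, coolant temperature and heat transfer coefficient are assumed values, and the sketch ignores vaporisation, melting, vapour shielding and critical heat flux, all of which the actual code treats.

    import numpy as np

    def surface_temperature(q_surf, t_pulse, thickness=0.01, nx=50,
                            k=180.0, rho=1850.0, cp=1900.0, t_cool=400.0, h=30e3):
        """Explicit 1D conduction: heat flux q_surf [W/m2] on the front face for
        t_pulse seconds, convective cooling (coefficient h) on the back face."""
        dx = thickness / (nx - 1)
        alpha = k / (rho * cp)
        dt = 0.4 * dx**2 / alpha                        # explicit stability limit
        T = np.full(nx, t_cool)
        for _ in range(int(t_pulse / dt)):
            Tn = T.copy()
            Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
            Tn[0] = Tn[1] + q_surf * dx / k                         # imposed surface flux
            Tn[-1] = (k / dx * Tn[-2] + h * t_cool) / (k / dx + h)  # convective back face
            T = Tn
        return T[0]

    # 5 MW/m2 for 1 s on a 10 mm tile (an illustrative transient, not an ITER case).
    print(round(surface_temperature(5e6, 1.0), 1), "K")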

  12. Feasibility of a far infrared laser based polarimeter diagnostic system for the JT-60SA fusion experiment

    NASA Astrophysics Data System (ADS)

    Boboc, A.; Gil, C.; Terranova, D.; Orsitto, F. P.; Soare, S.; Lotte, P.; Sozzi, C.; Imazawa, R.; Kubo, H.

    2018-07-01

    JT-60SA is the large Tokamak device that is being built in Japan under the Broader Approach Satellite Tokamak Programme and the Japanese National Programme and will operate as a satellite machine for ITER. The main goal of the JT-60SA Programme is to provide valuable information for the ITER steady-state scenario and for the design of DEMO, where the real-time control of the safety factor profile is very important, in connection with both MHD stability and plasma confinement. It has been demonstrated in this work that to this end polarimetry measurements are necessary, in particular in order to reconstruct the safety factor profile in reversed shear scenarios. In this paper we present the main steps of a conceptual feasibility study of a multi-channel polarimeter diagnostic and the resulting optimised geometry. In this study, magnetic scenario modelling, a realistic CAD-driven design and long-term operation requirements, rarely even considered at this stage, have been considered. It is shown that a far infrared polarimeter system, with a laser operating at a wavelength of 194.7 μm and up to twelve channels can be envisaged for JT-60SA. The top requirements can be attained, i.e., that the polarimeter, together with other diagnostic measurements, should provide q-profile reconstruction with an accuracy of 10% for the entire plasma cycle and suitable time resolution for real-time applications, in particular in high density and ITER-relevant plasma scenarios.

  13. Physics design of the injector source for ITER neutral beam injector (invited).

    PubMed

    Antoni, V; Agostinetti, P; Aprile, D; Cavenago, M; Chitarin, G; Fonnesu, N; Marconato, N; Pilan, N; Sartori, E; Serianni, G; Veltri, P

    2014-02-01

    Two Neutral Beam Injectors (NBI) are foreseen to provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires the realization of a full scale prototype, to be tested and optimized at the Test Facility under construction in Padova (Italy). The beam source is the key component of the system and the design of the multi-grid accelerator is the goal of a multi-national collaborative effort. In particular, beam steering is a challenging aspect, being a tradeoff between the requirements of the optics and real grids with finite thickness and thermo-mechanical constraints due to the cooling needs and the presence of permanent magnets. In the paper, a review of the accelerator physics and an overview of the whole R&D physics program aimed at the development of the injector source are presented.

  14. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
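
    A minimal sketch of the freeze idea: each batch of objects is placed by a force-directed relaxation while all previously laid-out objects keep their positions, so force updates run only over the new set. The force model, constants and the two-batch example (clients first, then URLs) are illustrative, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(2)

    def place_batch(frozen_pos, new_ids, edges, n_steps=200, k=0.1):
        """Force-directed placement of a new batch of nodes; nodes already in
        frozen_pos are never moved, only the new nodes are updated."""
        pos = dict(frozen_pos)
        for n in new_ids:
            pos[n] = rng.random(2)                       # random initial position
        for _ in range(n_steps):
            for n in new_ids:
                force = np.zeros(2)
                for a, b in edges:                       # attraction along edges
                    if n in (a, b):
                        other = pos[b if a == n else a]
                        force += k * (other - pos[n])
                for m, p in pos.items():                 # repulsion from every node
                    if m != n:
                        d = pos[n] - p
                        force += 0.01 * d / (np.linalg.norm(d) ** 2 + 1e-6)
                pos[n] = pos[n] + 0.1 * force
        return pos

    # Batch 1: clients; batch 2: URLs attached to the now-frozen client layout.
    layout = place_batch({}, ["c1", "c2"], edges=[("c1", "c2")])
    layout = place_batch(layout, ["u1", "u2"],
                         edges=[("c1", "u1"), ("c2", "u2"), ("c1", "u2")])
    print({name: np.round(p, 2) for name, p in layout.items()})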

  15. Export Control Requirements for Tritium Processing Design and R&D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollis, William Kirk; Maynard, Sarah-Jane Wadsworth

    This document will address requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970’s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document will identify the key reference documents and the specific sections within them related to tritium research. Guidance as to the application of these sections will be discussed with specific attention to publications and work with foreign nationals.

  16. Color and Contour Based Identification of Stem of Coconut Bunch

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin

    2017-08-01

    Vision is the key component of Artificial Intelligence and Automated Robotics. Sensors or cameras are the sight organs for a robot. Only through these are they able to locate themselves or identify the shape of a regular or an irregular object. This paper presents a method for identification of an object based on color and contour recognition using a camera through digital image processing techniques for robotic applications. In order to identify the contour, a shape matching technique is used, which takes the input data from the database provided, and uses it to identify the contour by checking for a shape match. The shape match is based on the idea of iterating through each contour of the thresholded image. The color is identified on the HSV scale, by approximating the desired range of values from the database. HSV data, along with iteration, are used for identifying a quadrilateral, which is our required contour. This algorithm could also be used in a non-deterministic plane, using HSV values exclusively.
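
    A minimal OpenCV sketch of the approach described above: threshold in HSV using a range taken from a database of expected values, then iterate over the contours of the thresholded image and keep the one whose shape best matches a stored reference contour. The HSV range, file names and dissimilarity cutoff below are placeholders, not values from the paper.

    import cv2
    import numpy as np

    # Hypothetical HSV range for the stem region; a real range would come from
    # the database of approximated values mentioned in the abstract.
    HSV_LO, HSV_HI = (10, 60, 40), (35, 255, 255)

    def find_stem(image_bgr, reference_contour, max_dissimilarity=0.2):
        """Return the contour in the HSV-thresholded image whose shape best
        matches the reference (cv2.matchShapes, lower = more similar)."""
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        best, best_score = None, max_dissimilarity
        for c in contours:
            score = cv2.matchShapes(c, reference_contour, cv2.CONTOURS_MATCH_I1, 0.0)
            if score < best_score:
                best, best_score = c, score
        return best, best_score

    # Usage (file names are placeholders):
    # img = cv2.imread("bunch.jpg")
    # ref = np.load("stem_contour.npy")
    # stem, score = find_stem(img, ref)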

  17. Oxidative dearomatisation: the key step of sorbicillinoid biosynthesis

    PubMed Central

    Fahad, Ahmed al; Abood, Amira; Fisch, Katja M.; Osipow, Anna; Davison, Jack; Avramović, Marija; Butts, Craig P.; Piel, Jörn; Simpson, Thomas J.

    2014-01-01

    An FAD-dependent monooxygenase encoding gene (SorbC) was cloned from Penicillium chrysogenum E01-10/3 and expressed as a soluble protein in Escherichia coli. The enzyme efficiently performed the oxidative dearomatisation of sorbicillin and dihydrosorbicillin to give sorbicillinol and dihydrosorbicillinol respectively. Bioinformatic examination of the gene cluster surrounding SorbC indicated the presence of two polyketide synthase (PKS) encoding genes designated sorbA and sorbB. The gene sorbA encodes a highly reducing iterative PKS, while sorbB encodes a non-reducing iterative PKS which features a reductive release domain usually involved in the production of polyketide aldehydes. Using these observations and previously reported results from isotopic feeding experiments, a new and simpler biosynthetic route to the sorbicillin class of secondary metabolites is proposed that is consistent with all reported experimental results. PMID:25580210

  18. Mining of the Pyrrolamide Antibiotics Analogs in Streptomyces netropsis Reveals the Amidohydrolase-Dependent “Iterative Strategy” Underlying the Pyrrole Polymerization

    PubMed Central

    Deng, Zixin; Zhao, Changming; Yu, Yi

    2014-01-01

    In the biosynthesis of natural products, potential intermediates or analogs of a particular compound in the crude extracts are commonly overlooked in routine assays due to their low concentration, limited structural information, or because of their insignificant bio-activities. This may lead to an incomplete or even incorrect biosynthetic pathway for the target molecule. Here we applied multiple compound mining approaches, including genome scanning and precursor ion scan-directed mass spectrometry, to identify potential pyrrolamide compounds in the fermentation culture of Streptomyces netropsis. Several novel congocidine and distamycin analogs were thus detected and characterized. A more reasonable route for the biosynthesis of pyrrolamides was proposed based on the structures of these newly discovered compounds, as well as the functional characterization of several key biosynthetic genes of pyrrolamides. Collectively, our results implied an unusual "iterative strategy" underlying the pyrrole polymerization in the biosynthesis of pyrrolamide antibiotics. PMID:24901640

  19. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  20. The ATLAS Public Web Pages: Online Management of HEP External Communication Content

    NASA Astrophysics Data System (ADS)

    Goldfarb, S.; Marcelloni, C.; Eli Phoboo, A.; Shaw, K.

    2015-12-01

    The ATLAS Education and Outreach Group is in the process of migrating its public online content to a professionally designed set of web pages built on the Drupal [1] content management system. Development of the front-end design passed through several key stages, including audience surveys, stakeholder interviews, usage analytics, and a series of fast design iterations, called sprints. Implementation of the web site involves application of the html design using Drupal templates, refined development iterations, and the overall population of the site with content. We present the design and development processes and share the lessons learned along the way, including the results of the data-driven discovery studies. We also demonstrate the advantages of selecting a back-end supported by content management, with a focus on workflow. Finally, we discuss usage of the new public web pages to implement outreach strategy through implementation of clearly presented themes, consistent audience targeting and messaging, and the enforcement of a well-defined visual identity.

  1. Export Control Requirements for Tritium Processing Design and R&D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollis, William Kirk; Maynard, Sarah-Jane Wadsworth

    2015-10-30

    This document will address requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970’s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document will identify the key reference documents and the specific sections within them related to tritium research. Guidance as to the application of these sections will be discussed with specific attention to publications and work with foreign nationals.

  2. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana Fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  3. Providing Decision-Relevant Information for a State Climate Change Action Plan

    NASA Astrophysics Data System (ADS)

    Wake, C.; Frades, M.; Hurtt, G. C.; Magnusson, M.; Gittell, R.; Skoglund, C.; Morin, J.

    2008-12-01

    Carbon Solutions New England (CSNE), a public-private partnership formed to promote collective action to achieve a low carbon society, has been working with the Governor-appointed New Hampshire Climate Change Policy Task Force (NHCCTF) to support the development of a state Climate Change Action Plan. CSNE's role has been to quantify the potential carbon emissions reduction, implementation costs, and cost savings at three distinct time periods (2012, 2025, 2050) for a range of strategies identified by the Task Force. These strategies were developed for several sectors (transportation and land use, electricity generation and use, building energy use, and agriculture, forestry, and waste). New Hampshire's existing and projected economic and population growth are well above the regional average, creating additional challenges for the state to meet regional emission reduction targets. However, by pursuing an ambitious suite of renewable energy and energy efficiency strategies, New Hampshire may be able to continue growing while reducing emissions at a rate close to 3% per year up to 2025. This suite includes efficiency improvements in new and existing buildings, a renewable portfolio standard for electricity generation, avoiding forested land conversion, fuel economy gains in new vehicles, and a reduction in vehicle miles traveled. Most (over 80%) of these emission reduction strategies are projected to provide net economic savings in 2025. A collaborative and iterative process was developed among the key partners in the project. The foundation for the project's success included: a diverse analysis team with leadership that was committed to the project, an open source analysis approach, weekly meetings and frequent communication among the partners, interim reporting of analysis, and an established and trusting relationship among the partners, in part due to collaboration on previous projects. To develop decision-relevant information for the Task Force, CSNE addressed several challenges, including: allocating the emission reduction and economic impacts of local- to state-scale mitigation strategies that are in reality integrated on regional and/or national scales; incorporating changes to the details of the strategies over time; identifying and quantifying key variables; choosing appropriate levels of detail for over 100 strategies within the limited analysis timeframe; integrating individual strategies into a coherent whole; and structuring data presentation to maximize transparency of analysis without confusing or overwhelming decision makers.

  4. Optimization of the volume reconstruction for classical Tomo-PIV algorithms (MART, BIMART and SMART): synthetic and experimental studies

    NASA Astrophysics Data System (ADS)

    Thomas, L.; Tremblais, B.; David, L.

    2014-03-01

    Optimization of the multiplicative algebraic reconstruction technique (MART), simultaneous MART (SMART) and block iterative MART (BIMART) reconstruction techniques was carried out on synthetic and experimental data. Different criteria were defined to improve the preprocessing of the initial images. Knowing how each reconstruction parameter influences the quality of the particle volume reconstruction and the computing time is key in Tomo-PIV. These criteria were applied to a real case, a jet in cross flow, and were validated.
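
    For reference, the classical MART correction that all three variants build on multiplies each voxel along a ray by the ratio of measured to projected intensity, raised to a relaxation-weighted power; SMART and BIMART differ mainly in how corrections from several rays are grouped before being applied. A toy implementation with a hypothetical weight matrix is sketched below.

    import numpy as np

    def mart(A, y, n_iter=20, relax=0.5):
        """Minimal MART sketch: for each ray i, multiply every voxel j by
        (y_i / (A_i . x)) ** (relax * A_ij)."""
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for i in range(A.shape[0]):
                proj = A[i] @ x
                if y[i] == 0:
                    x[A[i] > 0] = 0.0        # rays recording no light zero their voxels
                elif proj > 0:
                    x *= (y[i] / proj) ** (relax * A[i])
        return x

    # Toy 4-ray, 4-voxel example with hypothetical weights.
    A = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    x_true = np.array([0.0, 2.0, 1.0, 0.0])
    print(np.round(mart(A, A @ x_true), 3))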

  5. Joint Doctrine for Unmanned Aircraft Systems: The Air Force and the Army Hold the Key to Success

    DTIC Science & Technology

    2010-05-03

    concept, coupled with sensor technologies that provide multiple video streams to multiple ground units, delivers increased capability and capacity to...airborne surveillance” allow one UAS to collect up to ten video transmissions, sending them to ten different users on the ground. Future iterations...of this technology, dubbed Gorgon Stare, will increase to as many as 65 video streams per UAS by 2014. 31 Being able to send multiple views of an

  6. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however quantitatively the three dimensional results are much different.

  7. An iterative requirements specification procedure for decision support systems.

    PubMed

    Brookes, C H

    1987-08-01

    Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and integrates MIS, modelling and expert system components. Some examples given are drawn from the health administration field.

  8. How do Supervising Clinicians of a University Hospital and Associated Teaching Hospitals Rate the Relevance of the Key Competencies within the CanMEDS Roles Framework in Respect to Teaching in Clinical Clerkships?

    PubMed

    Jilg, Stefanie; Möltner, Andreas; Berberat, Pascal; Fischer, Martin R; Breckwoldt, Jan

    2015-01-01

    In German-speaking countries, the physicians' roles framework of the "Canadian Medical Education Directives for Specialists" (CanMEDS) is increasingly used to conceptualize postgraduate medical education. It is, however, unclear whether it may also be applied to the final year of undergraduate education within clinical clerkships, called "Practical Year" (PY). Therefore, the aim of this study was to explore how clinically active physicians at a university hospital and at associated teaching hospitals judge the relevance of the seven CanMEDS roles (and their (role-defining) key competencies) in respect to their clinical work and as learning content for PY training. Furthermore, these physicians were asked whether the key competencies were actually taught during PY training. 124 physicians from internal medicine and surgery rated the relevance of the 28 key competencies of the CanMEDS framework using a questionnaire. For each competency, the following three aspects were rated: "relevance for your personal daily work", "importance for teaching during PY", and "implementation into actual PY teaching". In respect to the main study objective, all questionnaires could be included in the analysis. All seven CanMEDS roles were rated as relevant for personal daily work, and also as important for teaching during PY. Furthermore, all roles were stated to be taught during actual PY training. The roles "Communicator", "Medical Expert", and "Collaborator" were rated as significantly more important than the other roles, for all three sub-questions. No differences were found between the two disciplines, internal medicine and surgery, or between the university hospital and the associated teaching hospitals. Participating physicians rated all key competencies of the CanMEDS model to be relevant for their personal daily work, and for teaching during PY. These findings support the suitability of the CanMEDS framework as a conceptual element of PY training.

  9. How do Supervising Clinicians of a University Hospital and Associated Teaching Hospitals Rate the Relevance of the Key Competencies within the CanMEDS Roles Framework in Respect to Teaching in Clinical Clerkships?

    PubMed Central

    Jilg, Stefanie; Möltner, Andreas; Berberat, Pascal; Fischer, Martin R.; Breckwoldt, Jan

    2015-01-01

    Background and aim: In German-speaking countries, the physicians’ roles framework of the “Canadian Medical Education Directives for Specialists” (CanMEDS) is increasingly used to conceptualize postgraduate medical education. It is, however, unclear whether it may also be applied to the final year of undergraduate education within clinical clerkships, called “Practical Year” (PY). Therefore, the aim of this study was to explore how clinically active physicians at a university hospital and at associated teaching hospitals judge the relevance of the seven CanMEDS roles (and their (role-defining) key competencies) in respect to their clinical work and as learning content for PY training. Furthermore, these physicians were asked whether the key competencies were actually taught during PY training. Methods: 124 physicians from internal medicine and surgery rated the relevance of the 28 key competencies of the CanMEDS framework using a questionnaire. For each competency, the following three aspects were rated: “relevance for your personal daily work”, “importance for teaching during PY”, and “implementation into actual PY teaching”. Results: In respect to the main study objective, all questionnaires could be included in the analysis. All seven CanMEDS roles were rated as relevant for personal daily work, and also as important for teaching during PY. Furthermore, all roles were stated to be taught during actual PY training. The roles “Communicator”, “Medical Expert”, and “Collaborator” were rated as significantly more important than the other roles, for all three sub-questions. No differences were found between the two disciplines, internal medicine and surgery, or between the university hospital and the associated teaching hospitals. Conclusion: Participating physicians rated all key competencies of the CanMEDS model to be relevant for their personal daily work, and for teaching during PY. These findings support the suitability of the CanMEDS framework as a conceptual element of PY training. PMID:26413171

  10. Investigation of the boundary layer during the transition from volume to surface dominated H- production at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Schiesko, L.; Fantz, U.

    2016-02-01

    BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with a 1/8 scale H- source for the ITER heating neutral beam injection. Several diagnostics in the boundary layer close to the plasma grid (first grid of the accelerator system) followed the transition from volume to surface dominated H- production starting with a Cs-free, cleaned source and subsequent evaporation of caesium, while the source has been operated at ITER relevant pressure of 0.3 Pa: Langmuir probes are used to determine the plasma potential, optical emission spectroscopy is used to follow the caesiation process, and cavity ring-down spectroscopy allows for the measurement of the H- density. The influence on the plasma during the transition from an electron-ion plasma towards an ion-ion plasma, in which negative hydrogen ions become the dominant negatively charged particle species, is seen in a strong increase of the H- density combined with a reduction of the plasma potential. A clear correlation of the extracted current densities (jH-, je) exists with the Cs emission.

  11. Investigation of the boundary layer during the transition from volume to surface dominated H⁻ production at the BATMAN test facility.

    PubMed

    Wimmer, C; Schiesko, L; Fantz, U

    2016-02-01

    BATMAN (Bavarian Test Machine for Negative ions) is a test facility equipped with a 1/8 scale H(-) source for the ITER heating neutral beam injection. Several diagnostics in the boundary layer close to the plasma grid (first grid of the accelerator system) followed the transition from volume to surface dominated H(-) production starting with a Cs-free, cleaned source and subsequent evaporation of caesium, while the source has been operated at ITER relevant pressure of 0.3 Pa: Langmuir probes are used to determine the plasma potential, optical emission spectroscopy is used to follow the caesiation process, and cavity ring-down spectroscopy allows for the measurement of the H(-) density. The influence on the plasma during the transition from an electron-ion plasma towards an ion-ion plasma, in which negative hydrogen ions become the dominant negatively charged particle species, is seen in a strong increase of the H(-) density combined with a reduction of the plasma potential. A clear correlation of the extracted current densities (j(H(-)), j(e)) exists with the Cs emission.

  12. Learning Biological Networks via Bootstrapping with Optimized GO-based Gene Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.

    2010-08-02

    Microarray gene expression data provide a unique information resource for learning biological networks using "reverse engineering" methods. However, there are a variety of cases in which we know which genes are involved in a given pathology of interest, but we do not have enough experimental evidence to support the use of fully-supervised/reverse-engineering learning methods. In this paper, we explore a novel semi-supervised approach in which biological networks are learned from a reference list of genes and a partial set of links for these genes extracted automatically from PubMed abstracts, using a knowledge-driven bootstrapping algorithm. We show how new relevant links across genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. We describe an application of this approach to the TGFB pathway as a case study and show how the ensuing results prove the feasibility of the approach as an alternate or complementary technique to fully supervised methods.
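
    A schematic sketch of the bootstrapping loop described above: starting from the reference gene list and the partial link set, links are added between gene pairs whose similarity exceeds a threshold, and the similarity measure (a placeholder here for the optimized GO-based measure) is re-evaluated against the growing network at each iteration. Gene names and the toy similarity function are purely illustrative.

    def bootstrap_network(genes, seed_links, similarity, threshold=0.8, max_iter=5):
        """Iteratively add links between gene pairs whose similarity, computed
        against the current network, exceeds the threshold; stop when an
        iteration adds nothing new."""
        network = set(seed_links)
        for _ in range(max_iter):
            added = set()
            for a in genes:
                for b in genes:
                    if a < b and (a, b) not in network:
                        if similarity(a, b, network) >= threshold:
                            added.add((a, b))
            if not added:
                break
            network |= added
        return network

    # Toy usage with a dummy similarity: genes sharing a neighbour get linked.
    def toy_sim(a, b, net):
        neigh = lambda g: {x for e in net for x in e if g in e} - {g}
        return 1.0 if neigh(a) & neigh(b) else 0.0

    print(bootstrap_network(["smad2", "smad3", "smad4", "tgfb1"],
                            [("smad2", "tgfb1"), ("smad3", "tgfb1")], toy_sim))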

  13. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    PubMed

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
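
    A schematic sketch of the fix-and-refit loop behind the procedure: calibrate the free parameters, rank them with a crude Jacobian-based sensitivity measure, freeze the least influential ones at their fitted values and re-calibrate the rest. The toy residual model, the sensitivity measure and the cutoff stand in for the dFBA framework, the metaheuristic optimization and the pre/post-regression diagnostics of the paper.

    import numpy as np
    from scipy.optimize import least_squares

    def iterative_reparameterisation(residuals, p0, n_rounds=5, cutoff=0.05):
        """Repeatedly fit the free parameters of residuals(p), then freeze those
        whose (normalised) Jacobian column norm falls below the cutoff."""
        p, free = np.asarray(p0, float), np.ones(len(p0), bool)
        for _ in range(n_rounds):
            def masked(q, base=p.copy(), idx=free.copy()):
                full = base.copy()
                full[idx] = q
                return residuals(full)
            fit = least_squares(masked, p[free])
            p[free] = fit.x
            sens = np.linalg.norm(fit.jac, axis=0)          # crude sensitivity proxy
            sens = sens / (sens.max() + 1e-12)
            drop = np.where(free)[0][sens < cutoff]
            if drop.size == 0:
                break
            free[drop] = False                              # freeze insignificant parameters
        return p, free

    # Toy model: only the first two of four parameters actually matter.
    t = np.linspace(0, 1, 20)
    obs = 2.0 * np.exp(-1.5 * t)
    res = lambda p: p[0] * np.exp(-p[1] * t) + 1e-6 * (p[2] + p[3]) - obs
    print(iterative_reparameterisation(res, [1.0, 1.0, 1.0, 1.0]))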

  14. Prospects for measuring the fuel ion ratio in burning ITER plasmas using a DT neutron emission spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellesen, C.; Skiba, M., E-mail: mateusz.skiba@physics.uu.se; Dzysiuk, N.

    2014-11-15

    The fuel ion ratio n_t/n_d is an essential parameter for plasma control in fusion reactor relevant applications, since maximum fusion power is attained when equal amounts of tritium (T) and deuterium (D) are present in the plasma, i.e., n_t/n_d = 1.0. For neutral beam heated plasmas, this parameter can be measured using a single neutron spectrometer, as has been shown for tritium concentrations up to 90%, using data obtained with the MPR (Magnetic Proton Recoil) spectrometer during a DT experimental campaign at the Joint European Torus in 1997. In this paper, we evaluate the demands that a DT spectrometer has to fulfill to be able to determine n_t/n_d with a relative error below 20%, as is required for such measurements at ITER. The assessment shows that a back-scattering time-of-flight design is a promising concept for spectroscopy of 14 MeV DT emission neutrons.

  15. Preliminary safety analysis of the Baita Bihor radioactive waste repository, Romania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, Richard; Bond, Alex; Watson, Sarah

    2007-07-01

    A project funded under the European Commission's Phare Programme 2002 has undertaken an in-depth analysis of the operational and post-closure safety of the Baita Bihor repository. The repository has accepted low- and some intermediate-level radioactive waste from industry, medical establishments and research activities since 1985 and the current estimate is that disposals might continue for around another 20 to 35 years. The analysis of the operational and post-closure safety of the Baita Bihor repository was carried out in two iterations, with the second iteration resulting in reduced uncertainties, largely as a result of taking into account new information on the hydrology and hydrogeology of the area, collected as part of the project. Impacts were evaluated for the maximum potential inventory that might be available for disposal to Baita Bihor for a number of operational and post-closure scenarios and associated conceptual models. The results showed that calculated impacts were below the relevant regulatory criteria. In light of the assessment, a number of recommendations relating to repository operation, optimisation of repository engineering and waste disposals, and environmental monitoring were made. (authors)

  16. Reducing sick leave of Dutch vocational school students: adaptation of a sick leave protocol using the intervention mapping process.

    PubMed

    de Kroon, Marlou L A; Bulthuis, Jozien; Mulder, Wico; Schaafsma, Frederieke G; Anema, Johannes R

    2016-12-01

    Since the extent of sick leave and the problems of vocational school students are relatively large, we aimed to tailor a sick leave protocol at Dutch lower secondary education schools to the particular context of vocational schools. Four steps of the iterative process of Intervention Mapping (IM) to adapt this protocol were carried out: (1) performing a needs assessment and defining a program objective, (2) determining the performance and change objectives, (3) identifying theory-based methods and practical strategies and (4) developing a program plan. Interviews with students using structured questionnaires, in-depth interviews with relevant stakeholders, a literature research and, finally, a pilot implementation were carried out. A sick leave protocol was developed that was feasible and acceptable for all stakeholders. The main barriers for widespread implementation are time constraints in both monitoring and acting upon sick leave by school and youth health care. The iterative process of IM has shown its merits in the adaptation of the manual 'A quick return to school is much better' to a sick leave protocol for vocational school students.

  17. Progress in Arc Safety System Based on Harmonics Detection for ICRH Antennae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger-By, G.; Beaumont, B.; Lombard, G.

    2007-09-28

    The arc detection systems based on harmonics detection have been tested in the USA (TFTR, DIII, Alcator C-Mod) and Germany (ASDEX). These systems have some advantages in comparison with traditional protection systems, which use a threshold on the Vr/Vf (reflected to forward voltage ratio) calculation, and are ITER relevant. On Tore Supra (TS), 3 systems have been built using this principle with some improvements and new features to increase the protection of the 3 ICRH generators and antennae. On JET, 2 arc safety systems based on the TS principle will also be used to improve the JET ITER-like antenna safety. In order to have the maximum security level on the TS ICRH system, the 3 antennae are used with these systems during all plasma shots in redundancy with the other systems. This TS RF principle and its electronic interactions with the VME control of the generator are described. The results on the TS ICRH transmitter feeding the 3 antennae are summarized and some typical signals are given.
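
    The contrast between the two protection principles can be sketched numerically: the classical criterion thresholds the reflected-to-forward voltage ratio, while the harmonic detector looks for power at multiples of the RF frequency in the reflected signal, which an arc generates through its nonlinear characteristic. The frequencies, thresholds and toy signals below are illustrative only; in this example the harmonic flag trips while the Vr/Vf criterion does not.

    import numpy as np

    def arc_flags(v_forward, v_reflected, fs, f0, harm=2,
                  vr_vf_limit=0.3, harm_limit=1e-3):
        """Return (Vr/Vf flag, harmonic flag) for one acquisition window."""
        ratio = np.abs(v_reflected).max() / (np.abs(v_forward).max() + 1e-12)
        spec = np.abs(np.fft.rfft(v_reflected)) / len(v_reflected)
        freqs = np.fft.rfftfreq(len(v_reflected), d=1.0 / fs)
        h_level = spec[np.argmin(np.abs(freqs - harm * f0))]
        return ratio > vr_vf_limit, h_level > harm_limit

    # Toy signals: clean 1 MHz forward wave; reflected wave with a small
    # 2 MHz component mimicking the distortion introduced by an arc.
    fs, f0 = 50e6, 1e6
    t = np.arange(0, 2e-4, 1 / fs)
    vf = np.sin(2 * np.pi * f0 * t)
    vr = 0.1 * np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
    print(arc_flags(vf, vr, fs, f0))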

  18. Overview of LH experiments in JET with an ITER-like wall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirov, K. K.; Baranov, Yu.; Brix, M.

    2014-02-12

    An overview of the recent results of Lower Hybrid (LH) experiments at JET with the ITER-like wall (ILW) is presented. Topics relevant to LH wave coupling are addressed, as well as issues related to ILW and LH system protections. LH wave coupling was studied in conditions determined by ILW recycling and operational constraints. It was concluded that LH wave coupling was not significantly affected and that the pre-ILW performance could be recovered after optimising the launcher position and local gas puffing. SOL density measurements were performed using a Li-beam diagnostic. Dependencies on the D2 injection rate from the dedicated gas valve, the LH power and the LH launcher position were analysed. SOL density modifications due to LH were modelled by the EDGE2D code assuming SOL heating by collisional dissipation of the LH wave and/or possible ExB drifts in the SOL. The simulations matched the measured SOL profiles reasonably well. Observations of arcs and hotspots with visible and IR cameras viewing the LH launcher are presented.

  19. Multi-megawatt, gigajoule plasma operation in Tore Supra

    NASA Astrophysics Data System (ADS)

    Dumont, R. J.; Goniche, M.; Ekedahl, A.; Saoutic, B.; Artaud, J.-F.; Basiuk, V.; Bourdelle, C.; Corre, Y.; Decker, J.; Elbèze, D.; Giruzzi, G.; Hoang, G.-T.; Imbeaux, F.; Joffrin, E.; Litaudon, X.; Lotte, Ph; Maget, P.; Mazon, D.; Nilsson, E.; The Tore Supra Team

    2014-07-01

    Integrating several important technological elements required for long pulse operation in magnetic fusion devices, the Tore Supra tokamak routinely addresses the physics and technology issues related to this endeavor and, as a result, contributes essential information on critical issues for ITER. During the last experimental campaign, components of the radiofrequency system including an ITER relevant launcher (passive active multijunction (PAM)) and continuous wave/3.7 GHz klystrons, have been extensively qualified, and then used to develop steady state scenarios in which the lower hybrid (LH), ion cyclotron (IC) and electron cyclotron (EC) systems have been combined in fully stationary shots (duration ˜150 s, injected power up to ˜8 MW, injected/extracted energy up to ˜1 GJ). Injection of LH power in the 5.0-6.0 MW range has extended the domain of accessible plasma parameters to higher densities and non-inductive currents. These discharges exhibit steady electron internal transport barriers (ITBs). We report here on various issues relevant to the steady state operation of future devices, ranging from operational aspects and limitations related to the achievement of long pulses in a fully actively cooled fusion device (e.g. overheating due to fast particle losses), to more fundamental plasma physics topics. The latter include a beneficial influence of IC resonance heating on the magnetohydrodynamic (MHD) stability in these discharges, which has been studied in detail. Another interesting observation is the appearance of oscillations of the central temperature with typical periods of the order of one to several seconds, caused by a nonlinear interplay between LH deposition, MHD activity and bootstrap current in the presence of an ITB.

  20. The design of an ECRH system for JET-EP

    NASA Astrophysics Data System (ADS)

    Verhoeven, A. G. A.; Bongers, W. A.; Elzendoorn, B. S. Q.; Graswinckel, M.; Hellingman, P.; Kooijman, W.; Kruijt, O. G.; Maagdenberg, J.; Ronden, D.; Stakenborg, J.; Sterk, A. B.; Tichler, J.; Alberti, S.; Goodman, T.; Henderson, M.; Hoekzema, J. A.; Oosterbeek, J. W.; Fernandez, A.; Likin, K.; Bruschi, A.; Cirant, S.; Novak, S.; Piosczyk, B.; Thumm, M.; Bindslev, H.; Kaye, A.; Fleming, C.; Zohm, H.

    2003-11-01

    An electron cyclotron resonance heating (ECRH) system has been designed for JET in the framework of the JET enhanced performance project (JET-EP) under the European fusion development agreement. Due to financial constraints it has been decided not to implement this project. Nevertheless, the design work conducted from April 2000 to January 2002 shows a number of features that can be relevant in the preparation of future ECRH systems, e.g. for ITER. The ECRH system was foreseen to comprise six gyrotrons, 1 MW each, in order to deliver 5 MW into the plasma (Verhoeven A.G.A. et al 2001 The ECRH system for JET 26th Int. Conf. on Infrared and Millimeter Waves (Toulouse, 10-14 September 2001) p 83; Verhoeven A.G.A. et al 2003 The 113 GHz ECRH system for JET Proc. 12th Joint Workshop on ECE and ECRH (13-16 May 2002) ed G. Giruzzi (Aix-en-Provence: World Scientific) pp 511-16). The main aim was to enable the control of neo-classical tearing modes. The paper concentrates on: the power-supply and modulation system, including series IGBT switches, to enable independent control of each gyrotron, and an all-solid-state body power supply to stabilize the gyrotron output power and to enable fast modulations up to 10 kHz; and a plug-in launcher that is steerable in both toroidal and poloidal angles and able to handle eight separate mm-wave beams. Four steerable launching mirrors were foreseen to handle two mm-wave beams each. Water cooling of all the mirrors was a particularly ITER-relevant feature.

  1. Prediction, experimental results and analysis of the ITER TF insert coil quench propagation tests, using the 4C code

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.

    2018-07-01

    The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.

  2. An iterative consensus-building approach to revising a genetics/genomics competency framework for nurse education in the UK

    PubMed Central

    Kirk, Maggie; Tonkin, Emma; Skirton, Heather

    2014-01-01

    KIRK M., TONKIN E. & SKIRTON H. (2014) An iterative consensus-building approach to revising a genetics/genomics competency framework for nurse education in the UK. Journal of Advanced Nursing 70(2), 405–420. doi: 10.1111/jan.12207 Aim: To report a review of a genetics education framework using a consensus approach to agree on a contemporary and comprehensive revised framework. Background: Advances in genomic health care have been significant since the first genetics education framework for nurses was developed in 2003. These, coupled with developments in policy and international efforts to promote nursing competence in genetics, indicated that review was timely. Design: A structured, iterative, primarily qualitative approach, based on a nominal group technique. Method: A meeting convened in 2010 involved stakeholders in UK nursing education, practice and management, including patient representatives (n = 30). A consensus approach was used to solicit participants' views on the individual/family needs identified from real-life stories of people affected by genetic conditions and the nurses' knowledge, skills and attitudes needed to meet those needs. Five groups considered the stories in iterative rounds, reviewing comments from previous groups. Omissions and deficiencies were identified by mapping resulting themes to the original framework. Anonymous voting captured views. Educators at a second meeting developed learning outcomes for the final framework. Findings: Deficiencies in relation to Advocacy, Information management and Ongoing care were identified. All competencies of the original framework were revised, adding an eighth competency to make explicit the need for ongoing care of the individual/family. Conclusion: Modifications to the framework reflect individual/family needs and are relevant to the nursing role. The approach promoted engagement in a complex issue and provides a framework to guide nurse education in genetics/genomics; however, nursing leadership is crucial to successful implementation. PMID:23879662

  3. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
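    For orientation, the classical Mayfield estimator that this entry builds on reduces to a simple exposure-day calculation: the daily survival rate (DSR) is one minus the number of failures divided by the total exposure days, with failures conventionally assumed to occur at the midpoint of the interval in which they were detected. A minimal sketch is shown below, with invented observation data and without the Iterative Mayfield or maximum likelihood extensions described above.

```python
def mayfield_dsr(intervals):
    """Classical Mayfield daily survival rate (DSR).

    `intervals` is a list of (exposure_days, survived) tuples, one per
    observation interval for a nest or brood; failures are assumed to occur
    at the interval midpoint, the usual Mayfield convention."""
    exposure = 0.0
    deaths = 0
    for days, survived in intervals:
        if survived:
            exposure += days
        else:
            exposure += days / 2.0  # midpoint assumption for failed intervals
            deaths += 1
    return 1.0 - deaths / exposure

# Invented example: four observation intervals, one failure
obs = [(10, True), (10, True), (6, False), (10, True)]
dsr = mayfield_dsr(obs)
print(f"DSR = {dsr:.4f}, 30-day survival = {dsr ** 30:.3f}")
```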

  4. Validation of the model for ELM suppression with 3D magnetic fields using low torque ITER baseline scenario discharges in DIII-D

    NASA Astrophysics Data System (ADS)

    Moyer, R. A.; Paz-Soldan, C.; Nazikian, R.; Orlov, D. M.; Ferraro, N. M.; Grierson, B. A.; Knölker, M.; Lyons, B. C.; McKee, G. R.; Osborne, T. H.; Rhodes, T. L.; Meneghini, O.; Smith, S.; Evans, T. E.; Fenstermacher, M. E.; Groebner, R. J.; Hanson, J. M.; La Haye, R. J.; Luce, T. C.; Mordijck, S.; Solomon, W. M.; Turco, F.; Yan, Z.; Zeng, L.; DIII-D Team

    2017-10-01

    Experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on a transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width [Snyder et al., Nucl. Fusion 51, 103016 (2011) and Wade et al., Nucl. Fusion 55, 023002 (2015)]. In ITER baseline plasmas with I/aB = 1.4 and pedestal ν* ~ 0.15, ELMs are readily suppressed with co-Ip neutral beam injection. However, reducing the beam torque from 5 Nm to ≤ 3.5 Nm results in loss of ELM suppression and a shift of the zero-crossing of the electron perpendicular rotation ω⊥e ~ 0 deeper into the plasma. The change in radius of ω⊥e ~ 0 is due primarily to changes in the electron diamagnetic rotation frequency ωe*. Linear plasma response modeling with the resistive MHD code M3D-C1 indicates that the tearing response location tracks the inward shift in ω⊥e ~ 0. At pedestal ν* ~ 1, ELM suppression is also lost when the beam torque is reduced, but the ω⊥e change is dominated by the collapse of the toroidal rotation vT. The hypothesis predicts that it should be possible to obtain ELM suppression at reduced beam torque by also reducing the height and width of the ωe* profile. This prediction has been confirmed experimentally with RMP ELM suppression at 0 Nm of beam torque and plasma normalized pressure βN ~ 0.7. This opens the possibility of accessing ELM suppression in low-torque ITER baseline plasmas by establishing suppression at low beta and then increasing beta while relying on the strong RMP-island coupling to maintain suppression.

  5. Comparative studies for two different orientations of pebble bed in an HCCB blanket

    NASA Astrophysics Data System (ADS)

    Paritosh, CHAUDHURI; Chandan, DANANI; E, RAJENDRAKUMAR

    2017-12-01

    The Indian Test Blanket Module (TBM) program in ITER is one of the major steps in its fusion reactor program towards DEMO and the future fusion power reactor vision. Research and development (R&D) is focused on two types of breeding blanket concepts: lead-lithium ceramic breeder (LLCB) and helium-cooled ceramic breeder (HCCB) blanket systems for the DEMO reactor. As part of the ITER-TBM program, the LLCB concept will be tested in one-half of ITER port no. 2, whose materials and technologies will be tested during ITER operation. The HCCB concept is a variant of the solid breeder blanket, which is presently part of our domestic R&D program for DEMO relevant technology development. In the HCCB concept Li2TiO3 and beryllium are used as the tritium breeder and neutron multiplier, respectively, in the form of a packed bed having edge-on configuration with reduced activation ferritic martensitic steel as the structural material. In this paper two design schemes, mainly two different orientations of pebble beds, are discussed. In the current concept (case-1), the ceramic breeder beds are kept horizontal in the toroidal-radial direction. Due to gravity, the pebbles may settle down at the bottom and create a finite gap between the pebbles and the top cooling plate, which will affect the heat transfer between them. In the alternate design concept (case-2), the pebble bed is vertically (poloidal-radial) orientated where the side plates act as cooling plates instead of top and bottom plates. These two design variants are analyzed analytically and 2D thermal-hydraulic simulation studies are carried out with ANSYS, using the heat loads obtained from neutronic calculations. Based on the analysis the performance is compared and details of the thermal and radiative heat transfer studies are also discussed in this paper.

  6. System Engineering Analysis For Improved Scout Business Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Slyke, D. A.

    The project uses system engineering principles to address the need of Boy Scout leaders for an integrated system to facilitate advancement and awards records, leader training, and planning for meetings and activities. Existing products for Scout leaders and relevant stakeholders support record keeping and some communication functions, but an opportunity exists for a better system that fully integrates these functions with training delivery and recording, activity planning, and feedback and information gathering from stakeholders. Key stakeholders for the system include Scouts and their families, leaders, training providers, sellers of supplies and awards, content generators, and facilities that serve Scout activities. Key performance parameters for the system are protection of personal information, availability of current information, information accuracy, and information content that has depth. Implementation concepts considered for the system include (1) owned and operated by the Boy Scouts of America, (2) contracted out to a vendor, and (3) a distributed system that functions with BSA-managed interfaces. The selected concept is to contract out to a vendor to maximize the likelihood of successful integration and take advantage of the best technology. Development of requirements considers three key use cases: (1) the system facilitates planning a hike, with needed training satisfied in advance and advancement recorded in real time, (2) scheduling and documenting in-person training, and (3) a family interested in Scouting receives information and can request follow-up. Non-functional requirements are analyzed with the Quality Function Deployment tool. Requirements addressing frequency of backup, compatibility with legacy and new technology, language support, and software updates are developed to address system reliability and an intuitive interface. System functions analyzed include update of the activity database, maintenance of advancement status, archiving of documents, and monitoring of accessible content. The study examines risks associated with information security, technological change, and the continued popularity of Scouting; mitigation is based on the system functions that are defined. The approach to developing an improved system for facilitating Boy Scout leader functions was iterative, with insights into capabilities coming in the course of working through the use cases and sequence diagrams.

  7. Digital storytelling as a method in health research: a systematic review protocol.

    PubMed

    Rieger, Kendra L; West, Christina H; Kenny, Amanda; Chooniedass, Rishma; Demczuk, Lisa; Mitchell, Kim M; Chateau, Joanne; Scott, Shannon D

    2018-03-05

    Digital storytelling is an arts-based research method with potential to elucidate complex narratives in a compelling manner, increase participant engagement, and enhance the meaning of research findings. This method involves the creation of a 3- to 5-min video that integrates multimedia materials including photos, participant voices, drawings, and music. Given the significant potential of digital storytelling to meaningfully capture and share participants' lived experiences, a systematic review of its use in healthcare research is crucial to develop an in-depth understanding of how researchers have used this method, with an aim to refine and further inform future iterations of its use. We aim to identify and synthesize evidence on the use, impact, and ethical considerations of using digital storytelling in health research. The review questions are as follows: (1) What is known about the purpose, definition, use (processes), and contexts of digital storytelling as part of the research process in health research? (2) What impact does digital storytelling have upon the research process, knowledge development, and healthcare practice? (3) What are the key ethical considerations when using digital storytelling within qualitative, quantitative, and mixed method research studies? Key databases and the grey literature will be searched from 1990 to the present for qualitative, quantitative, and mixed methods studies that utilized digital storytelling as part of the research process. Two independent reviewers will screen and critically appraise relevant articles with established quality appraisal tools. We will extract narrative data from all studies with a standardized data extraction form and conduct a thematic analysis of the data. To facilitate innovative dissemination through social media, we will develop a visual infographic and three digital stories to illustrate the review findings, as well as methodological and ethical implications. In collaboration with national and international experts in digital storytelling, we will synthesize key evidence about digital storytelling that is critical to the development of methodological and ethical expertise about arts-based research methods. We will also develop recommendations for incorporating digital storytelling in a meaningful and ethical manner into the research process. PROSPERO registry number CRD42017068002 .

  8. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
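    For readers less familiar with value iteration, the sketch below shows classical value iteration on a small finite Markov decision process; it illustrates the Bellman backup and a convergence-based termination check that the local ADP algorithm above generalises to nonlinear systems and to subsets of the state space, but it is not the authors' algorithm, and the toy transition and reward data are invented.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Classical value iteration for a finite MDP.

    P: transition tensor of shape (n_actions, n_states, n_states)
    R: reward matrix of shape (n_actions, n_states)
    Returns the converged value function and a greedy policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:  # simple termination criterion
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Tiny two-state, two-action example with made-up dynamics and rewards
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.4, 0.6]]])
R = np.array([[1.0, 0.0],
              [0.5, 0.2]])
V_opt, policy = value_iteration(P, R)
print(V_opt, policy)
```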

  9. A qualitative study of the perspectives of key stakeholders on the delivery of clinical academic training in the East Midlands

    PubMed Central

    Evans, Val; MacLeod, Sheona

    2018-01-01

    Objective Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. Design A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. Setting The East Midlands School of Clinical Academic Training. Participants Integrated clinical academic trainees and clinical and academic educators involved in clinical academic training. Main outcome measures The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. Results We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and processes of the integrated academic pathways, career pathways, supervision and support, the assessment process and the balance between clinical and academic training. Conclusions Our findings help inform the future development of integrated academic training programmes. PMID:29487745

  10. Beef quality attributes: A systematic review of consumer perspectives.

    PubMed

    Henchion, Maeve M; McCarthy, Mary; Resconi, Virginia C

    2017-06-01

    Informed by quality theory, this systematic literature review seeks to determine the relative importance of beef quality attributes from a consumer perspective, considering search, experience and credence quality attributes. While little change is anticipated in consumer ranking of search and experience attributes in the future, movement is expected in terms of ranking within the credence category and also in terms of the ranking of credence attributes overall. This highlights an opportunity for quality assurance schemes (QAS) to become more consumer focused through including a wider range of credence attributes. To capitalise on this opportunity, the meat industry should actively anticipate new relevant credence attributes and researchers need to develop new or better methods to measure them. This review attempts to identify the most relevant quality attributes in beef that may be considered in future iterations of QAS, to increase consumer satisfaction and, potentially, to increase returns to industry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Dynamic optical imaging of vascular and metabolic reactivity in rheumatoid joints.

    PubMed

    Lasker, Joseph M; Fong, Christopher J; Ginat, Daniel T; Dwyer, Edward; Hielscher, Andreas H

    2007-01-01

    Dynamic optical imaging is increasingly applied to clinically relevant areas such as brain and cancer imaging. In this approach, some external stimulus is applied and changes in relevant physiological parameters (e.g., oxy- or deoxyhemoglobin concentrations) are determined. The advantage of this approach is that the prestimulus state can be used as a reference or baseline against which the changes can be calibrated. Here we present the first application of this method to the problem of characterizing joint diseases, especially effects of rheumatoid arthritis (RA) in the proximal interphalangeal finger joints. Using a dual-wavelength tomographic imaging system together with previously implemented model-based iterative image reconstruction schemes, we have performed initial dynamic imaging case studies on a limited number of healthy volunteers and patients diagnosed with RA. Focusing on three case studies, we illustrate our major findings. These studies support our hypothesis that differences in vascular reactivity exist between affected and unaffected joints.

  12. Measurement of a model of implementation for health care: toward a testable theory

    PubMed Central

    2012-01-01

    Background Greenhalgh et al. used a considerable evidence-base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.'s model by providing background, rationale, working definitions, and measurement of key constructs. Methods A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model's broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each of these are reported. Conclusion Despite limitations, the mixed-methods approach to measurement using the survey, interview measure, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451

  13. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
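    To illustrate how a chaotic map can supply the random phase masks referred to above, the sketch below iterates the logistic map and maps the resulting sequence to unit-modulus phases; the choice of map, seed values, parameters and mask size is arbitrary, and the joint transform correlator encryption itself and the RSA encoding of the key parameters are not reproduced.

```python
import numpy as np

def logistic_sequence(seed, r=3.99, n=1024, burn_in=100):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k), discarding a
    burn-in transient, and return n chaotic values in (0, 1)."""
    x = seed
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = x
    return out

def chaotic_phase_mask(seed, shape=(32, 32)):
    """Build a random phase mask exp(2j*pi*x) from the chaotic sequence;
    different seeds (or iteration counts) yield different masks."""
    vals = logistic_sequence(seed, n=shape[0] * shape[1]).reshape(shape)
    return np.exp(2j * np.pi * vals)

mask1 = chaotic_phase_mask(seed=0.3141)  # first mask
mask2 = chaotic_phase_mask(seed=0.2718)  # a second, independent mask
```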

  14. A checklist and adult key to the Chinese stonefly (Plecoptera) genera.

    PubMed

    Chen, Zhi-Teng; Du, Yu-Zhou

    2018-01-23

    The first checklist of the known stonefly genera of China is presented. Using relevant literature and available specimens, a diagnostic key to the ten families representing 65 genera is provided. In addition, illustrations for the key characters are provided.

  15. Beyond Getting in and Fitting in: An Examination of Social Networks and Professionally Relevant Social Capital among Latina/o University Students

    ERIC Educational Resources Information Center

    Rios-Aguilar, Cecilia; Deil-Amen, Regina

    2012-01-01

    Social network analyses, combined with qualitative analyses, are examined to understand key components of the college trajectories of 261 Latina/o students. Their social network ties reveal variation in extensity and relevance. Most ties facilitate social capital relevant to getting into college, fewer engage social capital relevant to…

  16. 75 FR 20843 - Notice of Workshop To Discuss Policy-Relevant Science to Inform EPA's Integrated Plan for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-21

    ... Policy-Relevant Science to Inform EPA's Integrated Plan for the Review of the Lead National Ambient Air...-Relevant Science to Inform EPA's Integrated Plan for Review of the National Ambient Air Quality Standards... review focuses on the key policy-relevant issues and considers the most meaningful new science to inform...

  17. Contribution of Tore Supra in preparation of ITER

    NASA Astrophysics Data System (ADS)

    Saoutic, B.; Abiteboul, J.; Allegretti, L.; Allfrey, S.; Ané, J. M.; Aniel, T.; Argouarch, A.; Artaud, J. F.; Aumenier, M. H.; Balme, S.; Basiuk, V.; Baulaigue, O.; Bayetti, P.; Bécoulet, A.; Bécoulet, M.; Benkadda, M. S.; Benoit, F.; Berger-by, G.; Bernard, J. M.; Bertrand, B.; Beyer, P.; Bigand, A.; Blum, J.; Boilson, D.; Bonhomme, G.; Bottollier-Curtet, H.; Bouchand, C.; Bouquey, F.; Bourdelle, C.; Bourmaud, S.; Brault, C.; Brémond, S.; Brosset, C.; Bucalossi, J.; Buravand, Y.; Cara, P.; Catherine-Dumont, V.; Casati, A.; Chantant, M.; Chatelier, M.; Chevet, G.; Ciazynski, D.; Ciraolo, G.; Clairet, F.; Coatanea-Gouachet, M.; Colas, L.; Commin, L.; Corbel, E.; Corre, Y.; Courtois, X.; Dachicourt, R.; Dapena Febrer, M.; Davi Joanny, M.; Daviot, R.; De Esch, H.; Decker, J.; Decool, P.; Delaporte, P.; Delchambre, E.; Delmas, E.; Delpech, L.; Desgranges, C.; Devynck, P.; Dittmar, T.; Doceul, L.; Douai, D.; Dougnac, H.; Duchateau, J. L.; Dugué, B.; Dumas, N.; Dumont, R.; Durocher, A.; Duthoit, F. X.; Ekedahl, A.; Elbeze, D.; El Khaldi, M.; Escourbiac, F.; Faisse, F.; Falchetto, G.; Farge, M.; Farjon, J. L.; Faury, M.; Fedorczak, N.; Fenzi-Bonizec, C.; Firdaouss, M.; Frauel, Y.; Garbet, X.; Garcia, J.; Gardarein, J. L.; Gargiulo, L.; Garibaldi, P.; Gauthier, E.; Gaye, O.; Géraud, A.; Geynet, M.; Ghendrih, P.; Giacalone, I.; Gibert, S.; Gil, C.; Giruzzi, G.; Goniche, M.; Grandgirard, V.; Grisolia, C.; Gros, G.; Grosman, A.; Guigon, R.; Guilhem, D.; Guillerminet, B.; Guirlet, R.; Gunn, J.; Gurcan, O.; Hacquin, S.; Hatchressian, J. C.; Hennequin, P.; Hernandez, C.; Hertout, P.; Heuraux, S.; Hillairet, J.; Hoang, G. T.; Honore, C.; Houry, M.; Hutter, T.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Joffrin, E.; Johner, J.; Jourd'Heuil, L.; Katharria, Y. S.; Keller, D.; Kim, S. H.; Kocan, M.; Kubic, M.; Lacroix, B.; Lamaison, V.; Latu, G.; Lausenaz, Y.; Laviron, C.; Leroux, F.; Letellier, L.; Lipa, M.; Litaudon, X.; Loarer, T.; Lotte, P.; Madeleine, S.; Magaud, P.; Maget, P.; Magne, R.; Manenc, L.; Marandet, Y.; Marbach, G.; Maréchal, J. L.; Marfisi, L.; Martin, C.; Martin, G.; Martin, V.; Martinez, A.; Martins, J. P.; Masset, R.; Mazon, D.; Mellet, N.; Mercadier, L.; Merle, A.; Meshcheriakov, D.; Meyer, O.; Million, L.; Missirlian, M.; Mollard, P.; Moncada, V.; Monier-Garbet, P.; Moreau, D.; Moreau, P.; Morini, L.; Nannini, M.; Naiim Habib, M.; Nardon, E.; Nehme, H.; Nguyen, C.; Nicollet, S.; Nouilletas, R.; Ohsako, T.; Ottaviani, M.; Pamela, S.; Parrat, H.; Pastor, P.; Pecquet, A. L.; Pégourié, B.; Peysson, Y.; Porchy, I.; Portafaix, C.; Preynas, M.; Prou, M.; Raharijaona, J. M.; Ravenel, N.; Reux, C.; Reynaud, P.; Richou, M.; Roche, H.; Roubin, P.; Sabot, R.; Saint-Laurent, F.; Salasca, S.; Samaille, F.; Santagiustina, A.; Sarazin, Y.; Semerok, A.; Schlosser, J.; Schneider, M.; Schubert, M.; Schwander, F.; Ségui, J. L.; Selig, G.; Sharma, P.; Signoret, J.; Simonin, A.; Song, S.; Sonnendruker, E.; Sourbier, F.; Spuig, P.; Tamain, P.; Tena, M.; Theis, J. M.; Thouvenin, D.; Torre, A.; Travère, J. M.; Tsitrone, E.; Vallet, J. C.; Van Der Plas, E.; Vatry, A.; Verger, J. M.; Vermare, L.; Villecroze, F.; Villegas, D.; Volpe, R.; Vulliez, K.; Wagrez, J.; Wauters, T.; Zani, L.; Zarzoso, D.; Zou, X. L.

    2011-09-01

    Tore Supra routinely addresses the physics and technology of very long-duration plasma discharges, thus bringing precious information on critical issues of long pulse operation of ITER. A new ITER relevant lower hybrid current drive (LHCD) launcher has allowed coupling to the plasma a power level of 2.7 MW for 78 s, corresponding to a power density close to the design value foreseen for an ITER LHCD system. In accordance with the expectations, long distance (10 cm) power coupling has been obtained. Successive stationary states of the plasma current profile have been controlled in real-time featuring (i) control of sawteeth with varying plasma parameters, (ii) obtaining and sustaining a 'hot core' plasma regime, (iii) recovery from a voluntarily triggered deleterious magnetohydrodynamic regime. The scrape-off layer (SOL) parameters and power deposition have been documented during L-mode ramp-up phase, a crucial point for ITER before the X-point formation. Disruption mitigation studies have been conducted with massive gas injection, evidencing the difference between He and Ar and the possible role of the q = 2 surface in limiting the gas penetration. ICRF assisted wall conditioning in the presence of magnetic field has been investigated, culminating in the demonstration that this conditioning scheme allows one to recover normal operation after disruptions. The effect of the magnetic field ripple on the intrinsic plasma rotation has been studied, showing the competition between turbulent transport processes and ripple toroidal friction. During dedicated dimensionless experiments, the effect of varying the collisionality on turbulence wavenumber spectra has been documented, giving new insight into the turbulence mechanism. Turbulence measurements have also allowed quantitatively comparing experimental results with predictions by 5D gyrokinetic codes: numerical results simultaneously match the magnitude of effective heat diffusivity, rms values of density fluctuations and wavenumber spectra. A clear correlation between electron temperature gradient and impurity transport in the very core of the plasma has been observed, strongly suggesting the existence of a threshold above which transport is dominated by turbulent electron modes. Dynamics of edge turbulent fluctuations has been studied by correlating data from fast imaging cameras and Langmuir probes, yielding a coherent picture of transport processes involved in the SOL. Corrections were made to this article on 6 January 2012. Some of the letters in the text were missing.

  18. Understanding relevance of health research: considerations in the context of research impact assessment.

    PubMed

    Dobrow, Mark J; Miller, Fiona A; Frank, Cy; Brown, Adalsteinn D

    2017-04-17

    With massive investment in health-related research, above and beyond investments in the management and delivery of healthcare and public health services, there has been increasing focus on the impact of health research to explore and explain the consequences of these investments and inform strategic planning. Relevance is reflected by increased attention to the usability and impact of health research, with research funders increasingly engaging in relevance assessment as an input to decision processes. Yet, it is unclear whether relevance is a synonym for or predictor of impact, a necessary condition or stage in achieving it, or a distinct aim of the research enterprise. The main aim of this paper is to improve our understanding of research relevance, with specific objectives to (1) unpack research relevance from both theoretical and practical perspectives, and (2) outline key considerations for its assessment. Our approach involved the scholarly strategy of review and reflection. We prepared a draft paper based on an exploratory review of literature from various fields, and gained from detailed and insightful analysis and critique at a roundtable discussion with a group of key health research stakeholders. We also solicited review and feedback from a small sample of expert reviewers. Research relevance seems increasingly important in justifying research investments and guiding strategic research planning. However, consideration of relevance has been largely tacit in the health research community, often depending on unexplained interpretations of value, fit and potential for impact. While research relevance seems a necessary condition for impact - a process or component of efforts to make rigorous research usable - ultimately, relevance stands apart from research impact. Careful and explicit consideration of research relevance is vital to gauge the overall value and impact of a wide range of individual and collective research efforts and investments. To improve understanding, this paper outlines four key considerations, including how research relevance assessments (1) orientate to, capture and compare research versus non-research sources, (2) consider both instrumental versus non-instrumental uses of research, (3) accommodate dynamic temporal-shifting perspectives on research, and (4) align with an intersubjective understanding of relevance.

  19. Recent developments in the structural design and optimization of ITER neutral beam manifold

    NASA Astrophysics Data System (ADS)

    Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI

    2018-02-01

    This paper describes a new design of the neutral beam manifold based on a more optimized support system. An alternative scheme is proposed to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of components in the next phase.

  20. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
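    To make the primal-dual structure concrete, the sketch below applies the Chambolle-Pock iteration to a toy nonnegative least-squares problem, min over x >= 0 of 0.5*||Kx - b||^2, i.e. F(y) = 0.5*||y - b||^2 composed with the linear operator K and G the indicator of the nonnegative orthant. The problem, step-size choice and iteration count are chosen for illustration only and are far simpler than the CT reconstruction problems treated in the paper.

```python
import numpy as np

def chambolle_pock_nnls(K, b, n_iter=500):
    """Chambolle-Pock primal-dual iteration for min_{x >= 0} 0.5*||K x - b||^2."""
    L = np.linalg.norm(K, 2)      # operator norm of K
    sigma = tau = 1.0 / L         # step sizes satisfying sigma * tau * L**2 <= 1
    theta = 1.0
    m, n = K.shape
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        # Dual step: prox of sigma * F*, with F(y) = 0.5 * ||y - b||^2
        y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
        # Primal step: prox of tau * G = projection onto the nonnegative orthant
        x_new = np.maximum(x - tau * (K.T @ y), 0.0)
        # Over-relaxation of the primal variable
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# Toy example with a random, consistent system (illustrative only)
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))
x_true = np.abs(rng.standard_normal(10))
b = K @ x_true
x_rec = chambolle_pock_nnls(K, b)
```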

  1. TU-G-201-01: What Therapy Physicists Need to Know About CT and PET/CT: Terminology and Latest Developments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, C.

    This session will update therapeutic physicists on technological advancements and radiation oncology features of commercial CT, MRI, and PET/CT imaging systems. Also described are physicists' roles in every stage of equipment selection, purchasing, and operation, including defining specifications, evaluating vendors, making recommendations, and optimal and safe use of imaging equipment in the radiation oncology environment. The first presentation defines important terminology of CT and PET/CT followed by a review of the latest innovations, such as metal artifact reduction, statistical iterative reconstruction, radiation dose management, tissue classification by dual energy CT and spectral CT, improvement in spatial resolution and sensitivity in PET, and potentials of PET/MR. We will also discuss important technical specifications and items in CT and PET/CT purchasing quotes and their impacts. The second presentation will focus on key components in the request for proposal for an MRI simulator and how to evaluate vendor proposals. MRI safety issues in radiation oncology, including MRI scanner zones (4-zone design), will be discussed. Basic MR terminologies, important functionalities, and advanced features relevant to radiation therapy will be discussed. In the third presentation, justification of imaging systems for radiation oncology, considerations in room design and construction in a radiation oncology department, shared use with diagnostic radiology, staffing needs and training, and clinical/research use cases and implementation will be discussed. The emphasis will be on understanding and bridging the differences between diagnostic and radiation oncology installations, building consensus amongst stakeholders for purchase and use, and integrating imaging technologies into the radiation oncology environment. Learning Objectives: Learn the latest innovations of major imaging systems relevant to radiation therapy. Be able to describe important technical specifications of CT, MRI, and PET/CT. Understand the process of budget request, equipment justification, comparisons of technical specifications, site visits, vendor selection, and contract development.

  2. TU-G-201-00: Imaging Equipment Specification and Selection in Radiation Oncology Departments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This session will update therapeutic physicists on technological advancements and radiation oncology features of commercial CT, MRI, and PET/CT imaging systems. Also described are physicists' roles in every stage of equipment selection, purchasing, and operation, including defining specifications, evaluating vendors, making recommendations, and optimal and safe use of imaging equipment in the radiation oncology environment. The first presentation defines important terminology of CT and PET/CT followed by a review of the latest innovations, such as metal artifact reduction, statistical iterative reconstruction, radiation dose management, tissue classification by dual energy CT and spectral CT, improvement in spatial resolution and sensitivity in PET, and potentials of PET/MR. We will also discuss important technical specifications and items in CT and PET/CT purchasing quotes and their impacts. The second presentation will focus on key components in the request for proposal for an MRI simulator and how to evaluate vendor proposals. MRI safety issues in radiation oncology, including MRI scanner zones (4-zone design), will be discussed. Basic MR terminologies, important functionalities, and advanced features relevant to radiation therapy will be discussed. In the third presentation, justification of imaging systems for radiation oncology, considerations in room design and construction in a radiation oncology department, shared use with diagnostic radiology, staffing needs and training, and clinical/research use cases and implementation will be discussed. The emphasis will be on understanding and bridging the differences between diagnostic and radiation oncology installations, building consensus amongst stakeholders for purchase and use, and integrating imaging technologies into the radiation oncology environment. Learning Objectives: Learn the latest innovations of major imaging systems relevant to radiation therapy. Be able to describe important technical specifications of CT, MRI, and PET/CT. Understand the process of budget request, equipment justification, comparisons of technical specifications, site visits, vendor selection, and contract development.

  3. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Sherrill, C. David

    2014-07-01

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.
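    The atom-pairwise bookkeeping described above, in which a pairwise contribution matrix is summed over the atoms of one monomer to visualise per-atom contributions on the other, amounts to simple reductions of a matrix; the sketch below shows this with invented numbers, and no actual SAPT quantities are computed.

```python
import numpy as np

# Hypothetical atom-pairwise contributions E_pair[a, b] (arbitrary units)
# between atoms a of monomer A (rows) and atoms b of monomer B (columns).
E_pair = np.array([[-1.2,  0.3, -0.1],
                   [ 0.4, -2.0,  0.2],
                   [-0.3,  0.1, -0.5],
                   [ 0.0, -0.2,  0.1]])

per_atom_A = E_pair.sum(axis=1)  # contribution assigned to each atom of A
per_atom_B = E_pair.sum(axis=0)  # contribution assigned to each atom of B
total = E_pair.sum()             # both reductions conserve the total term

print(per_atom_A, per_atom_B, total)
```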

  4. Spatial assignment of symmetry adapted perturbation theory interaction energy components: The atomic SAPT partition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu

    2014-07-28

    We develop a physically-motivated assignment of symmetry adapted perturbation theory for intermolecular interactions (SAPT) into atom-pairwise contributions (the A-SAPT partition). The basic precept of A-SAPT is that the many-body interaction energy components are computed normally under the formalism of SAPT, following which a spatially-localized two-body quasiparticle interaction is extracted from the many-body interaction terms. For electrostatics and induction source terms, the relevant quasiparticles are atoms, which are obtained in this work through the iterative stockholder analysis (ISA) procedure. For the exchange, induction response, and dispersion terms, the relevant quasiparticles are local occupied orbitals, which are obtained in this work through the Pipek-Mezey procedure. The local orbital atomic charges obtained from ISA additionally allow the terms involving local orbitals to be assigned in an atom-pairwise manner. Further summation over the atoms of one or the other monomer allows for a chemically intuitive visualization of the contribution of each atom and interaction component to the overall noncovalent interaction strength. Herein, we present the intuitive development and mathematical form for A-SAPT applied in the SAPT0 approximation (the A-SAPT0 partition). We also provide an efficient series of algorithms for the computation of the A-SAPT0 partition with essentially the same computational cost as the corresponding SAPT0 decomposition. We probe the sensitivity of the A-SAPT0 partition to the ISA grid and convergence parameter, orbital localization metric, and induction coupling treatment, and recommend a set of practical choices which closes the definition of the A-SAPT0 partition. We demonstrate the utility and computational tractability of the A-SAPT0 partition in the context of side-on cation-π interactions and the intercalation of DNA by proflavine. A-SAPT0 clearly shows the key processes in these complicated noncovalent interactions, in systems with up to 220 atoms and 2845 basis functions.

  5. Interpretation and use of evidence in state policymaking: a qualitative analysis.

    PubMed

    Apollonio, Dorie E; Bero, Lisa A

    2017-02-20

    Researchers advocating for evidence-informed policy have attempted to encourage policymakers to develop a greater understanding of research and researchers to develop a better understanding of the policymaking process. Our aim was to apply findings drawn from studies of the policymaking process, specifically the theory of policy windows, to identify strategies used to integrate evidence into policymaking and points in the policymaking process where evidence was more or less relevant. Our observational study relied on interviews conducted with 24 policymakers from the USA who had been trained to interpret scientific research in multiple iterations of an evidence-based workshop. Participants were asked to describe cases where they had been involved in making health policy and to provide examples in which research was used, either successfully or unsuccessfully. Interviews were transcribed, independently coded by multiple members of the study team and analysed for content using key words, concepts identified by participants and concepts arising from review of the texts. Our results suggest that policymakers who focused on health issues used multiple strategies to encourage evidence-informed policymaking. The respondents used a strict definition of what constituted evidence, and relied on their experience with research to discourage the use of less rigorous research. Their experience suggested that evidence was less useful in identifying problems, encouraging political action or ensuring feasibility and more useful in developing policy alternatives. Past research has suggested multiple strategies to increase the use of evidence in policymaking, including the development of rapid-response research and policy-oriented summaries of data. Our findings suggest that these strategies may be most relevant to the policymaking stream, which develops policy alternatives. In addition, we identify several strategies that policymakers and researchers can apply to encourage evidence-informed policymaking.

  6. Maternal vaccination: moving the science forward

    PubMed Central

    Faucette, Azure N.; Unger, Benjamin L.; Gonik, Bernard; Chen, Kang

    2015-01-01

    BACKGROUND Infections remain one of the leading causes of morbidity in pregnant women and newborns, with vaccine-preventable infections contributing significantly to the burden of disease. In the past decade, maternal vaccination has emerged as a promising public health strategy to prevent and combat maternal, fetal and neonatal infections. Despite a number of universally recommended maternal vaccines, the development and evaluation of safe and effective maternal vaccines and their wide acceptance are hampered by the lack of thorough understanding of the efficacy and safety in the pregnant women and the offspring. METHODS An outline was synthesized based on the current status and major gaps in the knowledge of maternal vaccination. A systematic literature search in PUBMED was undertaken using the key words in each section title of the outline to retrieve articles relevant to pregnancy. Articles cited were selected based on relevance and quality. On the basis of the reviewed information, a perspective on the future directions of maternal vaccination research was formulated. RESULTS Maternal vaccination can generate active immune protection in the mother and elicit systemic immunoglobulin G (IgG) and mucosal IgG, IgA and IgM responses to confer neonatal protection. The maternal immune system undergoes significant modulation during pregnancy, which influences responsiveness to vaccines. Significant gaps exist in our knowledge of the efficacy and safety of maternal vaccines, and no maternal vaccines against a large number of old and emerging pathogens are available. Public acceptance of maternal vaccination has been low. CONCLUSIONS To tackle the scientific challenges of maternal vaccination and to provide the public with informed vaccination choices, scientists and clinicians in different disciplines must work closely and have a mechanistic understanding of the systemic, reproductive and mammary mucosal immune responses to vaccines. The use of animal models should be coupled with human studies in an iterative manner for maternal vaccine experimentation, evaluation and optimization. Systems biology approaches should be adopted to improve the speed, accuracy and safety of maternal vaccine targeting. PMID:25015234

  7. TU-G-201-02: An MRI Simulator From Proposal to Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.

    2015-06-15

    This session will update therapeutic physicists on technological advancements and radiation oncology features of commercial CT, MRI, and PET/CT imaging systems. Also described are physicists' roles in every stage of equipment selection, purchasing, and operation, including defining specifications, evaluating vendors, making recommendations, and optimal and safe use of imaging equipment in the radiation oncology environment. The first presentation defines important terminology of CT and PET/CT followed by a review of the latest innovations, such as metal artifact reduction, statistical iterative reconstruction, radiation dose management, tissue classification by dual energy CT and spectral CT, improvement in spatial resolution and sensitivity in PET, and potentials of PET/MR. We will also discuss important technical specifications and items in CT and PET/CT purchasing quotes and their impacts. The second presentation will focus on key components in the request for proposal for an MRI simulator and how to evaluate vendor proposals. MRI safety issues in radiation oncology, including MRI scanner zones (4-zone design), will be discussed. Basic MR terminologies, important functionalities, and advanced features relevant to radiation therapy will be discussed. In the third presentation, justification of imaging systems for radiation oncology, considerations in room design and construction in a radiation oncology department, shared use with diagnostic radiology, staffing needs and training, and clinical/research use cases and implementation will be discussed. The emphasis will be on understanding and bridging the differences between diagnostic and radiation oncology installations, building consensus amongst stakeholders for purchase and use, and integrating imaging technologies into the radiation oncology environment. Learning Objectives: Learn the latest innovations of major imaging systems relevant to radiation therapy. Be able to describe important technical specifications of CT, MRI, and PET/CT. Understand the process of budget request, equipment justification, comparisons of technical specifications, site visits, vendor selection, and contract development.

  8. Liquefied natural gas (LNG) safety

    NASA Technical Reports Server (NTRS)

    Ordin, P. M.

    1977-01-01

    A bibliography assembled from a computer search of the NASA Aerospace Safety Data Bank, including report title, author, abstract, source, description of figures, key references, and key words or subject terms. The publication is indexed by key subject and by author. Items are relevant to design engineers and safety specialists.

  9. Role of Australian primary healthcare organisations (PHCOs) in primary healthcare (PHC) workforce planning: lessons from abroad.

    PubMed

    Naccarella, Lucio; Buchan, James; Newton, Bill; Brooks, Peter

    2011-08-01

    To review international experience in order to inform Australian PHC workforce policy on the role of primary healthcare organisations (PHCOs/Medicare Locals) in PHC workforce planning. A NZ and UK study tour was conducted by the lead author, involving 29 key informant interviews regarding PHCOs' roles and their effect on PHC workforce planning. Interviews were audio-taped with consent, transcribed and analysed thematically. Emerging themes included: workforce planning is a complex, dynamic, iterative process and key criteria exist for doing it well; PHCOs lacked a PHC workforce policy framework for workforce planning; PHCOs lacked the authority, power and appropriate funding to do workforce planning; there is a need to align workforce planning with service planning; and a PHC Workforce Planning and Development Benchmarking Database is essential for local planning and for evaluating workforce reforms. With the Australian government promoting the role of PHCOs in health system reform, reflections from abroad highlight the key actions required within PHC and PHCOs to optimise PHC workforce planning.

  10. A systematic literature review of the key challenges for developing the structure of public health economic models.

    PubMed

    Squires, Hazel; Chilcott, James; Akehurst, Ronald; Burr, Jennifer; Kelly, Michael P

    2016-04-01

    To identify the key methodological challenges for public health economic modelling and set an agenda for future research. An iterative literature search identified papers describing methodological challenges for developing the structure of public health economic models. Additional multidisciplinary literature searches helped expand upon important ideas raised within the review. Fifteen articles were identified within the formal literature search, highlighting three key challenges: inclusion of non-healthcare costs and outcomes; inclusion of equity; and modelling complex systems and multi-component interventions. Based upon these and multidisciplinary searches about dynamic complexity, the social determinants of health, and models of human behaviour, six areas for future research were specified. Future research should focus on: the use of systems approaches within health economic modelling; approaches to assist the systematic consideration of the social determinants of health; methods for incorporating models of behaviour and social interactions; consideration of equity; and methodology to help modellers develop valid, credible and transparent public health economic model structures.

  11. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. R.; Carmichael, J. R.; Gebhart, T. E.

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them, and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds of ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast-valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.

  12. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S. K.; Reed, J. R.; Lyttle, M. S.

    2016-01-01

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them, and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds of ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast-valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.
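    The stepwise gas-dynamics calculation these records describe can be illustrated with a deliberately reduced model: quasi-static, isentropic expansion of an ideal propellant gas accelerating a rigid pellet down the barrel, ignoring valve dynamics, friction and wave effects. The sketch below is for orientation only, not the ORNL Java simulator; every numerical value is an illustrative assumption.

```python
import math

# Minimal sketch of a single-stage gas gun: quasi-static, isentropic
# expansion of an ideal propellant gas pushing a rigid pellet down the
# barrel.  Valve dynamics, friction and wave effects are ignored, and all
# numerical values are illustrative assumptions, not ITER/ORNL design data.

gamma = 1.66        # ratio of specific heats (He propellant, assumed)
P0    = 40e5        # initial propellant pressure [Pa] (~40 bar, assumed)
P_amb = 100.0       # downstream pressure [Pa] (near-vacuum, assumed)
V0    = 5.0e-4      # plenum volume behind the pellet [m^3] (assumed)
d     = 0.028       # barrel bore / pellet diameter [m] (assumed)
L     = 1.0         # barrel length [m] (assumed)
m     = 5.0e-3      # pellet mass [kg] (assumed, roughly a large D2 pellet)

A = math.pi * d**2 / 4.0          # bore cross-sectional area [m^2]

def muzzle_speed(dt=1.0e-6):
    """Step the pellet down the barrel until it exits; return exit speed [m/s]."""
    x, v = 0.0, 0.0
    while x < L:
        # pressure of the isentropically expanding gas column behind the pellet
        P = P0 * (V0 / (V0 + A * x)) ** gamma
        a = (P - P_amb) * A / m   # Newton's second law on the pellet
        v += a * dt
        x += v * dt
    return v

print(f"estimated muzzle speed: {muzzle_speed():.0f} m/s")
```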

  13. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  14. The conversion of a room temperature NaK loop to a high temperature MHD facility for Li/V blanket testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, C.B.; Haglund, R.C.; Miller, M.E.

    1996-12-31

    The Vanadium/Lithium system has been the recent focus of ANL's Blanket Technology Program, and for the last several years, ANL's Liquid Metal Blanket activities have been carried out in direct support of the ITER (International Thermonuclear Experimental Reactor) breeding blanket task area. A key feasibility issue for the ITER Vanadium/Lithium breeding blanket is the development of insulator coatings. Design calculations by Hua and Gohar show that an electrically insulating layer is necessary to maintain an acceptably low magnetohydrodynamic (MHD) pressure drop in the current ITER design. Consequently, the decision was made to convert Argonne's Liquid Metal EXperiment (ALEX) from a 200°C NaK facility to a 350°C lithium facility. The upgraded facility was designed to produce MHD pressure drop data, test section voltage distributions, and heat transfer data for mid-scale test sections and blanket mockups at Hartmann numbers (M) and interaction parameters (N) in the range of 10³ to 10⁵ in lithium at 350°C. Following completion of the upgrade work, a short performance test was conducted, followed by two longer, multiple-hour MHD tests, all at 230°C. The modified ALEX facility performed up to expectations in the testing. MHD pressure drop and test section voltage distributions were collected at Hartmann numbers of 1000.

  15. Conversion of a room temperature NaK loop to a high temperature MHD facility for Li/V blanket testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, C.B.; Haglund, R.C.; Miller, M.E.

    1996-12-31

    The Vanadium/Lithium system has been the recent focus of ANL's Blanket Technology Program, and for the last several years, ANL's Liquid Metal Blanket activities have been carried out in direct support of the ITER (International Thermonuclear Experimental Reactor) breeding blanket task area. A key feasibility issue for the ITER Vanadium/Lithium breeding blanket is the development of insulator coatings. Design calculations by Hua and Gohar show that an electrically insulating layer is necessary to maintain an acceptably low magnetohydrodynamic (MHD) pressure drop in the current ITER design. Consequently, the decision was made to convert Argonne's Liquid Metal EXperiment (ALEX) from a 200°C NaK facility to a 350°C lithium facility. The upgraded facility was designed to produce MHD pressure drop data, test section voltage distributions, and heat transfer data for mid-scale test sections and blanket mockups at Hartmann numbers (M) and interaction parameters (N) in the range of 10³ to 10⁵ in lithium at 350°C. Following completion of the upgrade work, a short performance test was conducted, followed by two longer, multiple-hour MHD tests, all at 230°C. The modified ALEX facility performed up to expectations in the testing. MHD pressure drop and test section voltage distributions were collected at Hartmann numbers of 1000. 4 refs., 2 figs.
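    As a rough orientation to the dimensionless groups quoted in these two records, the sketch below evaluates the standard definitions of the Hartmann number and interaction parameter. The lithium property values and the field, geometry and velocity inputs are approximate assumptions for illustration only, not ALEX design data.

```python
import math

# Standard MHD dimensionless groups: Ha = B a sqrt(sigma/mu), N = Ha^2 / Re.
# Property values for liquid lithium near 350 C are rough approximations,
# and the field/geometry/velocity inputs are assumed for illustration.

sigma = 3.0e6    # electrical conductivity of Li near 350 C [S/m] (approx.)
rho   = 505.0    # density of Li near 350 C [kg/m^3] (approx.)
mu    = 4.0e-4   # dynamic viscosity of Li near 350 C [Pa s] (approx.)

B = 2.0          # magnetic flux density [T] (assumed)
a = 0.05         # duct half-width, characteristic length [m] (assumed)
u = 0.1          # mean flow velocity [m/s] (assumed)

Ha = B * a * math.sqrt(sigma / mu)   # Hartmann number
Re = rho * u * a / mu                # Reynolds number
N  = Ha**2 / Re                      # interaction parameter (Stuart number)

# with these assumptions both Ha and N land in the quoted 1e3-1e5 range
print(f"Ha ~ {Ha:.2e}, N ~ {N:.2e}")
```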

  16. Performance evaluation of algebraic reconstruction technique (ART) for prototype chest digital tomosynthesis (CDT) system

    NASA Astrophysics Data System (ADS)

    Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung

    2017-03-01

    Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for the CDT system is challenging in that a limited number of low-dose projections is acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were obtained from 41 projection images acquired over a total angular range of ±20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a properly chosen ART relaxation parameter could improve image quality from the same projection data. In this study, the proper relaxation parameter values for the zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. The maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range of the CDT system.
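    The reconstruction update at the heart of this study is the classical relaxed ART (Kaczmarz) iteration. The sketch below applies it to a small synthetic system standing in for the real CDT projection geometry; the matrix size, noise-free data and starting guesses are illustrative assumptions, with the relaxation values 0.4 and 0.6 borrowed from the abstract above.

```python
import numpy as np

# Relaxed ART (Kaczmarz) on a toy system A x = b.  The toy operator stands
# in for the CDT projection geometry; sizes and parameters are illustrative.

rng = np.random.default_rng(0)
n = 64                                   # toy "image" size (assumed)
x_true = rng.random(n)
A = rng.random((200, n))                 # toy projection operator (assumed)
b = A @ x_true                           # noiseless toy projections

def art(A, b, x0, relax=0.4, n_iter=20):
    """Run n_iter full sweeps of relaxed ART starting from initial guess x0."""
    x = x0.copy()
    row_norm2 = np.sum(A**2, axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norm2[i] * A[i]   # Kaczmarz update
    return x

x_zi = art(A, b, np.zeros(n), relax=0.4)              # zero-image start
x_bp = art(A, b, A.T @ b / np.sum(A**2), relax=0.6)   # crude back-projection start
print(np.linalg.norm(x_zi - x_true), np.linalg.norm(x_bp - x_true))
```

    Sweeping the relaxation parameter and iteration count while tracking a CNR-like metric is the natural way to reproduce the kind of parameter study described above.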

  17. Material Assessment for ITER's Collective Thomson Scattering first mirror

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, R.; Policarpo, H.; Goncalves, B.

    2015-07-01

    The International Thermonuclear Experimental Reactor (ITER) Collective Thomson Scattering (CTS) system is a diagnostic instrument that measures plasma density and velocity through Thomson scattering of microwave radiation. Some of the key components of the CTS are quasi-optical mirrors that are used to produce astigmatic beam patterns, which have an impact on the strength and spatial resolution of the diagnostic signal. The mirrors are exposed to neutron radiation, which may alter the quality of the signal received. In this work, three different materials (molybdenum (Mo), stainless steel 316 (SS-316) and tungsten (W)) are considered for the first mirror of the CTS. The objective is to assess which of the studied materials is best suited for this mirror, considering different simulated neutron radiation load scenarios defined by ITER and based on the resulting stresses and temperature distributions. For this, the neutron irradiation and subsequent heat load on the mirrors are simulated using the Monte Carlo N-Particle (MCNP) code. Based on the MCNP heat-load results, transient thermal-structural Finite Element Analysis (FEA) of the mirror over a 400 s discharge, with and without cooling on the rear side, is conducted using the commercial FEA software ANSYS. Results show that, of the tested materials, Mo and W are the most suitable for this application. Although this study does not yet consider the variation of material properties with temperature, it provides a quick, satisfactory initial assessment that may be considered in subsequent, more complex analyses. (authors)
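    A back-of-the-envelope way to see why a low-conductivity steel mirror fares worst thermally is the surface temperature rise of a semi-infinite solid under a constant absorbed heat flux, ΔT = 2q″√(t/(π ρ c_p k)). The flux value and the room-temperature property data below are rough assumptions, and the geometry is idealized; the paper's actual assessment uses MCNP-derived loads and transient FEA in ANSYS.

```python
import math

# Surface temperature rise of a semi-infinite solid under a constant
# absorbed heat flux over a 400 s discharge (no cooling, constant
# properties).  Flux and material property values are rough assumptions.

q_flux = 1.0e5          # absorbed heat flux [W/m^2] (assumed)
t_pulse = 400.0         # discharge length [s] (from the abstract)

# (thermal conductivity [W/m K], density [kg/m^3], specific heat [J/kg K])
materials = {
    "Mo":     (138.0, 10220.0, 251.0),
    "SS-316": (15.0,   8000.0, 500.0),
    "W":      (170.0, 19300.0, 132.0),
}

for name, (k, rho, cp) in materials.items():
    # dT_surface = 2 q'' sqrt(t / (pi rho cp k))
    dT = 2.0 * q_flux * math.sqrt(t_pulse / (math.pi * rho * cp * k))
    print(f"{name:7s} dT_surface ~ {dT:5.0f} K")

# under the same assumed flux, SS-316 heats up roughly 2-3x more than Mo or W
```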

  18. Chicago Classification Criteria of Esophageal Motility Disorders Defined in High Resolution Esophageal Pressure Topography (EPT)†

    PubMed Central

    Bredenoord, Albert J; Fox, Mark; Kahrilas, Peter J; Pandolfino, John E; Schwizer, Werner; Smout, AJPM; Conklin, Jeffrey L; Cook, Ian J; Gyawali, Prakash; Hebbard, Geoffrey; Holloway, Richard H; Ke, Meiyun; Keller, Jutta; Mittal, Ravinder K; Peters, Jeff; Richter, Joel; Roman, Sabine; Rommel, Nathalie; Sifrim, Daniel; Tutuian, Radu; Valdovinos, Miguel; Vela, Marcelo F; Zerbib, Frank

    2011-01-01

    Background The Chicago Classification of esophageal motility was developed to facilitate the interpretation of clinical high resolution esophageal pressure topography (EPT) studies, concurrent with the widespread adoption of this technology into clinical practice. The Chicago Classification has been, and will continue to be, an evolutionary process, molded first by published evidence pertinent to the clinical interpretation of high resolution manometry (HRM) studies and secondarily by group experience when suitable evidence is lacking. Methods This publication summarizes the state of our knowledge as of the most recent meeting of the International High Resolution Manometry Working Group in Ascona, Switzerland in April 2011. The prior iteration of the Chicago Classification was updated through a process of literature analysis and discussion. Key Results The major changes in this document from the prior iteration are largely attributable to research studies published since the prior iteration, in many cases research conducted in response to prior deliberations of the International High Resolution Manometry Working Group. The classification now includes criteria for subtyping achalasia, EGJ outflow obstruction, motility disorders not observed in normal subjects (Distal esophageal spasm, Hypercontractile esophagus, and Absent peristalsis), and statistically defined peristaltic abnormalities (Weak peristalsis, Frequent failed peristalsis, Rapid contractions with normal latency, and Hypertensive peristalsis). Conclusions & Inferences The Chicago Classification is an algorithmic scheme for diagnosis of esophageal motility disorders from clinical EPT studies. Moving forward, we anticipate continuing this process with increased emphasis placed on natural history studies and outcome data based on the classification. PMID:22248109

  19. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. A subsequent LP solution subject to this reaction-deletion constraint then becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
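    A heavily simplified sketch of this alternating IP/LP idea is given below for a four-reaction, fully irreversible toy network, using SciPy's linprog and milp. The toy stoichiometry, the no-good cuts used to exclude already-found cut sets, and the vertex-based recovery of mode supports are illustrative choices; the refinements of the published AILP algorithm for general networks (reversible reactions, elementarity and minimality guarantees, efficiency measures) are omitted.

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds  # SciPy >= 1.9

# Toy network, all reactions irreversible: R1: ->A, R2: A->B, R3: B->, R4: A->
# The IP picks the smallest reaction-deletion set that disables every
# elementary mode (EM) found so far; the LP then looks for a surviving
# normalized flux mode.  If none exists, the deletion set is recorded as a
# cut set.  This is an illustrative sketch, not the published AILP method.

S = np.array([[1, -1,  0, -1],     # metabolite A balance
              [0,  1, -1,  0]])    # metabolite B balance
n_rxn = S.shape[1]
TOL = 1e-9

def surviving_mode(deleted):
    """LP: a vertex of {S v = 0, v >= 0, sum(v) = 1, v[j] = 0 for j deleted}."""
    bounds = [(0.0, 0.0) if j in deleted else (0.0, None) for j in range(n_rxn)]
    A_eq = np.vstack([S, np.ones(n_rxn)])
    b_eq = np.append(np.zeros(S.shape[0]), 1.0)
    # a generic objective makes the solver return a vertex, whose support
    # (for this all-irreversible toy network) is an elementary mode
    res = linprog(np.linspace(1.0, 2.0, n_rxn), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return frozenset(int(j) for j in np.flatnonzero(res.x > TOL)) if res.success else None

ems, cut_sets = [], []
while True:
    # IP: minimum-cardinality deletion set hitting every found EM, excluding
    # (supersets of) deletion sets already identified as cut sets
    rows, lb, ub = [], [], []
    for em in ems:
        rows.append([1.0 if j in em else 0.0 for j in range(n_rxn)])
        lb.append(1.0)
        ub.append(np.inf)
    for cut in cut_sets:
        rows.append([1.0 if j in cut else 0.0 for j in range(n_rxn)])
        lb.append(-np.inf)
        ub.append(len(cut) - 1.0)
    if rows:
        res = milp(np.ones(n_rxn), integrality=np.ones(n_rxn), bounds=Bounds(0, 1),
                   constraints=LinearConstraint(np.array(rows), lb, ub))
        if not res.success:
            break                              # nothing left to enumerate
        deleted = frozenset(int(j) for j in np.flatnonzero(res.x > 0.5))
    else:
        deleted = frozenset()                  # first pass: delete nothing

    mode = surviving_mode(deleted)
    if mode is not None:
        ems.append(mode)                       # support of the LP vertex = new EM
    else:
        cut_sets.append(deleted)               # deletion kills all flux

print("EM supports:", [sorted(e) for e in ems])
print("cut sets   :", [sorted(c) for c in cut_sets])
```

    On this toy network the loop recovers both elementary modes and the corresponding cut sets in a handful of IP/LP alternations, which is the behaviour the abstract describes at genome scale.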

  20. Validating Lung Models Using the ASL 5000 Breathing Simulator.

    PubMed

    Dexter, Amanda; McNinch, Neil; Kaznoch, Destiny; Volsko, Teresa A

    2018-04-01

    This study sought to validate pediatric models with normal and altered pulmonary mechanics. PubMed and CINAHL databases were searched for studies directly measuring the pulmonary mechanics of healthy infants and children, infants with severe bronchopulmonary dysplasia, and children with neuromuscular disease. The ASL 5000 was used to construct models using tidal volume (VT), inspiratory time (TI), respiratory rate, resistance, compliance, and esophageal pressure gleaned from the literature. Data were collected for a 1-minute period and repeated three times for each model. t tests compared modeled data with data abstracted from the literature. Repeated-measures analyses evaluated model performance over multiple iterations. Statistical significance was established at a P value of less than 0.05. Maximum differences of means (experimental iteration mean - clinical standard mean) for TI and VT were as follows: term infant without lung disease (TI = 0.09 s, VT = 0.29 mL), severe bronchopulmonary dysplasia (TI = 0.08 s, VT = 0.17 mL), child without lung disease (TI = 0.10 s, VT = 0.17 mL), and child with neuromuscular disease (TI = 0.09 s, VT = 0.57 mL). One-sample testing demonstrated statistically significant differences between clinical controls and the VT and TI values produced by the ASL 5000 for each iteration and model (P < 0.01). The greatest magnitude of differences (VT < 1.6%, TI = 18%) was not clinically relevant. Although inconsistencies occurred with the models constructed on the ASL 5000, it was deemed sufficiently accurate for the study purposes. It is therefore essential to test models and evaluate the magnitude of differences before use.
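    The models in this study are parameterized by resistance, compliance, tidal volume and inspiratory time. A generic single-compartment equation of motion, P = V/C + R·V̇, shows how such parameters translate into a pressure waveform; the values below are illustrative, not the study's, and the code is not the ASL 5000's internal model.

```python
import numpy as np

# Generic single-compartment lung model (equation of motion of the
# respiratory system): P = V/C + R * flow.  Parameter values are
# illustrative infant-scale assumptions, not taken from the study.

R  = 30.0     # resistance [cmH2O/L/s] (assumed)
C  = 0.003    # compliance [L/cmH2O] (assumed, 3 mL/cmH2O)
VT = 0.018    # tidal volume [L] (assumed, 18 mL)
TI = 0.5      # inspiratory time [s] (assumed)

t = np.linspace(0.0, TI, 200)
flow = np.full_like(t, VT / TI)        # constant-flow inspiration (assumed)
volume = flow * t                      # inhaled volume above FRC
pressure = volume / C + R * flow       # driving pressure over inspiration

print(f"peak pressure ~ {pressure.max():.1f} cmH2O, "
      f"elastic (plateau) component ~ {VT / C:.1f} cmH2O")
```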
