PREFACE: Progress in the ITER Physics Basis
NASA Astrophysics Data System (ADS)
Ikeda, K.
2007-06-01
I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. 
Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordinating Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zhengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C.
Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)
The Physics Basis of ITER Confinement
NASA Astrophysics Data System (ADS)
Wagner, F.
2009-02-01
ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will decide the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of those predictions, and briefly describes the major characteristics and physics behind the H-mode, the preferred confinement regime of ITER.
NASA Astrophysics Data System (ADS)
Wilson, J. R.; Bonoli, P. T.
2015-02-01
Ion cyclotron range of frequency (ICRF) heating is foreseen as an integral component of initial ITER operation. The status of ICRF preparations for ITER and supporting research was updated in the 2007 report on the ITER physics basis [Gormezano et al., Nucl. Fusion 47, S285 (2007)]. In this report, we summarize progress made toward the successful application of ICRF power on ITER since that time. Significant advances have been made in support of the technical design: new techniques for arc protection, new algorithms for tuning and matching, experimental tests of more ITER-like antennas, and demonstrations on mockups that the design assumptions are correct. In addition, new applications of the ICRF system, beyond bulk heating, have been proposed and explored.
Advances in the physics basis for the European DEMO design
NASA Astrophysics Data System (ADS)
Wenninger, R.; Arbeiter, F.; Aubert, J.; Aho-Mantila, L.; Albanese, R.; Ambrosino, R.; Angioni, C.; Artaud, J.-F.; Bernert, M.; Fable, E.; Fasoli, A.; Federici, G.; Garcia, J.; Giruzzi, G.; Jenko, F.; Maget, P.; Mattei, M.; Maviglia, F.; Poli, E.; Ramogida, G.; Reux, C.; Schneider, M.; Sieglin, B.; Villone, F.; Wischmeier, M.; Zohm, H.
2015-06-01
In the European fusion roadmap, ITER is followed by a demonstration fusion power reactor (DEMO), for which a conceptual design is under development. This paper reports the first results of a coherent effort by European experts to develop the relevant physics knowledge for that design (the DEMO Physics Basis). The program currently includes investigations in the areas of scenario modeling, transport, MHD, heating and current drive, fast particles, plasma-wall interaction and disruptions.
NASA Astrophysics Data System (ADS)
Awatey, M. T.; Irving, J.; Oware, E. K.
2016-12-01
Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple, equally plausible geologic features that honor the limited, noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations increases, starting from the coefficients corresponding to the highest-ranked basis vectors and proceeding to those of the least informative ones. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration onward. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized.
We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
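The POD dimensionality reduction described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the training images, grid size and retained-energy threshold are all hypothetical, and the basis is obtained from an SVD of the centered snapshot matrix.

```python
import numpy as np

def pod_basis(training_images, energy=0.95):
    """Build POD basis vectors from flattened training images (TIs).

    training_images: (n_samples, n_cells) array, one TI realization per row.
    Returns the snapshot mean, the basis (n_cells, r) and the retained
    singular values, where r is the smallest rank capturing `energy` of
    the TI variability.
    """
    mean = training_images.mean(axis=0)
    X = training_images - mean                  # center the snapshots
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy fraction
    r = int(np.searchsorted(frac, energy)) + 1
    return mean, Vt[:r].T, s[:r]

def project(model, mean, basis):
    """Starting POD coefficients: project a model into the reduced space."""
    return basis.T @ (model - mean)

rng = np.random.default_rng(0)
# Hypothetical "TIs": 200 spatially correlated random fields on a 20x20 grid
tis = rng.standard_normal((200, 400)).cumsum(axis=1)
mean, basis, s = pod_basis(tis, energy=0.95)
coeffs = project(tis[0], mean, basis)
recon = mean + basis @ coeffs                   # reduced-space reconstruction
```

In a full McMC inversion, the chain would then perturb only the coefficients inside the sampling window rather than the full parameter field, and reconstruct each candidate model as `mean + basis @ coeffs`.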
Developing DIII-D To Prepare For ITER And The Path To Fusion Energy
NASA Astrophysics Data System (ADS)
Buttery, Richard; Hill, David; Solomon, Wayne; Guo, Houyang; DIII-D Team
2017-10-01
DIII-D pursues the advancement of fusion energy through scientific understanding and discovery of solutions. Research targets two key goals. First, to prepare for ITER, we must resolve how to use its flexible control tools to reach Q = 10 rapidly, and develop the scientific basis to interpret results from ITER for fusion projection. Second, we must determine how to sustain a high-performance fusion core in steady-state conditions, with minimal actuators and a plasma exhaust solution. DIII-D will target these missions with: (i) increased electron heating and balanced-torque neutral beams to simulate burning plasma conditions, (ii) new 3D coil arrays to resolve control of transients, (iii) off-axis current drive to study physics in steady-state regimes, (iv) divertor configurations that promote detachment at low upstream density, and (v) a reactor-relevant wall to qualify materials and resolve physics in reactor-like conditions. With new diagnostics and leading-edge simulation, this will position the US for success in ITER and provide the unique knowledge needed to accelerate the approach to fusion energy. Supported by the US DOE under DE-FC02-04ER54698.
A poloidal section neutron camera for MAST upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sangaroon, S.; Weiszflog, M.; Cecconello, M.
2014-08-21
The Mega Ampere Spherical Tokamak Upgrade (MAST Upgrade) is intended as a demonstration of the physics viability of the Spherical Tokamak (ST) concept and as a platform for contributing to ITER/DEMO physics. Concerning physics exploitation, MAST Upgrade plasma scenarios can contribute to ITER tokamak physics, particularly in the field of fast-particle behavior and current drive studies. At present, MAST is equipped with a prototype neutron camera (NC). On the basis of the experience and results from previous experimental campaigns using the NC, the conceptual design of a neutron camera upgrade (NC Upgrade) is being developed. As part of the MAST Upgrade, the NC Upgrade is considered a high-priority diagnostic, since it would allow studies in the field of fast ions and current drive with good temporal and spatial resolution. In this paper, we explore an optional design with the camera array viewing the poloidal section of the plasma from different directions.
NASA Astrophysics Data System (ADS)
Stambaugh, Ronald D.
2013-01-01
The journal Nuclear Fusion has played a key role in the development of the physics basis for fusion energy. That physics basis has been sufficiently advanced to enable construction of such major facilities as ITER along the tokamak line in magnetic fusion and the National Ignition Facility (NIF) in laser-driven fusion. In the coming decade, while ITER is being constructed and brought into deuterium-tritium (DT) operation, this physics basis will be significantly deepened and extended, with particular key remaining issues addressed. Indeed such a focus was already evident with about 19% of the papers submitted to the 24th IAEA Fusion Energy Conference in San Diego, USA appearing in the directly labelled ITER and IFE categories. Of course many of the papers in the other research categories were aimed at issues relevant to these major fusion directions. About 17% of the papers submitted in the 'Experiment and Theory' categories dealt with the highly ITER relevant and inter-related issues of edge-localized modes, non-axisymmetric fields and plasma rotation. It is gratifying indeed to see how the international community is able to make such a concerted effort, facilitated by the ITPA and the ITER-IO, around such a major issue for ITER. In addition to deepening and extending the physics bases for the mainline approaches to fusion energy, the coming decade should see significant progress in the physics basis for additional fusion concepts. The stellarator concept should reach a high level of maturity with such facilities as LHD operating in Japan and already producing significant results and the W7-X in the EU coming online soon. Physics issues that require pulses of hundreds of seconds to investigate can be confronted in the new superconducting tokamaks coming online in Asia and in the major stellarators. The basis for steady-state operation of a tokamak may be further developed in the upper half of the tokamak operating space—the wall stabilized regime. 
New divertor geometries are already being investigated. Progress should continue on additional driver approaches in inertial fusion. Nuclear Fusion will continue to play a major role in documenting the significant advances in fusion plasma science on the way to fusion energy. Successful outcomes in projects like ITER and NIF will bring sharply into focus the remaining significant issues in fusion materials science and fusion nuclear science and technology needed to move from the scientific feasibility of fusion to the actual realization of fusion power production. These issues are largely common to magnetic and inertial fusion. Progress in these areas has been limited by the lack of suitable major research facilities. Hopefully the coming decade will see progress along these lines. Nuclear Fusion will play its part with increased papers reporting significant advances in fusion materials and nuclear science and technology. The reputation and status of the journal remains high; paper submissions are increasing and the Impact Factor for the journal remains high at 4.09 for 2011. We look forward in the coming months to publishing expanded versions of many of the outstanding papers presented at the IAEA FEC in San Diego. We congratulate Dr Patrick Diamond of the University of California at San Diego for winning the 2012 Nuclear Fusion Prize for his paper [1] and Dr Hajime Urano of the Japan Atomic Energy Agency for winning the 2011 Nuclear Fusion Prize for his paper [2]. Papers of such quality by our many authors enable the high standard of the journal to be maintained. The Nuclear Fusion editorial office understands how much effort is required by our referees. The Editorial Board decided that an expression of thanks to our most loyal referees is appropriate and so, since January 2005, we have been offering ten of the most active referees over the past year a personal subscription to Nuclear Fusion with electronic access for one year, free of charge. 
This year, three of the top referees have reviewed five manuscripts in the period November 2011 to December 2012 and provided excellent advice to the authors. We have excluded our Board Members, Guest Editors of special editions and those referees who were already listed in recent years. The following people have been selected: Marina Becoulet, CEA-Cadarache, France Jiaqi Dong, Southwestern Institute of Physics, China Emiliano Fable, Max-Planck-Institut für Plasmaphysik, Germany Ambrogio Fasoli, Ecole Polytechnique Federale de Lausanne, Switzerland Eric Fredrickson, Princeton Plasma Physics Laboratory, USA Manuel Garcia-Munoz, Max-Planck-Institut für Plasmaphysik, Germany William Heidbrink, University of California, USA Katsumi Ida, National Institute for Fusion Science, Japan Peter Stangeby, University of Toronto, Canada James Strachan, Princeton Plasma Physics Laboratory, USA Victor Yavorskij, Ukraine National Academy of Sciences, Ukraine In addition, there is a group of several hundred referees who have helped us in the past year to maintain the high scientific standard of Nuclear Fusion. At the end of this issue we give the full list of all referees for 2012. Our thanks to them!
Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...
2016-10-21
The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground-state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and the block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. Employing 55 296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11 520 atoms is 75 s.
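The core CheFSI idea, Chebyshev filtering of a trial subspace followed by orthonormalization and a Rayleigh-Ritz step, can be sketched on a small dense matrix. This is a generic illustration with a random symmetric "Hamiltonian", not the DG implementation; the filter degree, filter interval and block size are arbitrary toy choices.

```python
import numpy as np

def chebyshev_filter(H, Y, m, a, b):
    """Apply the degree-m Chebyshev polynomial T_m((H - cI)/e) to the block Y.

    Eigencomponents with eigenvalues inside [a, b] (the unwanted, upper part
    of the spectrum) stay bounded, while those below a are amplified.
    """
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    Y0 = Y
    Y1 = (H @ Y0 - c * Y0) / e
    for _ in range(2, m + 1):   # three-term Chebyshev recurrence
        Y0, Y1 = Y1, 2.0 * (H @ Y1 - c * Y1) / e - Y0
    return Y1

def chefsi_step(H, Y, m, a, b):
    """One CheFSI step: filter, orthonormalize, then a Rayleigh-Ritz rotation."""
    Y = chebyshev_filter(H, Y, m, a, b)
    Q, _ = np.linalg.qr(Y)          # orthonormalize the filtered block
    Hs = Q.T @ (H @ Q)              # subspace (projected) Hamiltonian
    w, V = np.linalg.eigh(Hs)       # Ritz values and vectors
    return Q @ V, w

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
H = (A + A.T) / 2.0                 # toy dense symmetric "Hamiltonian"
evals = np.linalg.eigvalsh(H)       # reference spectrum (sets filter bounds)
nocc = 10                           # number of wanted low-lying states
Y = rng.standard_normal((100, nocc))
a, b = evals[nocc], evals[-1]       # damp everything above the wanted states
ritz = None
for _ in range(50):
    Y, ritz = chefsi_step(H, Y, m=12, a=a, b=b)
```

In a production code the filter bounds are estimated cheaply (e.g. by a few Lanczos steps) rather than from the full spectrum, and the matrix-block products exploit the block sparsity the abstract describes.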
NASA Astrophysics Data System (ADS)
1990-09-01
The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of a quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is at the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.
The engineering design of the Tokamak Physics Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, J.A.
A mission and supporting physics objectives have been developed which establish an important role for the Tokamak Physics Experiment (TPX) in developing the physics basis for a future fusion reactor. The design of TPX includes advanced physics features, such as shaping and profile control, along with the capability of operating for very long pulses. The development of the superconducting magnets, actively cooled internal hardware, and remote maintenance will be an important technology contribution to future fusion projects, such as ITER. The Conceptual Design and Management Systems for TPX have been developed and reviewed, and the project is beginning Preliminary Design. If adequately funded, the construction project should be completed in the year 2000.
Overview of Recent DIII-D Experimental Results
NASA Astrophysics Data System (ADS)
Fenstermacher, Max
2015-11-01
Recent DIII-D experiments have added to the ITER physics basis and to physics understanding for extrapolation to future devices. ELMs were suppressed by RMPs in He plasmas consistent with ITER non-nuclear phase conditions, and in steady-state hybrid plasmas. Characteristics of the EHO during both standard high-torque and low-torque enhanced pedestal QH-mode with edge broadband fluctuations were measured, including edge-localized density fluctuations with a microwave imaging reflectometer. The path to Super H-mode was verified at high beta with a QH-mode edge, and in plasmas with ELMs triggered by Li granules. ITER-acceptable thermal quench (TQ) mitigation was obtained with low-Ne-fraction Shattered Pellet Injection. Divertor ne and Te data from Thomson Scattering confirm predicted drift-driven asymmetries in electron pressure, and X-divertor heat flux reduction and detachment were characterized. The crucial mechanisms for E×B shear control of turbulence were clarified. In collaboration with EAST, high-beta-p scenarios were obtained with 80% bootstrap fraction, high H-factor and stability limits, and large-radius ITBs leading to low AE activity. Work supported by the US Department of Energy under DE-FC02-04ER54698 and DE-AC52-07NA27344.
The fractal geometry of Hartree-Fock
NASA Astrophysics Data System (ADS)
Theel, Friethjof; Karamatskou, Antonia; Santra, Robin
2017-12-01
The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules, so its usage is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined. The set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system, since a dynamical system is an iteration whose steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter. In our case, a complex parameter λ controls the strength of the electron-electron interaction. An investigation of the convergence behavior depending on the parameter λ is performed for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that with increasing nuclear charge, the fragmentation increases as well.
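The kind of λ-plane convergence map studied here can be illustrated with a toy fixed-point iteration. The map below, x → 1/(1 + λx), is a stand-in for the actual Hartree-Fock self-consistency cycle, chosen only to show how a convergence basin over the complex λ-plane is computed; the grid bounds and tolerances are arbitrary.

```python
import numpy as np

def converges(lam, x0=1.0, max_iter=200, tol=1e-10):
    """Return True if the toy self-consistency map x -> 1/(1 + lam*x)
    settles to a fixed point for (complex) interaction strength lam."""
    x = complex(x0)
    for _ in range(max_iter):
        denom = 1.0 + lam * x
        if abs(denom) < 1e-30 or abs(x) > 1e8:
            return False            # blow-up counts as non-convergent
        x_new = 1.0 / denom
        if abs(x_new - x) < tol:
            return True
        x = x_new
    return False

# Scan a grid in the complex lambda-plane and record which points converge,
# in the spirit of the paper's lambda-plane maps (toy model only).
re = np.linspace(-2.0, 2.0, 81)
im = np.linspace(-2.0, 2.0, 81)
basin = np.array([[converges(r + 1j * i) for r in re] for i in im])
```

Plotting `basin` as an image reveals the boundary between convergent and divergent λ, and it is the geometry of that boundary whose fractal dimension is measured in the paper (for the real Hartree-Fock map, not this toy).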
Exact exchange-correlation potentials of singlet two-electron systems
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.
2017-10-01
We suggest a non-iterative analytic method for constructing the exchange-correlation potential, vXC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for vXC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit vXC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
The ITER project construction status
NASA Astrophysics Data System (ADS)
Motojima, O.
2015-10-01
The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in the prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made in addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance, and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the accompanying R&D program to be carried out by the ITER parties during ITER construction.
Analytical approximation of the InGaZnO thin-film transistors surface potential
NASA Astrophysics Data System (ADS)
Colalongo, Luigi
2016-10-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of thin-film transistors (TFTs), and in turn of indium gallium zinc oxide (IGZO) TFTs, available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their diffusion in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough, in particular for modeling transconductances and transcapacitances. In this work we present an extremely accurate (in the range of nV) and computationally efficient non-iterative approximation of the surface potential that can serve as a basis for advanced surface-potential-based IGZO TFT models.
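To illustrate why a non-iterative formulation is valuable, the sketch below solves a generic implicit surface-potential relation by Newton iteration. The equation, parameter values and variable names are hypothetical illustrations, not the IGZO model of the paper; a closed-form approximation replaces exactly this kind of per-bias-point loop in a compact model.

```python
import math

def surface_potential(vg, gamma=0.5, vt=0.026, tol=1e-12, max_iter=100):
    """Solve vg = psi + gamma*sqrt(psi + vt*exp(psi/vt)) for psi by Newton
    iteration (a generic implicit surface-potential relation, chosen for
    illustration only)."""
    psi = 0.0
    for _ in range(max_iter):
        expo = math.exp(psi / vt)
        q = max(psi + vt * expo, 1e-30)   # guard the square-root argument
        root = math.sqrt(q)
        f = psi + gamma * root - vg       # residual of the implicit relation
        df = 1.0 + gamma * (1.0 + expo) / (2.0 * root)
        step = f / df
        psi -= step
        if abs(step) < tol:
            return psi
    raise RuntimeError("Newton iteration did not converge")

psi = surface_potential(1.0)   # illustrative gate voltage of 1 V
```

A circuit simulator evaluates this quantity at every device, bias point and Newton step of the circuit-level solve, which is why replacing the inner loop with an accurate closed-form expression pays off.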
NASA Astrophysics Data System (ADS)
Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker
2017-08-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed, and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
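The rank-restriction idea can be illustrated in two dimensions, where a SOP wavefunction becomes a low-rank matrix and restricting the rank is an SVD truncation. The sketch below applies a rank-truncated shifted power iteration to a separable two-mode "Hamiltonian" H = A⊗I + I⊗B, whose ground state is exactly a rank-1 product; the matrices, sizes and ranks are arbitrary toy choices, not the acetonitrile calculation.

```python
import numpy as np

def truncate(M, rank):
    """SVD-truncate M to `rank` (the 2D analogue of restricting the rank
    of a sum-of-product basis function)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def apply_h(M, A, B):
    """Apply H = A (x) I + I (x) B to a wavefunction stored as a matrix M."""
    return A @ M + M @ B.T

def reduced_rank_power(A, B, rank=1, n_iter=1500):
    """Shifted power iteration for the ground state of H = A(x)I + I(x)B,
    truncating the rank of the iterate after every step."""
    n = A.shape[0]
    # Shift so the ground state dominates iterations with (shift*I - H).
    shift = np.linalg.eigvalsh(A).max() + np.linalg.eigvalsh(B).max()
    M = np.ones((n, n))
    for _ in range(n_iter):
        M = shift * M - apply_h(M, A, B)   # apply (shift*I - H)
        M = truncate(M, rank)              # rank restriction
        M /= np.linalg.norm(M)
    energy = float(np.sum(M * apply_h(M, A, B)))   # Rayleigh quotient
    return energy, M

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 30)); A = (X + X.T) / 2.0
Y = rng.standard_normal((30, 30)); B = (Y + Y.T) / 2.0
E0, M = reduced_rank_power(A, B, rank=1)
exact = np.linalg.eigvalsh(A)[0] + np.linalg.eigvalsh(B)[0]
```

In 12 dimensions the matrix becomes a tensor, SVD truncation becomes a hierarchical or canonical rank reduction, and the same tension appears that the paper studies: more sophisticated eigensolvers converge in fewer iterations, but every intermediate vector must be squeezed back to low rank.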
Kinetics of relativistic runaway electrons
NASA Astrophysics Data System (ADS)
Breizman, B. N.; Aleynikov, P. B.
2017-12-01
This overview covers recent developments in the theory of runaway electrons in tokamaks. Its main purpose is to outline the intuitive basis for first-principle advancements in runaway electron physics. The overview highlights the following physics aspects of the runaway evolution: (1) survival and acceleration of initially hot electrons during thermal quench, (2) effect of magnetic perturbations on runaway confinement, (3) multiplication of the runaways via knock-on collisions with the bulk electrons, (4) slow decay of the runaway current, and (5) runaway-driven micro-instabilities. The scope of the reported studies is governed by the need to understand the behavior of runaway electrons as an essential physics element of disruption events in ITER in order to develop an effective runaway mitigation scheme.
Long-pulse stability limits of the ITER baseline scenario
Jackson, G. L.; Luce, T. C.; Solomon, W. M.; ...
2015-01-14
DIII-D has made significant progress in developing the techniques required to operate ITER, and in understanding their impact on performance when integrated into operational scenarios at ITER-relevant parameters. We demonstrated long-duration plasmas in DIII-D, stable to m/n = 2/1 tearing modes (TMs), with an ITER-similar shape and Ip/aBT, that evolve to stationary conditions. The operating region most likely to reach stable conditions has normalized pressure βN ≈ 1.9–2.1 (compared to the ITER baseline design of 1.6–1.8) and a Greenwald normalized density fraction fGW ≈ 0.42–0.70 (the ITER design is fGW ≈ 0.8). The evolution of the current profile, using internal inductance (li) as an indicator, is found to produce a smaller fraction of stable pulses when li is increased above ≈1.1 at the beginning of the βN flattop. Stable discharges with co-neutral beam injection (NBI) are generally accompanied by a benign n=2 MHD mode. However, if this mode exceeds ≈10 G, the onset of a m/n=2/1 tearing mode occurs with a loss of confinement. In addition, stable operation with low applied external torque, at or below the extrapolated value expected for ITER, has also been demonstrated. With electron cyclotron (EC) injection, the operating region of stable discharges has been further extended at ITER-equivalent levels of torque, and to ELM-free discharges at higher torque with the addition of an n=3 magnetic perturbation from the DIII-D internal coil set. Lastly, the characterization of the ITER baseline scenario evolution for long pulse duration, its extension to more ITER-relevant values of torque and electron heating, and the suppression of ELMs have significantly advanced the physics basis of this scenario, although significant effort remains in the simultaneous integration of all these requirements.
DIII-D research to address key challenges for ITER and fusion energy
NASA Astrophysics Data System (ADS)
Buttery, R. J.; the DIII-D Team
2015-10-01
DIII-D has made significant advances in the scientific basis for fusion energy. The physics mechanism of resonant magnetic perturbation (RMP) edge localized mode (ELM) suppression is revealed as field penetration at the pedestal top, and reduced coil set operation was demonstrated. Disruption runaway electrons were effectively quenched by shattered pellets; runaway dissipation is explained by pitch angle scattering. Modest thermal quench radiation asymmetries are well described by NIMROD modelling. With good pedestal regulation and error field correction, low-torque ITER baselines have been demonstrated and shown to be compatible with an ITER test blanket module simulator. However, performance and long-wavelength turbulence degrade as low rotation and electron heating are approached. The alternative QH-mode scenario is shown to be compatible with high Greenwald density fraction, with an edge harmonic oscillation demonstrating good impurity flushing. Discharge optimization guided by the EPED model has discovered a new super H-mode with doubled pedestal height. Lithium injection also led to wider, higher pedestals. On the path to steady state, 1 MA has been sustained fully noninductively with βN = 4 and RMP ELM suppression, while a peaked current profile scenario provides attractive options for ITER and a βN = 5 future reactor. Energetic particle transport is found to exhibit a critical gradient behaviour. Scenarios are shown to be compatible with radiative and snowflake divertor techniques. Physics studies reveal that the transition to H-mode is locked in by a rise in ion diamagnetic flows. Intrinsic rotation in the plasma edge is demonstrated to arise from kinetic losses. New 3D magnetic sensors validate linear ideal MHD, but identify issues in nonlinear simulations. Detachment, characterized in 2D with sub-eV resolution, reveals a radiation shortfall in simulations.
Future facility development targets burning plasma physics with torque free electron heating, the path to steady state with increased off axis currents, and a new divertor solution for fusion reactors.
In-vessel tritium retention and removal in ITER
NASA Astrophysics Data System (ADS)
Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.
Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. 
We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning techniques such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in higher attenuation of the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images by using statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
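The core idea of statistically weighted iterative reconstruction can be sketched numerically. The following toy example (not any vendor's algorithm; the system matrix, weights, and step size are invented for illustration) iteratively updates an image estimate by back-projecting the statistically weighted residual between measured and predicted projections:

```python
import numpy as np

def weighted_iterative_reconstruct(A, y, weights, n_iter=200, step=0.1):
    """Toy statistically weighted iterative reconstruction (SIRT-like).

    A       : (m, n) system matrix mapping image -> projections
    y       : (m,) measured projection data
    weights : (m,) statistical weights (higher = more trusted ray)
    """
    x = np.zeros(A.shape[1])
    W = weights / weights.max()           # normalize weights to [0, 1]
    for _ in range(n_iter):
        residual = y - A @ x              # mismatch in projection space
        x += step * A.T @ (W * residual)  # weighted back-projection update
    return x

# Tiny 2-pixel "phantom" seen by 3 rays
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_rec = weighted_iterative_reconstruct(A, y, weights=np.ones(3))
```

In a real scanner the update also incorporates a physical model of the acquisition (the model-based variant described above); here only the statistical weighting of rays is illustrated.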
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
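The computational advantage of a regular (non-generalized) eigenvalue problem is that standard iterative (Lanczos-type) eigensolvers can extract just the lowest few levels. As a hedged illustration (a simple finite-difference harmonic oscillator rather than the contracted Gaussian basis of the paper), one can compute the lowest vibrational-like levels with SciPy's iterative solver:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Grid and finite-difference kinetic energy (atomic units, m = omega = 1)
n = 400
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]
T = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (-0.5 / dx**2)
V = diags(0.5 * x**2)            # harmonic potential on the diagonal
H = (T + V).tocsc()              # sparse, symmetric Hamiltonian

# Lanczos iteration for the lowest few levels only
evals = eigsh(H, k=4, which='SA', return_eigenvectors=False)
evals = np.sort(evals)           # close to [0.5, 1.5, 2.5, 3.5]
```

Only matrix-vector products with H are needed, which is what makes such iterative eigensolvers attractive when the basis (contracted or not) becomes large.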
NASA Astrophysics Data System (ADS)
Perov, N. I.
1985-02-01
A physical-geometrical method was developed for computing the orbits of earth satellites on the basis of an inadequate number of angular observations (N ≤ 3). Specifically, a new method is presented for calculating the elements of Keplerian orbits of unidentified artificial satellites using two angular observations (αk, δk, k = 1, 2). The first section gives procedures for determining the topocentric distance to an AES on the basis of one optical observation. This is followed by a description of a very simple method for determining unperturbed orbits using two satellite position vectors and a time interval, which is applicable even in the case of antiparallel AES position vectors; this method is designated the R2 iterations method.
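The details of the R2 iterations method are not reproduced in this abstract, but any such Keplerian orbit computation relies on an iterative kernel. A standard example (illustrative only, not the paper's algorithm) is Newton iteration for Kepler's equation relating mean anomaly M to eccentric anomaly E:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Newton iteration for Kepler's equation E - e*sin(E) = M."""
    E = M if e < 0.8 else math.pi     # standard starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Eccentric anomaly for a satellite at mean anomaly 1.0 rad, e = 0.3
E = solve_kepler(1.0, 0.3)
assert abs(E - 0.3 * math.sin(E) - 1.0) < 1e-10
```

Given E, the position on the Keplerian ellipse follows in closed form, which is why the transcendental step above is the part solved iteratively.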
Security of quantum key distribution with iterative sifting
NASA Astrophysics Data System (ADS)
Tamaki, Kiyoshi; Lo, Hoi-Kwong; Mizutani, Akihiro; Kato, Go; Lim, Charles Ci Wen; Azuma, Koji; Curty, Marcos
2018-01-01
Several quantum key distribution (QKD) protocols employ iterative sifting. After each quantum transmission round, Alice and Bob disclose part of their setting information (including their basis choices) for the detected signals. This quantum phase ends when basis-dependent termination conditions are met, i.e., when the numbers of detected signals per basis exceed certain pre-agreed threshold values. Recently, however, Pfister et al (2016 New J. Phys. 18 053001) showed that the basis-dependent termination condition makes QKD insecure, especially in the finite-key regime, and they suggested disclosing all the setting information after finishing the quantum phase. However, this protocol has two main drawbacks: it requires that Alice possess a large memory, and she also needs some a priori knowledge about the transmission rate of the quantum channel. Here we solve these two problems by introducing a basis-independent termination condition to the iterative sifting in the finite-key regime. The use of this condition, in combination with Azuma's inequality, provides a precise estimation of the amount of privacy amplification that needs to be applied, thus leading to the security of QKD protocols, including the loss-tolerant protocol (Tamaki et al 2014 Phys. Rev. A 90 052314), with iterative sifting. Our analysis indicates that announcing the basis information after each quantum transmission round does not compromise the key generation rate of the loss-tolerant protocol. Our result allows the implementation of wider classes of classical post-processing techniques in QKD with quantified security.
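The structural difference between the two termination rules can be sketched in a toy simulation (a classical caricature only; the probabilities and the fixed-total rule below are illustrative assumptions, not the paper's protocol). The loop stops after a fixed total number of detections, never consulting the per-basis counts that Pfister et al showed to be problematic:

```python
import random

def iterative_sifting(n_target, p_z=0.5, p_detect=0.3, seed=1):
    """Toy sifting loop with a basis-INDEPENDENT termination condition:
    stop after a fixed total number of detected rounds, regardless of how
    many fell in each basis (per-basis thresholds are what introduces the
    insecurity in the finite-key regime)."""
    rng = random.Random(seed)
    sifted = {'Z': 0, 'X': 0}
    detected = 0
    while detected < n_target:          # condition ignores basis counts
        if rng.random() >= p_detect:    # signal lost in the channel
            continue
        detected += 1
        alice = 'Z' if rng.random() < p_z else 'X'
        bob = 'Z' if rng.random() < p_z else 'X'
        if alice == bob:                # keep only matching-basis rounds
            sifted[alice] += 1
    return sifted

counts = iterative_sifting(1000)
```

A basis-dependent rule would instead test `sifted['Z'] >= t_z and sifted['X'] >= t_x` inside the loop, making the stopping time correlated with the basis record.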
BOOK REVIEW: Controlled Fusion and Plasma Physics
NASA Astrophysics Data System (ADS)
Engelmann, F.
2007-07-01
This new book by Kenro Miyamoto provides an up-to-date overview of the status of fusion research and the important parts of the underlying plasma physics at a moment where, due to the start of ITER construction, an important step in fusion research has been made and many new research workers will enter the field. For them, and also for interested graduate students and physicists in other fields, the book provides a good introduction into fusion physics as, on the whole, the presentation of the material is quite appropriate for getting acquainted with the field on the basis of just general knowledge in physics. There is overlap with Miyamoto's earlier book Plasma Physics for Nuclear Fusion (MIT Press, Cambridge, USA, 1989) but only in a few sections on subjects which have not evolved since. The presentation is subdivided into two parts of about equal length. The first part, following a concise survey of the physics basis of thermonuclear fusion and of plasmas in general, covers the various magnetic configurations studied for plasma confinement (tokamak; reversed field pinch; stellarator; mirror-type geometries) and introduces the specific properties of plasmas in these devices. Plasma confinement in tokamaks is treated in particular detail, in compliance with the importance of this field in fusion research. This includes a review of the ITER concept and of the rationale for the choice of ITER's parameters. In the second part, selected topics in fusion plasma physics (macroscopic instabilities; propagation of waves; kinetic effects such as energy transfer between waves and particles including microscopic instabilities as well as plasma heating and current drive; transport phenomena induced by turbulence) are presented systematically. While the emphasis is on displaying the essential physics, deeper theoretical analysis is also provided here. Every chapter is complemented by a few related problems, but only partial hints for their solution are given. 
A selection of references, mostly to articles covering original research, allows the interested reader to go deeper into the various subjects. There are a few quite relevant areas which are essentially not covered in the book (plasma diagnostics; fuelling). The discussion of particle and power exhaust is limited to tokamaks and is somewhat sparse. Other points which I did not find fully satisfactory are: the index is too selective and does not really allow easy access to any specific subject. Cross references between different sections treating related topics are not always given. There are quite a lot of typographical errors, which may be disturbing where cross references are concerned. A list of the symbols used would be a helpful supplement, especially since some of them appear with different meanings. There are apparent imperfections in the structure of certain chapters. While the English is sometimes unusual, this generally does not affect the readability. Overall, the book can be warmly recommended to all interested in familiarizing themselves with the physics of magnetic fusion.
EDITORIAL: ECRH physics and technology in ITER
NASA Astrophysics Data System (ADS)
Luce, T. C.
2008-05-01
It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: (1) EC wave physics: current understanding and extrapolation to ITER; (2) application of EC waves to confinement and stability studies, including active control techniques for ITER; (3) transmission systems/launchers: state of the art and ITER-relevant techniques; (4) gyrotron development towards ITER needs; and (5) system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O.
Sauter: EC wave physics and applications, M. Thumm: Source and transmission line development, and S. Cirant: ITER specific system designs). These summaries are included in this issue to give a more complete view of the technical meeting. Finally, it is appropriate to mention the future of this meeting series. With the ratification of the ITER agreement and the formation of the ITER International Organization, it was recognized that meetings conducted by outside agencies with an exclusive focus on ITER would be somewhat unusual. However, the participants at this meeting felt that the gathering of international experts with diverse specialities within EC wave physics and technology to focus on using EC waves in future fusion devices like ITER was extremely valuable. It was therefore recommended that this series of meetings continue, but with the broader focus on the application of EC waves to steady-state and burning plasma experiments including demonstration power plants. As the papers in this special issue show, the EC community is already taking seriously the challenges of applying EC waves to fusion devices with high neutron fluence and continuous operation at high reliability.
Overview of Recent DIII-D Experimental Results
NASA Astrophysics Data System (ADS)
Fenstermacher, Max; DIII-D Team
2017-10-01
Recent DIII-D experiments contributed to the ITER physics basis and to physics understanding for extrapolation to future devices. A predict-first analysis showed how shape can enhance access to RMP ELM suppression. 3D equilibrium changes from ELM control RMPs were linked to density pumpout. Ion velocity imaging in the SOL showed 3D C2+ flow perturbations near RMP-induced n = 1 islands. Correlation ECE reveals a 40% increase in Te turbulence during QH-mode and 70% during RMP ELM suppression vs. ELMing H-mode. A long-lived predator-prey oscillation replaces edge MHD in recent low-torque QH-mode plasmas. Spatio-temporally resolved runaway electron measurements validate the importance of synchrotron and collisional damping on RE dissipation. A new small-angle slot divertor achieves strong plasma cooling and facilitates detachment access. Fast ion confinement was improved in high-qmin scenarios using variable beam energy optimization. First reproducible, stable ITER baseline scenarios were established. Studies have validated a model for edge momentum transport that predicts the pedestal main-ion intrinsic velocity value and direction. Work supported by the US DOE under DE-FC02-04ER54698 and DE-AC52-07NA27344.
2017-10-01
Failure to intervene can result in missed opportunities to prevent chronic mental and physical health problems. The project therefore aims to: (1) iteratively design a new web-based PTS and Motivational Interviewing...
RMP ELM Suppression in DIII-D Plasmas with ITER Similar Shapes and Collisionalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, T.E.; Fenstermacher, M. E.; Moyer, R.A.
2008-01-01
Large Type-I edge localized modes (ELMs) are completely eliminated with small n = 3 resonant magnetic perturbations (RMPs) in low average triangularity plasmas, ⟨δ⟩ = 0.26, and in ITER similar shaped (ISS) plasmas, ⟨δ⟩ = 0.53, with ITER-relevant collisionalities ν*e ≈ 0.2. Significant differences in the RMP requirements and in the properties of the ELM-suppressed plasmas are found when comparing the two triangularities. In ISS plasmas, the current required to suppress ELMs is approximately 25% higher than in low average triangularity plasmas. It is also found that the width of the resonant q95 window required for ELM suppression is smaller in ISS plasmas than in low average triangularity plasmas. An analysis of the positions and widths of resonant magnetic islands across the pedestal region, in the absence of resonant field screening or a self-consistent plasma response, indicates that differences in the shape of the q profile may explain the need for higher RMP coil currents during ELM suppression in ISS plasmas. Changes in the pedestal profiles are compared for each plasma shape as well as with changes in the injected neutral beam power and the RMP amplitude. Implications of these results are discussed in terms of requirements for optimal ELM control coil designs and for establishing the physics basis needed to scale this approach to future burning plasma devices such as ITER.
Demonstrating the Physics Basis for the ITER 15 MA Inductive Discharge on Alcator C-Mod
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Wolfe, S. M.; Hutchinson, I. H.; Hughes, J. W.; Lin, Y.; Ma, Y.; Mikkelsen, D. R.; Poli, F.; Reinke, M. L.; Wukitch, S. J.
2012-10-01
Rampup discharges in C-Mod matching ITER's current diffusion times show that ICRF heating can save V-s but has only weak effects on the current profile, despite strong modifications of the central electron temperature. Simulations of these discharges with TSC, and TORIC for ICRF, using multiple transport models do not reproduce the temperature profile evolution or the experimental internal self-inductance li; the discrepancies are large enough to be unacceptable for projections to ITER operation. For the flattop phase experiments, EDA H-modes approach the ITER parameter targets of q95 = 3, H98 = 1, n/nGr = 0.85, βN = 1.7 and κ = 1.8, and sustain them for a time similar to a normalized ITER flattop. The discharges show a degradation of energy confinement at higher densities, but increasing H98 with increasing net power to the plasma. For these discharges, intrinsic impurities (B, Mo) provided radiated power fractions of 25-37%. Experiments show that the plasma can remain in H-mode during rampdown with ICRF injection, that the density decreases with Ip while in H-mode, and that the back transition occurs when the net power reaches about half the L-H transition power. C-Mod results indicate that faster rampdowns are preferable. Work supported by US Dept of Energy under DE-AC02-CH0911466 and DE-FC02-99ER54512.
NASA Astrophysics Data System (ADS)
Stacey, W. M.
2009-09-01
The possibility that a tokamak D-T fusion neutron source, based on ITER physics and technology, could be used to drive sub-critical, fast-spectrum nuclear reactors fueled with the transuranics (TRU) in spent nuclear fuel discharged from conventional nuclear reactors has been investigated at Georgia Tech in a series of studies which are summarized in this paper. It is found that sub-critical operation of such fast transmutation reactors is advantageous in allowing longer fuel residence time, hence greater TRU burnup between fuel reprocessing stages, and in allowing higher TRU loading without compromising safety, relative to what could be achieved in a similar critical transmutation reactor. The required plasma and fusion technology operating parameter range of the fusion neutron source is generally within the anticipated operational range of ITER. The implications of these results for fusion development policy, if they hold up under more extensive and detailed analysis, are that a D-T fusion tokamak neutron source for a sub-critical transmutation reactor, built on the basis of the ITER operating experience, could possibly be a logical next step after ITER on the path to fusion electrical power reactors. At the same time, such an application would allow fusion to contribute to meeting the nation's energy needs at an earlier stage by helping to close the fission reactor nuclear fuel cycle.
NASA Astrophysics Data System (ADS)
Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams
2001-03-01
ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis: Taylor series expansions of response variables in terms of design variables, and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
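The idea of seeding an iterative reanalysis with a first-order Taylor estimate can be sketched on a tiny stiffness system. This is a minimal illustration under assumed matrices (K0, dK, and f are invented, not taken from the paper): the Taylor term supplies the initial estimate, and a stationary iteration that reuses the already-solved original stiffness refines it without refactorizing the modified design.

```python
import numpy as np

# Original design: stiffness K0, load f, and its exact response u0
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
f = np.array([1.0, 2.0])
u0 = np.linalg.solve(K0, f)

# Modified design: K = K0 + dK (a moderate stiffness change)
dK = np.array([[0.6, 0.1], [0.1, 0.4]])

# Step 1: first-order Taylor estimate of the modified response,
# u ≈ u0 - K0^{-1} dK u0  (the sensitivity term)
u = u0 - np.linalg.solve(K0, dK @ u0)

# Step 2: refine with a stationary iteration that reuses the original
# (already factorized) stiffness:  u <- K0^{-1} (f - dK u)
for _ in range(25):
    u = np.linalg.solve(K0, f - dK @ u)
```

The iteration converges when the design change is moderate relative to the original stiffness, which mirrors the paper's observation that even one iteration cycle beyond the Taylor estimate improves accuracy substantially.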
DIII-D research advancing the scientific basis for burning plasmas and fusion energy
NASA Astrophysics Data System (ADS)
Solomon, W. M.; The DIII-D Team
2017-10-01
The DIII-D tokamak has addressed key issues to advance the physics basis for ITER and future steady-state fusion devices. In work related to transient control, magnetic probing is used to identify a decrease in ideal stability, providing a basis for active instability sensing. Improved understanding of 3D interactions is emerging, with RMP-ELM suppression correlated with the excitation of an edge current-driven mode. Should rapid plasma termination be necessary, shattered neon pellet injection has been shown to be tunable to adjust radiation and current quench rate. For predictive simulations, reduced transport models such as TGLF have reproduced changes in confinement associated with electron heating. A new wide-pedestal variant of QH-mode has been discovered in which increased edge transport allows higher pedestal pressure. New dimensionless scaling experiments suggest an intrinsic torque comparable to the beam-driven torque on ITER. In steady-state-related research, complete ELM suppression has been achieved that is relatively insensitive to q95, having a weak effect on the pedestal. Both high-qmin and hybrid steady-state plasmas have avoided fast ion instabilities and achieved increased performance by control of the fast ion pressure gradient and magnetic shear, and use of external control tools such as ECH. In the boundary, experiments have demonstrated the impact of E × B drifts on divertor detachment and divertor asymmetries. Measurements in helium plasmas have found that the radiation shortfall can be eliminated provided the density near the X-point is used as a constraint in the modeling. Experiments conducted with toroidal rings of tungsten in the divertor have indicated that control of the strike-point flux is important for limiting core contamination. Future improvements are planned to the facility to advance physics issues related to the boundary, transients and high-performance steady-state operation.
DIII-D research advancing the scientific basis for burning plasmas and fusion energy
Solomon, Wayne M.
2017-07-12
The DIII-D tokamak has addressed key issues to advance the physics basis for ITER and future steady-state fusion devices. In work related to transient control, magnetic probing is used to identify a decrease in ideal stability, providing a basis for active instability sensing. Improved understanding of 3D interactions is emerging, with RMP-ELM suppression correlated with the excitation of an edge current-driven mode. Should rapid plasma termination be necessary, shattered neon pellet injection has been shown to be tunable to adjust the radiation and current quench rate. For predictive simulations, reduced transport models such as TGLF have reproduced changes in confinement associated with electron heating. A new wide-pedestal variant of QH-mode has been discovered in which increased edge transport is found to allow higher pedestal pressure. New dimensionless scaling experiments suggest an intrinsic torque comparable to the beam-driven torque on ITER. In steady-state-related research, complete ELM suppression has been achieved that is relatively insensitive to q95, having a weak effect on the pedestal. Both high-qmin and hybrid steady-state plasmas have avoided fast-ion instabilities and achieved increased performance by control of the fast-ion pressure gradient and magnetic shear, and by use of external control tools such as ECH. In the boundary, experiments have demonstrated the impact of E×B drifts on divertor detachment and divertor asymmetries. Measurements in helium plasmas have found that the radiation shortfall can be eliminated provided the density near the X-point is used as a constraint in the modeling. Experiments conducted with toroidal rings of tungsten in the divertor have indicated that control of the strike-point flux is important for limiting core contamination. Future improvements are planned to the facility to advance physics issues related to the boundary, transients and high-performance steady-state operation.
Modernisation of the intermediate physics laboratory
NASA Astrophysics Data System (ADS)
Kontro, Inkeri; Heino, Olga; Hendolin, Ilkka; Galambosi, Szabolcs
2018-03-01
The intermediate laboratory courses at the Department of Physics, University of Helsinki, were reformed using desired learning outcomes as the basis for design. The reformed laboratory courses consist of weekly workshops and small-group laboratory sessions. Many of the laboratory exercises are open-ended and can be carried out in several possible ways. They were designed around affordable devices, to allow for the purchase of multiple sets of laboratory equipment. This allowed students to work on the same problems simultaneously, so it was possible to set learning goals that build on each other. Workshop sessions supported the course by letting the students solve problems related to conceptual and technical aspects of each laboratory exercise. The laboratory exercises progressed biweekly to allow for iterative problem solving. Students reached the learning goals well, and the reform improved student experiences. Neither positive nor negative changes in expert-like attitudes towards experimental physics (measured by the E-CLASS questionnaire) were observed.
Application of Intervention Mapping to the Development of a Complex Physical Therapist Intervention.
Jones, Taryn M; Dear, Blake F; Hush, Julia M; Titov, Nickolai; Dean, Catherine M
2016-12-01
Physical therapist interventions, such as those designed to change physical activity behavior, are often complex and multifaceted. In order to facilitate rigorous evaluation and implementation of these complex interventions into clinical practice, the development process must be comprehensive, systematic, and transparent, with a sound theoretical basis. Intervention Mapping is designed to guide an iterative and problem-focused approach to the development of complex interventions. The purpose of this case report is to demonstrate the application of an Intervention Mapping approach to the development of a complex physical therapist intervention, a remote self-management program aimed at increasing physical activity after acquired brain injury. Intervention Mapping consists of 6 steps to guide the development of complex interventions: (1) needs assessment; (2) identification of outcomes, performance objectives, and change objectives; (3) selection of theory-based intervention methods and practical applications; (4) organization of methods and applications into an intervention program; (5) creation of an implementation plan; and (6) generation of an evaluation plan. The rationale and detailed description of this process are presented using an example of the development of a novel and complex physical therapist intervention, myMoves-a program designed to help individuals with an acquired brain injury to change their physical activity behavior. The Intervention Mapping framework may be useful in the development of complex physical therapist interventions, ensuring the development is comprehensive, systematic, and thorough, with a sound theoretical basis. This process facilitates translation into clinical practice and allows for greater confidence and transparency when the program efficacy is investigated. © 2016 American Physical Therapy Association.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, including a nearly linear increase per iteration in the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods were also tested, and it is concluded that the presented iteration method is near optimal.
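The stochastic iteration idea described above (growing number of histories, shrinking relaxation factor) can be sketched on a toy fixed-point problem. Everything here is an illustrative stand-in, not the paper's reactor model: `mc_power` mimics a noisy Monte Carlo tally whose statistical error shrinks as 1/sqrt(histories), and the relaxation factor is taken as 1/n.

```python
import random

def mc_power(p, histories):
    """Stand-in for a Monte Carlo power tally: the true map g(p) = 0.5*p + 1
    plus statistical noise that shrinks as 1/sqrt(histories)."""
    noise = random.gauss(0.0, 1.0) / histories ** 0.5
    return 0.5 * p + 1.0 + noise

def stochastic_iteration(n_steps=20, n0=1000):
    random.seed(0)
    p = 0.0
    for n in range(1, n_steps + 1):
        histories = n0 * n          # growing number of neutron histories
        alpha = 1.0 / n             # decreasing relaxation factor
        p = (1.0 - alpha) * p + alpha * mc_power(p, histories)
    return p

print(stochastic_iteration())       # the fixed point of the noiseless g is p = 2
```

The averaging effect of the decreasing relaxation factor damps the Monte Carlo noise, while the growing history count keeps the per-step statistical error from dominating late iterations.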
A terracing operator for physical property mapping with potential field data
Cordell, L.; McCafferty, A.E.
1989-01-01
The terracing operator works iteratively on gravity or magnetic data, using the sense of the measured field's local curvature, to produce a field composed of uniform domains separated by abrupt domain boundaries. The result is crudely proportional to a physical-property function defined in one (profile case) or two (map case) horizontal dimensions. This result can be extended to a physical-property model if its behavior in the third (vertical) dimension is defined, either arbitrarily or on the basis of the local geologic situation. The terracing algorithm is computationally fast and appropriate for use with very large digital data sets. The terracing operator was applied separately to aeromagnetic and gravity data from a 136 km × 123 km area in eastern Kansas. The results provide a reasonably good physical representation of both the gravity and the aeromagnetic data. Superposition of the results from the two data sets shows many areas of agreement that can be referenced to geologic features within the buried Precambrian crystalline basement.
Brown, James; Carrington, Tucker
2016-10-14
We demonstrate that it is possible to use a variational method to compute 50 vibrational levels of ethylene oxide (a seven-atom molecule) with convergence errors less than 0.01 cm^-1. This is done by beginning with a small basis and expanding it to include product basis functions that are deemed to be important. For ethylene oxide a basis with fewer than 3 × 10^6 functions is large enough. Because the resulting basis has no exploitable structure, we use a mapping to evaluate the matrix-vector products required to use an iterative eigensolver. The expanded basis is compared to bases obtained from a pre-determined pruning condition. Similar calculations are presented for molecules with 3, 4, 5, and 6 atoms. For the 6-atom molecule, CH3CH, the required expanded basis has about 106 000 functions and is about an order of magnitude smaller than bases made with a pre-determined pruning condition.
Progress in Development of the ITER Plasma Control System Simulation Platform
NASA Astrophysics Data System (ADS)
Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel
2017-10-01
We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Status and Plans for the TRANSP Interpretive and Predictive Simulation Code
NASA Astrophysics Data System (ADS)
Kaye, Stanley; Andre, Robert; Gorelenkova, Marina; Yuan, Xingqui; Hawryluk, Richard; Jardin, Steven; Poli, Francesca
2015-11-01
TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current-drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free-boundary equilibrium solution, while the PT_SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP also incorporates source models such as NUBEAM for neutral beam injection, and GENRAY, TORAY, TORBEAM, TORIC and CQL3D for ICRH, LHCD, ECH and HHFW. The implementation of selected components makes efficient use of MPI for speed-up of code calculations. TRANSP has a wide international user base, and it is run on the FusionGrid to allow for timely support and quick turnaround by the PPPL Computational Plasma Physics Group. It is being used as a basis for both analysis and development of control algorithms and discharge operational scenarios, including simulation of ITER plasmas. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Progress on implementing TRANSP as a component in the ITER IMAS will also be described. This research was supported by the U.S. Department of Energy under contract DE-AC02-09CH11466.
ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics
NASA Astrophysics Data System (ADS)
Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.
2017-04-01
The ECE Diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70 to 1000 GHz) transmission lines; a high temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70 to 1000 GHz) and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US domestic agency and the ITER Organization (IO). The design needs to conform to the ITER Organization’s strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of various subsystems and components, considering various engineering challenges and solutions, will be discussed in this paper. This paper will also highlight how various ECE measurements can enhance understanding of plasma physics in ITER.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
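The iterative framework described here can be sketched generically: estimate the scatter from the current image with the physics model, subtract it from the measured data, and reconstruct again. `reconstruct` and `scatter_model` below are hypothetical stand-ins (identity reconstruction, uniform scatter fraction), not the paper's analytic model.

```python
import numpy as np

def reconstruct(projection):
    """Placeholder reconstruction: identity stands in for FBP/iterative recon."""
    return projection

def scatter_model(image, frac=0.2):
    """Hypothetical physics model: scatter as a smooth (here uniform)
    fraction of the total signal in the current image estimate."""
    return frac * image.mean() * np.ones_like(image)

def scatter_correct(measured, n_iter=5):
    """Iterative scatter correction: forward-estimate scatter from the
    current image, subtract it from the measured data, re-reconstruct."""
    image = reconstruct(measured)                # initial, scatter-contaminated
    for _ in range(n_iter):
        scatter = scatter_model(image)
        image = reconstruct(measured - scatter)  # reconstruct corrected data
    return image
```

Because the scatter estimate is a contraction of the image error here, the loop converges geometrically toward the scatter-free image, mirroring the fast convergence reported in the abstract.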
NASA Astrophysics Data System (ADS)
Kim, S. H.; Casper, T. A.; Snipes, J. A.
2018-05-01
ITER will demonstrate the feasibility of burning plasma operation by operating DT plasmas in the ELMy H-mode regime with a high fusion power gain, Q ~ 10. The 15 MA ITER baseline operation scenario has been studied using CORSICA, focusing on the entry to burn, flat-top burning plasma operation and exit from burn. Burning plasma operation for about 400 s of the current flat-top was achieved in H-mode within the various engineering constraints imposed by the poloidal field coil and power supply systems. The target fusion gain (Q ~ 10) was achievable in the 15 MA ITER baseline operation with a moderate amount of total auxiliary heating power (~50 MW). It has been observed that the tungsten (W) concentration needs to be maintained at a low level (nW/ne up to the order of 1.0 × 10^-5) to avoid radiative collapse and uncontrolled early termination of the discharge. The dynamic evolution of the density can modify the H-mode access unless the applied auxiliary heating power is significantly higher than the H-mode threshold power. Several qualitative sensitivity studies have been performed to provide guidance for further optimizing the plasma operation and performance. Increasing the density profile peaking factor was quite effective in increasing the alpha particle self-heating power and the fusion power multiplication factor. Varying the combination of auxiliary heating power showed that the fusion power multiplication factor can be reduced along with the increase in the total auxiliary heating power. As the 15 MA ITER baseline operation scenario requires the full capacity of the coil and power supply systems, the operation window for H-mode access and shape modification was narrow. The updated ITER baseline operation scenarios developed in this work will become a basis for further optimization studies, along with improvements in the understanding of burning plasma physics.
Implementation on a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct-access file system was used to save results at each load step and to restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Inductive electronegativity scale. Iterative calculation of inductive partial charges.
Cherkasov, Artem
2003-01-01
A number of novel QSAR descriptors have been introduced on the basis of previously elaborated models for steric and inductive effects. The developed "inductive" parameters include absolute and effective electronegativity, atomic partial charges, and local and global chemical hardness and softness. Being based on traditional inductive and steric substituent constants, these 3D descriptors provide a valuable insight into intramolecular steric and electronic interactions and can find broad application in structure-activity studies. A possible interpretation of the physical meaning of the inductive descriptors has been suggested by considering a neutral molecule as an electrical capacitor formed by charged atomic spheres. This approximation relates the inductive chemical softness and hardness of bound atom(s) to the total area of the facings of the electrical capacitor formed by the atom(s) and the rest of the molecule. The derived full electronegativity equalization scheme allows iterative calculation of inductive partial charges on the basis of atomic electronegativities, covalent radii, and intramolecular distances. A range of inductive descriptors has been computed for a variety of organic compounds. The calculated inductive charges in the studied molecules have been validated against experimental C 1s electron core binding energies and molecular dipole moments. Several semiempirical chemical rules, such as the arithmetic mean of equalized electronegativities, the principle of maximum hardness, and the principle of hardness borrowing, can be explicitly illustrated in the framework of the developed approach.
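A toy version of iterative electronegativity equalization might look like the loop below. The parameters are hypothetical and the off-site (distance-dependent) terms of the paper's full scheme are omitted; the point is only the mechanism: each atom's effective electronegativity chi_i + 2*eta_i*q_i is nudged toward the molecular mean, and the update conserves total charge because the deviations from the mean sum to zero.

```python
def equalize_charges(chi, hardness, steps=200, k=0.1):
    """Toy electronegativity-equalization iteration (illustrative values,
    not the paper's parametrization). chi: atomic electronegativities,
    hardness: atomic hardnesses eta; returns partial charges q."""
    n = len(chi)
    q = [0.0] * n
    for _ in range(steps):
        eff = [chi[i] + 2.0 * hardness[i] * q[i] for i in range(n)]
        mean = sum(eff) / n
        # damped step toward equal effective electronegativities;
        # the deviations sum to zero, so total charge stays zero
        q = [q[i] + k * (mean - eff[i]) for i in range(n)]
    return q

# e.g. a diatomic whose second atom is more electronegative:
q = equalize_charges([2.2, 3.2], [3.0, 3.5])
print(q)  # first atom positive, second negative, summing to ~0
```

For this two-atom case the fixed point can be checked by hand: equalizing 2.2 + 6*q1 = 3.2 - 7*q1 gives q1 = 1/13.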
Simulant Development for LAWPS Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, Renee L.; Schonewill, Philip P.; Burns, Carolyn A.
2017-05-23
This report describes simulant development work that was conducted to support the technology maturation of the LAWPS facility. Desired simulant physical properties (density, viscosity, solids concentration, solid particle size), sodium concentrations, and general anion identifications were provided by WRPS. The simulant recipes, particularly a “nominal” 5.6M Na simulant, are intended to be tested at several scales, ranging from bench-scale (500 mL) to full-scale. Each simulant formulation was selected to be chemically representative of the waste streams anticipated to be fed to the LAWPS system, and used the current version of the LAWPS waste specification as a formulation basis. After simulant development iterations, four simulants of varying sodium concentration (5.6M, 6.0M, 4.0M, and 8.0M) were prepared and characterized. The formulation basis, development testing, and final simulant recipes and characterization data for these four simulants are presented in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
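A minimal, non-block sketch of preconditioned steepest descent for the lowest eigenpair of a generalized problem A x = theta B x, with a plain diagonal (Jacobi) preconditioner standing in for the hybrid scheme of the paper: each step preconditions the eigen-residual and performs a 2×2 Rayleigh-Ritz in the span of the current iterate and the search direction.

```python
import numpy as np

def psd_lowest(A, B, n_iter=200, tol=1e-10):
    """Diagonal-preconditioned steepest descent for the lowest eigenpair of
    A x = theta B x (illustrative single-vector version, not the paper's
    hybrid block method)."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.sqrt(x @ (B @ x))                # B-normalize
    Pinv = 1.0 / np.diag(A)                  # Jacobi preconditioner
    for _ in range(n_iter):
        theta = x @ (A @ x)
        r = A @ x - theta * (B @ x)          # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        w = Pinv * r                         # preconditioned direction
        w -= (x @ (B @ w)) * x               # B-orthogonalize against x
        nw = np.linalg.norm(w)
        if nw < 1e-14:
            break
        w /= nw
        S = np.column_stack([x, w])
        Ar, Br = S.T @ A @ S, S.T @ B @ S    # 2x2 projected pencil
        vals, vecs = np.linalg.eig(np.linalg.solve(Br, Ar))
        c = vecs[:, np.argmin(vals.real)].real
        x = S @ c
        x /= np.sqrt(x @ (B @ x))
    return x @ (A @ x), x

theta, v = psd_lowest(np.diag([1.0, 2.0, 3.0, 4.0, 5.0]), np.eye(5))
print(theta)
```

The block variants in the paper iterate several vectors at once and replace the diagonal preconditioner with globally or locally accelerated ones, but the Rayleigh-Ritz update step has the same shape.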
Recent progress of the JT-60SA project
NASA Astrophysics Data System (ADS)
Shirai, H.; Barabaschi, P.; Kamada, Y.; the JT-60SA Team
2017-10-01
The JT-60SA project has been implemented for the purpose of an early realization of fusion energy. With a powerful and versatile NBI and ECRF system, a flexible plasma-shaping capability, and various kinds of in-vessel coils to suppress MHD instabilities, JT-60SA plays an essential role in addressing the key physics and engineering issues of ITER and DEMO. It aims to achieve the long sustainment of high integrated performance plasmas under the high βN condition required in DEMO. The fabrication and installation of components and systems of JT-60SA procured by the EU and Japan are steadily progressing. The installation of toroidal field (TF) coils around the vacuum vessel started in December 2016. The commissioning of the cryogenic system and power supply system has been implemented in the Naka site, and JT-60SA will start operation in 2019. The JT-60SA research plan covers a wide area of issues in ITER and DEMO relevant operation regimes, and has been regularly updated on the basis of intensive discussion among European and Japanese researchers.
Overview of Alcator C-Mod Research
NASA Astrophysics Data System (ADS)
White, A. E.
2017-10-01
Alcator C-Mod, a compact (R = 0.68 m, a = 0.21 m), high magnetic field (Bt ≤ 8 T) tokamak, accesses a variety of naturally ELM-suppressed high-confinement regimes that feature extreme power density into the divertor, q|| ≤ 3 GW/m^2, with SOL heat flux widths λq < 0.5 mm, exceeding conditions expected in ITER and approaching those foreseen in power plants. The unique parameter range provides much of the physics basis of a high-field, compact tokamak reactor. Research spans the topics of core transport and turbulence, RF heating and current drive, pedestal physics, scrape-off layer, divertor and plasma-wall interactions. In the last experimental campaign, Super H-mode was explored and featured the highest pedestal pressures ever recorded, pped ≈ 90 kPa (90% of the ITER target), consistent with EPED predictions. Optimization of naturally ELM-suppressed EDA H-modes accessed the highest volume-averaged pressures ever achieved (〈p〉 > 2 atm), with pped ≈ 60 kPa. The SOL heat flux width has been measured at Bpol = 1.25 T, confirming the Eich scaling over a broader poloidal field range than before. Multi-channel transport studies focus on the relationship between momentum transport and heat transport with perturbative experiments, and new multi-scale gyrokinetic simulation validation techniques were developed. U.S. Department of Energy Grant No. DE-FC02-99ER54512.
Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang
2016-10-11
We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into the active and secondary space. The first-order wave function and the second- and third-order energies are easily computed by using one step Davidson iteration. Our formulation has several advantages including (i) keeping a balance between the efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small, when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
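Anderson acceleration itself is compact enough to sketch. This is a generic Type-II implementation (no damping) applied to a toy fixed-point map, not the authors' neutronics/thermal-hydraulics code: the last few residuals are combined with coefficients that sum to one so the combined residual has minimal norm, and the same combination of g-evaluations gives the next iterate.

```python
import numpy as np

def anderson(g, x0, m=3, n_iter=30, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x)."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                               # histories of x_k and g(x_k)
    for _ in range(n_iter):
        gx = np.asarray(g(x))
        X.append(x.copy()); G.append(gx.copy())
        X, G = X[-(m + 1):], G[-(m + 1):]
        F = [Gk - Xk for Gk, Xk in zip(G, X)]   # residuals f_k = g(x_k) - x_k
        if len(X) == 1:
            x_new = gx                          # plain Picard step
        else:
            # least-squares: minimize ||f_m - sum_j gamma_j (f_{j+1} - f_j)||
            dF = np.array([F[j + 1] - F[j] for j in range(len(F) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            alpha = np.zeros(len(X)); alpha[-1] = 1.0
            alpha[:-1] += gamma; alpha[1:] -= gamma   # coefficients sum to 1
            x_new = sum(a * Gk for a, Gk in zip(alpha, G))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# classic test map: the fixed point of cos(x) is x ~ 0.739085
print(anderson(np.cos, np.array([1.0])))
```

On this scalar map, Anderson reduces to a secant-like method and converges in a handful of iterations, whereas plain Picard iteration (contraction factor about 0.67 near the fixed point) needs far more.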
Gaussian beam and physical optics iteration technique for wideband beam waveguide feed design
NASA Technical Reports Server (NTRS)
Veruttipong, W.; Chen, J. C.; Bathker, D. A.
1991-01-01
The Gaussian beam technique has become increasingly popular for wideband beam waveguide (BWG) design. However, it is observed that the Gaussian solution is less accurate for smaller mirrors (less than approximately 30 lambda in diameter). Therefore, a high-performance wideband BWG design cannot be achieved by using the Gaussian beam technique alone. This article demonstrates a new design approach that iterates Gaussian beam and BWG parameters simultaneously at various frequencies to obtain a wideband BWG. The result is further improved by comparing it with physical optics results and repeating the iteration.
Active spectroscopic measurements using the ITER diagnostic system.
Thomas, D M; Counsell, G; Johnson, D; Vasu, P; Zvonkov, A
2010-10-01
Active (beam-based) spectroscopic measurements are intended to provide a number of crucial parameters for the ITER device being built in Cadarache, France. These measurements include the determination of impurity ion temperatures, absolute densities, and velocity profiles, as well as the determination of the plasma current density profile. Because ITER will be the first experiment to study long-timescale (∼1 h) fusion burn plasmas, of particular interest is the ability to study the profile of the thermalized helium ash resulting from the slowing down and confinement of the fusion alphas. These measurements will utilize both the 1 MeV heating neutral beams and a dedicated 100 keV hydrogen diagnostic neutral beam. A number of separate instruments are being designed and built by several of the ITER partners to meet the different spectroscopic measurement needs and to provide the maximum physics information. In this paper, we describe the planned measurements and the intended diagnostic ensemble, and discuss specific physics and engineering challenges for these measurements in ITER.
An iterative method for the Helmholtz equation
NASA Technical Reports Server (NTRS)
Bayliss, A.; Goldstein, C. I.; Turkel, E.
1983-01-01
An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations. The preconditioning is based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.
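The normal-equations approach can be illustrated with a plain CGNR loop on a small 1-D Helmholtz discretization; the SSOR preconditioning of the paper is omitted here for brevity. The key point is that A^T A is symmetric positive definite even though the Helmholtz matrix A itself is indefinite, so ordinary conjugate gradient applies.

```python
import numpy as np

def cgnr(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient on the normal equations A^T A x = A^T b
    (unpreconditioned; the paper adds an SSOR sweep for the Laplacian)."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual of the original system
    z = A.T @ r                   # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

# 1-D Helmholtz model problem u'' + k^2 u = f with Dirichlet ends; the
# matrix is indefinite because k^2 lies between eigenvalues of -u''.
n = 20
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2 + 20.0 * np.eye(n)
b = np.ones(n)
x = cgnr(A, b)
print(np.linalg.norm(A @ x - b))  # small residual
```

Squaring the condition number is the price of the normal equations, which is exactly why the paper's SSOR preconditioning matters for problems of physical size.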
Extending the physics basis of quiescent H-mode toward ITER relevant parameters
Solomon, W. M.; Burrell, K. H.; Fenstermacher, M. E.; ...
2015-06-26
Recent experiments on DIII-D have addressed several long-standing issues needed to establish quiescent H-mode (QH-mode) as a viable operating scenario for ITER. In the past, QH-mode was associated with low density operation, but it has now been extended to high normalized densities compatible with operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both absolute (n̄e ≈ 7 × 10^19 m^-3) and normalized Greenwald fraction (n̄e/nG > 0.7). In these plasmas, the pedestal can evolve to very high pressure and edge current as the density is increased. High density QH-mode operation with strong shaping has allowed access to a previously predicted regime of very high pedestal dubbed “Super H-mode”. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed density evolution. The confirmation of the shape dependence of the maximum density threshold for QH-mode helps validate the underlying theoretical model of peeling-ballooning modes for ELM stability. In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E×B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for the ITER Q=10 mission.
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality.
Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
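The surrogate-driven design loop described above (sample the expensive model, fit a cheap database model, optimize the model, re-evaluate at high fidelity) can be sketched in a few lines. This is a generic toy illustration, not the NASA Cartesian/CAD toolchain; the 1-D objective and polynomial surrogate are assumptions.

```python
import numpy as np

# Toy 1-D "high-fidelity" objective standing in for an expensive simulation.
def expensive_eval(xd):
    return (xd - 0.3) ** 2 + 0.1 * np.sin(8.0 * xd)

xs = list(np.linspace(0.0, 1.0, 4))           # initial database of designs
ys = [expensive_eval(xd) for xd in xs]

for _ in range(6):                            # surrogate-driven design cycles
    deg = min(4, len(xs) - 1)
    coeffs = np.polyfit(xs, ys, deg)          # cheap surrogate of the database
    grid = np.linspace(0.0, 1.0, 401)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # optimize the surrogate
    xs.append(x_new)                          # evaluate high fidelity there,
    ys.append(expensive_eval(x_new))          # growing/refining the database

best = xs[int(np.argmin(ys))]
print(best)                                   # best design found so far
```

Each cycle either improves the incumbent design or, by adding a point near the surrogate's optimum, sharpens the database model there, which mirrors the behavior described in the abstract.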
Leclerc, Arnaud; Carrington, Tucker
2014-05-07
We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP), and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, a standard direct product iterative approach would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.
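A shifted block power method of the kind mentioned above can be illustrated on a small dense symmetric matrix. This sketch is not the authors' SOP-basis code (which avoids forming full vectors); it only shows why repeatedly applying (σI − H) with re-orthonormalization extracts the lowest eigenstates when σ lies above the top of the spectrum. The toy spectrum and sizes are assumptions.

```python
import numpy as np

def shifted_block_power(H, k, sigma, iters=500):
    """Subspace iteration with the shifted operator (sigma*I - H): when
    sigma lies above H's spectrum, the dominant subspace of (sigma*I - H)
    is spanned by the k LOWEST eigenvectors of H."""
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((H.shape[0], k)))
    for _ in range(iters):
        V, _ = np.linalg.qr(sigma * V - H @ V)  # apply shift, re-orthonormalize
    evals, U = np.linalg.eigh(V.T @ H @ V)      # Rayleigh-Ritz in the block
    return evals, V @ U

# Toy "Hamiltonian" with known spectrum 0, 1, ..., 19
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
H = Q @ np.diag(np.arange(20.0)) @ Q.T
evals, vecs = shifted_block_power(H, k=3, sigma=20.0)
print(evals)                                    # approximately [0., 1., 2.]
```

Only matrix-vector products with H are needed, which is what makes the method compatible with a compact SOP representation of the block vectors.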
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.
2014-08-21
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during the ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of the ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. ITER diagnostic studies relevant to DEMO, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry and tritium retention measurements, are discussed.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during the ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of the ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. ITER diagnostic studies relevant to DEMO, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry and tritium retention measurements, are discussed.
Fast online generalized multiscale finite element method using constraint energy minimization
NASA Astrophysics Data System (ADS)
Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat
2018-02-01
Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve an error reduction of three orders of magnitude, which is better than our previous methods. We also develop an adaptive algorithm and enrich the basis in selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.
Establishing Physical and Engineering Science Base to Bridge from ITER to Demo
NASA Astrophysics Data System (ADS)
Peng, Y.-K. Martin; Abdou, M.; Gates, D.; Hegna, C.; Hill, D.; Najmabadi, F.; Navratil, G.; Parker, R.
2007-11-01
A Nuclear Component Testing (NCT) Discussion Group emerged recently to clarify how "a lowered-risk, reduced-cost approach can provide a progressive fusion environment beyond the ITER level to explore, discover, and help establish the remaining, critically needed physical and engineering sciences knowledge base for Demo." The group, assuming success of ITER and other contemporary projects, identified critical "gap-filling" investigations: plasma startup, tritium self-sufficiency, plasma facing surface performance and maintainability, first wall/blanket/divertor materials defect control and lifetime management, and remote handling. Only standard or spherical tokamak plasma conditions below the advanced regime are assumed, to lower the anticipated physics risk to continuous operation (~2 weeks). Modular designs and remote handling capabilities are included to mitigate the risk of component failure and ease replacement. Aspect ratio should be varied to lower the cost, accounting for the contending physics risks and the near-term R&D. Cost- and time-effective staging from H-H, D-D, to D-T will also be considered. *Work supported by USDOE.
The PROactive innovative conceptual framework on physical activity
Dobbels, Fabienne; de Jong, Corina; Drost, Ellen; Elberse, Janneke; Feridou, Chryssoula; Jacobs, Laura; Rabinovich, Roberto; Frei, Anja; Puhan, Milo A.; de Boer, Willem I.; van der Molen, Thys; Williams, Kate; Pinnock, Hillary; Troosters, Thierry; Karlsson, Niklas; Kulich, Karoly; Rüdell, Katja; Brindicci, Caterina; Higenbottam, Tim; Troosters, Thierry; Dobbels, Fabienne; Decramer, Marc; Tabberer, Margaret; Rabinovich, Roberto A; MacNee, William; Vogiatzis, Ioannis; Polkey, Michael; Hopkinson, Nick; Garcia-Aymerich, Judith; Puhan, Milo; Frei, Anja; van der Molen, Thys; de Jong, Corina; de Boer, Pim; Jarrod, Ian; McBride, Paul; Kamel, Nadia; Rudell, Katja; Wilson, Frederick J.; Ivanoff, Nathalie; Kulich, Karoly; Glendenning, Alistair; Karlsson, Niklas X.; Corriol-Rohou, Solange; Nikai, Enkeleida; Erzen, Damijan
2014-01-01
Although physical activity is considered an important therapeutic target in chronic obstructive pulmonary disease (COPD), what “physical activity” means to COPD patients and how their perspective is best measured is poorly understood. We designed a conceptual framework, guiding the development and content validation of two patient reported outcome (PRO) instruments on physical activity (PROactive PRO instruments). 116 patients from four European countries with diverse demographics and COPD phenotypes participated in three consecutive qualitative studies (63% male, age mean±sd 66±9 years, 35% Global Initiative for Chronic Obstructive Lung Disease stage III–IV). 23 interviews and eight focus groups (n = 54) identified the main themes and candidate items of the framework. 39 cognitive debriefings allowed the clarity of the items and instructions to be optimised. Three themes emerged, i.e. impact of COPD on amount of physical activity, symptoms experienced during physical activity, and adaptations made to facilitate physical activity. The themes were similar irrespective of country, demographic or disease characteristics. Iterative rounds of appraisal and refinement of candidate items resulted in 30 items with a daily recall period and 34 items with a 7-day recall period. For the first time, our approach provides comprehensive insight on physical activity from the COPD patients’ perspective. The PROactive PRO instruments’ content validity represents the pivotal basis for empirically based item reduction and validation. PMID:25034563
The PROactive innovative conceptual framework on physical activity.
Dobbels, Fabienne; de Jong, Corina; Drost, Ellen; Elberse, Janneke; Feridou, Chryssoula; Jacobs, Laura; Rabinovich, Roberto; Frei, Anja; Puhan, Milo A; de Boer, Willem I; van der Molen, Thys; Williams, Kate; Pinnock, Hillary; Troosters, Thierry; Karlsson, Niklas; Kulich, Karoly; Rüdell, Katja
2014-11-01
Although physical activity is considered an important therapeutic target in chronic obstructive pulmonary disease (COPD), what "physical activity" means to COPD patients and how their perspective is best measured is poorly understood. We designed a conceptual framework, guiding the development and content validation of two patient reported outcome (PRO) instruments on physical activity (PROactive PRO instruments). 116 patients from four European countries with diverse demographics and COPD phenotypes participated in three consecutive qualitative studies (63% male, age mean±sd 66±9 years, 35% Global Initiative for Chronic Obstructive Lung Disease stage III-IV). 23 interviews and eight focus groups (n = 54) identified the main themes and candidate items of the framework. 39 cognitive debriefings allowed the clarity of the items and instructions to be optimised. Three themes emerged, i.e. impact of COPD on amount of physical activity, symptoms experienced during physical activity, and adaptations made to facilitate physical activity. The themes were similar irrespective of country, demographic or disease characteristics. Iterative rounds of appraisal and refinement of candidate items resulted in 30 items with a daily recall period and 34 items with a 7-day recall period. For the first time, our approach provides comprehensive insight on physical activity from the COPD patients' perspective. The PROactive PRO instruments' content validity represents the pivotal basis for empirically based item reduction and validation. ©ERS 2014.
NASA Astrophysics Data System (ADS)
Federici, G.; Holland, D. F.; Matera, R.
1996-10-01
In the next generation of DT fuelled tokamaks, i.e. the International Thermonuclear Experimental Reactor (ITER), implantation of energetic DT particles on some portions of the plasma facing components (PFCs) will take place, along with significant erosion of the armour surfaces. As a result of the simultaneous removal of material from the front surface, the build-up of tritium inventory and the onset of permeation, which originate in the presence of large densities of neutron-induced traps, are expected to be influenced considerably, and special provisions could be required to minimise the consequences for the design. This paper reports the results of a tritium transport modelling study based on a new model which describes the migration of implanted tritium across the bulk of metallic plasma facing materials containing neutron-induced traps that can capture it, and which includes the synergistic effects of surface erosion. The physical basis of the model is summarised, but the emphasis is on the discussion of the results of a comparative study performed for beryllium and tungsten armours over ranges of design and operating conditions similar to those anticipated in the divertor of ITER.
Development of high poloidal beta, steady-state scenario with ITER-like tungsten divertor on EAST
NASA Astrophysics Data System (ADS)
Garofalo, A. M.; Gong, X. Z.; Qian, J.; Chen, J.; Li, G.; Li, K.; Li, M. H.; Zhai, X.; Bonoli, P.; Brower, D.; Cao, L.; Cui, L.; Ding, S.; Ding, W. X.; Guo, W.; Holcomb, C.; Huang, J.; Hyatt, A.; Lanctot, M.; Lao, L. L.; Liu, H.; Lyu, B.; McClenaghan, J.; Peysson, Y.; Ren, Q.; Shiraiwa, S.; Solomon, W.; Zang, Q.; Wan, B.
2017-07-01
Recent experiments on EAST have achieved the first long pulse H-mode (61 s) with zero loop voltage and an ITER-like tungsten divertor, and have demonstrated access to broad plasma current profiles by increasing the density in fully-noninductive lower hybrid current-driven discharges. These long pulse discharges reach wall thermal and particle balance, exhibit stationary good confinement (H98y2 ~ 1.1) with low core electron transport, and are only possible with optimal active cooling of the tungsten armors. In separate experiments, the electron density was systematically varied in order to study its effect on the deposition profile of the external lower hybrid current drive (LHCD), while keeping the plasma in fully-noninductive conditions and with divertor strike points on the tungsten divertor. A broadening of the current profile is found, as indicated by lower values of the internal inductance at higher density. A broad current profile is attractive because, among other reasons, it enables internal transport barriers at large minor radius, leading to improved confinement as shown in companion DIII-D experiments. These experiments strengthen the physics basis for achieving high performance, steady state discharges in future burning plasmas.
Development of high poloidal beta, steady-state scenario with ITER-like tungsten divertor on EAST
Garofalo, Andrea M.; Gong, X. Z.; Qian, J.; ...
2017-06-07
Recent experiments on EAST have achieved the first long pulse H-mode (61 s) with zero loop voltage and an ITER-like tungsten divertor, and have demonstrated access to broad plasma current profiles by increasing the density in fully-noninductive lower hybrid current-driven discharges. These long pulse discharges reach wall thermal and particle balance, exhibit stationary good confinement (H98y2 ~ 1.1) with low core electron transport, and are only possible with optimal active cooling of the tungsten armors. In separate experiments, the electron density was systematically varied in order to study its effect on the deposition profile of the external lower hybrid current drive (LHCD), while keeping the plasma in fully-noninductive conditions and with divertor strike points on the tungsten divertor. A broadening of the current profile is found, as indicated by lower values of the internal inductance at higher density. A broad current profile is attractive because, among other reasons, it enables internal transport barriers at large minor radius, leading to improved confinement as shown in companion DIII-D experiments. These experiments strengthen the physics basis for achieving high performance, steady state discharges in future burning plasmas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from the harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at the Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (<1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose while preserving good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose.
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
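The cost function described above, weighted least squares plus total variation (TV) regularization, can be illustrated with a stripped-down 1-D analogue solved by plain gradient descent; the paper's multi-GPU 3D projection-based solver is far more elaborate. The signal, weights, TV smoothing parameter, and step size below are assumptions for demonstration only.

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of the smoothed total-variation penalty sum_i sqrt(dx_i^2 + eps)."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def wls_tv(b, w, lam=0.1, eps=1e-3, step=0.05, iters=2000):
    """Minimize 0.5 * sum(w * (x - b)^2) + lam * TV_eps(x) by gradient descent."""
    x = b.copy()
    for _ in range(iters):
        x -= step * (w * (x - b) + lam * tv_grad(x, eps))
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 15)              # piecewise-constant "phantom"
b = truth + 0.1 * rng.standard_normal(truth.size)   # noisy measurement
x = wls_tv(b, w=np.ones_like(b))
print(np.linalg.norm(x - truth), np.linalg.norm(b - truth))  # TV recon vs raw data
```

The TV term suppresses noise while preserving the jumps, which is the property the abstract relies on to handle few-view (incomplete) data.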
A fluid modeling perspective on the tokamak power scrape-off width using SOLPS-ITER
NASA Astrophysics Data System (ADS)
Meier, Eric
2016-10-01
SOLPS-ITER, a 2D fluid code, is used to conduct the first fluid modeling study of the physics behind the power scrape-off width (λq). When drift physics is activated in the code, λq is insensitive to changes in toroidal magnetic field (Bt), as predicted by the 0D heuristic drift (HD) model developed by Goldston. Using the HD model, which quantitatively agrees with regression analysis of a multi-tokamak database, λq in ITER is projected to be 1 mm instead of the previously assumed 4 mm, magnifying the challenge of maintaining the peak divertor target heat flux below the technological limit. These simulations, which use DIII-D H-mode experimental conditions as input and reproduce the observed high-recycling, attached outer target plasma, allow insights into the scrape-off layer (SOL) physics that sets λq. Independence of λq with respect to Bt suggests that SOLPS-ITER captures basic HD physics: the effect of Bt on the particle dwell time (∝ Bt) cancels with the effect on drift speed (∝ 1/Bt), fixing the SOL plasma density width and dictating λq. Scaling with plasma current (Ip), however, is much weaker than the roughly 1/Ip dependence predicted by the HD model. Simulated net cross-separatrix particle flux due to magnetic drifts exceeds the anomalous particle transport, and a Pfirsch-Schlüter-like SOL flow pattern is established. Up-down ion pressure asymmetry enables the net magnetic drift flux. Drifts establish an in-out temperature asymmetry, and an associated thermoelectric current carries significant heat flux to the outer target. The density fall-off length in the SOL is similar to the electron temperature fall-off length, as observed experimentally. Finally, opportunities and challenges foreseen in ongoing work to extrapolate SOLPS-ITER and the HD model to ITER and future machines will be discussed. Supported by U.S. Department of Energy Contract DE-SC0010434.
Self-consistent modeling of CFETR baseline scenarios for steady-state operation
NASA Astrophysics Data System (ADS)
Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team
2017-07-01
Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with qmin > 2 and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.
ITER Magnet Feeder: Design, Manufacturing and Integration
NASA Astrophysics Data System (ADS)
CHEN, Yonghua; ILIN, Y.; M., SU; C., NICHOLAS; BAUER, P.; JAROMIR, F.; LU, Kun; CHENG, Yong; SONG, Yuntao; LIU, Chen; HUANG, Xiongyi; ZHOU, Tingzhi; SHEN, Guang; WANG, Zhongwei; FENG, Hansheng; SHEN, Junsong
2015-03-01
The International Thermonuclear Experimental Reactor (ITER) feeder procurement is now well underway. The feeder design has been improved by the feeder teams at the ITER Organization (IO) and the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP) over the last two years, along with analyses and qualification activities. The feeder design is being progressively finalized. In addition, the preparation of qualification and manufacturing is well scheduled at ASIPP. This paper mainly presents the design, an overview of manufacturing, and the status of integration of the ITER magnet feeders. Supported by the National Special Support for R&D on Science and Technology for ITER (Ministry of Public Security of the People's Republic of China-MPS) (No. 2008GB102000)
Simulation of RF-fields in a fusion device
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Witte, Dieter; Bogaert, Ignace; De Zutter, Daniel
2009-11-26
In this paper the problem of scattering off a fusion plasma is approached from the point of view of integral equations. Using the volume equivalence principle an integral equation is derived which describes the electromagnetic fields in a plasma. The equation is discretized with MoM using conforming basis functions. This reduces the problem to solving a dense matrix equation. This can be done iteratively. Each iteration can be sped up using FFTs.
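The FFT speed-up mentioned in the last sentence typically relies on the MoM matrix being (block-)Toeplitz on a uniform grid, so each matrix-vector product inside the iterative solver can use circulant embedding. The following is a minimal illustration of that kernel (an assumption about the structure being exploited, not the paper's actual code):

```python
import numpy as np

def toeplitz_matvec_fft(c, r, x):
    """O(n log n) product of a Toeplitz matrix (first column c, first row r)
    with x, via embedding in a circulant of size 2n and three FFTs."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real                            # real data: drop fp imaginary dust

# Check against a dense Toeplitz matrix built explicitly
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)                               # first column
r = np.concatenate([c[:1], rng.standard_normal(n - 1)])  # first row, r[0] = c[0]
x = rng.standard_normal(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
print(np.allclose(T @ x, toeplitz_matvec_fft(c, r, x)))  # True
```

Replacing the dense O(n^2) matvec with this kernel is what makes each iteration of the solver cheap, even though the MoM matrix itself is dense.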
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets acquired with different energy spectra. To relax this data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing the redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure-preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We have developed an effective method to reduce the number of views, and therefore the data acquisition, in DECT.
We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as those of conventional two-full-scan DECT.
Solving Differential Equations Using Modified Picard Iteration
ERIC Educational Resources Information Center
Robin, W. A.
2010-01-01
Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
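Classic Picard iteration, the starting point for the modified procedures discussed above, can be demonstrated numerically in a few lines. This sketch applies plain Picard iteration with trapezoidal quadrature to y' = y, y(0) = 1, whose exact solution is e^x; it is an illustration, not the article's modified scheme.

```python
import numpy as np

# Picard iteration for y' = f(x, y), y(0) = y0:
#     y_{k+1}(x) = y0 + integral_0^x f(t, y_k(t)) dt
# Here f(x, y) = y and y0 = 1, so the exact solution is exp(x).
x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
y = np.ones_like(x)                    # initial guess y_0(x) = 1
for _ in range(20):                    # each sweep adds roughly one Taylor term
    f = y                              # f(t, y_k(t)) = y_k(t)
    cumtrap = np.concatenate([[0.0], np.cumsum(h * (f[1:] + f[:-1]) / 2.0)])
    y = 1.0 + cumtrap                  # Picard update
print(np.max(np.abs(y - np.exp(x))))   # residual error is set by the quadrature
```

After convergence the remaining error is that of the trapezoidal rule, O(h^2), which is why refined quadrature or the article's direct-integration modifications improve the basic scheme.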
CORSICA modelling of ITER hybrid operation scenarios
NASA Astrophysics Data System (ADS)
Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.
2016-12-01
The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and the achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Sakurai, Takashi
1990-01-01
This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.
Remote experimental site concept development
NASA Astrophysics Data System (ADS)
Casper, Thomas A.; Meyer, William; Butner, David
1995-01-01
Scientific research is now often conducted on large and expensive experiments that utilize collaborative efforts on a national or international scale to explore physics and engineering issues. This is particularly true for the current US magnetic fusion energy program, where collaboration on existing facilities has increased in importance and will form the basis for future efforts. As fusion energy research approaches reactor conditions, the trend is towards fewer large and expensive experimental facilities, leaving many major institutions without local experiments. Since the expertise of various groups is a valuable resource, it is important to integrate these teams into an overall scientific program. To sustain continued involvement in experiments, scientists are now often required to travel frequently, or to move their families, to the new large facilities. This problem is common to many other fields of scientific research. The next-generation tokamaks, such as the Tokamak Physics Experiment (TPX) or the International Thermonuclear Experimental Reactor (ITER), will operate in steady-state or long pulse mode and produce fluxes of fusion reaction products sufficient to activate the surrounding structures. As a direct consequence, remote operation requiring robotics and video monitoring will become necessary, with only brief and limited access to the vessel area allowed. Even the on-site control room, data acquisition facilities, and work areas will be remotely located from the experiment, isolated by large biological barriers, and connected with fiber-optics. Current planning for the ITER experiment includes a network of control room facilities to be located in the countries of the four major international partners: the USA, the Russian Federation, Japan, and the European Community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.
2015-12-01
This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted.
Although we were able to construct an ad hoc model of the zonal flows that fit the GYRO simulations, the parameters of the model had to be tuned to each case. A physics basis for the zonal flow model was lacking. Electron energy transport at short wavelength: A secondary issue, the high-k electron energy flux, was initially assumed to be independent of the zonal flow effect. However, detailed studies of the fluctuation spectra from recent multiscale (electron and ion scale) GYRO simulations provided a critical new insight into the role of zonal flows. The multiscale simulations suggested that advection by the zonal flows strongly suppressed electron-scale turbulence. Radial shear of the zonal E×B fluctuation could not compete with the large electron-scale linear growth rate, but the kx-mixing rate of the E×B advection could. This insight led to a preliminary new model for the way zonal flows saturate both electron- and ion-scale turbulence. It was also discovered that the strength of the zonal E×B velocity could be computed from the linear growth rate spectrum. The new saturation model (SAT1), which replaces the original model (SAT0), was fit to the multiscale GYRO simulations as well as the ion-scale GYRO simulations used to calibrate the original SAT0 model. Thus, SAT1 captures the physics of both multiscale electron transport and zonal-flow stabilization. In future work, the SAT1 model will require significant further testing and (expensive) calibration with nonlinear multiscale gyrokinetic simulations over a wider variety of plasma conditions – certainly more than the small set of scans about a single C-Mod L-mode discharge. We believe the SAT1 model holds great promise as a physics-based model of the multiscale turbulent transport in fusion devices. Correction to ITER performance predictions: Finally, the impact of the SAT1 model on the ITER hybrid case is mixed.
Without the electron-scale contribution to the fluxes, the Dimits shift makes a significant improvement in the predicted fusion power as originally posited. Alas, including the high-k electron transport reduces the improvement, yielding a modest net increase in predicted fusion power compared to the TGLF prediction with the original SAT0 model.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
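As a rough illustration of the eigenvalue-solver hierarchy the abstract compares, the sketch below implements plain power iteration and Rayleigh-quotient (shifted inverse) iteration for a small dense symmetric matrix in NumPy. This is a generic example, not the multigroup SPN operator or the preconditioned Davidson solver of the paper; matrix, tolerances, and starting vectors are illustrative.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=5000):
    """Plain power iteration for the dominant eigenpair of A."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_it):
        y = A @ x
        lam_new = x @ y              # Rayleigh quotient once x is normalized
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

def rayleigh_quotient_iteration(A, tol=1e-12, max_it=50):
    """RQI: re-shift by the current Rayleigh quotient each step,
    giving much faster (locally cubic) convergence than power iteration."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = x @ A @ x
    for _ in range(max_it):
        try:
            y = np.linalg.solve(A - lam * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                    # shift hit an eigenvalue exactly
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
```

The shifted solve per RQI step is what production codes replace with a preconditioned Krylov method (Arnoldi, Davidson) when the operator is too large to factor.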
Low-memory iterative density fitting.
Grajciar, Lukáš
2015-07-30
A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation.
Cooley, Richard L.
1992-01-01
MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
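The projection step described above can be sketched as follows, assuming the dictionary is stored as a dense array whose columns are model-error realizations. Only the K-nearest-neighbor selection and the orthogonal projection are shown; the growth of the dictionary during MCMC is omitted.

```python
import numpy as np

def remove_model_error(residual, dictionary, k):
    """Project the residual off the span of its k nearest dictionary entries.

    residual   : (n_data,) current data residual d_obs - g_approx(m)
    dictionary : (n_data, n_entries) stored model-error realizations
    Returns the component of the residual orthogonal to the local basis,
    i.e. the part not explained by nearby model-error realizations.
    """
    if dictionary.shape[1] == 0:
        return residual
    k = min(k, dictionary.shape[1])
    # k entries closest to the current residual (Euclidean distance)
    d2 = np.sum((dictionary - residual[:, None]) ** 2, axis=0)
    nearest = dictionary[:, np.argsort(d2)[:k]]
    # Orthonormal basis for the selected entries, then orthogonal projection
    Q, _ = np.linalg.qr(nearest)
    return residual - Q @ (Q.T @ residual)
```

In the inversion loop this corrected residual, rather than the raw one, would enter the likelihood evaluation, so the model-error component does not bias the posterior.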
NASA Astrophysics Data System (ADS)
Mostafa, Mostafa E.
2005-10-01
The present study shows that reconstructing the reduced stress tensor (RST) from the measurable fault-slip data (FSD) and the immeasurable shear stress magnitudes (SSM) is a typical iteration problem. The result of direct inversion of FSD presented by Angelier [1990. Geophysical Journal International 103, 363-376] is considered as a starting point (zero-step iteration) where all SSM are assigned a constant value (λ = √3/2). By iteration, the SSM and RST update each other until they converge to fixed values. Angelier [1990. Geophysical Journal International 103, 363-376] designed the function upsilon (υ) and two estimators, relative upsilon (RUP) and ANG, to express the divergence between the measured and calculated shear stresses. Plotting individual faults' RUP at successive iteration steps shows that they tend to zero (simulated data) or to fixed values (real data) at a rate depending on the orientation and homogeneity of the data. FSD of related origin tend to aggregate in clusters. Plots of the estimator ANG versus RUP show that, by iteration, labeled data points are disposed in clusters about a straight line. These two new plots form the basis of a technique for separating FSD into homogeneous clusters.
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers, such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge-phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup over traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration; the output is a time history of the system states over the interval of interest.
Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
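The core MCPI update, sampling the integrand at Chebyshev nodes, fitting it in the Chebyshev basis, integrating analytically, and applying the Picard correction, can be sketched in a few lines for a scalar ODE. This uses NumPy's Chebyshev utilities rather than the MCPI library itself, and the degree, tolerance, and convergence test are illustrative choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, t_span, x0, deg=32, tol=1e-12, max_it=100):
    """Chebyshev-Picard iteration for the scalar IVP x' = f(t, x), x(t0) = x0.

    Returns the sample times and the converged state history at the
    Chebyshev-Lobatto nodes mapped onto [t0, tf].
    """
    t0, tf = t_span
    tau = C.chebpts2(deg + 1)              # Lobatto nodes on [-1, 1]
    t = 0.5 * (tf - t0) * (tau + 1) + t0   # mapped sample times
    x = np.full_like(tau, x0, dtype=float)
    for _ in range(max_it):
        # Fit the integrand in the Chebyshev basis, integrate analytically
        c = C.chebfit(tau, f(t, x), deg)
        ci = C.chebint(c)
        integral = C.chebval(tau, ci) - C.chebval(-1.0, ci)
        x_new = x0 + 0.5 * (tf - t0) * integral     # Picard update
        if np.max(np.abs(x_new - x)) < tol:
            return t, x_new
        x = x_new
    return t, x
```

Note that each iteration evaluates `f` at all nodes at once; in the library those independent evaluations are the natural unit of parallel work.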
Plasma-surface interaction in the context of ITER.
Kleyn, A W; Lopes Cardozo, N J; Samm, U
2006-04-21
The decreasing availability of energy and concern about climate change necessitate the development of novel sustainable energy sources. Fusion energy is such a source. Although it will take several decades to develop it into a routinely operated power source, the ultimate potential of fusion energy is very high, and such a source is badly needed. A major step forward in the development of fusion energy is the decision to construct the experimental test reactor ITER. ITER will stimulate research in many areas of science. This article serves as an introduction to some of those areas. In particular, we discuss research opportunities in the context of plasma-surface interactions. The fusion plasma, with a typical temperature of 10 keV, has to be brought into contact with a physical wall in order to remove the helium produced and to drain the excess energy in the fusion plasma. The fusion plasma is far too hot to be brought into direct contact with a physical wall: it would degrade the wall, and the debris from the wall would extinguish the plasma. Therefore, schemes are developed to cool down the plasma locally before it impacts on a physical surface. The resulting plasma-surface interaction in ITER faces several challenges, including surface erosion, material redeposition and tritium retention. In this article we introduce how the plasma-surface interaction relevant for ITER can be studied in small-scale experiments. The various requirements for such experiments are introduced and examples of present and future experiments are given. The emphasis in this article is on experimental studies of plasma-surface interactions.
NASA Astrophysics Data System (ADS)
Huang, Haiping
2017-05-01
Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. A continuous phase transition is also confirmed, depending on the embedded feature strength in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of restricted Boltzmann machine learning and, moreover, an important theoretical basis for building simplified deep networks.
Radiative transfer in molecular lines
NASA Astrophysics Data System (ADS)
Asensio Ramos, A.; Trujillo Bueno, J.; Cernicharo, J.
2001-07-01
The highly convergent iterative methods developed by Trujillo Bueno and Fabiani Bendicho (1995) for radiative transfer (RT) applications are generalized to spherical symmetry with velocity fields. These RT methods are based on Jacobi, Gauss-Seidel (GS), and SOR iteration and they form the basis of a new NLTE multilevel transfer code for atomic and molecular lines. The benchmark tests carried out so far are presented and discussed. The main aim is to develop a number of powerful RT tools for the theoretical interpretation of molecular spectra.
Gaussian-Beam/Physical-Optics Design Of Beam Waveguide
NASA Technical Reports Server (NTRS)
Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.
1993-01-01
In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters approximately less than 30 wavelengths at one or more intended operating frequencies.
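The thin-lens bookkeeping such a design iteration alternates with the physical-optics analysis is standard Gaussian-beam algebra on the complex beam parameter q. A minimal sketch follows; the ABCD matrices are the textbook forms, and the numerical values in the usage below are illustrative, not the beam-waveguide geometry of the article.

```python
# Complex beam parameter q = z + i*z_R; ABCD transform: q' = (A q + B)/(C q + D)
import numpy as np

def abcd_transform(q, M):
    (A, B), (Cc, D) = M
    return (A * q + B) / (Cc * q + D)

def free_space(d):
    """Propagation over distance d."""
    return ((1.0, d), (0.0, 1.0))

def thin_lens(f):
    """Thin lens (or curved mirror modeled as one) of focal length f."""
    return ((1.0, 0.0), (-1.0 / f, 1.0))

def waist_distance_after_lens(z_R, f):
    """Distance from a thin lens to the new beam waist, for a beam whose
    waist (Rayleigh range z_R) sits at the lens plane."""
    q = 1j * z_R                       # at the input waist, q is purely imaginary
    q = abcd_transform(q, thin_lens(f))
    return -q.real                     # the waist lies where Re(q) = 0
```

Chaining `free_space` and `thin_lens` transforms along the beam path is the Gaussian-beam half of the iteration; the physical-optics diffraction analysis then corrects the resulting mirror parameters.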
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement
Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-01-01
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed firstly. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
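With the sparse basis fixed to the unit matrix, as the abstract notes, each majorization step of such a scheme reduces to a soft-threshold update on the signal itself. The sketch below shows that special case as a generic ISTA-style majorization-minimization loop; the paper's impulse-preserving and penalty factors are not reproduced, and the penalty weight is an illustrative choice.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mm_sparse_denoise(y, lam, n_iter=50):
    """Majorization-minimization for 0.5*||y - A x||^2 + lam*||x||_1
    with the sparse basis fixed to A = I: each quadratic majorizer is
    minimized by a soft-threshold (shrinkage) update."""
    x = np.zeros_like(y)
    L = 1.0                      # Lipschitz constant of the quadratic term (A = I)
    for _ in range(n_iter):
        grad = x - y             # gradient of 0.5*||y - x||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Because A = I, the iteration reaches its fixed point `soft_threshold(y, lam)` immediately, which is exactly why the reconstruction step can be omitted; entries surviving the threshold are the transient (impulsive) components passed on to envelope analysis.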
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract the features from strong background noise. However, the traditional MM algorithm suffers from two issues, which are the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed firstly. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified Majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which only contain transient components. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. Then the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER
NASA Astrophysics Data System (ADS)
Brezinsek, S.; JET-EFDA contributors
2015-08-01
The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes: (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions to ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. Main observations are: (a) a low primary erosion source in H-mode plasmas and reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30 - 50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10^-5 and is determined by beryllium ions. (b) Reduction of the long-term fuel retention (factor 10 - 20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts on the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW.
Overall, JET demonstrated successful plasma operation with the Be/W material combination, confirming its advantageous PSI behaviour and giving strong support to the ITER material selection.
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis). However, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in a finite-element model (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating the differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for a coordinate-transformed FDM, although it involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods—the Picard, Newton, and Newton-Krylov methods—for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
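The matrix-free matvec approximation central to the Newton-Krylov approach can be sketched as follows: the Krylov solver only ever needs the action J(u)v, which a forward difference of the nonlinear residual supplies, so the 19-point stencil matrix is never assembled. This is a generic Jacobian-free Newton-Krylov loop on a toy nonlinear system, not the coordinate-transformed Richards solver, and the perturbation size eps and tolerances are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, u0, tol=1e-9, max_newton=50, eps=1e-7):
    """Jacobian-free Newton-Krylov for F(u) = 0.

    Each GMRES matvec approximates J(u) v by the finite difference
    (F(u + eps*v) - F(u)) / eps, so no Jacobian matrix is formed.
    """
    u = u0.astype(float).copy()
    n = u.size
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        def matvec(v, u=u, Fu=Fu):
            return (F(u + eps * v) - Fu) / eps   # directional derivative of F
        J = LinearOperator((n, n), matvec=matvec)
        du, _ = gmres(J, -Fu, atol=1e-12)        # inexact Newton step
        u += du
    return u
```

The extra residual evaluation per Krylov iteration visible in `matvec` is exactly the "additional cost" the abstract weighs against never forming the stencil matrix.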
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to serve as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of the Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.
Description of the prototype diagnostic residual gas analyzer for ITER.
Younkin, T R; Biewer, T M; Klepper, C C; Marcus, C
2014-11-01
The diagnostic residual gas analyzer (DRGA) system to be used during ITER tokamak operation is being designed at Oak Ridge National Laboratory to measure fuel ratios (deuterium and tritium), fusion ash (helium), and impurities in the plasma. The eventual purpose of this instrument is machine protection, basic control, and physics on ITER. Prototyping is ongoing to optimize the hardware setup and measurement capabilities. The DRGA prototype comprises a vacuum system and measurement technologies that will overlap to meet ITER measurement requirements. Three technologies included in this diagnostic are a quadrupole mass spectrometer, an ion trap mass spectrometer, and an optical Penning gauge, which are designed to document relative and absolute gas concentrations.
Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures
NASA Astrophysics Data System (ADS)
Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan
2016-10-01
We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than the conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and the fixed number of iterations is closely examined in pre-test simulations. The generated unbalanced force is used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
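A fixed-iteration implicit Newmark step of the kind discussed above can be sketched for a single-degree-of-freedom system. The softening restoring force, parameter values, and iteration counts below are illustrative assumptions, not the paper's test specimen:

```python
import numpy as np

# Hypothetical SDOF with a mildly softening restoring force:
# m*a + c*v + r(u) = f(t). All names and parameters are illustrative.
m, c, k, eps = 1.0, 0.1, 40.0, 0.02
f = lambda t: np.sin(5.0 * t)            # external load
r = lambda u: k * u - eps * k * u ** 3   # nonlinear restoring force
kt = lambda u: k - 3 * eps * k * u ** 2  # tangent stiffness

def newmark_fixed_iter(dt, n_steps, n_iter, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a FIXED number of
    Newton iterations per step, as in iterative hybrid simulation."""
    u = v = 0.0
    a = (f(0.0) - c * v - r(u)) / m
    hist = []
    for n in range(1, n_steps + 1):
        t = n * dt
        # Newmark predictors (corresponding to a_{n+1} = 0)
        u_new = u + dt * v + dt ** 2 * (0.5 - beta) * a
        v_new = v + dt * (1 - gamma) * a
        a_new = 0.0
        for _ in range(n_iter):          # fixed iteration count
            res = m * a_new + c * v_new + r(u_new) - f(t)
            keff = kt(u_new) + gamma / (beta * dt) * c + m / (beta * dt ** 2)
            du = -res / keff             # Newton correction
            u_new += du
            v_new += gamma / (beta * dt) * du
            a_new += du / (beta * dt ** 2)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)

coarse = newmark_fixed_iter(dt=0.02, n_steps=500, n_iter=2)
fine = newmark_fixed_iter(dt=0.02, n_steps=500, n_iter=8)
print(np.max(np.abs(coarse - fine)))  # small: 2 iterations are nearly converged here
```

For this mildly nonlinear toy problem two Newton iterations per step are essentially converged; the trade-off studied in the paper becomes critical for strongly degrading physical sub-structures.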
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.
2006-03-01
Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initially drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively.
The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificial-intelligence optimization method with a high probability of finding the global optimum. In preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
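The regression idea above, a radial basis function network with a fixed Gaussian hidden layer whose output weights are fit by linear least squares, can be sketched as follows; the features, targets, centers, and width are toy assumptions, not the paper's OPC data:

```python
import numpy as np

# Minimal RBF-network regression sketch (toy stand-in for the mapping
# from segment characteristics to edge shift).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))            # toy segment features
y = np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2     # toy target edge shift

centers = x[rng.choice(len(x), size=20, replace=False)]  # fixed RBF centers
width = 0.5                                              # fixed RBF width

def design(x):
    # Gaussian RBF activations for every (sample, center) pair
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Output weights from linear least squares (hidden layer is not trained)
w, *_ = np.linalg.lstsq(design(x), y, rcond=None)
pred = design(x) @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)  # small training error
```

Because only the output layer is trained, fitting reduces to one linear solve, which is why an RBF network is a cheap way to produce initial guesses.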
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise-constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm, accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper.
Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
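The accelerated solver named above, the fast iterative shrinkage-thresholding algorithm (FISTA), can be illustrated on a simpler l1-regularized least-squares problem; this toy sparse-recovery example stands in for the paper's TV-plus-compensation objective:

```python
import numpy as np

# Minimal FISTA sketch for min_x 0.5*||Ax-b||^2 + lam*||x||_1 (toy problem).
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]          # sparse ground truth
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient

def soft(z, t):                                  # soft-thresholding = prox of l1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(200)
yk, tk = x.copy(), 1.0
for _ in range(500):
    grad = A.T @ (A @ yk - b)
    x_new = soft(yk - grad / L, lam / L)         # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * tk ** 2)) / 2
    yk = x_new + (tk - 1) / t_new * (x_new - x)  # Nesterov momentum
    x, tk = x_new, t_new

err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print(err)  # close to the sparse ground truth
```

The same accelerated proximal-gradient loop applies to TV regularization once the prox of the (shifted) TV term is available, which is the modification the paper makes.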
An implementation of the QMR method based on coupled two-term recurrences
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noël M.
1992-01-01
The authors have proposed a new Krylov subspace iteration, the quasi-minimal residual algorithm (QMR), for solving non-Hermitian linear systems. In the original implementation of the QMR method, the Lanczos process with look-ahead is used to generate basis vectors for the underlying Krylov subspaces. In the Lanczos algorithm, these basis vectors are computed by means of three-term recurrences. It has been observed that, in finite precision arithmetic, vector iterations based on three-term recursions are usually less robust than mathematically equivalent coupled two-term vector recurrences. This paper presents a look-ahead algorithm that constructs the Lanczos basis vectors by means of coupled two-term recursions. Implementation details are given, and the look-ahead strategy is described. A new implementation of the QMR method, based on this coupled two-term algorithm, is described. A simplified version of the QMR algorithm without look-ahead is also presented, and the special case of QMR for complex symmetric linear systems is considered. Results of numerical experiments comparing the original and the new implementations of the QMR method are reported.
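For orientation, here is a sketch of the classical two-sided Lanczos process built on three-term recurrences, the formulation whose finite-precision behavior motivates the coupled two-term variant; no look-ahead is implemented in this toy version:

```python
import numpy as np

# Classical two-sided (nonsymmetric) Lanczos with three-term recurrences.
# In exact arithmetic the bases V and W are bi-orthogonal: W^T V = I.
rng = np.random.default_rng(2)
n, m = 50, 10
A = rng.standard_normal((n, n))

v = rng.standard_normal(n)
w = rng.standard_normal(n)
v /= np.dot(w, v)                       # normalize so that w^T v = 1

V, W = [v], [w]
beta = delta = 0.0
v_prev = w_prev = np.zeros(n)
for _ in range(m - 1):
    vj, wj = V[-1], W[-1]
    alpha = np.dot(A @ vj, wj)
    vhat = A @ vj - alpha * vj - beta * v_prev      # three-term recurrence
    what = A.T @ wj - alpha * wj - delta * w_prev   # three-term recurrence
    s = np.dot(vhat, what)
    if abs(s) < 1e-14:                  # serious breakdown: needs look-ahead
        break
    delta_new = np.sqrt(abs(s))
    beta_new = s / delta_new
    V.append(vhat / delta_new)
    W.append(what / beta_new)
    v_prev, w_prev, beta, delta = vj, wj, beta_new, delta_new

V, W = np.array(V).T, np.array(W).T
print(np.max(np.abs(W.T @ V - np.eye(V.shape[1]))))  # near bi-orthogonality
```

In floating point, the bi-orthogonality produced by these three-term recursions degrades faster than with mathematically equivalent coupled two-term recursions, which is precisely the robustness issue the paper addresses.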
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prindle, N.H.; Mendenhall, F.T.; Trauth, K.
1996-05-01
The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume II of the three-volume final report of the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and the input parameters for SPM-2 analysis.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
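The core idea of representing the unknown in a small eigenfunction basis can be illustrated with the eigenvectors of a 1D Laplacian; the basis, the medium, and the truncation levels below are stand-ins, not the paper's adaptively constructed eigenspace:

```python
import numpy as np

# Represent a smooth "wave speed" in a truncated eigenfunction basis.
# Truncation level k plays the role of the regularization parameter.
n = 200
h = 1.0 / (n + 1)
lap = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h ** 2     # 1D Dirichlet Laplacian
eigvals, eigvecs = np.linalg.eigh(lap)             # low frequencies first

x = np.linspace(h, 1 - h, n)
c_true = 1.0 + 0.5 * np.exp(-100 * (x - 0.4) ** 2) # toy smooth wave speed

errs = []
for k in (5, 20, 60):
    B = eigvecs[:, :k]                 # first k eigenfunctions
    c_proj = B @ (B.T @ c_true)        # L2 projection onto the basis
    errs.append(float(np.linalg.norm(c_proj - c_true)
                      / np.linalg.norm(c_true)))
print(errs)  # relative error decreases as the basis grows
```

Keeping k small regularizes the inversion without an explicit Tikhonov term; the paper's contribution is that its basis is adapted iteratively to the unknown rather than fixed in advance as here.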
Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo
2018-05-01
We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels ensuring that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other, our formulation allows for a more direct use to solve a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.
A minimal approach to the scattering of physical massless bosons
NASA Astrophysics Data System (ADS)
Boels, Rutger H.; Luo, Hui
2018-05-01
Tree and loop level scattering amplitudes which involve physical massless bosons are derived directly from physical constraints such as locality, symmetry and unitarity, bypassing path integral constructions. Amplitudes can be projected onto a minimal basis of kinematic factors through linear algebra, by employing four dimensional spinor helicity methods or, at its most general, using projection techniques. The linear algebra analysis is closely related to amplitude relations, especially the Bern-Carrasco-Johansson relations for gluon amplitudes and the Kawai-Lewellen-Tye relations between gluon and graviton amplitudes. Projection techniques are known to reduce the computation of loop amplitudes with spinning particles to scalar integrals. Unitarity, locality and integration-by-parts identities can then be used to fix complete tree and loop amplitudes efficiently. The loop amplitudes follow algorithmically from the trees. A number of proof-of-concept examples are presented. These include the planar four point two-loop amplitude in pure Yang-Mills theory as well as a range of one loop amplitudes with internal and external scalars, gluons and gravitons. Several interesting features of the results are highlighted, such as the vanishing of certain basis coefficients for gluon and graviton amplitudes. Effective field theories are naturally and efficiently included into the framework. Dimensional regularisation is employed throughout; different regularisation schemes are worked out explicitly. The presented methods appear most powerful in non-supersymmetric theories in cases with relatively few legs, but with potentially many loops. For instance, in the introduced approach, iterated unitarity cuts of four point amplitudes for non-supersymmetric gauge and gravity theories can be computed by matrix multiplication, generalising the so-called rung-rule of maximally supersymmetric theories.
The philosophy of the approach to kinematics also leads to a technique to control colour quantum numbers of scattering amplitudes with matter, especially efficient in the adjoint and fundamental representations.
A new iterative scheme for solving the discrete Smoluchowski equation
NASA Astrophysics Data System (ADS)
Smith, Alastair J.; Wells, Clive G.; Kraft, Markus
2018-01-01
This paper introduces a new iterative scheme for solving the discrete Smoluchowski equation and explores the numerical convergence properties of the method for a range of kernels admitting analytical solutions, in addition to some more physically realistic kernels typically used in kinetics applications. The solver is extended to spatially dependent problems with non-uniform velocities and its performance investigated in detail.
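For context, the discrete Smoluchowski equation itself can be advanced with a naive explicit step for the constant kernel; the paper's contribution is a more capable iterative scheme, and this sketch only shows the equation being solved:

```python
import numpy as np

# Discrete Smoluchowski (coagulation) equation with constant kernel K = 1:
# dn_i/dt = 0.5 * sum_{j+k=i} n_j n_k - n_i * sum_j n_j
n_max, dt, n_steps = 64, 0.01, 200
n = np.zeros(n_max + 1)        # n[i] = number density of size-i clusters
n[1] = 1.0                     # monodisperse initial condition

def rhs(n):
    dn = np.zeros_like(n)
    for i in range(1, n_max + 1):
        gain = 0.5 * sum(n[j] * n[i - j] for j in range(1, i))
        loss = n[i] * n[1:].sum()
        dn[i] = gain - loss
    return dn

mass0 = (np.arange(n_max + 1) * n).sum()
for _ in range(n_steps):
    n += dt * rhs(n)           # forward Euler step
mass = (np.arange(n_max + 1) * n).sum()
print(mass0, mass)             # mass is conserved up to leakage past the cutoff
```

For the constant kernel the total number density obeys dN/dt = -N^2/2, so N(t) = 1/(1 + t/2); at t = 2 the computed N should be near 0.5, a quick sanity check on any solver for this equation.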
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.; Merkel, P.; Monticello, D.A.
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990) (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency of "self-healing" of islands has been observed. © 1999 American Institute of Physics.
Plasma Physics Network Newsletter, No. 3
NASA Astrophysics Data System (ADS)
1991-02-01
This issue of the Newsletter contains a report on the First South-North International Workshop on Fusion Theory, Tipaza, Algeria, 17-20 September, 1990; a report on the issuance of the 'Buenos Aires Memorandum' generated during the IV Latin American Workshop on Plasma Physics, Argentina, July 1990, and containing a proposal that the IFRC establish a 'Steering Committee on North-South Collaboration in Controlled Nuclear Fusion and Plasma Physics Research'; the announcement that the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion will be held in Wuerzburg, Germany, September 30 to October 7, 1992; a list of IAEA technical committee meetings for 1991; an item on ITER news; an article 'Long Term Physics R and D Planning (for ITER)' by F. Engelmann; in the planned sequence of 'Reports on National Fusion Programs', contributions on the Chinese and Yugoslav programs; and finally, the titles and contacts for two other newsletters of potential interest, i.e., the AAAPT (Asian African Association for Plasma Training) Newsletter and the IPG (International Physics Group, a subunit of the American Physical Society) Newsletter.
NASA Astrophysics Data System (ADS)
Vogelgesang, Jonas; Schorr, Christian
2016-12-01
We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to cone beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object, leading to a more accurate model of the X-rays through the object. Physical conditions of the scanning geometry, such as flat detectors in computed tomography as used in non-destructive testing applications, as well as non-regular scanning curves appearing, e.g., in computed laminography (CL), are also directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to significantly increased image quality and superior reconstructions compared to standard iterative methods.
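The row-action building block of Landweber-Kaczmarz-type methods can be sketched as the classical Kaczmarz sweep for a consistent linear system; the weighted, basis-function-dependent subspaces of the paper are omitted in this toy version:

```python
import numpy as np

# Classical Kaczmarz sweeps for a consistent overdetermined system Ax = b:
# each step projects the iterate onto the hyperplane of one row equation.
rng = np.random.default_rng(3)
A = rng.standard_normal((120, 60))
x_true = rng.standard_normal(60)
b = A @ x_true                          # consistent right-hand side

x = np.zeros(60)
for sweep in range(50):
    for i in range(A.shape[0]):         # cyclic pass over the rows
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai

err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
print(err)  # converges to the solution of the consistent system
```

In tomography each "row" is one ray measurement, which is why Kaczmarz-type methods can update the image ray by ray without ever forming the full system matrix.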
Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis
2007-07-01
This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data, such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Litaudon, X; Bernard, J. M.; Colas, L.
2013-01-01
To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate risks of operation in ITER, CEA has initiated an ambitious Research & Development program, accompanied by experiments on Tore Supra and test-bed facilities together with a significant modelling effort. The paper summarizes the recent results in the following areas: comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna. A new model is developed for calculating the ICRH sheath rectification in the antenna vicinity; the model is applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas. Full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code: with 20 MW of power, a current of 400 kA could be driven on axis in the DT scenario, and a comparison between the DT and DT(3He) scenarios is given for heating and current drive efficiencies. First operation of the CW test-bed facility TITAN, designed for testing ITER ICRH components and able to host up to a quarter of an ITER antenna. R&D on high-permittivity materials to improve the loading of test facilities so as to better simulate ITER plasma antenna loading conditions.
Identification of Threshold Concepts for Biochemistry
Green, David; Lewis, Jennifer E.; Lin, Sara; Minderhout, Vicky
2014-01-01
Threshold concepts (TCs) are concepts that, when mastered, represent a transformed understanding of a discipline without which the learner cannot progress. We have undertaken a process involving more than 75 faculty members and 50 undergraduate students to identify a working list of TCs for biochemistry. The process of identifying TCs for biochemistry was modeled on extensive work related to TCs across a range of disciplines and included faculty workshops and student interviews. Using an iterative process, we prioritized five concepts on which to focus future development of instructional materials. Broadly defined, the concepts are steady state, biochemical pathway dynamics and regulation, the physical basis of interactions, thermodynamics of macromolecular structure formation, and free energy. The working list presented here is not intended to be exhaustive, but rather is meant to identify a subset of TCs for biochemistry for which instructional and assessment tools for undergraduate biochemistry will be developed. PMID:25185234
Multilevel Iterative Methods in Nonlinear Computational Plasma Physics
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Finn, J. M.
1997-11-01
Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, in systems of Vlasov-Fokker-Planck equations, and in edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that, with increasing grid refinement, the number of Krylov iterations required per Newton iteration has grown unmanageable [4]. This has led us to the study of multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we will give details of the general multigrid preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in the areas of magnetohydrodynamics and edge plasma physics. Work supported by US DoE. [1] Knoll and McHugh, J. Comput. Phys. 116, 281 (1995); [2] Knoll and McHugh, Comput. Phys. Comm. 88, 141 (1995); [3] Mousseau and Knoll, J. Comput. Phys. (1997, to appear); [4] Knoll and McHugh, SIAM J. Sci. Comput. 19 (1998, to appear).
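The ingredient that makes Newton-Krylov methods matrix-free is the finite-difference Jacobian-vector product: the Krylov solver only ever needs J(u) @ v, which can be approximated from two residual evaluations without assembling J. A minimal sketch on a toy 1D reaction-diffusion residual (not the edge-plasma equations of the poster):

```python
import numpy as np

def F(u):
    # Toy nonlinear residual: discrete diffusion + exponential reaction,
    # with Dirichlet boundary rows. Purely illustrative.
    r = np.empty_like(u)
    r[0], r[-1] = u[0], u[-1]
    r[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:] + 0.1 * np.exp(u[1:-1])
    return r

def jv(F, u, v, eps=1e-7):
    """Matrix-free approximation of the Jacobian-vector product J(u) @ v."""
    return (F(u + eps * v) - F(u)) / eps

rng = np.random.default_rng(4)
u = rng.standard_normal(50)
v = rng.standard_normal(50)

# Compare against the analytically assembled Jacobian for this toy F
J = np.zeros((50, 50))
J[0, 0] = J[-1, -1] = 1.0
for i in range(1, 49):
    J[i, i - 1] = J[i, i + 1] = 1.0
    J[i, i] = -2.0 + 0.1 * np.exp(u[i])
print(np.linalg.norm(jv(F, u, v) - J @ v))  # O(eps) agreement
```

Inside a Newton-Krylov solver, `jv` is wrapped as a linear operator and handed to GMRES; the multigrid preconditioning discussed in the poster then acts on these matrix-free products.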
Simulation of the hybrid and steady state advanced operating modes in ITER
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.
2007-09-01
Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. 
The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.
ERIC Educational Resources Information Center
Moore, Gary T.
This paper questions the physical environmental adequacy of the Infant/Toddler Environment Rating Scale (ITERS) developed by Thelma Harms, Debby Cryer, and Richard Clifford at the University of North Carolina, Chapel Hill. ITERS is a 35-item scale designed to assess the quality of center-based infant and toddler care, and one of a family of child…
Quality measures in applications of image restoration.
Kriete, A; Naim, M; Schafer, L
2001-01-01
We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This very general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this measure is particularly helpful as a user-oriented means to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.
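One common iterative restoration scheme whose iteration count such a quality measure could help select is Richardson-Lucy deconvolution; the paper does not specify its three methods, so this 1D sketch is purely illustrative:

```python
import numpy as np

# Minimal 1D Richardson-Lucy deconvolution sketch (illustrative only).
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
psf /= psf.sum()                       # normalized, symmetric PSF

truth = np.zeros(100)
truth[[30, 50, 52, 70]] = [5.0, 8.0, 6.0, 4.0]   # toy bead-like spikes
data = np.convolve(truth, psf, mode="same")      # noiseless blurred data

u = np.full(100, data.mean())          # flat non-negative start
for _ in range(50):                    # iteration count = the free parameter
    conv = np.convolve(u, psf, mode="same")
    ratio = data / np.maximum(conv, 1e-12)
    u *= np.convolve(ratio, psf, mode="same")    # PSF is symmetric: K^T = K

print(float(np.abs(u - truth).sum()), float(np.abs(data - truth).sum()))
# the restored signal is closer to the truth than the blurred data
```

On noisy data the restoration quality first improves and then degrades as iterations accumulate noise, which is exactly the trade-off an information-theoretic quality descriptor is meant to arbitrate.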
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is sparser in the basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. During the construction of iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. The optimization process allows updating of the sets that approximate the epigraph; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
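A cutting-plane iteration of the general kind described above can be sketched in 1D; here the polyhedral-model minimization (a linear program in general) is done by brute force over a grid, and the cut-dropping updates that are the paper's refinement are omitted:

```python
import numpy as np

# Kelley-style cutting-plane sketch for a 1D convex nonsmooth function.
f = lambda x: abs(x - 1.0) + 0.1 * x ** 2   # convex, nonsmooth; min f(1) = 0.1
g = lambda x: np.sign(x - 1.0) + 0.2 * x    # a valid subgradient of f

grid = np.linspace(-3.0, 4.0, 7001)         # feasible interval, discretized
cuts = [(-3.0, f(-3.0), g(-3.0)), (4.0, f(4.0), g(4.0))]
best = min(f(-3.0), f(4.0))
for _ in range(60):
    # polyhedral underestimator: max over all linear cuts
    model = np.max([fx + gx * (grid - x0) for x0, fx, gx in cuts], axis=0)
    x_new = grid[np.argmin(model)]          # minimizer of the model (the "LP")
    best = min(best, f(x_new))
    cuts.append((x_new, f(x_new), g(x_new)))  # add a new cutting plane

print(best)  # approaches the true minimum f(1) = 0.1
```

Without cut dropping, the model here grows by one plane per iteration; the paper's periodic dropping of cuts keeps the auxiliary linear programs small while preserving convergence.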
NASA Astrophysics Data System (ADS)
Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.
2015-02-01
The development of operating scenarios is one of the key issues in the research for ITER, which aims to achieve a fusion gain (Q) of ˜10 while producing 500 MW of fusion power for ≥300 s. The ITER Research Plan proposes a success-oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance values (li(3) from 0.65 to 1.0) can be obtained, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted, maintaining H-mode, together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW).
H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes, consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations and (burn) control simulations in ITER scenarios.
NASA Astrophysics Data System (ADS)
de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-MOD Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; the JET Contributors; the KSTAR Team; the NSTX-U Team; the TCV Team; the ITPA IOS members and experts
2018-02-01
To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience basis. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramps down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. As a result, ITER terminations will remain longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and the resulting radiation should therefore be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least-squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that avoids storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm must. A comparison with Gaussian elimination is provided.
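The Hessenberg process at the heart of CMRH can be sketched in a few lines. The following is a minimal illustration, not the paper's optimized implementation: it assumes b[0] ≠ 0 (the actual method uses pivoting to avoid this restriction), stores the basis densely, and solves the small Hessenberg least-squares problem with a generic solver rather than Givens rotations. The function name `cmrh` and all parameter names are ours.

```python
import numpy as np

def cmrh(A, b, m=None):
    """Sketch of CMRH: Hessenberg process + small least-squares solve.

    Basis vectors are scaled to have a unit pivot component, so they are
    orthogonal to the standard unit vectors rather than to each other.
    """
    n = len(b)
    m = m or n
    L = np.zeros((n, m + 1))          # Hessenberg-process basis vectors
    H = np.zeros((m + 1, m))          # upper Hessenberg matrix
    beta = b[0]                       # assumes b[0] != 0 (no pivoting here)
    L[:, 0] = b / beta
    for k in range(m):
        u = A @ L[:, k]
        for j in range(k + 1):        # zero components 0..k of u in turn
            H[j, k] = u[j]
            u = u - H[j, k] * L[:, j]
        if k + 1 < n and abs(u[k + 1]) > 1e-14:
            H[k + 1, k] = u[k + 1]
            L[:, k + 1] = u / H[k + 1, k]
        else:                         # full space reached or happy breakdown
            m = k + 1
            break
    # minimize the quasi-residual ||beta*e1 - H y|| (basis is not orthonormal,
    # so this is not the true residual norm, unlike GMRES)
    rhs = np.zeros(m + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H[: m + 1, :m], rhs, rcond=None)
    return L[:, :m] @ y
```

Run to the full dimension, the sketch reproduces the exact solution on a small well-conditioned system.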
INTRODUCTION: Status report on fusion research
NASA Astrophysics Data System (ADS)
Burkart, Werner
2005-10-01
A major milestone on the path to fusion energy was reached in June 2005 on the occasion of the signing of the joint declaration of all parties to the ITER negotiations, agreeing on future arrangements and on the construction site at Cadarache in France. The International Atomic Energy Agency has been promoting fusion activities since the late 1950s; it took over the auspices of the ITER Conceptual Design Activities in 1988, and of the ITER Engineering and Design Activities in 1992. The Agency continues its support to Member States through the organization of consultancies, workshops and technical meetings, the most prominent being the series of International Fusion Energy Conferences (formerly called the International Conference on Plasma Physics and Controlled Nuclear Fusion Research). The meetings serve as a platform for experts from all Member States to have open discussions on their latest accomplishments as well as on their problems and eventual solutions. The papers presented at the meetings and conferences are routinely published, many being sent to the journal Nuclear Fusion, co-published monthly by Institute of Physics Publishing, Bristol, UK. The journal is a world-renowned publication, and the International Fusion Research Council used it to publish a Status Report on Controlled Thermonuclear Fusion in 1978 and in 1990. This present report marks the conclusion of the preparatory phases of ITER activities. It provides background information on the progress of fusion research within the last 15 years. The International Fusion Research Council (IFRC), which initiated the report, was fully aware of the complexities of including all scientific results in just one paper, and so decided to provide an overview and extensive references for the interested reader, who need not necessarily be a fusion specialist. Professor Predhiman K. 
Kaw, Chairman, prepared the report on behalf of the IFRC, reflecting members' personal views on the latest achievements in fusion research, including magnetic and inertial confinement scenarios. The report describes fusion fundamentals and progress in fusion science and technology, with ITER as a possible partner in the realization of self-sustainable burning plasma. The importance of the socio-economic aspects of energy production using fusion power plants is also covered. Noting that applications of plasma science are of broad interest to the Member States, the report addresses the topic of plasma physics to assist in understanding the achievements of better coatings, cheaper light sources, improved heat-resistant materials and other high-technology materials. Nuclear fusion energy production is intrinsically safe, but for ITER the full range of hazards will need to be addressed, including minimising radiation exposure, to accomplish the goal of a sustainable and environmentally acceptable production of energy. We anticipate that the role of the Agency will in future evolve from supporting scientific projects and fostering information exchange to the preparation of safety principles and guidelines for the operation of burning fusion plasmas with a Q > 1. Technical progress in inertial and magnetic confinement, as well as in alternative concepts, will lead to a further increase in international cooperation. New means of communication will be needed, utilizing the best resources of modern information technology to advance interest in fusion. However, today the basis of scientific progress is still through journal publications and, with this in mind, we trust that this report will find an interested readership. We acknowledge with thanks the support of the members of the IFRC as an advisory body to the Agency. 
Seven chairmen have presided over the IFRC since its first meeting in 1971 in Madison, USA, ensuring that the IAEA fusion efforts were based on the best professional advice possible, and that information on fusion developments has been widely and expertly disseminated. We further acknowledge the efforts of the Chairman of the IFRC and of all authors and experts who contributed to this report on the present status of fusion research.
Progress in the Design and Development of the ITER Low-Field Side Reflectometer (LFSR) System
NASA Astrophysics Data System (ADS)
Doyle, E. J.; Wang, G.; Peebles, W. A.; US LFSR Team
2015-11-01
The US has formed a team, comprised of personnel from PPPL, ORNL, GA and UCLA, to develop the LFSR system for ITER. The LFSR system will contribute to the measurement of a number of plasma parameters on ITER, including edge plasma electron density profiles, the monitoring of Edge Localized Modes (ELMs) and L-H transitions, and physics measurements relating to high-frequency instabilities, plasma flows, and other density transients. An overview of the status of design activities and component testing for the system will be presented. Since the 2011 conceptual design review, the number of microwave transmission lines (TLs) and antennas has been reduced from twelve (12) to seven (7) due to space constraints in the ITER Tokamak Port Plug. This change has required a reconfiguration and recalculation of the performance of the front-end antenna design, which now includes the use of monostatic transmission lines and antennas. Work supported by US ITER/PPPL Subcontracts S013252-C and S012340, and PO 4500051400 from GA to UCLA.
Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic
NASA Astrophysics Data System (ADS)
Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.
2015-11-01
Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges, including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component, with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.
Baseline Architecture of ITER Control System
NASA Astrophysics Data System (ADS)
Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.
2011-08-01
The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and during 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boozer, Allen H., E-mail: ahb17@columbia.edu
2015-03-15
The plasma current in ITER cannot be allowed to transfer from thermal to relativistic electron carriers. The potential for damage is too great. Before the final design is chosen for the mitigation system to prevent such a transfer, it is important that the parameters that control the physics be understood. Equations that determine these parameters and their characteristic values are derived. The mitigation benefits of injecting impurities with the highest possible atomic number Z and of slowing the plasma cooling during halo current mitigation to ≳40 ms in ITER are discussed. The highest possible Z increases the poloidal flux consumption required for each e-fold in the number of relativistic electrons and reduces the number of high-energy seed electrons from which exponentiation builds. Slow cooling of the plasma during halo current mitigation also reduces the electron seed. Existing experiments could test physics elements required for mitigation but cannot carry out an integrated demonstration. ITER itself cannot carry out an integrated demonstration without excessive danger of damage unless the probability of successful mitigation is extremely high. The probability of success depends on the reliability of the theory. Equations required for a reliable Monte Carlo simulation are derived.
Wei, Jianming; Zhang, Youan; Sun, Meimei; Geng, Baoliang
2017-09-01
This paper presents an adaptive iterative learning control scheme for a class of nonlinear systems with unknown time-varying delays and unknown control direction, preceded by unknown nonlinear backlash-like hysteresis. A boundary layer function is introduced to construct an auxiliary error variable, which relaxes the identical initial condition assumption of iterative learning control. For the controller design, an integral Lyapunov function candidate is used, which avoids the possible singularity problem by introducing a hyperbolic tangent function. After compensating for uncertainties with time-varying delays by combining an appropriate Lyapunov-Krasovskii function with Young's inequality, an adaptive iterative learning control scheme is designed through a neural approximation technique and the Nussbaum function method. On the basis of the hyperbolic tangent function's characteristics, the system output is proved to converge to a small neighborhood of the desired trajectory by constructing a Lyapunov-like composite energy function (CEF) in two cases, while keeping all the closed-loop signals bounded. Finally, a simulation example is presented to verify the effectiveness of the proposed approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willert, Jeffrey; Taitano, William T.; Knoll, Dana
In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that these two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm. We then describe two application problems, one from neutronics and one from plasma physics, to which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
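As a concrete illustration of the technique, the following is a minimal sketch of (Type-II) Anderson Acceleration for a generic fixed-point problem x = g(x); it is not the authors' transport code. The window size m, the least-squares mixing step, and the function name are standard textbook choices, not taken from the note.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, maxit=200):
    """Type-II Anderson Acceleration for the fixed point x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, float))
    Gs, Fs = [], []                  # histories of g(x_k) and residuals f_k
    for k in range(maxit):
        gx = np.atleast_1d(np.asarray(g(x), float))
        f = gx - x                   # residual of the Picard map
        Gs.append(gx); Fs.append(f)
        Gs, Fs = Gs[-(m + 1):], Fs[-(m + 1):]   # keep a window of m+1 entries
        if np.linalg.norm(f) < tol:
            return x, k
        if len(Fs) == 1:
            x = gx                   # plain Picard step on the first iteration
        else:
            # least-squares mixing of the stored residual differences
            dF = np.column_stack([Fs[i + 1] - Fs[i] for i in range(len(Fs) - 1)])
            dG = np.column_stack([Gs[i + 1] - Gs[i] for i in range(len(Gs) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
    return x, maxit
```

On a scalar test problem such as x = cos(x), this reaches machine-level accuracy in far fewer iterations than the plain Picard iteration, illustrating the acceleration the note reports for transport.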
NASA Astrophysics Data System (ADS)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb^2) in the size of the atomic orbitals basis set, Nb, instead of the practically intractable O(Nb^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities possesses almost the same magnitude as the number of occupied orbitals in the molecular systems, No.
NASA Astrophysics Data System (ADS)
Rani, Monika; Bhatti, Harbax S.; Singh, Vikramjeet
2017-11-01
In optical communication, the behavior of the ultrashort pulses of optical solitons can be described through the nonlinear Schrodinger equation. This partial differential equation is widely used to contemplate a number of physically important phenomena, including optical shock waves, laser and plasma physics, quantum mechanics, and elastic media. The exact analytical solution of the (1+n)-dimensional higher-order nonlinear Schrodinger equation by He's variational iteration method has been presented. Our proposed solutions are very helpful in studying the solitary wave phenomena, ensure rapidly convergent series, and avoid round-off errors. Different examples with graphical representations have been given to justify the capability of the method.
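For readers unfamiliar with the variational iteration method, the generic correction functional (a standard textbook form, not the paper's specific (1+n)-dimensional expression) reads:

```latex
% Generic VIM correction functional for an equation written as L u + N u = g(t):
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,
    \bigl[\, L u_n(s) + N \tilde{u}_n(s) - g(s) \,\bigr]\, ds
```

Here λ is a general Lagrange multiplier, identified optimally via variational theory, and ũ_n denotes a restricted variation (δũ_n = 0); when the method applies, successive iterates converge rapidly to the exact solution, which is the source of the rapid series convergence claimed above.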
ERIC Educational Resources Information Center
Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion
2016-01-01
This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…
Electron-cyclotron wave scattering by edge density fluctuations in ITER
NASA Astrophysics Data System (ADS)
Tsironis, Christos; Peeters, Arthur G.; Isliker, Heinz; Strintzi, Dafni; Chatziantonaki, Ioanna; Vlahos, Loukas
2009-11-01
The effect of edge turbulence on the electron-cyclotron wave propagation in ITER is investigated with emphasis on wave scattering, beam broadening, and its influence on localized heating and current drive. A wave used for electron-cyclotron current drive (ECCD) must cross the edge of the plasma, where density fluctuations can be large enough to bring on wave scattering. The scattering angle due to the density fluctuations is small, but the beam propagates over a distance of several meters up to the resonance layer and even small angle scattering leads to a deviation of several centimeters at the deposition location. Since the localization of ECCD is crucial for the control of neoclassical tearing modes, this issue is of great importance to the ITER design. The wave scattering process is described on the basis of a Fokker-Planck equation, where the diffusion coefficient is calculated analytically as well as computed numerically using a ray tracing code.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximate policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions, and the learning control performance can be improved for a variety of parameter settings.
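A minimal sketch of the basis-construction step described above (K-means subsampling of states, a Gaussian-affinity graph over the cluster centers, and the smoothest Laplacian eigenvectors as basis functions) might look as follows; the kernel width, cluster count and all names are illustrative assumptions, and the paper's RPI learning loop is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=50):
    """Plain K-means: subsample the state space down to k representative centers."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def laplacian_basis(states, k=20, sigma=0.5, n_basis=5):
    """Build VFA basis functions from the graph Laplacian of cluster centers."""
    centers = kmeans(states, k)
    d2 = ((centers[:, None] - centers[None]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))    # dense Gaussian affinity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)        # ascending eigenvalues
    return centers, vecs[:, :n_basis]     # smoothest eigenvectors as basis
```

A state's feature vector would then be read off from the basis row of its nearest center; the value function is approximated as a linear combination of these features.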
Automatic knee cartilage delineation using inheritable segmentation
NASA Astrophysics Data System (ADS)
Dries, Sebastian P. M.; Pekar, Vladimir; Bystrov, Daniel; Heese, Harald S.; Blaffert, Thomas; Bos, Clemens; van Muiswinkel, Arianne M. C.
2008-03-01
We present a fully automatic method for segmentation of knee joint cartilage from fat-suppressed MRI. The method first applies 3-D model-based segmentation technology, which reliably segments the femur, patella, and tibia by iterative adaptation of the model according to image gradients. Thin-plate-spline interpolation is used in the next step to position deformable cartilage models for each of the three bones with reference to the segmented bone models. After initialization, the cartilage models are fine-adjusted by automatic iterative adaptation to the image data based on gray value gradients. The method has been validated on a collection of 8 (3 left, 5 right) fat-suppressed datasets and demonstrated a sensitivity of 83 ± 6% compared to manual segmentation on a per-voxel basis as the primary endpoint. Gross cartilage volume measurement yielded an average error of 9 ± 7% as the secondary endpoint. For cartilage, being a thin structure, even small deviations in distance result in large errors on a per-voxel basis, rendering the primary endpoint a hard criterion.
NASA Astrophysics Data System (ADS)
Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang
2017-11-01
Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of an acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.
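The FISTA core that such a method builds on can be sketched generically. This is plain FISTA for an l1-penalized least-squares problem, not the propagation-based variant of the paper: the matrix A stands in for whatever propagation operator is used, and all names and parameters are illustrative.

```python
import numpy as np

def fista(A, b, lam=0.1, iters=200):
    """Generic FISTA sketch for min 0.5*||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)           # gradient step on the extrapolated point
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2                   # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The soft-threshold step is what promotes the "weakly sparse" solutions referred to above; the momentum term is what distinguishes FISTA from plain iterative shrinkage-thresholding.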
NASA Astrophysics Data System (ADS)
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
An efficient algorithm for the generalized Foldy-Lax formulation
NASA Astrophysics Data System (ADS)
Huang, Kai; Li, Peijun; Zhao, Hongkai
2013-02-01
Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat those small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution of the formulation are established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient, physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each step of the iteration. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
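The physically motivated Gauss-Seidel iteration described above alternates between the two blocks of the coupled system. A generic numerical sketch on a dense 2x2 block system (the actual method solves a Foldy-Lax system and a boundary integral equation, not dense matrices; all names here are illustrative) might look like:

```python
import numpy as np

def block_gauss_seidel(A, B, C, D, b1, b2, tol=1e-12, maxit=500):
    """Solve the coupled system [[A, B], [C, D]] [x; y] = [b1; b2] by
    alternating solves for each block, converging when the blocks are
    weakly coupled (small off-diagonal B and C)."""
    x = np.zeros(len(b1))
    y = np.zeros(len(b2))
    for k in range(maxit):
        x_new = np.linalg.solve(A, b1 - B @ y)       # "point-scatterer" block
        y_new = np.linalg.solve(D, b2 - C @ x_new)   # "extended-obstacle" block
        if max(np.linalg.norm(x_new - x), np.linalg.norm(y_new - y)) < tol:
            return x_new, y_new, k
        x, y = x_new, y_new
    return x, y, maxit
```

The convergence condition mirrors the paper's characterization: the iteration contracts when the coupling blocks are small relative to the diagonal blocks, which physically corresponds to well-separated scatterers with modest scattering coefficients.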
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, can improve image quality over analytic algorithms because they can incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
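A PLS reconstruction with a quadratic smoothness penalty, as named above, can be illustrated in miniature. Here H stands in for the (much larger) OAT imaging model, D is a simple first-difference operator, and plain gradient descent replaces whatever optimizer the authors used; all names and parameters are illustrative assumptions.

```python
import numpy as np

def pls_quadratic(H, g, beta=0.1, iters=2000):
    """Penalized least-squares sketch: minimize
    0.5*||H f - g||^2 + 0.5*beta*||D f||^2, with D a first-difference
    smoothness operator, via plain gradient descent."""
    n = H.shape[1]
    D = (np.eye(n) - np.eye(n, k=1))[:-1]   # (n-1) x n first differences
    M = H.T @ H + beta * D.T @ D            # normal-equations operator
    tau = 1.0 / np.linalg.norm(M, 2)        # step size below 2/L for stability
    f = np.zeros(n)
    for _ in range(iters):
        f = f - tau * (M @ f - H.T @ g)     # gradient of the penalized objective
    return f
```

Swapping the quadratic penalty for a total variation norm, as in the paper's second method, would replace the smooth gradient step with a proximal update, since the TV term is not differentiable.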
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
2014-03-01
accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. [...] However, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system. [Table residue: boundary layer problem; convergence criterion: residuals < 10^-8.]
A Burning Plasma Experiment: the role of international collaboration
NASA Astrophysics Data System (ADS)
Prager, Stewart
2003-04-01
The world effort to develop fusion energy is at the threshold of a new stage in its research: the investigation of burning plasmas. A burning plasma is self-heated. The 100 million degree temperature of the plasma is maintained by the heat generated by the fusion reactions themselves, as occurs in burning stars. The fusion-generated alpha particles produce new physical phenomena that are strongly coupled together as a nonlinear complex system, posing a major plasma physics challenge. Two attractive options are being considered by the US fusion community as burning plasma facilities: the international ITER experiment and the US-based FIRE experiment. ITER (the International Thermonuclear Experimental Reactor) is a large, power-plant scale facility. It was conceived and designed by a partnership of the European Union, Japan, the Soviet Union, and the United States. At the completion of the first engineering design in 1998, the US discontinued its participation. FIRE (the Fusion Ignition Research Experiment) is a smaller, domestic facility that is at an advanced pre-conceptual design stage. Each facility has different scientific, programmatic and political implications. Selecting the optimal path for burning plasma science is itself a challenge. Recently, the Fusion Energy Sciences Advisory Committee recommended a dual path strategy in which the US seek to rejoin ITER, but be prepared to move forward with FIRE if the ITER negotiations do not reach fruition by July, 2004. Either the ITER or FIRE experiment would reveal the behavior of burning plasmas, generate large amounts of fusion power, and be a huge step in establishing the potential of fusion energy to contribute to the world's energy security.
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase retrieval technique has been developed that can be used on board a space-based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, so that parts of the calculation do not have to be repeated at each iteration as the solution converges to focus an image.
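The "diagonal form via a similarity transform" idea can be illustrated with the textbook fact that the DFT matrix diagonalizes any circulant (cyclic-convolution) operator; the unitary normalization and toy problem size below are our choices for illustration, not details from this record:

```python
import numpy as np

def dft_matrix(n):
    # Unitary DFT matrix: F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

n = 8
F = dft_matrix(n)
c = np.random.default_rng(0).standard_normal(n)

# Circulant matrix with first column c: C[i, j] = c[(i - j) mod n]
C = np.array([np.roll(c, i) for i in range(n)]).T

# Similarity transform: F C F^* is diagonal, with the (unnormalized) DFT
# of c on the diagonal -- so repeated applications of C can reuse one
# precomputed transform instead of redoing the full matrix product.
D = F @ C @ F.conj().T
```

Once the basis change is precomputed, applying the operator at each iteration reduces to an elementwise multiplication by the diagonal, which is the kind of saving the abstract alludes to.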
NASA Astrophysics Data System (ADS)
Lister, Jo, Dr
2004-12-01
Jack Connor, Jim Hastie and Bryan Taylor The Hannes Alfvén Prize of the European Physical Society for Outstanding Contributions to Plasma Physics (2004) has been awarded to Jack Connor, Jim Hastie and Bryan Taylor `for their seminal contributions to a wide range of issues of fundamental importance to the success of magnetic confinement fusion, including: the development of gyro-kinetic theory; the prediction of the bootstrap current; dimensionless scaling laws; pressure-limiting instabilities, and micro-stability and transport theory'. Jack Connor, Jim Hastie and Bryan Taylor form one of the most successful teams of theoretical physicists in the history of magnetic confinement fusion. They have made important contributions individually, but their greatest discoveries have mostly been accomplished jointly, either in pairs or as a team involving all three. Their early work, in the 1960s, included the development of the gyro-kinetic theory for fine-scale plasma instabilities, which today forms the basis of the most advanced turbulence simulation codes in tokamak and stellarator research. The theoretical prediction of the bootstrap current, made in 1970-71 was not confirmed experimentally for over a decade but is now regarded as crucial to the success of the tokamak as a steady-state fusion power source. Their work on collisional transport also included the prediction of impurity ion accumulation, which is observed in internal transport barriers and is a key concern for long-pulse tokamak operation. The relativistic threshold for runaway electrons, identified in 1975, forms the basis of the most recent tokamak disruption mitigation schemes. In the late 1970s, the team developed the theory for ballooning instabilities, which provided an important ingredient in the `Troyon-Sykes' β-limit—an expression that is still used as a guide to the performance of tokamaks and in the design of ITER. 
Ballooning mode theory has also contributed to the understanding of instabilities in space plasmas such as magnetospheres and the solar corona. Finally, coming right up to date, the ballooning mode is thought to be a key ingredient in edge-localized modes (ELMs), which are a main issue for ITER, and ballooning stability is an important feature of modern stellarators. In the late 1970s and through the 1980s, the concept of dimensionless scaling laws was introduced and developed (following work by Kadomtsev), enabling scalings for transport coefficients to be derived without tackling all the details of the plasma turbulence. The same ideas are still used today to provide various constraints on confinement scaling laws, for example, on which the ITER design is largely based. The linear theory of toroidal drift waves was also developed by the team during this period, and into the 1990s. Key results on the role of shear damping in toroidal geometry, the identification of modes with extended radial correlation lengths, and the role of flow shear in reducing these correlation lengths (and hence transport) were deduced. All of these are key ideas that are often components in theoretical models for tokamak confinement and the generation of transport barriers. This laudation can only address a small number of the areas in which this formidable team of theoretical plasma physicists have made great contributions to our understanding of magnetically confined plasmas. It is appropriate and timely that their contributions are recognized as they approach the end of their careers.
User Testing of Consumer Medicine Information in Australia
ERIC Educational Resources Information Center
Jay, Eleanor; Aslani, Parisa; Raynor, D. K.
2011-01-01
Background: Consumer Medicine Information (CMI) forms an important basis for the dissemination of medicines information worldwide. Methods: This article presents an overview of the design and development of Australian CMI, and discusses "user-testing" as an iterative, formative process for CMI design. Findings: In Australia, legislation…
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Developing a Virtual Physics World
ERIC Educational Resources Information Center
Wegener, Margaret; McIntyre, Timothy J.; McGrath, Dominic; Savage, Craig M.; Williamson, Michael
2012-01-01
In this article, the successful implementation of a development cycle for a physics teaching package based on game-like virtual reality software is reported. The cycle involved several iterations of evaluating students' use of the package followed by instructional and software development. The evaluation used a variety of techniques, including…
Statistical Physics for Adaptive Distributed Control
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2005-01-01
A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
Beyond ITER: neutral beams for a demonstration fusion reactor (DEMO) (invited).
McAdams, R
2014-02-01
In the development of magnetically confined fusion as an economically sustainable power source, the International Thermonuclear Experimental Reactor (ITER) is currently under construction. Beyond ITER lies the demonstration fusion reactor (DEMO) programme, in which the physics and engineering aspects of a future fusion power plant will be demonstrated. DEMO will produce net electrical power. The DEMO programme will be outlined and the role of neutral beams for heating and current drive will be described. In particular, the importance of the efficiency of neutral beam systems, in terms of injected neutral beam power compared to wall-plug power, will be discussed. Options for improving this efficiency, including advanced neutralisers and energy recovery, are also discussed.
Status of the ITER Electron Cyclotron Heating and Current Drive System
NASA Astrophysics Data System (ADS)
Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio; Carannante, Giuseppe; Cavinato, Mario; Cismondi, Fabio; Denisov, Grigory; Farina, Daniela; Gagliardi, Mario; Gandini, Franco; Gassmann, Thibault; Goodman, Timothy; Hanson, Gregory; Henderson, Mark A.; Kajiwara, Ken; McElhaney, Karen; Nousiainen, Risto; Oda, Yasuhisa; Omori, Toshimichi; Oustinov, Alexander; Parmar, Darshankumar; Popov, Vladimir L.; Purohit, Dharmesh; Rao, Shambhu Laxmikanth; Rasmussen, David; Rathod, Vipal; Ronden, Dennis M. S.; Saibene, Gabriella; Sakamoto, Keishi; Sartori, Filippo; Scherer, Theo; Singh, Narinder Pal; Strauß, Dirk; Takahashi, Koji
2016-01-01
The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER is made of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond.
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo
2018-06-01
We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question. These kernels allow us to express all iterated integrals on an elliptic curve in terms of them. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.
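In standard notation (our transcription; conventions may differ from the authors'), such iterated integrals are built recursively from integration kernels:

```latex
I(a_1, a_2, \ldots, a_n; x) = \int_0^x \frac{\mathrm{d}t}{t - a_1}\, I(a_2, \ldots, a_n; t),
\qquad I(;x) = 1 .
```

The elliptic generalization described in the abstract replaces the rational kernels $\mathrm{d}t/(t-a_i)$ by kernels whose form is dictated by the branch points of the elliptic curve.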
EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granucci, G.; Ricci, D.; Farina, D.
The breakdown and plasma start-up in ITER are well-known issues, studied over the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum toroidal electric field achievable (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron Power to assist plasma formation and current ramp-up has been foreseen. This has drawn attention to the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, owing to the complexity of the problem. A simplified approach, extended from that proposed in ref. [1], has been developed, including an impurity multispecies distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked on ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependency on impurity distribution content and on neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) appears sufficient to extend in a significant way the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.
Physics of Tokamak Plasma Start-up
NASA Astrophysics Data System (ADS)
Mueller, Dennis
2012-10-01
This tutorial describes and reviews the state-of-the-art in tokamak plasma start-up and its importance to next-step devices such as ITER, a Fusion Nuclear Science Facility and a tokamak/ST demo. Tokamak plasma start-up includes breakdown of the initial gas, ramp-up of the plasma current to its final value and the control of plasma parameters during those phases. Tokamaks rely on an inductive component, typically a central solenoid, which has enabled the attainment of the high performance levels that underpin the construction of the ITER device. Optimizing the inductive start-up phase continues to be an area of active research, especially with regard to achieving ITER scenarios. A new generation of superconducting tokamaks, EAST and KSTAR, experiments on DIII-D and operation with JET's ITER-like wall are contributing towards this effort. Inductive start-up relies on transformer action to generate a toroidal loop voltage, and successful start-up is determined by gas breakdown, avalanche physics and plasma-wall interaction. The goal of achieving steady-state tokamak operation has motivated interest in other methods of start-up that do not rely on the central solenoid. These include Coaxial Helicity Injection, outer poloidal field coil start-up, and point-source helicity injection, which have achieved 200, 150 and 100 kA, respectively, of toroidal current on closed flux surfaces. Other methods, including merging-reconnection start-up and Electron Bernstein Wave (EBW) plasma start-up, are being studied on various devices. EBW start-up generates a directed electron channel through wave-particle interaction physics, while the other methods mentioned rely on magnetic helicity injection and magnetic reconnection, which are being modeled and understood using NIMROD code simulations.
DIII-D Upgrade to Prepare the Basis for Steady-State Burning Plasmas
NASA Astrophysics Data System (ADS)
Buttery, R. J.; Guo, H. Y.; Taylor, T. S.; Wade, M. R.; Hill, D. N.
2014-10-01
Future steady-state burning plasma facilities will access new physics regimes and modes of plasma behavior. It is vital to prepare for this both experimentally using existing facilities, and theoretically in order to develop the tools to project to and optimize these devices. An upgrade to DIII-D is proposed to address the three critical aspects where research must go beyond what we can do now: (i) torque free electron heating to address the energy, particle and momentum transport mechanisms of burning plasmas using electron cyclotron (EC) heating and full power balanced neutral beams; (ii) off-axis heating and current drive to develop the path to true fusion steady state by reorienting neutral beams and deploying EC and helicon current drive; (iii) a new divertor with hot walls and reactor relevant materials to develop the basis for benign detached divertor operation compatible with wall materials and a high performance fusion core. These elements with modest incremental cost and enacted as a user facility for the whole US program will enable the US to lead on ITER and take a decision to proceed with a Fusion Nuclear Science Facility. Work supported by the US Department of Energy under DE-FC02-04ER54698 and DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Assi, I. A.; Sous, A. J.
2018-05-01
The goal of this work is to derive a new class of short-range potentials that could have a wide range of physical applications, especially in molecular physics. The tridiagonal representation approach has been developed beyond its limitations to produce new potentials by requiring the representation of the Schrödinger wave operator to be multidiagonal and symmetric. This produces a family of Hulthén potentials with a specific structure, as mentioned in the introduction. As an example, we have solved the nonrelativistic wave equation for the new four-parameter short-range screening potential numerically using the asymptotic iteration method, and we tabulate the eigenvalues for both the s-wave and arbitrary l-wave cases.
ERIC Educational Resources Information Center
Pill, Shane
2012-01-01
"Game sense" is a sport-specific iteration of the teaching games for understanding model, designed to balance physical development of motor skill and fitness with the development of game understanding. Game sense can foster a shared vision for sport learning that bridges school physical education and community sport. This article explains how to…
Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M
2003-01-01
In an experimental study based on simulated and physical phantoms, the propagation of stochastic noise in slices reconstructed using the conjugate gradient algorithm has been analysed as a function of iteration number. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau, as well as the slope of the subsequent linear increase, depends on the noise in the projection data.
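A minimal conjugate gradient solver that records the iterate after every step, as one would when studying per-iteration noise behaviour, might look like this; the small SPD test system is synthetic, not a tomographic projection model:

```python
import numpy as np

def conjugate_gradient(A, b, n_iters):
    """Plain CG on a symmetric positive-definite system, returning the
    iterate after every step so per-iteration behaviour can be examined."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    iterates = []
    for _ in range(n_iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        iterates.append(x.copy())
    return iterates

# Small synthetic SPD test system
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
iterates = conjugate_gradient(A, b, 20)
```

Comparing `iterates[k]` for reconstructions of noisy versus noise-free data, iteration by iteration, is the kind of analysis the abstract describes.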
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, and a ringing metric to quantify the ringing artifacts introduced in the images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and agrees with the visual observation of ringing artifacts in the restored images. PMID:20186290
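A sketch of the method of snapshots on a synthetic 1-D "image" (the signal shape, noise level and ensemble size are all invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of M noisy snapshots of one underlying signal (flattened to vectors)
truth = np.sin(np.linspace(0, 2 * np.pi, 256))
snapshots = np.stack([truth + 0.3 * rng.standard_normal(256) for _ in range(40)])

# Method of snapshots: eigendecompose the small M x M snapshot correlation
# matrix instead of the much larger pixel-space covariance matrix
C = snapshots @ snapshots.T / snapshots.shape[0]
vals, vecs = np.linalg.eigh(C)

# KLE basis vector with the largest eigenvalue (eigh sorts ascending)
phi = snapshots.T @ vecs[:, -1]
phi /= np.linalg.norm(phi)

# Project a noisy snapshot onto the leading mode: most noise is removed
denoised = (snapshots[0] @ phi) * phi
```

Reconstructing with only the dominant KLE mode is the prefiltering step the abstract describes; the deblurring (Lucy-Richardson or inverse restoration) would then operate on the denoised sections.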
Inductive flux usage and its optimization in tokamak operation
Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...
2014-07-30
The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the desired conditions to meet the physics objectives (500 MW fusion power with an energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.
Modelling of edge localised modes and edge localised mode control
Huijsmans, G. T. A.; Chang, C. S.; Ferraro, N.; ...
2015-02-07
Edge Localised Modes (ELMs) in ITER Q = 10 H-mode plasmas are likely to lead to large transient heat loads to the divertor. In order to avoid an ELM-induced reduction of the divertor lifetime, the large ELM energy losses need to be controlled. In ITER, ELM control is foreseen using magnetic field perturbations created by in-vessel coils and the injection of small D2 pellets. ITER plasmas are characterised by low collisionality at high density (a high fraction of the Greenwald density limit). These parameters cannot be achieved simultaneously in current experiments. Thus, the extrapolation of the ELM properties and the requirements for ELM control in ITER relies on the development of validated physics models and numerical simulations. Here, we describe the modelling of ELMs and ELM control methods in ITER. The aim of this paper is not a complete review of ELM and ELM control modelling but rather to describe the current status and discuss open issues.
Arc detection for the ICRF system on ITER
NASA Astrophysics Data System (ADS)
D'Inca, R.
2011-12-01
The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, the analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns in extrapolating the results from basic experiments and present machines to the ITER-scale ICRF system and in conducting a relevant risk analysis.
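As an illustration of the first class of detectors, a VSWR-based trip can be sketched from directional-coupler power readings using the standard relations Γ = sqrt(P_r/P_f) and VSWR = (1+Γ)/(1-Γ); the trip threshold and the power values are arbitrary placeholders, not ITER specifications:

```python
import math

def vswr(p_forward, p_reflected):
    """Voltage standing-wave ratio from directional-coupler power readings."""
    gamma = math.sqrt(p_reflected / p_forward)  # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

def arc_trip(p_forward, p_reflected, threshold=2.0):
    """Trip (shut RF power down) when the VSWR exceeds a preset threshold,
    since the sudden impedance change of an arc raises the reflected power."""
    return vswr(p_forward, p_reflected) > threshold

print(arc_trip(1.0e6, 1.0e3))   # matched line, little reflection -> False
print(arc_trip(1.0e6, 2.5e5))   # strong reflection, arc-like -> True
```

The known weakness of this scheme, alluded to in the abstract, is that some arcs barely perturb the VSWR, which is why noise, optical, sound and S-matrix detectors are studied as complements.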
Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You
2018-05-18
RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR obtained the excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved the good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for our ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.
Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Li, L.; Cao, Z.; Zhou, H.
2017-12-01
Groundwater modeling calls for an effective and robust integration method to fill the gap between model and data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all of the data at once and therefore requires much less computation. To further improve its performance, an iterative ES was proposed that continuously updates the models while assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES and the iterative ES using a synthetic example in groundwater modeling. The hydraulic head data modeled on the basis of the reference conductivity field are used to inversely estimate conductivities at unsampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that: (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost; and (2) the iterative ES performs better than the ES thanks to its continuous updating. These findings suggest that the iterative ES deserves much more attention for data assimilation in groundwater modeling.
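A minimal stochastic-EnKF analysis step on a toy state-estimation problem can be sketched as follows; the state dimension, noise levels and ensemble size are invented for illustration and are far smaller than a real conductivity-field problem:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, H, y, obs_cov):
    """Stochastic EnKF analysis step; ensemble is (n_members, n_state)."""
    X = ensemble
    n, _ = X.shape
    Xm = X - X.mean(axis=0)
    Y = X @ H.T                      # predicted observations per member
    Ym = Y - Y.mean(axis=0)
    Pxy = Xm.T @ Ym / (n - 1)        # state-observation covariance
    Pyy = Ym.T @ Ym / (n - 1) + obs_cov
    K = Pxy @ np.linalg.inv(Pyy)     # Kalman gain
    # Perturbed observations, one independent draw per ensemble member
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), obs_cov, size=n)
    return X + (y_pert - Y) @ K.T

# Toy problem: estimate a 3-component state from a noisy observation of component 0
truth = np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.01]])
prior = truth + rng.standard_normal((200, 3))
post = enkf_update(prior, H, truth[:1] + 0.05, R)
```

The EnKF applies this update sequentially as each batch of data arrives, whereas the (iterative) ES assembles all observations into one large `y` and applies the update once (or repeatedly), which is the computational trade-off the abstract compares.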
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
Loads specification and embedded plate definition for the ITER cryoline system
NASA Astrophysics Data System (ADS)
Badgujar, S.; Benkheira, L.; Chalifour, M.; Forgeas, A.; Shah, N.; Vaghela, H.; Sarkar, B.
2015-12-01
The ITER cryolines (CLs) are a complex network of vacuum-insulated multi- and single-process pipelines, distributed over three different areas at the ITER site. The CLs will support different operating loads during the machine lifetime, considered as either nominal, occasional or exceptional. The major loads which form the design basis are inertial, pressure, temperature, assembly, magnetic, snow, wind and enforced relative displacement, and are put together in a loads specification. Based on the defined load combinations, a conceptual estimation of reaction loads has been carried out for the lines located inside the Tokamak building. Adequate numbers of embedded plates (EPs) per line have been defined and integrated in the building design. Finalizing the building EPs to support the lines before the detailed design is one of the major design challenges, as it alters the usual design logic. At the ITER project level, it was important to finalize the EPs to allow adequate design and timely availability of the Tokamak building. The paper describes the single loads and the load combinations considered in the loads specification, and the approach for conceptual load estimation and selection of EPs, taking the Toroidal Field (TF) Cryoline as an example, by converting the load combinations into two main load categories: pressure and seismic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clementson, Joel
2010-05-01
The spectra of highly charged tungsten ions have been investigated using x-ray and extreme ultraviolet spectroscopy. These heavy ions are of interest in relativistic atomic structure theory, where high-precision wavelength measurements benchmark theoretical approaches, and in magnetic fusion research, where the ions may serve to diagnose high-temperature plasmas. The work details spectroscopic investigations of highly charged tungsten ions measured at the Livermore electron beam ion trap (EBIT) facility. Here, the EBIT-I and SuperEBIT electron beam ion traps have been employed to create, trap, and excite tungsten ions of M- and L-shell charge states. The emitted spectra have been studied in high resolution using crystal, grating, and x-ray calorimeter spectrometers. In particular, wavelengths of Δn = 0 M-shell transitions in K-like W 55+ through Ne-like W 64+, and intershell transitions in Zn-like W 44+ through Co-like W 47+ have been measured. Special attention is given to the Ni-like W 46+ ion, which has two strong electric-dipole-forbidden transitions that are of interest for plasma diagnostics. The EBIT measurements are complemented by spectral modeling using the Flexible Atomic Code (FAC), and predictions for tokamak spectra are presented. The L-shell tungsten ions have been studied at electron-beam energies of up to 122 keV and transition energies measured in Ne-like W 64+ through Li-like W 71+. These spectra constitute the physics basis in the design of the ion-temperature crystal spectrometer for the ITER tokamak. Tungsten particles have furthermore been introduced into the Sustained Spheromak Physics Experiment (SSPX) spheromak in Livermore in order to investigate diagnostic possibilities of extreme ultraviolet tungsten spectra for the ITER divertor. The spheromak measurement and spectral modeling using FAC suggest that tungsten ions in charge states around Er-like W 6+ could be useful for plasma diagnostics.
Examination of the Entry to Burn and Burn Control for the ITER 15 MA Baseline and Other Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kessel, Charles E.; Kim, S-H.; Koechl, F.
2014-09-01
The entry to burn and flattop burn control in ITER will be a critical need from the first DT experiments. Simulations are used to address time-dependent behavior under a range of possible conditions that include injected power level, impurity content (W, Ar, Be), density evolution, H-mode regimes, controlled parameter (Wth, Pnet, Pfusion), and actuator (Paux, fueling, fAr), with a range of transport models. A number of physics issues at the L-H transition require better understanding to project to ITER; however, simulations indicate viable control with sufficient auxiliary power (up to 73 MW), while lower powers (as low as 43 MW) become marginal.
NASA Astrophysics Data System (ADS)
Phillips, Jordan J.; Zgid, Dominika
2014-06-01
We report an implementation of self-consistent Green's function many-body theory within a second-order approximation (GF2) for application with molecular systems. This is done by iterative solution of the Dyson equation expressed in matrix form in an atomic orbital basis, where the Green's function and self-energy are built on the imaginary frequency and imaginary time domain, respectively, and fast Fourier transform is used to efficiently transform these quantities as needed. We apply this method to several archetypical examples of strong correlation, such as a H32 finite lattice that displays a highly multireference electronic ground state even at equilibrium lattice spacing. In all cases, GF2 gives a physically meaningful description of the metal to insulator transition in these systems, without resorting to spin-symmetry breaking. Our results show that self-consistent Green's function many-body theory offers a viable route to describing strong correlations while remaining within a computationally tractable single-particle formalism.
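The damped fixed-point solution of the Dyson equation described above can be sketched on a toy one-level model. The quadratic self-energy below merely stands in for the true second-order (GF2) diagrams, and all parameter values are illustrative:

```python
import numpy as np

def dyson_fixed_point(eps=1.0, u=0.5, beta=10.0, n_freq=256, tol=1e-10):
    """Damped fixed-point iteration of the Dyson equation on a toy model."""
    # Fermionic Matsubara frequencies i*omega_n = i*(2n + 1)*pi/beta.
    iw = 1j * (2 * np.arange(n_freq) + 1) * np.pi / beta
    sigma = np.zeros(n_freq, dtype=complex)
    for it in range(500):
        g = 1.0 / (iw - eps - sigma)            # Dyson equation
        sigma_new = u**2 * g                    # model self-energy update
        if np.max(np.abs(sigma_new - sigma)) < tol:
            return g, sigma_new, it
        sigma = 0.5 * sigma + 0.5 * sigma_new   # damping stabilizes the loop
    return g, sigma, it

g, sigma, n_it = dyson_fixed_point()
```

At convergence the Green's function and self-energy satisfy the Dyson equation to the requested tolerance; the actual GF2 method additionally Fourier-transforms between imaginary time and imaginary frequency when building the self-energy.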
Dual-process models of health-related behaviour and cognition: a review of theory.
Houlihan, S
2018-03-01
The aim of this review was to synthesise a spectrum of theories incorporating dual-process models of health-related behaviour. Review of theory, adapted loosely from Cochrane-style systematic review methodology. Inclusion criteria were specified to identify all relevant dual-process models that explain decision-making in the context of decisions made about human health. Data analysis took the form of iterative template analysis (adapted from the conceptual synthesis framework used in other reviews of theory), and in this way theories were synthesised on the basis of shared theoretical constructs and causal pathways. Analysis and synthesis proceeded in turn, instead of moving uni-directionally from analysis of individual theories to synthesis of multiple theories. Namely, the reviewer considered and reconsidered individual theories and theoretical components in generating the narrative synthesis' main findings. Drawing on systematic review methodology, 11 electronic databases were searched for relevant dual-process theories. After de-duplication, 12,198 records remained. Screening of title and abstract led to the exclusion of 12,036 records, after which 162 full-text records were assessed. Of those, 21 records were included in the review. Moving back and forth between analysis of individual theories and the synthesis of theories grouped on the basis of theme or focus yielded additional insights into the orientation of a theory to an individual. Theories could be grouped in part on their treatment of an individual as an irrational actor, a social actor, an actor in a physical environment or a self-regulated actor.
Synthesising identified theories into a general dual-process model of health-related behaviour indicated that such behaviour is the result of both propositional and unconscious reasoning driven by an individual's response to internal cues (such as heuristics, attitude and affect), physical cues (social and physical environmental stimuli) as well as regulating factors (such as habit) that mediate between them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.
This distribution of the Virtual Environment for Reactor Applications includes selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel-performance, and coupled neutronics-thermal-hydraulics problems. The infrastructure components provide a simplified common user input capability and support physics integration with data transfer and coupled-physics iterative solution algorithms.
Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis
NASA Astrophysics Data System (ADS)
Jiao, Yujian; Wang, Li-Lian; Huang, Can
2016-01-01
The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0, 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and of fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N^3) operations and O(N^2) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
NASA Astrophysics Data System (ADS)
Escourbiac, F.; Richou, M.; Guigon, R.; Constans, S.; Durocher, A.; Merola, M.; Schlosser, J.; Riccardi, B.; Grosman, A.
2009-12-01
Experience has shown that a critical part of the high-heat-flux (HHF) plasma-facing component (PFC) is the armour-to-heat-sink bond. An experimental study was performed in order to define acceptance criteria with regard to the thermal-hydraulic and fatigue performance of the International Thermonuclear Experimental Reactor (ITER) divertor PFCs. This study, which includes the manufacturing of samples with calibrated artificial defects relevant to the divertor design, is reported in this paper. In particular, it was concluded that defects detectable with non-destructive examination (NDE) techniques appeared to be acceptable during HHF experiments relevant to the heat fluxes expected in the ITER divertor. On the basis of these results, a set of acceptance criteria was proposed and applied to the European vertical target medium-size qualification prototype: 98% of the inspected carbon fibre composite (CFC) monoblocks and 100% of the tungsten (W) monoblock and flat-tile elements (i.e. 80% of the full units) were declared acceptable.
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR can significantly reduce patient dose, provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and allows detailed models of photon transport and detection physics to be included, to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and the modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
Total recall in distributive associative memories
NASA Technical Reports Server (NTRS)
Danforth, Douglas G.
1991-01-01
Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.
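The claim that the one-step learning rule is the inverse of the activation function can be sketched for a simple heteroassociative memory with tanh outputs. This is an illustrative reading, not the paper's exact construction; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-step learning as the inverse of the activation (illustrative sketch):
# for a memory with recall y = f(W^T x), storing the pairs (x, y) in a single
# step amounts to solving X W = f^{-1}(Y). Here f = tanh, so f^{-1} = arctanh.
X = rng.choice([-1.0, 1.0], size=(20, 50))   # 20 input patterns, dimension 50
Y = rng.uniform(-0.8, 0.8, size=(20, 10))    # 20 target outputs, dimension 10

# Least-squares solve of X W = arctanh(Y); exact here since X has full row rank.
W, *_ = np.linalg.lstsq(X, np.arctanh(Y), rcond=None)
recall = np.tanh(X @ W)                       # one-step recall of the targets
```

With full-row-rank inputs the one-step rule reproduces the stored targets exactly, which is the role the iterative error-correction procedure plays asymptotically in the abstract's argument.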
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Harada, Kyouji; Morimoto, Munenori; Sakakihara, Michio
2004-03-01
Several preconditioned iterative methods reported in the literature have been used to improve the convergence rate of the Gauss-Seidel method. In this article, on the basis of nonnegative matrix theory, comparisons between several splittings for such preconditioned matrices are derived. Simple numerical examples are also given.
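A minimal numerical illustration of this idea, assuming a Gunawardena-type preconditioner P = I + S built from the negated first superdiagonal (one common choice in this literature; the test matrix and sizes are invented for the example):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=20000):
    """Plain Gauss-Seidel sweeps; returns the iterate and the sweep count."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k
    return x, max_iter

# Tridiagonal M-matrix with unit diagonal (illustrative, not from the paper).
n = 30
A = np.eye(n) + np.diag(np.full(n - 1, -0.5), 1) + np.diag(np.full(n - 1, -0.5), -1)
b = np.ones(n)

# Gunawardena-type preconditioner P = I + S, with S the negated first
# superdiagonal of A; Gauss-Seidel is then applied to (P A) x = P b.
S = -np.diag(np.diag(A, 1), 1)
P = np.eye(n) + S

x_plain, it_plain = gauss_seidel(A, b)
x_pre, it_pre = gauss_seidel(P @ A, P @ b)
```

Both iterations converge to the same solution, and for M-matrices of this kind the preconditioned sweep typically needs noticeably fewer iterations, which is the effect the splitting comparisons in the paper quantify.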
UserTesting.com: A Tool for Usability Testing of Online Resources
ERIC Educational Resources Information Center
Koundinya, Vikram; Klink, Jenna; Widhalm, Melissa
2017-01-01
Extension educators are increasingly using online resources in their program design and delivery. Usability testing is essential for ensuring that these resources are relevant and useful to learners. On the basis of our experiences with iteratively developing products using a testing service called UserTesting, we promote the use of fee-based…
Foldover-free shape deformation for biomedicine.
Yu, Hongchuan; Zhang, Jian J; Lee, Tong-Yee
2014-04-01
Shape deformation as a fundamental geometric operation underpins a wide range of applications, from geometric modelling, medical imaging to biomechanics. In medical imaging, for example, to quantify the difference between two corresponding images, 2D or 3D, one needs to find the deformation between both images. However, such deformations, particularly deforming complex volume datasets, are prone to the problem of foldover, i.e. during deformation, the required property of one-to-one mapping no longer holds for some points. Despite numerous research efforts, the construction of a mathematically robust foldover-free solution subject to positional constraints remains open. In this paper, we address this challenge by developing a radial basis function-based deformation method. In particular we formulate an effective iterative mechanism which ensures the foldover-free property is satisfied all the time. The experimental results suggest that the resulting deformations meet the internal positional constraints. In addition to radial basis functions, this iterative mechanism can also be incorporated into other deformation approaches, e.g. B-spline based FFDs, to develop different deformable approaches for various applications.
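A minimal 2D sketch of RBF-based deformation subject to positional constraints. The Gaussian kernel and all point data are illustrative, and the paper's foldover-free iteration is not reproduced here:

```python
import numpy as np

def rbf_deform(ctrl, disp, pts, eps=1.0):
    """Interpolate prescribed control-point displacements with Gaussian RBFs."""
    phi = lambda r: np.exp(-(eps * r) ** 2)            # Gaussian kernel
    # Solve for weights so the deformation matches disp exactly at ctrl.
    d = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), disp)
    # Evaluate the displacement field at arbitrary points and deform them.
    d_pts = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    return pts + phi(d_pts) @ w

# Four control points on a unit square with prescribed displacements.
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
disp = np.array([[0.1, 0.0], [0.0, 0.1], [0.0, 0.0], [-0.1, 0.0]])
moved = rbf_deform(ctrl, disp, ctrl)
```

Because the Gaussian kernel matrix is positive definite for distinct points, the interpolated deformation satisfies the positional constraints exactly; the paper's contribution is the additional iteration that keeps such a field one-to-one.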
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in variational calculations of H3, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm^-1) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency of "self-healing" of islands has been observed.
NASA Astrophysics Data System (ADS)
Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel
2014-01-01
An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
Physics design of the injector source for ITER neutral beam injector (invited).
Antoni, V; Agostinetti, P; Aprile, D; Cavenago, M; Chitarin, G; Fonnesu, N; Marconato, N; Pilan, N; Sartori, E; Serianni, G; Veltri, P
2014-02-01
Two Neutral Beam Injectors (NBI) are foreseen to provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires the realization of a full-scale prototype, to be tested and optimized at the Test Facility under construction in Padova (Italy). The beam source is the key component of the system, and the design of the multi-grid accelerator is the goal of a multi-national collaborative effort. In particular, beam steering is a challenging aspect, being a tradeoff between the requirements of the optics and those of real grids with finite thickness, subject to thermo-mechanical constraints due to cooling needs and the presence of permanent magnets. In the paper, a review of the accelerator physics and an overview of the whole R&D physics program aimed at the development of the injector source are presented.
Ultra-Low-Dose Fetal CT With Model-Based Iterative Reconstruction: A Prospective Pilot Study.
Imai, Rumi; Miyazaki, Osamu; Horiuchi, Tetsuya; Asano, Keisuke; Nishimura, Gen; Sago, Haruhiko; Nosaka, Shunsuke
2017-06-01
Prenatal diagnosis of skeletal dysplasia by means of 3D skeletal CT examination is highly accurate. However, it carries a risk of fetal exposure to radiation. Model-based iterative reconstruction (MBIR) technology can reduce radiation exposure; however, to our knowledge, the lower limit of an optimal dose is currently unknown. The objectives of this study are to establish ultra-low-dose fetal CT as a method for prenatal diagnosis of skeletal dysplasia and to evaluate the appropriate radiation dose for ultra-low-dose fetal CT. Relationships between tube current and image noise in adaptive statistical iterative reconstruction and MBIR were examined using a 32-cm CT dose index (CTDI) phantom. On the basis of the results of this examination and the recommended methods for the MBIR option and the known relationship between noise and tube current for filtered back projection, as represented by the expression SD ∝ (mA)^-0.5, the lower limit of the optimal dose in ultra-low-dose fetal CT with MBIR was set. The diagnostic power of the CT images obtained using the aforementioned scanning conditions was evaluated, and the radiation exposure associated with ultra-low-dose fetal CT was compared with that noted in previous reports. Noise increased in nearly inverse proportion to the square root of the dose in adaptive statistical iterative reconstruction and in inverse proportion to the fourth root of the dose in MBIR. Ultra-low-dose fetal CT was found to have a volume CTDI of 0.5 mGy. Prenatal diagnosis was accurately performed on the basis of ultra-low-dose fetal CT images that were obtained using this protocol. The level of fetal exposure to radiation was 0.7 mSv. The use of ultra-low-dose fetal CT with MBIR led to a substantial reduction in radiation exposure, compared with the CT imaging method currently used at our institution, but it still enabled diagnosis of skeletal dysplasia without reducing diagnostic power.
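The two noise-dose scalings reported above imply very different dose-reduction headroom at equal noise; a quick illustrative check:

```python
# Reported scalings: ASIR noise ~ dose^-0.5, MBIR noise ~ dose^-0.25.
# If a noise increase of 2x is acceptable, the tolerable dose reduction
# factor is noise_factor^(1/exponent) in each case (illustrative arithmetic).
noise_factor = 2.0
asir_dose_reduction = noise_factor ** (1 / 0.5)    # square-root law -> 4x
mbir_dose_reduction = noise_factor ** (1 / 0.25)   # fourth-root law -> 16x
```

The fourth-root behaviour of MBIR is what makes a 0.5 mGy volume CTDI protocol plausible at diagnostic image quality.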
Generalized exact holographic mapping with wavelets
NASA Astrophysics Data System (ADS)
Lee, Ching Hua
2017-12-01
The idea of renormalization and scale invariance is pervasive across disciplines. It has not only drawn numerous surprising connections between physical systems under the guise of holographic duality, but has also inspired the development of wavelet theory now widely used in signal processing. Synergizing on these two developments, we describe in this paper a generalized exact holographic mapping that maps a generic N -dimensional lattice system to a (N +1 )-dimensional holographic dual, with the emergent dimension representing scale. In previous works, this was achieved via the iterations of the simplest of all unitary mappings, the Haar mapping, which fails to preserve the form of most Hamiltonians. By taking advantage of the full generality of biorthogonal wavelets, our new generalized holographic mapping framework is able to preserve the form of a large class of lattice Hamiltonians. By explicitly separating features that are fundamentally associated with the physical system from those that are basis specific, we also obtain a clearer understanding of how the resultant bulk geometry arises. For instance, the number of nonvanishing moments of the high-pass wavelet filter is revealed to be proportional to the radius of the dual anti-de Sitter space geometry. We conclude by proposing modifications to the mapping for systems with generic Fermi pockets.
Commissioning and Plans for the NSTX-U Facility
NASA Astrophysics Data System (ADS)
Ono, Masayuki; NSTX-U Team
2016-10-01
The National Spherical Torus Experiment - Upgrade (NSTX-U) has started its first year of plasma operations after the successful completion of the CD-4 milestones. The unique operating regimes of NSTX-U can contribute to several important issues in the physics of burning plasmas and help optimize the performance of ITER. A major mission of NSTX-U is also to develop the physics and technology basis for an ST-based Fusion Nuclear Science Facility (FNSF). The new center stack will provide a toroidal field of 1 T at a major radius of 0.93 m, which should enable a plasma current of up to 2 MA for 5 s. A much more tangential 2nd NBI system, with 2-3 times higher current-drive efficiency compared to the 1st NBI system, has been installed. NSTX-U is designed to attain the 100% non-inductive operation needed for a compact FNSF design. With higher fields and heating powers of 14 MW, the NSTX-U plasma collisionality will be reduced by a factor of 3-6 to help explore the trend in transport towards the low-collisionality FNSF regime. If the favorable trends observed on NSTX hold at low collisionality, high fusion neutron fluences could be achievable in very compact ST devices.
ERIC Educational Resources Information Center
Mikula, Brendon D.; Heckler, Andrew F.
2017-01-01
We propose a framework for improving accuracy, fluency, and retention of basic skills essential for solving problems relevant to STEM introductory courses, and implement the framework for the case of basic vector math skills over several semesters in an introductory physics course. Using an iterative development process, the framework begins with…
Pump-dump iterative squeezing of vibrational wave packets.
Chang, Bo Y; Sola, Ignacio R
2005-12-22
The free motion of a nonstationary vibrational wave packet in an electronic potential is a source of interesting quantum properties. In this work we propose an iterative scheme that allows continuous stretching and squeezing of a wave packet in the ground or in an excited electronic state, by switching the wave function between both potentials with π pulses at certain times. Using a simple model of displaced harmonic oscillators and delta pulses, we derive the analytical solution and the conditions for its possible implementation and optimization in different molecules and electronic states. We show that the main constraining parameter is the pulse bandwidth. Although in principle the degree of squeezing (or stretching) is not bounded, the physical resources increase quadratically with the number of iterations, while the achieved squeezing increases only linearly.
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous-time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piecewise-constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.
Analytic approximation for random muffin-tin alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, R.; Gray, L.J.; Kaplan, T.
1983-03-15
The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.
NASA Astrophysics Data System (ADS)
Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.
2016-04-01
In ITER, fission chambers (FCs) will deliver timestamped measurements of neutron source strength and fusion power to aid in assessing the machine's functional performance. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, the ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with the guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real time on the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and the RIO/FlexRIO design methodology.
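One simple counting-mode approach is rising-edge threshold crossing. The sketch below is illustrative only, with a synthetic detector trace; it is not the FPGA algorithm deployed by IO:

```python
import numpy as np

def count_pulses(signal, threshold):
    """Count rising edges through the threshold (simple counting-mode logic)."""
    above = signal > threshold
    # A pulse is registered where the trace crosses from below to above.
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic detector trace: flat baseline with five well-separated pulses.
t = np.arange(1000)
signal = np.zeros_like(t, dtype=float)
for center in (100, 300, 500, 700, 900):
    signal += np.exp(-0.5 * ((t - center) / 5.0) ** 2)

n_counts = count_pulses(signal, threshold=0.5)
```

At high count rates, pulse pile-up breaks this simple logic, which is one reason a real fission-chamber system implements several counting methods in parallel.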
Analysis of drift effects on the tokamak power scrape-off width using SOLPS-ITER
NASA Astrophysics Data System (ADS)
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; Makowski, M. A.; Mordijck, S.; Rozhansky, V. A.; Senichenkov, I. Yu; Voskoboynikov, S. P.
2016-12-01
SOLPS-ITER, a comprehensive 2D scrape-off layer modeling package, is used to examine the physical mechanisms that set the scrape-off width (λ_q) for inter-ELM power exhaust. Guided by Goldston's heuristic drift (HD) model, which shows remarkable quantitative agreement with experimental data, this research examines drift effects on λ_q in a DIII-D H-mode magnetic equilibrium. As a numerical expedient, a low target recycling coefficient of 0.9 is used in the simulations, resulting in outer target plasma that is sheath limited instead of conduction limited as in the experiment. Scrape-off layer (SOL) particle diffusivity (D_SOL) is scanned from 1 to 0.1 m2 s-1. Across this diffusivity range, outer divertor heat flux is dominated by a narrow (~3-4 mm when mapped to the outer midplane) electron convection channel associated with thermoelectric current through the SOL from outer to inner divertor. An order-unity up-down ion pressure asymmetry allows net ion drift flux across the separatrix, facilitated by an artificial mechanism that mimics the anomalous electron transport required for overall ambipolarity in the HD model. At D_SOL = 0.1 m2 s-1, the density fall-off length is similar to the electron temperature fall-off length, as predicted by the HD model and as seen experimentally. This research represents a step toward a deeper understanding of the power scrape-off width, and serves as a basis for extending fluid modeling to more experimentally relevant, high-collisionality regimes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassab, A.J.; Pollard, J.E.
An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in the measured surface temperatures. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
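The Newton-Raphson driver with a Broyden update described above can be sketched on a small test system. The residual function here is invented for illustration and is not the IR-CAT boundary-condition residual:

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=100):
    """Newton-type iteration with Broyden's rank-one Jacobian update."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    # Initialize with a forward-difference Jacobian (assumes f is smooth).
    h = 1e-6
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    for _ in range(max_iter):
        dx = -np.linalg.solve(J, fx)     # Newton step with current Jacobian
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        # Broyden update: J <- J + ((df - J dx) dx^T) / (dx . dx),
        # avoiding a fresh Jacobian evaluation at every iteration.
        df = fx_new - fx
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Small test system: x^2 + y^2 = 2 and x = y, with a root at (1, 1)
# near the starting guess (analogous to an initial cavity estimate).
root = broyden_solve(lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
                     [0.8, 1.3])
```

The appeal in an inverse-geometry setting is exactly this: each residual evaluation requires a full boundary-element solve, and the Broyden update amortizes the cost of rebuilding the Jacobian.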
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
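The θ-implicit time-integration scheme named above can be sketched on the scalar model problem du/dt = λu, for which the implicit stage has a closed-form solve; θ = 0 is forward Euler, θ = 1 backward Euler, and θ = 0.5 the second-order Crank-Nicolson scheme. The step size and λ below are illustrative only.

```python
def theta_step(u, dt, lam, theta):
    """One step of the theta-implicit scheme for du/dt = lam*u.

    Solves u_new = u + dt*[(1-theta)*lam*u + theta*lam*u_new];
    for this linear model problem the implicit solve is closed-form.
    """
    return u * (1.0 + (1.0 - theta) * dt * lam) / (1.0 - theta * dt * lam)

# Integrate du/dt = -u from u(0) = 1 to t = 1 (exact answer: e**-1).
results = {}
for theta in (0.0, 0.5, 1.0):
    u = 1.0
    for _ in range(100):
        u = theta_step(u, 0.01, -1.0, theta)
    results[theta] = u
# results[0.5] (Crank-Nicolson) is markedly closest to exp(-1)
```

In the CCM the same one-parameter family is applied to the semi-discretized momentum system, with the implicit stage handled by the quasi-Newton procedure rather than a closed-form division.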
Analysis of drift effects on the tokamak power scrape-off width using SOLPS-ITER
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; ...
2016-11-02
Burning plasma regime for the Fusion-Fission Research Facility
NASA Astrophysics Data System (ADS)
Zakharov, Leonid E.
2010-11-01
The basic aspects of burning plasma regimes of the Fusion-Fission Research Facility (FFRF, R/a=4/1 m/m, Ipl=5 MA, Btor=4-6 T, P^DT=50-100 MW, P^fission=80-4000 MW, 1 m thick blanket), which is suggested as the next-step device for the Chinese fusion program, are presented. The mission of FFRF is to advance magnetic fusion to the level of a stationary neutron source and to create a technical, scientific, and technology basis for the utilization of high-energy fusion neutrons for the needs of nuclear energy and technology. FFRF will rely as much as possible on the ITER design. Thus, the magnetic system, especially the TFC, will take advantage of ITER experience. The TFC will use the same superconductor as ITER. The plasma regimes will represent an extension of the stationary plasma regimes on the HT-7 and EAST tokamaks at ASIPP. Both inductive discharges and stationary non-inductive Lower Hybrid Current Drive (LHCD) will be possible. FFRF relies strongly on new Lithium Wall Fusion (LiWF) plasma regimes, to be developed on NSTX, HT-7 and EAST in parallel with the design work. This regime will eliminate a number of uncertainties still unresolved in the ITER project. Well-controlled, hours-long inductive current drive operation at P^DT=50-100 MW is predicted.
Scientific study of data analysis
NASA Technical Reports Server (NTRS)
Wu, S. T.
1990-01-01
We present a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized and the accuracy and numerical instability are discussed. On the basis of this investigation, we claim that the two methods do resemble each other qualitatively.
A polygon-based modeling approach to assess exposure of resources and assets to wildfire
Matthew P. Thompson; Joe Scott; Jeffrey D. Kaiden; Julie W. Gilbertson-Day
2013-01-01
Spatially explicit burn probability modeling is increasingly applied to assess wildfire risk and inform mitigation strategy development. Burn probabilities are typically expressed on a per-pixel basis, calculated as the number of times a pixel burns divided by the number of simulation iterations. Spatial intersection of highly valued resources and assets (HVRAs) with...
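The per-pixel burn-probability calculation described above (times burned divided by number of simulation iterations) amounts to one line of array arithmetic; the simulated burn maps below are invented for illustration.

```python
import numpy as np

def burn_probability(burn_maps):
    """Per-pixel burn probability from Monte Carlo wildfire simulations.

    burn_maps: boolean array of shape (n_iterations, ny, nx), each slice
    marking which pixels burned in one simulated fire season. Returns
    times-burned / n_iterations for every pixel.
    """
    burn_maps = np.asarray(burn_maps, dtype=bool)
    return burn_maps.mean(axis=0)

# Toy example: 4 iterations on a 2x2 landscape (hypothetical data).
sims = np.array([[[1, 0], [0, 0]],
                 [[1, 1], [0, 0]],
                 [[1, 0], [0, 0]],
                 [[1, 1], [0, 1]]], dtype=bool)
bp = burn_probability(sims)  # top-left pixel burned in all 4 runs -> 1.0
```

Intersecting the resulting probability raster with HVRA footprints (polygons rather than pixels, the subject of the study) is then a GIS overlay step on top of this array.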
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier's symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.
Inferring Pre-shock Acoustic Field From Post-shock Pitot Pressure Measurement
NASA Astrophysics Data System (ADS)
Wang, Jian-Xun; Zhang, Chao; Duan, Lian; Xiao, Heng; Virginia Tech Team; Missouri Univ of Sci; Tech Team
2017-11-01
Linear interaction analysis (LIA) and iterative ensemble Kalman method are used to convert post-shock Pitot pressure fluctuations to static pressure fluctuations in front of the shock. The LIA is used as the forward model for the transfer function associated with a homogeneous field of acoustic waves passing through a nominally normal shock wave. The iterative ensemble Kalman method is then employed to infer the spectrum of upstream acoustic waves based on the post-shock Pitot pressure measured at a single point. Several test cases with synthetic and real measurement data are used to demonstrate the merits of the proposed inference scheme. The study provides the basis for measuring tunnel freestream noise with intrusive probes in noisy supersonic wind tunnels.
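A minimal sketch of the iterative ensemble Kalman update used for such inference problems, assuming a linear stand-in forward model rather than the LIA transfer function; the matrix A, noise level, and ensemble size below are all hypothetical.

```python
import numpy as np

def eki_step(U, G, y, gamma, rng):
    """One iteration of ensemble Kalman inversion (EKI).

    U: (n_ens, n_par) parameter ensemble; G: forward model; y: data;
    gamma: observation noise std. Each member is nudged toward the data
    using ensemble covariances only -- no adjoint or gradient of G is
    needed, which is the appeal of the method for black-box forward
    models such as a shock transfer function.
    """
    F = np.array([G(u) for u in U])              # forward-model outputs (n_ens, n_obs)
    du = U - U.mean(axis=0)
    df = F - F.mean(axis=0)
    m = len(U)
    C_uf = du.T @ df / (m - 1)                   # parameter-output cross-covariance
    C_ff = df.T @ df / (m - 1)                   # output covariance
    K = C_uf @ np.linalg.inv(C_ff + gamma**2 * np.eye(F.shape[1]))
    y_pert = y + gamma * rng.standard_normal(F.shape)  # perturbed observations
    return U + (y_pert - F) @ K.T

# Hypothetical linear forward model y = A u.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
u_true = np.array([0.7, -0.4])
y = A @ u_true
U = rng.standard_normal((100, 2))
for _ in range(20):
    U = eki_step(U, G=lambda u: A @ u, y=y, gamma=1e-3, rng=rng)
u_est = U.mean(axis=0)                           # approaches u_true
```

In the wind-tunnel application the ensemble members would parameterize the upstream acoustic spectrum and G would be the LIA map to post-shock Pitot pressure.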
NASA Astrophysics Data System (ADS)
Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team
2013-07-01
The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.
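The external consistency iterations between transport and equilibrium solvers described above can be caricatured as a relaxed fixed-point cycle between two functions. The toy pair below is hypothetical and unrelated to ASTRA or SPIDER; it only illustrates why under-relaxation helps stabilize such coupling loops.

```python
import math

def coupled_solve(f_transport, f_equilibrium, x0, y0, relax=0.5,
                  tol=1e-10, max_iter=200):
    """Fixed-point ('external') iteration between two solvers.

    f_transport(y) -> x  : stand-in for the transport step, which needs
                           the current equilibrium y;
    f_equilibrium(x) -> y: stand-in for the equilibrium step, which
                           needs the current profiles x.
    Under-relaxation (relax < 1) is a common way to stabilize such
    cycles when plain alternation oscillates or diverges.
    """
    x, y = x0, y0
    for i in range(max_iter):
        x_new = f_transport(y)
        y_new = f_equilibrium(x_new)
        change = abs(x_new - x) + abs(y_new - y)
        x = x + relax * (x_new - x)      # relaxed acceptance of the update
        y = y + relax * (y_new - y)
        if change < tol:
            return x, y, i
    raise RuntimeError("coupling iteration did not converge")

# Toy coupled pair: x = cos(y), y = x/2 (illustration only).
x, y, iters = coupled_solve(lambda y: math.cos(y), lambda x: x / 2,
                            x0=1.0, y0=0.0)
# At convergence x satisfies x = cos(x/2)
```

Schemes like the one cited in the abstract go further by building the consistency condition into the discretization itself, removing the need for these external cycles altogether.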
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
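The LU-SGS factorization mentioned above alternates lower- and upper-triangular sweeps. A minimal sketch using symmetric Gauss-Seidel on a small dense system (illustrative only; production LU-SGS exploits the sparsity and ordering of the flow grid instead of forming a matrix):

```python
import numpy as np

def sgs_solve(A, b, sweeps=100, tol=1e-12):
    """Symmetric Gauss-Seidel: a forward then a backward sweep per
    iteration, the building block of LU-SGS-type implicit solvers.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):                 # forward (lower-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in range(n - 1, -1, -1):     # backward (upper-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Diagonally dominant toy system, a stand-in for one linearized
# implicit update of the macroscopic variables.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sgs_solve(A, b)  # residual driven to near machine precision
```

In the implicit UGKS the same sweeps are applied to both the macroscopic equations and the discretized distribution-function update, with a pseudo-time step chosen for convergence speed rather than accuracy.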
High fidelity quantum gates with vibrational qubits.
Berrios, Eduardo; Gruebele, Martin; Shyshlov, Dmytro; Wang, Lei; Babikov, Dmitri
2012-11-26
Physical implementation of quantum gates acting on qubits does not achieve a perfect fidelity of 1. The actual output qubit may not match the targeted output of the desired gate. According to theoretical estimates, intrinsic gate fidelities >99.99% are necessary so that error correction codes can be used to achieve perfect fidelity. Here we test what fidelity can be accomplished for a CNOT gate executed by a shaped ultrafast laser pulse interacting with vibrational states of the molecule SCCl₂. This molecule has been used as a test system for low-fidelity calculations before. To make our test more stringent, we include vibrational levels that do not encode the desired qubits but are close enough in energy to interfere with population transfer by the laser pulse. We use two complementary approaches: optimal control theory determines what the best possible pulse can do; a more constrained physical model calculates what an experiment likely can do. Optimal control theory finds pulses with fidelity >0.9999, in excess of the quantum error correction threshold, with 8 × 10⁴ iterations. On the other hand, the physical model achieves only 0.9992 after 8 × 10⁴ iterations. Both calculations converge as an inverse power law toward unit fidelity after >10² iterations/generations. In principle, the fidelities necessary for quantum error correction are reachable with qubits encoded by molecular vibrations. In practice, it will be challenging with current laboratory instrumentation because of slow convergence past fidelities of 0.99.
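The quoted inverse-power-law approach to unit fidelity, 1 − F(n) ≈ c·n^(−p), can be checked with a log-log least-squares fit; the data below are synthetic, with a known exponent, purely to demonstrate the fitting step.

```python
import numpy as np

def fit_power_law(iterations, fidelities):
    """Fit 1 - F(n) = c * n**(-p) by least squares in log-log space.

    Returns (p, c): the infidelity falls linearly in log-log coordinates
    if convergence really is an inverse power law, and p is the negated
    slope of that line.
    """
    n = np.asarray(iterations, dtype=float)
    infidelity = 1.0 - np.asarray(fidelities, dtype=float)
    slope, intercept = np.polyfit(np.log(n), np.log(infidelity), 1)
    return -slope, np.exp(intercept)

# Synthetic convergence data with a known exponent p = 1.5.
n = np.array([1e2, 1e3, 1e4, 8e4])
F = 1.0 - 0.5 * n**-1.5
p, c = fit_power_law(n, F)  # recovers p = 1.5, c = 0.5
```

Applied to real optimization traces, a fit like this quantifies the "slow convergence past fidelities of 0.99" noted in the abstract: with p fixed, the iterations needed to gain one more nine grow by a factor of 10^(1/p).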
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.
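The linear, non-iterative character of DOBR can be sketched as a single least-squares solve against precomputed basis responses. The 1D Gaussian-blur system and cosine basis below are illustrative stand-ins, not the Hartley/Walsh SPECT setup of the study.

```python
import numpy as np

def dobr_restore(g, H, basis):
    """Discrete-orthogonal-basis restoration, sketched.

    g: measured data (m,); H: shift-variant system matrix (m, n);
    basis: (n, k) matrix of orthogonal basis vectors. The object is
    modeled as f = basis @ a, and the coefficients a come from one
    linear least-squares solve against the precomputed responses
    H @ basis -- linear and non-iterative, as the abstract emphasizes.
    """
    responses = H @ basis                       # system response to each basis vector
    a, *_ = np.linalg.lstsq(responses, g, rcond=None)
    return basis @ a

# Toy 1D 'imaging system': a Gaussian blur matrix; basis: the first k
# discrete cosine vectors (orthogonal, standing in for Hartley/Walsh).
rng = np.random.default_rng(1)
n, k = 64, 16
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
basis = np.array([np.cos(np.pi * q * (idx + 0.5) / n) for q in range(k)]).T
basis /= np.linalg.norm(basis, axis=0)
f_true = basis @ rng.standard_normal(k)         # object lying in the basis span
f_rec = dobr_restore(H @ f_true, H, basis)      # recovers f_true
```

Because the solve is linear, its noise behavior can be analyzed in closed form, which is what makes the statistical-error comparison between basis sets in the study tractable.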
NASA Astrophysics Data System (ADS)
Arteaga, Santiago Egido
1998-12-01
The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of pressure-space bases and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm it experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so they are competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems.
We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor to improve performance. Implementation details on a CM-5 are presented.
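The contrast between Oseen-style (lagged-coefficient Picard) and Newton linearizations can be shown on a scalar model with a quadratic nonlinearity, mimicking the convection term u·∇u; the coefficients below are arbitrary illustrations, not a discretized flow problem.

```python
def solve_quadratic_nonlinearity(a, b, c, u0=0.0, tol=1e-12, max_iter=100):
    """Picard (Oseen-style) vs Newton on the scalar model a*u**2 + b*u = c.

    The Picard step freezes one factor of the quadratic term at the
    previous iterate, giving a linear solve per step -- the scalar
    analog of Oseen's treatment of u.grad(u). Newton linearizes fully.
    """
    def picard(u):
        return c / (a * u + b)            # solve (a*u_k + b) * u_new = c

    def newton(u):
        r = a * u * u + b * u - c         # residual
        return u - r / (2 * a * u + b)    # full linearization

    hist = {}
    for name, step in (("picard", picard), ("newton", newton)):
        u, its = u0, 0
        while abs(a * u * u + b * u - c) > tol and its < max_iter:
            u, its = step(u), its + 1
        hist[name] = (u, its)
    return hist

# b dominant (the scalar analog of large viscosity): both converge,
# Newton quadratically and Picard only linearly.
out = solve_quadratic_nonlinearity(a=0.2, b=1.0, c=1.0)
```

The damping idea from the abstract exploits exactly this quadratic structure: along a given step direction the residual is a polynomial in the damping factor, so the optimal factor can be found essentially for free.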
Formation and termination of runaway beams in ITER disruptions
NASA Astrophysics Data System (ADS)
Martín-Solís, J. R.; Loarte, A.; Lehnen, M.
2017-06-01
A self-consistent analysis of the relevant physics regarding the formation and termination of runaway beams during mitigated disruptions by Ar and Ne injection is presented for selected ITER scenarios, with the aim of improving our understanding of the physics underlying the runaway heat loads onto the plasma facing components (PFCs) and identifying open issues for developing and assessing disruption mitigation schemes for ITER. This is carried out by means of simplified models, but still retaining sufficient details of the key physical processes, including: (a) the expected dominant runaway generation mechanisms (avalanche and primary runaway seeds: Dreicer and hot tail runaway generation, tritium decay and Compton scattering of γ rays emitted by the activated wall), (b) effects associated with the plasma and runaway current density profile shape, and (c) corrections to the runaway dynamics to account for the collisions of the runaways with the partially stripped impurity ions, which are found to have strong effects leading to low runaway current generation and low energy conversion during current termination for mitigated disruptions by noble gas injection (particularly for Ne injection) for the shortest current quench times compatible with acceptable forces on the ITER vessel and in-vessel components (τ_res ~ 22 ms). For the case of long current quench times (τ_res ~ 66 ms), runaway beams up to ~10 MA can be generated during the disruption current quench and, if the termination of the runaway current is slow enough, the generation of runaways by the avalanche mechanism can play an important role, substantially increasing the energy deposited by the runaways onto the PFCs up to a few hundred MJ.
Mixed impurity (Ar or Ne) plus deuterium injection proves to be effective in controlling the formation of the runaway current during the current quench, even for the longest current quench times, as well as in decreasing the energy deposited on the runaway electrons during current termination.
NASA Astrophysics Data System (ADS)
Darbos, C.; Henderson, M.; Albajar, F.; Bigelow, T.; Bomcelli, T.; Chavan, R.; Denisov, G.; Farina, D.; Gandini, F.; Heidinger, R.; Goodman, T.; Hogge, J. P.; Kajiwara, K.; Kasugai, A.; Kern, S.; Kobayashi, N.; Oda, Y.; Ramponi, G.; Rao, S. L.; Rasmussen, D.; Rzesnicki, T.; Saibene, G.; Sakamoto, K.; Sauter, O.; Scherer, T.; Strauss, D.; Takahashi, K.; Zohm, H.
2009-11-01
A 26 MW Electron Cyclotron Heating and Current Drive (EC H&CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H&CD and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of the injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews are being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and the ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performance while reducing costs, simplifying interfaces, and anticipating technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by the IO and performed by the Domestic Agencies (DAs) to minimize risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization.
Still, a strong relationship between the IO and the DAs is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components, when several teams from several DAs will be involved together in tests on the ITER site.
ERIC Educational Resources Information Center
Ngai, Grace; Chan, Stephen C. F.; Leong, Hong Va; Ng, Vincent T. Y.
2013-01-01
This article presents the design and development of i*CATch, a construction kit for physical and wearable computing that was designed to be scalable, plug-and-play, and to provide support for iterative and exploratory learning. It consists of a standardized construction interface that can be adapted for a wide range of soft textiles or electronic…
Rowan, W L; Houshmandyar, S; Phillips, P E; Austin, M E; Beno, J H; Hubbard, A E; Khodak, A; Ouroua, A; Taylor, G
2016-11-01
Measurement of the electron cyclotron emission (ECE) is one of the primary diagnostics for electron temperature in ITER. In-vessel, in-vacuum, and quasi-optical antennas capture sufficient ECE to achieve large signal to noise with microsecond temporal resolution and high spatial resolution while maintaining polarization fidelity. Two similar systems are required. One views the plasma radially. The other is an oblique view. Both views can be used to measure the electron temperature, while the oblique is also sensitive to non-thermal distortion in the bulk electron distribution. The in-vacuum optics for both systems are subject to degradation as they have a direct view of the ITER plasma and will not be accessible for cleaning or replacement for extended periods. Blackbody radiation sources are provided for in situ calibration.
Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
In, Y.; Park, J. -K.; Jeon, Y. M.
Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding of non-axisymmetric field physics and its implications, in particular on resonant magnetic perturbation (RMP) physics and the power threshold (P th) for L-H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4×10⁻⁵ even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for RMP edge-localized-mode (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of the radial position of the lower X-point (i.e. R x = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint on the safety factor could be relaxed (q 95 = 5 ± 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of the ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during the ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3 rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.
Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks
NASA Astrophysics Data System (ADS)
In, Y.; Park, J.-K.; Jeon, Y. M.; Kim, J.; Park, G. Y.; Ahn, J.-W.; Loarte, A.; Ko, W. H.; Lee, H. H.; Yoo, J. W.; Juhn, J. W.; Yoon, S. W.; Park, H.; Physics Task Force in KSTAR, 3D
2017-11-01
An extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L-H transition. The n = 1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4 × 10-5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n = 1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x = 1.44+/- 0.02 m) proved to be quite critical to reach full n = 1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 +/- 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n = 1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the ‘wet’ areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.
Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks
In, Y.; Park, J. -K.; Jeon, Y. M.; ...
2017-08-24
Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4×10 –5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x =more » $$1.44\\pm 0.02\\,$$ m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 $$\\pm $$ 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.
NASA Astrophysics Data System (ADS)
Wiesen, S.; Köchl, F.; Belo, P.; Kotov, V.; Loarte, A.; Parail, V.; Corrigan, G.; Garzotti, L.; Harting, D.
2017-07-01
The integrated model JINTRAC is employed to assess the dynamic density evolution of the ITER baseline scenario when fuelled by discrete pellets. The consequences on the core confinement properties, α-particle heating due to fusion and the effect on the ITER divertor operation, taking into account the material limitations on the target heat loads, are discussed within the integrated model. Using the model one can observe that stable but cyclical operational regimes can be achieved for a pellet-fuelled ITER ELMy H-mode scenario with Q = 10 maintaining partially detached conditions in the divertor. It is shown that the level of divertor detachment is inversely correlated with the core plasma density due to α-particle heating, and thus depends on the density evolution cycle imposed by pellet ablations. The power crossing the separatrix to be dissipated depends on the enhancement of the transport in the pedestal region being linked with the pressure gradient evolution after pellet injection. The fuelling efficacy of the deposited pellet material is strongly dependent on the E × B plasmoid drift. It is concluded that integrated models like JINTRAC, if validated and supported by realistic physics constraints, may help to establish suitable control schemes of particle and power exhaust in burning ITER DT-plasma scenarios.
Scientific and technical challenges on the road towards fusion electricity
NASA Astrophysics Data System (ADS)
Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.
2017-10-01
The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap, as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant, DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of DT operation in ITER and the attainment of full performance, at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. 
For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.
Computation of optimal output-feedback compensators for linear time-invariant systems
NASA Technical Reports Server (NTRS)
Platzman, L. K.
1972-01-01
The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple-input multiple-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether it possesses desirable convergence properties and whether it can be used to determine the globally optimal f*.
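The Lyapunov-equation-based iteration described in this abstract can be illustrated with a generic damped gain-update scheme of the Levine–Athans type. This is a sketch, not the report's exact algorithm; the function names, damping factor and test system are illustrative, and the closed loop is assumed to remain stable at every iterate.

```python
import numpy as np

def lyap(A, Q):
    # Solve A X + X A^T + Q = 0 via the Kronecker (vectorization) identity.
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)

def output_feedback_iteration(A, B, C, Q, R, X0, F0, iters=200, alpha=0.5):
    """Iteratively improve a static output-feedback gain F (u = -F y) for the
    quadratic cost E[int (x'Qx + u'Ru) dt], with X0 the covariance of the
    initial state.  Each pass solves two Lyapunov equations; the closed loop
    A - B F C must stay stable for every iterate (the damped update helps)."""
    F = F0.copy()
    for _ in range(iters):
        Ac = A - B @ F @ C
        P = lyap(Ac.T, Q + C.T @ F.T @ R @ F @ C)   # cost Gramian
        S = lyap(Ac, X0)                            # state covariance Gramian
        Fn = np.linalg.solve(R, B.T @ P @ S @ C.T) @ np.linalg.inv(C @ S @ C.T)
        F = alpha * Fn + (1.0 - alpha) * F          # damped gain update
    return F, float(np.trace(P @ X0))               # gain and achieved cost
```

On a toy two-state system this iteration lowers the expected cost relative to the open-loop choice F = 0 while keeping the closed loop stable.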
OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.
Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L
2017-10-05
The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed for 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation.
Living as a Chameleon: Girls, Anger, and Mental Health
ERIC Educational Resources Information Center
van Daalen-Smith, Cheryl
2008-01-01
One's practice as a school nurse affords numerous privileges. One that stands out in my mind is the privilege of bearing witness to the lives of countless girls as they navigated their own aspirations and the expectations of the culture. The stories they iterated to me in my school nurse office form the basis for this discussion regarding the…
ERIC Educational Resources Information Center
School Science Review, 1984
1984-01-01
Discusses: (1) Brewster's angle in the elementary laboratory; (2) color mixing by computer; (3) computer iteration at A-level; (4) a simple probe for pressure measurement; (5) the measurement of distance using a laser; and (6) an activity on Archimedes' principle. (JN)
An assessment of coupling algorithms for nuclear reactor core physics simulations
Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...
2016-04-01
This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
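The gap between plain Picard iteration and Anderson acceleration that this paper quantifies can be seen on any contractive fixed-point map. The sketch below is a minimal, generic implementation, not the authors' reactor coupling code; the window size m and the scalar test problem are chosen purely for illustration.

```python
import numpy as np

def picard(g, x0, tol=1e-10, maxit=500):
    # Plain fixed-point (Picard) iteration: x_{k+1} = g(x_k).
    x = x0
    for k in range(maxit):
        xn = g(x)
        if np.linalg.norm(xn - x) < tol:
            return xn, k + 1
        x = xn
    return x, maxit

def anderson(g, x0, m=3, tol=1e-10, maxit=500):
    # Anderson acceleration with window m: extrapolate using the last
    # residuals f_k = g(x_k) - x_k via a small least-squares problem.
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, F = [], []                       # histories of iterates and residuals
    for k in range(maxit):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            return x, k + 1
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dF = np.array([F[i + 1] - F[i] for i in range(len(F) - 1)]).T
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma
        else:
            x = x + f                   # plain Picard step to bootstrap
    return x, maxit
```

On the classic test map x = cos(x), Anderson acceleration typically reaches the fixed point in far fewer iterations than Picard.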
NASA Astrophysics Data System (ADS)
Rozov, V.; Alekseev, A.
2015-08-01
The need to address a wide spectrum of engineering problems in ITER has driven the development of efficient tools for modeling the magnetic environment and the force interactions between the main components of the magnet system. The assessment of the operating window for the machine, determined by the electro-magnetic (EM) forces, and the check of the feasibility of particular scenarios play an important role in ensuring the safety of exploitation. Such analysis-powered prevention of damage forms an element of the Machine Operations and Investment Protection strategy. The corresponding analysis is a necessary step in the preparation of commissioning, which finalizes the construction phase. It must be supported by the development of efficient and robust simulators and multi-physics/multi-system integration of models. The developed numerical model of interactions in the ITER magnetic system, based on the use of pre-computed influence matrices, facilitated the immediate and complete assessment and systematic specification of EM loads on magnets in all foreseen operating regimes, their maximum values, envelopes and the most critical scenarios. The common principles of interaction in typical bilateral configurations have been generalized to asymmetric conditions, whether caused by the plasma or by the hardware, including asymmetric plasma events and magnetic system fault cases. The specification of loads is supported by the technology of functional approximation of nodal and distributed data by continuous patterns/analytical interpolants. The global model of interactions, together with the mesh-independent analytical format of output, provides a source of self-consistent and transferable data on the spatial distribution of the system of forces for assessments of the structural performance of the components, assemblies and supporting structures. 
The numerical model used is fully parametrized, which makes it very suitable for multi-variant and sensitivity studies (positioning, off-normal events, asymmetry, etc). The obtained results and matrices form a basis for a relatively simple and robust force processor as a specialized module of a global simulator for diagnostic, operational instrumentation, monitoring and control, as well as a scenario assessment tool. This paper gives an overview of the model, applied technique, assessed problems and obtained qualitative and quantitative results.
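The pre-computed influence-matrix idea described above can be reduced to a toy sketch: once the geometry-dependent coefficients are computed, the force on each coil is a bilinear form in the coil currents, so load assessment over many scenarios becomes fast matrix arithmetic. The matrix K and the currents below are random/hypothetical, not ITER data; the only physics retained is the action-reaction antisymmetry, which makes the net force on the system vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # toy system of four coils
K = rng.normal(size=(n, n))
K = K - K.T                             # antisymmetric influence matrix: K_ij = -K_ji
np.fill_diagonal(K, 0.0)                # no net self-force

def coil_forces(I, K):
    # F_i = I_i * sum_j K_ij I_j : bilinear in the currents, K precomputed once
    return I * (K @ I)

I = np.array([15.0, -12.0, 8.0, 5.0])   # hypothetical coil currents
F = coil_forces(I, K)
```

Because K is antisymmetric, the total force I·(K I) = Iᵀ K I is identically zero, a cheap consistency check of the kind such force processors can run for every scenario.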
NASA Astrophysics Data System (ADS)
Nanson, Gerald C.; Huang, He Qing
2018-02-01
Until recently, no universally agreed philosophical or scientific methodological framework had been proposed to guide the study of fluvial geomorphology. An understanding of river form and process requires an understanding of the principles that govern the behaviour and evolution of alluvial rivers at the most fundamental level. To date, investigations of such principles have followed four approaches: develop qualitative unifying theories that are usually untested; collect and examine data visually and statistically to define semi-quantitative relationships among variables; apply Newtonian theoretical and empirical mechanics in a reductionist manner; resolve the primary flow equations theoretically by assuming maximum or minimum outputs. Here we recommend not a fifth approach but an overarching philosophy to embrace all four: clarifying and formalising an understanding of the evolution of river channels and iterative directional changes in the context of the least action principle (LAP), the theoretical basis of variational mechanics. LAP is exemplified in rivers in the form of maximum flow efficiency (MFE). A sophisticated understanding of evolution in its broadest sense is essential to understand how rivers adjust towards an optimum state rather than towards some other. Because rivers, as dynamic contemporary systems, flow in valleys that are commonly historical landforms and often tectonically determined, we propose that most of the world's alluvial rivers are over-powered for the work they must do. To remain stable they commonly evolve to expend surplus energy via a variety of dynamic equilibrium forms that will further adjust, where possible, to maximise their stability as much less common MFE forms in stationary equilibrium. This paper: 1. Shows that the theory of evolution is derived from, and applicable to, both the physical and biological sciences; 2. Focusses the development of theory in geomorphology on the development of equilibrium theory; 3. 
Proposes that river channels, like organisms, evolve teleomatically (progression towards an end-state by following natural laws) and iteratively (one stage forming the basis for the next) towards an optimal end-state; 4. Describes LAP as the methodological basis for understanding the self-adjustment of alluvial channels towards MFE; 5. Acknowledges that whereas river channels that form within their unmodified alluvium evolve into optimal minimum-energy systems, exogenic variables, such as riparian or aquatic vegetation, can cause significant variations in the resultant river styles. We specifically attempt to address Luna Leopold's lament in 1994 that no clearly expressed philosophy explains the remarkable self-adjustment of alluvial channels.
EDITORIAL: Safety aspects of fusion power plants
NASA Astrophysics Data System (ADS)
Kolbasov, B. N.
2007-07-01
This special issue of Nuclear Fusion contains 13 informative papers that were initially presented at the 8th IAEA Technical Meeting on Fusion Power Plant Safety held in Vienna, Austria, 10-13 July 2006. Following recommendation from the International Fusion Research Council, the IAEA organizes Technical Meetings on Fusion Safety with the aim to bring together experts to discuss the ongoing work, share new ideas and outline general guidance and recommendations on different issues related to safety and environmental (S&E) aspects of fusion research and power facilities. Previous meetings in this series were held in Vienna, Austria (1980), Ispra, Italy (1983), Culham, UK (1986), Jackson Hole, USA (1989), Toronto, Canada (1993), Naka, Japan (1996) and Cannes, France (2000). The recognized progress in fusion research and technology over the last quarter of a century has boosted the awareness of the potential of fusion to be a practically inexhaustible and clean source of energy. The decision to construct the International Thermonuclear Experimental Reactor (ITER) represents a landmark in the path to fusion power engineering. Ongoing activities to license ITER in France look for an adequate balance between technological and scientific deliverables and complying with safety requirements. Actually, this is the first instance of licensing a representative fusion machine, and it will very likely shape the way in which a more common basis for establishing safety standards and policies for licensing future fusion power plants will be developed. Now that ITER licensing activities are underway, it is becoming clear that the international fusion community should strengthen its efforts in the area of designing the next generations of fusion power plants—demonstrational and commercial. Therefore, the 8th IAEA Technical Meeting on Fusion Safety focused on the safety aspects of power facilities. 
Some ITER-related safety issues were reported and discussed owing to their potential importance for the fusion power plant research programmes. The objective of this Technical Meeting was to examine in an integrated way all the safety aspects anticipated to be relevant to the first fusion power plant prototype expected to become operational by the middle of the century, leading to the first generation of economically viable fusion power plants with attractive S&E features. After screening by guest editors and consideration by referees, 13 (out of 28) papers were accepted for publication. They are devoted to the following safety topics: power plant safety; fusion specific operational safety approaches; test blanket modules; accident analysis; tritium safety and inventories; decommissioning and waste. The paper `Main safety issues at the transition from ITER to fusion power plants' by W. Gulden et al (EU) highlights the differences between ITER and future fusion power plants with magnetic confinement (off-site dose acceptance criteria, consequences of accidents inside and outside the design basis, occupational radiation exposure, and waste management, including recycling and/or final disposal in repositories) on the basis of the most recent European fusion power plant conceptual study. Ongoing S&E studies within the US inertial fusion energy (IFE) community are focusing on two design concepts. These are the high average power laser (HAPL) programme for development of a dry-wall, laser-driven IFE power plant, and the Z-pinch IFE programme for the production of an economically-attractive power plant using high-yield Z-pinch-driven targets. The main safety issues related to these programmes are reviewed in the paper `Status of IFE safety and environmental activities in the US' by S. Reyes et al (USA). The authors propose future directions of research in the IFE S&E area. 
In the paper `Recent accomplishments and future directions in the US Fusion Safety & Environmental Program' D. Petti et al (USA) state that the US fusion programme has long recognized that the S&E potential of fusion can be attained by prudent materials selection, judicious design choices, and integration of safety requirements into the design of the facility. To achieve this goal, S&E research is focused on understanding the behaviour of the largest sources of radioactive and hazardous materials in a fusion facility, understanding how energy sources in a fusion facility could mobilize those materials, developing integrated state-of-the-art S&E computer codes and risk tools for safety assessment, and evaluating and improving fusion facility design in terms of accident safety, worker safety, and waste disposal. There are three papers considering safety issues of the test blanket modules (TBM) producing tritium to be installed in ITER. These modules represent different concepts of demonstration fusion power facilities (DEMO). L. Boccaccini et al (Germany) analyse the possibility of jeopardizing ITER safety under specific accidents in the European helium-cooled pebble-bed TBM, e.g. pressurization of the vacuum vessel (VV), hydrogen production from the Be-steam reaction, and a possible interconnection between the port cell and VV causing air ingress. Safety analysis is also presented for the Chinese TBM with a helium-cooled solid breeder to be tested in ITER by Z. Chen et al (China). Radiological inventories, afterheat, waste disposal ratings, electromagnetic characteristics, LOCA and tritium safety management are considered. An overview of a preliminary safety analysis performed for a US proposed TBM is presented by B. Merrill et al (USA). This DEMO-relevant dual-coolant liquid lead-lithium TBM has been explored both in the USA and the EU.
T. Pinna et al (Italy) summarize the six-year development of a failure rate database for fusion-specific components on the basis of data coming from operating experience gained in various fusion laboratories. The activity began in 2001 with the study of the Joint European Torus vacuum and active gas handling systems. Two years later the neutral beam injectors and the power supply systems were considered. This year the ion cyclotron resonant heating system is under evaluation. I. Cristescu et al (Germany) present the paper `Tritium inventories and tritium safety design principles for the fuel cycle of ITER'. She and her colleagues developed the dynamic mathematical model (TRIMO) for tritium inventory evaluation within each system of the ITER fuel cycle in various operational scenarios. TRIMO is used as a tool for trade-off studies within the fuel cycle systems with the final goal of global tritium inventory minimization. M. Matsuyama et al (Japan) describe a new technique for in situ quantitative measurements of high-level tritium inventory and its distribution in the VV and tritium systems of ITER and future fusion reactors. This technique is based on the utilization of x-rays induced by beta-rays emitted from tritium species. It was applied to three physical states of high-level tritium: gaseous, aqueous and solid tritium retained on/in various materials. Finally, there are four papers devoted to safety issues in fusion reactor decommissioning and waste management. A paper by R. Pampin et al (UK) provides the revised radioactive waste analysis of two models in the PPCS. Another paper by M. Zucchetti (Italy), S.A. Bartenev (Russia) et al describes a radiochemical extraction technology, developed and tested under stationary laboratory conditions, for purifying V-Cr-Ti alloy components of activation products down to a dose rate of 10 µSv/h, allowing their clearance or hands-on recycling. L. El-Guebaly (USA) and her colleagues submitted two papers. 
In the first paper she optimistically considers the possibility of replacing the disposal of fusion power reactor waste with recycling and clearance. Her second paper considers the implications of new clearance guidelines for nuclear applications, particularly for slightly irradiated fusion materials.
Soft-output decoding algorithms in iterative decoding of turbo codes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.
1996-01-01
In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding-window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performance of the two algorithms is compared on the basis of a powerful rate-1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables are proposed, together with two further approximations (linear and threshold), incurring a very small penalty, that eliminate the need for lookup tables.
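The lookup-table, linear and threshold approximations mentioned here all target the correction term of the Jacobian logarithm, max*(a, b) = ln(eᵃ + eᵇ) = max(a, b) + ln(1 + e^(-|a-b|)), the core operation of log-domain MAP decoding. The sketch below is generic; the linear and threshold coefficients are illustrative fits, not the paper's values.

```python
import numpy as np

def max_star_exact(a, b):
    # Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_star_linear(a, b, c0=0.6931, slope=0.25):
    # Linear approximation of the correction term (coefficients illustrative):
    # a straight line from ln 2 at |a-b| = 0 down to zero.
    d = np.abs(a - b)
    return np.maximum(a, b) + np.maximum(0.0, c0 - slope * d)

def max_star_threshold(a, b, c=0.375, t=2.0):
    # Threshold (constant) approximation: fixed correction below a threshold,
    # none above it; dropping the correction entirely gives max-log-MAP.
    d = np.abs(a - b)
    return np.maximum(a, b) + np.where(d < t, c, 0.0)
```

Over 0 ≤ |a-b| ≤ 8 the illustrative linear fit stays within about 0.14 of the exact correction and the threshold version within about 0.32, which is why both give only a small decoding penalty.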
NASA Astrophysics Data System (ADS)
Yarmohammadi, M.; Javadi, S.; Babolian, E.
2018-04-01
In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about the error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for fractional interpolation error under the L2-norm. Finally, general error analysis of SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation clearer, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests demonstrate the savings in CPU time and memory.
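The GEBE requirement that elements within a group have no inter-element coupling amounts to a grouping (graph-coloring) problem on the mesh connectivity: two elements couple exactly when they share a node. A minimal greedy sketch follows; the function name and the one-dimensional toy mesh are illustrative, not the authors' scheme.

```python
def group_elements(elements):
    """Greedily place elements into groups such that no two elements in a
    group share a node; all elements of a group can then be processed in
    parallel.  `elements` is a list of node-id tuples, one per element."""
    groups = []                            # each entry: (used_nodes, member_ids)
    for eid, nodes in enumerate(elements):
        ns = set(nodes)
        for used, members in groups:
            if not (ns & used):            # no shared node -> no coupling
                used |= ns
                members.append(eid)
                break
        else:                              # conflicts with every group: open a new one
            groups.append((ns, [eid]))
    return [members for _, members in groups]
```

For a one-dimensional chain of four elements sharing end nodes, two groups suffice (even/odd elements), and each group can be swept fully in parallel.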
Progress of IRSN R&D on ITER Safety Assessment
NASA Astrophysics Data System (ADS)
Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.
2012-08-01
The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D programme in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for the simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosions; for the evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors in detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.
Molecular modeling: An open invitation for applied mathematics
NASA Astrophysics Data System (ADS)
Mezey, Paul G.
2013-10-01
Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where often high dimensionality, large sets of data, and complicated interrelations imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics with several non-intuitive aspects, where classical interpretation and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common-sense approaches which work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the high level of reliance on applied mathematics. Yet the interdisciplinary aspects of the field of molecular modeling also generate some inertia and a perhaps too conservative reliance on tried and tested methodologies, at least partially caused by less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.
Tokamak foundation in USSR/Russia 1950-1990
NASA Astrophysics Data System (ADS)
Smirnov, V. P.
2010-01-01
In the USSR, nuclear fusion research began in 1950 with the work of I.E. Tamm, A.D. Sakharov and colleagues. They formulated the principles of magnetic confinement of high temperature plasmas, that would allow the development of a thermonuclear reactor. Following this, experimental research on plasma initiation and heating in toroidal systems began in 1951 at the Kurchatov Institute. From the very first devices with vessels made of glass, porcelain or metal with insulating inserts, work progressed to the operation of the first tokamak, T-1, in 1958. More machines followed and the first international collaboration in nuclear fusion, on the T-3 tokamak, established the tokamak as a promising option for magnetic confinement. Experiments continued and specialized machines were developed to test separately improvements to the tokamak concept needed for the production of energy. At the same time, research into plasma physics and tokamak theory was being undertaken which provides the basis for modern theoretical work. Since then, the tokamak concept has been refined by a world-wide effort and today we look forward to the successful operation of ITER.
Status and problems of fusion reactor development.
Schumacher, U
2001-03-01
Thermonuclear fusion of deuterium and tritium constitutes an enormous potential for a safe, environmentally compatible and sustainable energy supply. The fuel source is practically inexhaustible. Further, the safety prospects of a fusion reactor are quite favourable due to the inherently self-limiting fusion process, the limited radiologic toxicity and the passive cooling property. Among a small number of approaches, the concept of toroidal magnetic confinement of fusion plasmas has achieved most impressive scientific and technical progress towards energy release by thermonuclear burn of deuterium-tritium fuels. The status of thermonuclear fusion research activity world-wide is reviewed and present solutions to the complicated physical and technological problems are presented. These problems comprise plasma heating, confinement and exhaust of energy and particles, plasma stability, alpha particle heating, fusion reactor materials, reactor safety and environmental compatibility. The results and the high scientific level of this international research activity provide a sound basis for the realisation of the International Thermonuclear Experimental Reactor (ITER), whose goal is to demonstrate the scientific and technological feasibility of a fusion energy source for peaceful purposes.
DIII-D accomplishments and plans in support of fusion next steps
Buttery, R. J.; Eidietis, N.; Holcomb, C.; ...
2013-06-01
DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. We have adapted ITER operating scenarios to low torque, and these are now being optimized for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high-βN-capable steady-state regimes. Divertor research is identifying the challenge, physics and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping optimize design choices and prepare the scientific tools for operation in ITER, and resolve key elements of the plasma configuration and divertor solution for an FNSF.
NASA Astrophysics Data System (ADS)
Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.
2013-02-01
Twin Source (TS), an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed for developing and operating a control system based on the ITER guidelines, since a similar configuration will need to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. Complete control of the system requires approximately 200 control signals and 152 acquisition signals. In TS-CODAC the required control loop time is in the range of 5-10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen as suggested in the ITER slow controller catalog. For data acquisition the maximum sampling interval required is 100 μs, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected as suggested in the ITER fast controller catalog. This paper will present the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and the applicable plant system integration processes.
Parallel Ellipsoidal Perfectly Matched Layers for Acoustic Helmholtz Problems on Exterior Domains
Bunting, Gregory; Prakash, Arun; Walsh, Timothy; ...
2018-01-26
Exterior acoustic problems occur in a wide range of applications, making the finite element analysis of such problems a common practice in the engineering community. Various methods for truncating infinite exterior domains have been developed, including absorbing boundary conditions, infinite elements, and more recently, perfectly matched layers (PML). PML are gaining popularity due to their generality, ease of implementation, and effectiveness as an absorbing boundary condition. PML formulations have been developed in Cartesian, cylindrical, and spherical geometries, but not ellipsoidal. In addition, the parallel solution of PML formulations with iterative solvers for the solution of the Helmholtz equation, and how this compares with more traditional strategies such as infinite elements, has not been adequately investigated. In this study, we present a parallel, ellipsoidal PML formulation for acoustic Helmholtz problems. To facilitate the meshing process, the ellipsoidal PML layer is generated with an on-the-fly mesh extrusion. Though the complex stretching is defined along ellipsoidal contours, we modify the Jacobian to include an additional mapping back to Cartesian coordinates in the weak formulation of the finite element equations. This allows the equations to be solved in Cartesian coordinates, which is more compatible with existing finite element software, but without the necessity of dealing with corners in the PML formulation. Herein we also compare the conditioning and performance of the PML Helmholtz problem with an infinite element approach that is based on high-order basis functions. On a set of representative exterior acoustic examples, we show that high-order infinite element basis functions lead to an increasing number of Helmholtz solver iterations, whereas for PML the number of iterations remains constant for the same level of accuracy. This provides an additional advantage of PML over the infinite element approach.
Polarized atomic orbitals for self-consistent field electronic structure calculations
NASA Astrophysics Data System (ADS)
Lee, Michael S.; Head-Gordon, Martin
1997-12-01
We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.
Physical and cognitive task analysis in interventional radiology.
Johnson, S; Healey, A; Evans, J; Murphy, M; Crawshaw, M; Gould, D
2006-01-01
To identify, describe and detail the cognitive thought processes, decision-making, and physical actions involved in the preparation and successful performance of core interventional radiology procedures. Five commonly performed core interventional radiology procedures were selected for cognitive task analysis. Several examples of each procedure being performed by consultant interventional radiologists were videoed. The videos of those procedures, and the steps required for successful outcome, were analysed by a psychologist and an interventional radiologist. Once a skeleton algorithm of the procedures was defined, further refinement was achieved using individual interview techniques with consultant interventional radiologists. Additionally, a critique of each iteration of the established algorithm was sought from non-participating independent consultant interventional radiologists. Detailed task descriptions and decision protocols were developed for five interventional radiology procedures (arterial puncture, nephrostomy, venous access, biopsy (using both ultrasound and computed tomography), and percutaneous transhepatic cholangiogram). Identical tasks performed within these procedures were identified and standardized within the protocols. Complex procedures were broken down and their constituent processes identified. This might be suitable for use as a training protocol to provide a universally acceptable safe practice at the most fundamental level. It is envisaged that data collected in this way can be used as an educational resource for trainees and could provide the basis for a training curriculum in interventional radiology. It will direct trainees towards safe practice of the highest standard. It will also provide performance objectives of a simulator model.
Creating ISO/EN 13606 archetypes based on clinical information needs.
Rinner, Christoph; Kohler, Michael; Hübner-Bloder, Gudrun; Saboor, Samrend; Ammenwerth, Elske; Duftschmid, Georg
2011-01-01
Archetypes model individual EHR contents and build the basis of the dual-model approach used in the ISO/EN 13606 EHR architecture. We present an approach to create archetypes using an iterative development process. It includes automated generation of electronic case report forms from archetypes. We evaluated our approach by developing 128 archetypes which represent 446 clinical information items from the diabetes domain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Haye, R. J., E-mail: lahaye@fusion.gat.com
2015-12-10
ITER is an international project to design and build an experimental fusion reactor based on the “tokamak” concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands, if unmitigated, degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of “H-mode” and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made of the requirements for ECCD in ITER, particularly of rf power and alignment on q=2 [1]. Narrow, well-aligned rf current parallel to and of order one percent of the total plasma current is needed to replace the “missing” current in the island O-points and heal or preempt the island (avoid destabilization by applying ECCD on q=2 in the absence of the mode) [2-4]. This paper updates the advances in ECCD stabilization of NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as they apply to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM “seeding” instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small-island-width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well within the ITER 24-gyrotron capability) when all effects are included.
However, a “wild card” may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.
Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.
2014-12-01
Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to the sparsely connected Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; (5) mapping the fully coupled binary quadratic to the partially coupled Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation as the size of these computers grows in the coming years.
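One of the compilation steps above, mapping a real-valued quadratic to a fixed-precision binary quadratic, can be sketched concretely. The code below is not the authors' implementation: the helper names `real_to_qubo` and `qubo_energy`, the objective form f(x) = xᵀQx + cᵀx, and the fractional-bit encoding are illustrative assumptions. The key trick is that linear terms can sit on the QUBO diagonal because b² = b for a binary variable.

```python
import itertools

import numpy as np

def real_to_qubo(Q, c, n_bits=3, scale=1.0):
    """Encode f(x) = x^T Q x + c^T x over x_i in [0, scale) using the
    fixed-precision expansion x_i = scale * sum_k 2^-(k+1) * b_{ik}.
    Linear terms go on the QUBO diagonal because b^2 = b for binary b."""
    n = len(c)
    w = scale * np.array([2.0 ** -(k + 1) for k in range(n_bits)])
    m = n * n_bits
    qubo = np.zeros((m, m))
    # Quadratic terms: every pair of bits across every pair of variables.
    for i in range(n):
        for j in range(n):
            for a in range(n_bits):
                for b in range(n_bits):
                    qubo[i * n_bits + a, j * n_bits + b] += Q[i, j] * w[a] * w[b]
    # Linear terms on the diagonal.
    for i in range(n):
        for a in range(n_bits):
            qubo[i * n_bits + a, i * n_bits + a] += c[i] * w[a]
    return qubo

def qubo_energy(qubo, bits):
    """Energy of one bit assignment under the QUBO matrix."""
    b = np.asarray(bits, dtype=float)
    return float(b @ qubo @ b)
```

For f(x) = x₁² - x₁ + x₂² - x₂ the continuous minimum at x = (0.5, 0.5) is exactly representable with three fractional bits, so a brute-force search over the 2⁶ bit strings recovers the minimum energy -0.5.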
Computer-aided diagnosis of early knee osteoarthritis based on MRI T2 mapping.
Wu, Yixiao; Yang, Ran; Jia, Sen; Li, Zhanjun; Zhou, Zhiyang; Lou, Ting
2014-01-01
This work was aimed at studying the method of computer-aided diagnosis of early knee OA (OA: osteoarthritis). Based on the technique of MRI (MRI: Magnetic Resonance Imaging) T2 mapping, through computer image processing, feature extraction, calculation and analysis via constructing a classifier, an effective computer-aided diagnosis method for knee OA was created to assist doctors in their accurate, timely and convenient detection of potential risk of OA. In order to evaluate this method, a total of 1380 data from the MRI images of 46 samples of knee joints were collected. These data were then modeled through linear regression on an offline general platform by the use of the ImageJ software, and a map of the physical parameter T2 was reconstructed. After the image processing, the T2 values of ten regions in the WORMS (WORMS: Whole-Organ Magnetic Resonance Imaging Score) areas of the articular cartilage were extracted to be used as the eigenvalues in data mining. Then, an RBF (RBF: Radial Basis Function) network classifier was built to classify and identify the collected data. The classifier exhibited a final identification accuracy of 75%, indicating a good result of assisting diagnosis. Since the knee OA classifier constituted by a weights-directly-determined RBF neural network did not require any iteration, our results demonstrated that the optimal weights, appropriate center and variance could be yielded through simple procedures. Furthermore, the accuracy for both the training samples and the testing samples from the normal group could reach 100%. Finally, the classifier was superior both in time efficiency and classification performance to the frequently used classifiers based on iterative learning. Thus it is suitable to be used as an aid to computer-aided diagnosis of early knee OA.
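The "weights-directly-determined" property mentioned in the abstract comes from solving a single linear least-squares problem for the output weights rather than training by gradient iterations. A minimal sketch of that idea (not the paper's code; the function names, Gaussian kernel, and width parameter are illustrative assumptions):

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF feature matrix: Phi[p, j] = exp(-||x_p - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    """Output weights determined in one shot by least squares: no
    iterative learning, as in weights-directly-determined RBF networks."""
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, sigma, w):
    """Network output for new inputs."""
    return rbf_design(X, centers, sigma) @ w
```

With the training points themselves used as centers, the design matrix is square and positive definite, so the network interpolates the training targets exactly; in practice fewer centers than samples give a smoothing fit instead.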
Calibration free beam hardening correction for cardiac CT perfusion imaging
NASA Astrophysics Data System (ADS)
Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or typically a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12+/-2 HU to 1+/-1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48+/-6 HU to 1+/-5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28+/-6 HU to less than 4+/-4 HU at peak enhancement. Results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
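The correct-evaluate-iterate structure described in the abstract can be shown in miniature. The toy below is not the ABHC implementation: it uses a 1-D path-length model of beam hardening, a cost that compares against a known mono-energetic reference (a real calibration-free method would use an artifact-specific image cost instead), and hypothetical helper names.

```python
import numpy as np

def polynomial_bhc(p_poly, coeffs):
    """Polynomial correction of a projection array:
    p_corr = c0*p + c1*p^2 + ... (c0 near 1 keeps thin paths unchanged)."""
    return sum(c * p_poly ** (k + 1) for k, c in enumerate(coeffs))

def fit_bhc_iteratively(p_poly, cost, coeffs0, step=0.05, n_iter=200):
    """Toy coordinate-descent analogue of the ABHC loop: perturb the
    polynomial coefficients and keep any change that lowers the cost."""
    coeffs = list(coeffs0)
    best = cost(polynomial_bhc(p_poly, coeffs))
    for _ in range(n_iter):
        for i in range(len(coeffs)):
            for delta in (step, -step):
                trial = coeffs.copy()
                trial[i] += delta
                c = cost(polynomial_bhc(p_poly, trial))
                if c < best:
                    best, coeffs = c, trial
    return coeffs, best
```

Simulating beam hardening as a sub-linear projection p = t - 0.1 t² of the true path length t, the loop drives the quadratic coefficient toward the value that undoes the droop, reducing the cost relative to the uncorrected start.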
Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In
2016-01-01
Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm3 uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while keeping the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary order time derivatives. We illustrate the approach by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators.
In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
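The Picard update at the heart of MCPI can be sketched for a scalar ODE. This is a simplified illustration under stated assumptions, not the authors' implementation: it treats a single segment and a scalar state, starts from the cold (constant) initial guess the paper seeks to improve, and leans on NumPy's Chebyshev utilities for the fit-and-integrate step.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, t0, t1, x0, n_nodes=32, n_iter=30):
    """Chebyshev-Picard iteration sketch for scalar dx/dt = f(t, x):
    x_{k+1}(t) = x0 + int_{t0}^{t} f(s, x_k(s)) ds, with the integrand
    fitted by a Chebyshev series at Chebyshev nodes on each sweep."""
    tau = np.cos(np.pi * np.arange(n_nodes + 1) / n_nodes)  # nodes on [-1, 1]
    t = 0.5 * ((t1 - t0) * tau + (t1 + t0))                 # mapped to [t0, t1]
    x = np.full_like(t, float(x0))                          # cold start: constant
    for _ in range(n_iter):
        g = f(t, x) * 0.5 * (t1 - t0)      # chain rule for the tau -> t map
        coef = C.chebfit(tau, g, n_nodes)  # Chebyshev series of the integrand
        icoef = C.chebint(coef)            # antiderivative in tau
        x = x0 + C.chebval(tau, icoef) - C.chebval(-1.0, icoef)
    return t, x
```

Each sweep evaluates the forcing function at all nodes of the current path approximation at once, which is the source of the segment-wide parallelism noted above; a warm start simply replaces the constant initial `x` with a better path, cutting the number of sweeps needed.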
The Physics Performance Of The Front Steering Launcher For The ITER ECRH Upper Port
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, M.; Chavan, R.; Nikkola, P.
2005-09-26
The capability of any given e.m.-wave plasma heating system to be utilized for physics applications depends strongly on the technical properties of the launching antenna (or launcher). An effective ECH launcher must project a small mm-wave beam spot size far into the plasma and 'steer' the beam across a large fraction of the plasma cross section (along the resonance surface). Thus the choice in the launcher concept and design may either severely limit or enhance the capability of a heating system to be effectively applied for physics applications, such as sawtooth stabilization, control of the Neoclassical Tearing Mode (NTM), Edge Localized Mode (ELM) control, etc. Presently, two antenna concepts are under consideration for the ITER upper port ECH launcher: front steering (FS) and remote steering (RS) launchers. The RS launcher has the technical advantage of easier maintenance access to the steering mirror, which is isolated from the torus vacuum. The FS launcher places the steering mirror near the plasma increasing the technical challenges, but significantly enhancing the focusing and steering capabilities of the launcher, offering a threefold increase in NTM stabilization efficiency over the RS launcher as well as the potential for application to other critical physics issues such as ELM or sawtooth control.
NASA Astrophysics Data System (ADS)
Gates, David
2013-10-01
The QUAsi-Axisymmetric Research (QUASAR) stellarator is a new facility which can solve two critical problems for fusion, disruptions and steady-state operation, and which provides new insights into the role of magnetic symmetry in plasma confinement. If constructed, it will be the only quasi-axisymmetric stellarator in the world. The innovative principle of quasi-axisymmetry (QA) will be used in QUASAR to study how ``tokamak-like'' systems can be made 1) disruption-free and 2) steady-state with low recirculating power, while preserving or improving upon features of axisymmetric tokamaks: 1) stability at high pressure simultaneous with 2) high confinement (similar to tokamaks), and 3) scalability to a compact reactor. Stellarator research is critical to fusion research in order to establish the physics basis for a magnetic confinement device that can operate efficiently in steady state, without disruptions, at reactor-relevant parameters. The two large stellarator experiments, LHD in Japan and W7-X (under construction in Germany), are pioneering facilities capable of developing 3D physics understanding at large scale and for very long pulses. The QUASAR design is unique in being QA and optimized for confinement, stability, and moderate aspect ratio (4.5). It projects to a reactor with a major radius of ~8 m, similar to advanced tokamak concepts. It is striking that (a) the EU DEMO is a pulsed (~2.5 hour) tokamak with major radius ~9 m and (b) the ITER physics scenarios do not presume steady-state behavior. Accordingly, QUASAR fills a critical gap in the world stellarator program. This work supported by DoE Contract No. DEAC02-76CH03073.
NASA Astrophysics Data System (ADS)
Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.
2017-04-01
ITER Vacuum Vessel is a torus-shaped, double wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and Water. The main purpose of IWS is to provide neutron shielding during ITER plasma operation and to reduce ripple of Toroidal Magnetic Field (TF). Although In-Wall Shield Blocks (IWS) will be submerged in water in between the walls of the ITER Vacuum Vessel (VV), Outgassing Rate (OGR) of IWS materials plays a significant role in leak detection of Vacuum Vessel of ITER. Thermal Outgassing Rate of a material critically depends on the Surface Roughness of material. During leak detection process using RGA equipped Leak detector and tracer gas Helium, there will be a spill over of mass 3 and mass 2 to mass 4 which creates a background reading. Helium background will have contribution of Hydrogen too. So it is necessary to ensure the low OGR of Hydrogen. To achieve an effective leak test it is required to obtain a background below 1 × 10-8 mbar 1 s-1 and hence the maximum Outgassing rate of IWS Materials should comply with the maximum Outgassing rate required for hydrogen i.e. 1 x 10-10 mbar 1 s-1 cm-2 at room temperature. As IWS Materials are special materials developed for ITER project, it is necessary to ensure the compliance of Outgassing rate with the requirement. There is a possibility of diffusing the gasses in material at the time of production. So, to validate the production process of materials as well as manufacturing of final product from this material, three coupons of each IWS material have been manufactured with the same technique which is being used in manufacturing of IWS blocks. Manufacturing records of these coupons have been approved by ITER-IO (International Organization). Outgassing rates of these coupons have been measured at room temperature and found in acceptable limit to obtain the required Helium Background. 
On the basis of these measurements, test reports were generated and approved by the IO. This paper describes the preparation, characteristics and cleaning procedure of the samples, the measurement system, and the outgassing-rate measurements performed to ensure accurate leak detection.
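The compliance check described above is simple arithmetic: the total gas load is the specific outgassing rate times the surface area, and it must stay under the helium-background budget. A minimal sketch using the limits quoted in the abstract; the coupon area and measured rate below are hypothetical illustration values, not data from the paper:

```python
# Illustrative outgassing-budget check (hypothetical coupon values).
# Limits from the abstract: hydrogen background below 1e-8 mbar·l/s,
# specific outgassing rate at most 1e-10 mbar·l/(s·cm^2).

Q_BACKGROUND_MAX = 1e-8    # mbar·l/s, maximum tolerable hydrogen background
Q_SPECIFIC_MAX = 1e-10     # mbar·l/(s·cm^2), specific outgassing limit

def gas_load(q_specific, area_cm2):
    """Total gas load Q = q * A for a surface of the given area."""
    return q_specific * area_cm2

def complies(q_measured, area_cm2, budget=Q_BACKGROUND_MAX):
    """True if the measured specific rate keeps the load within budget."""
    return gas_load(q_measured, area_cm2) <= budget

# Hypothetical coupon: 50 cm^2 surface, measured rate 8e-11 mbar·l/(s·cm^2)
print(gas_load(8e-11, 50.0))   # total load, ~4e-9 mbar·l/s
print(complies(8e-11, 50.0))   # within the 1e-8 mbar·l/s budget
```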
Advanced density profile reflectometry; the state-of-the-art and measurement prospects for ITER
NASA Astrophysics Data System (ADS)
Doyle, E. J.
2006-10-01
Dramatic progress in millimeter-wave technology has allowed the realization of a key goal for ITER diagnostics: the routine measurement of the plasma density profile from millimeter-wave radar (reflectometry) measurements. In reflectometry, the measured round-trip group delay of a probe beam reflected from a plasma cutoff is used to infer the density distribution in the plasma. Reflectometer systems implemented by UCLA on a number of devices employ frequency-modulated continuous-wave (FM-CW), ultrawide-bandwidth, high-resolution radar systems. One such system on DIII-D has routinely demonstrated measurements of the density profile over a range of electron density of 0-6.4 × 10^19 m^-3, with ~25 μs temporal and ~4 mm radial resolution, meeting key ITER requirements. This progress in performance was made possible by multiple advances in the areas of millimeter-wave technology, novel measurement techniques, and improved understanding, including: (i) fast-sweep, solid-state, wide-bandwidth sources and power amplifiers; (ii) dual-polarization measurements to expand the density range; (iii) adaptive radar-based data analysis with parallel processing on a Unix cluster; (iv) high-memory-depth data acquisition; and (v) advances in full-wave code modeling. The benefits of advanced system performance will be illustrated using measurements from a wide range of phenomena, including ELM and fast-ion-driven mode dynamics, L-H transition studies and plasma-wall interaction. The measurement capabilities demonstrated by these systems provide a design basis for the development of the main ITER profile reflectometer system. This talk will explore the extent to which these reflectometer system designs, results and experience can be translated to ITER, and will identify what new studies and experimental tests are essential.
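The physics underlying the profile measurement can be illustrated with the standard O-mode cutoff condition: a probe wave of frequency f reflects where the local plasma frequency equals f, i.e. at density n_c = ε0 m_e (2πf)^2 / e^2. A small sketch (the formula is textbook physics; the 60 GHz example frequency is our own choice):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg
Q_E = 1.602176634e-19     # elementary charge, C

def cutoff_density(freq_hz):
    """O-mode cutoff density n_c = eps0 * m_e * (2*pi*f)^2 / e^2, in m^-3."""
    return EPS0 * M_E * (2 * math.pi * freq_hz) ** 2 / Q_E ** 2

# A 60 GHz probe wave reflects where n_e reaches roughly 4.5e19 m^-3,
# i.e. inside the 0-6.4e19 m^-3 range quoted in the abstract; sweeping
# the frequency sweeps the reflecting layer through the profile.
print(cutoff_density(60e9))
```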
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The method, which may be referred to as the T-matrix completion method, is of particular interest because it is not based on linearization at any stage and requires no gradient vectors or (inverse) Hessian matrices. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse concept. This experimental T-matrix is then used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. 
The use of singular-value decomposition representations is not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
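The core nonlinear relation used in the iterative cycle, the Lippmann-Schwinger equation T = V + V G0 T, can be caricatured with a toy fixed-point iteration on small matrices. This is our illustration of the relation itself, not the authors' completion algorithm:

```python
# Toy fixed-point iteration of the Lippmann-Schwinger relation
# T = V + V G0 T, where V is the scattering potential and G0 the
# background Green's function. Converges when the spectral radius of
# V G0 is below one (weak scattering); strong scattering is exactly
# the regime where more sophisticated schemes such as T-matrix
# completion become necessary.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def t_matrix(V, G0, iters=200):
    """Iterate T <- V + V G0 T, starting from the Born term T = V."""
    T = [row[:] for row in V]
    VG0 = matmul(V, G0)
    for _ in range(iters):
        T = matadd(V, matmul(VG0, T))
    return T

# Scalar check: V = 0.5, G0 = 0.4 gives T = V / (1 - V*G0) = 0.625
print(t_matrix([[0.5]], [[0.4]]))
```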
Torak, L.J.
1993-01-01
A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.
Torak, Lynn J.
1992-01-01
A MODular, Finite-Element digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water flow. Geometric- and hydrologic-aquifer characteristics in two spatial dimensions are represented by triangular finite elements and linear basis functions; one-dimensional finite elements and linear basis functions represent time. Finite-element matrix equations are solved by the direct symmetric-Doolittle method or the iterative modified, incomplete-Cholesky, conjugate-gradient method. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining beds; (3) specified recharge or discharge at points, along lines, and over areas; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining beds combined with aquifer dewatering, and evapotranspiration. The report describes procedures for applying MODFE to ground-water-flow problems, simulation capabilities, and data preparation. Guidelines for designing the finite-element mesh and for node numbering and determining band widths are given. Tables are given that reference simulation capabilities to specific versions of MODFE. Examples of data input and model output for different versions of MODFE are provided.
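The iterative solver named in both MODFE reports belongs to the conjugate-gradient family. A plain, unpreconditioned CG sketch on a small symmetric positive-definite system illustrates the idea; MODFE's actual solver adds modified incomplete-Cholesky preconditioning, which is omitted here:

```python
# Plain conjugate gradients for a symmetric positive-definite system
# A x = b, the kind of system assembled from symmetric finite elements.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x with x = 0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD test system with exact solution x = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(cg(A, b))   # ~[1.0, 2.0]
```

In exact arithmetic CG converges in at most n steps; preconditioning matters for the large, sparse, ill-conditioned systems a real finite-element mesh produces.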
Iterative refinement of implicit boundary models for improved geological feature reproduction
NASA Astrophysics Data System (ADS)
Martin, Ryan; Boisvert, Jeff B.
2017-12-01
Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
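The implicit-modeling idea (interpolate a signed indicator with radial basis functions, then read the boundary off its zero level set) can be sketched in miniature. This toy uses an isotropic Gaussian kernel and a dense solve; the paper's locally varying anisotropy would enter through the distance metric, and its domain decomposition replaces the dense solve:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(points, values, eps=1.0):
    """Fit an RBF interpolant through signed indicator values."""
    phi = lambda r: math.exp(-(eps * r) ** 2)   # isotropic Gaussian kernel
    A = [[phi(math.dist(p, q)) for q in points] for p in points]
    w = solve(A, values)
    return lambda x: sum(wi * phi(math.dist(x, p)) for wi, p in zip(w, points))

# 1D toy domain: +1 inside [2, 4], -1 outside; the interpolant changes
# sign (the implicit boundary) near x = 2 and x = 4.
pts = [(0.0,), (2.0,), (3.0,), (4.0,), (6.0,)]
vals = [-1.0, 1.0, 1.0, 1.0, -1.0]
f = rbf_fit(pts, vals)
print(f((3.0,)), f((0.0,)))   # positive inside, negative outside
```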
Definition of optical systems payloads
NASA Technical Reports Server (NTRS)
Downey, J. A., III
1981-01-01
The various phases in the formulation of a major NASA project include the inception of the project, planning of the concept, and the project definition. A baseline configuration is established during the planning stage, which serves as a basis for engineering trade studies. Basic technological problems should be recognized early, and a technological verification plan prepared before development of a project begins. A progressive series of iterations is required during the definition phase, illustrating the complex interdependence of existing subsystems. A systems error budget should be established to assess the overall systems performance, identify key performance drivers, and guide performance trades and iterations around these drivers, thus decreasing final systems requirements. Unnecessary interfaces should be avoided, and reasonable design and cost margins maintained. Certain aspects of the definition of the Advanced X-ray Astrophysics Facility are used as an example.
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. In practice, however, they are subject to physical constraints, and their structure usually does not follow a dense matrix distribution; this is the case for the matrix arising in compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of Φ^T y, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
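The conventional iteration the paper modifies is a proximal-gradient (ISTA-style) loop: a gradient step on the quadratic misfit followed by soft-thresholding for sparsity. A hedged sketch of that structure with an optional per-iteration filtering step; the 3-tap moving average below is a placeholder for illustration, not the authors' filter:

```python
# ISTA-style iteration for y = Phi x with an optional smoothing step
# applied to the iterate each pass (placeholder filter, not the paper's).

def ista_filtered(Phi, y, step=0.1, lam=0.01, iters=300, filt=False):
    m, n = len(Phi), len(Phi[0])
    x = [0.0] * n
    soft = lambda v: max(abs(v) - lam, 0.0) * (1 if v > 0 else -1)
    for _ in range(iters):
        # residual r = y - Phi x and gradient direction g = Phi^T r
        r = [y[i] - sum(Phi[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(Phi[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step + soft threshold (sparsity regularization)
        x = [soft(x[j] + step * g[j]) for j in range(n)]
        if filt:  # filtering step: 3-tap moving average on the iterate
            x = [(x[max(j - 1, 0)] + x[j] + x[min(j + 1, n - 1)]) / 3
                 for j in range(n)]
    return x

# Trivial demo: identity sensing matrix, so the solution is the
# soft-thresholded measurement.
Phi = [[1.0, 0.0], [0.0, 1.0]]
print(ista_filtered(Phi, [1.0, 0.0], step=1.0, iters=20))   # ~[0.99, 0.0]
```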
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimating the TDA filter, and performing correction on transmit and receive, has proven difficult, and it has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body-wall models, both emulating the human abdominal wall, generated the aberration. Results after iteration improve aberration correction substantially, and both estimation methods converge, even in the case of strong aberration.
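The first estimation approach, correlating each element signal with a reference, can be sketched minimally: the lag of the cross-correlation peak gives that element's time-delay aberration. This toy uses integer lags only; a real system would interpolate the peak for sub-sample delays:

```python
# Time-delay estimation by cross-correlation: find the lag that best
# aligns an element signal with a reference signal.

def xcorr_delay(sig, ref, max_lag):
    """Return the integer lag (in samples) maximizing the cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    n = len(ref)
    for lag in range(-max_lag, max_lag + 1):
        v = sum(sig[i + lag] * ref[i]
                for i in range(n) if 0 <= i + lag < len(sig))
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag

ref = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
sig = [0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # ref delayed by 2 samples
print(xcorr_delay(sig, ref, max_lag=3))      # 2
```

In the iterated (adaptive) scheme, the estimated delays correct the next transmission and the estimation repeats until the delay updates fall below a convergence threshold.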
Noise models for low counting rate coherent diffraction imaging.
Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John
2012-11-05
Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high-quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often implicitly assume a noise model, and at low counting rates each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.
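The implicit choice of noise model amounts to a choice of data-fidelity term. An illustrative comparison (our example, not the paper's code) of the two most common options, a Gaussian misfit and a Poisson negative log-likelihood, which agree at high counts but penalize a model differently at low counts:

```python
import math

# Two data-fidelity terms an iterative phase-retrieval algorithm
# implicitly chooses between when fitting measured counts to a model.

def gaussian_nll(counts, model):
    """Gaussian misfit with variance ~ model (chi-square-like)."""
    return sum((c - m) ** 2 / (2 * max(m, 1e-12))
               for c, m in zip(counts, model))

def poisson_nll(counts, model):
    """Poisson negative log-likelihood, up to a model-independent constant:
    sum(m - c * log m)."""
    return sum(m - c * math.log(max(m, 1e-12))
               for c, m in zip(counts, model))

# Low-count example: both terms prefer the matching model, but they
# weight the mismatch differently, which shapes the iterates.
counts = [2.0, 0.0, 5.0]
print(gaussian_nll(counts, [2.0, 0.1, 5.0]))
print(poisson_nll(counts, [2.0, 0.1, 5.0]))
```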
Convergence of an iterative procedure for large-scale static analysis of structural components
NASA Technical Reports Server (NTRS)
Austin, F.; Ojalvo, I. U.
1976-01-01
The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures which can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration consists in estimating the deformation of the primary structure in the absence of the secondary structure on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate primary structure deflections at the interface are imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which is shown to correspond with the physical requirement that the secondary structure be more flexible at the interface boundary.
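The iteration and its convergence condition can be caricatured in a scalar "two-spring" model (our simplification, not the paper's formulation): a stiff primary spring k_p and a flexible secondary spring k_s share a load P at their interface. Each cycle imposes the primary deflection on the secondary, computes the reaction, and re-solves the primary; the error contracts by the factor k_s/k_p, so convergence requires the secondary to be the more flexible structure, matching the eigenvalue condition stated above:

```python
# Scalar caricature of the primary/secondary structural iteration.

def coupled_deflection(P, k_p, k_s, iters=200):
    u = P / k_p                 # step 1: primary alone carries the full load
    for _ in range(iters):
        reaction = k_s * u      # step 2: force to drag the secondary to u
        u = (P - reaction) / k_p  # step 3: re-solve the primary structure
    return u

# Exact solution of the coupled system is P / (k_p + k_s); the iteration
# converges to it because k_s / k_p = 0.1 < 1.
P, k_p, k_s = 10.0, 100.0, 10.0
print(coupled_deflection(P, k_p, k_s))   # ~0.0909
```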
Physics and engineering design of the accelerator and electron dump for SPIDER
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.
2011-06-01
The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. 
In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.
Iterative learning control with applications in energy generation, lasers and health care.
Rogers, E; Tutty, O R
2016-09-01
Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor and then return for the next one and so on. Iterative learning control was especially developed for systems with this mode of operation and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care, as exemplars to demonstrate its applicability.
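The basic iterative-learning-control update can be sketched in a few lines: over repeated trials of the same finite-duration task, the input is refined as u_{k+1}(t) = u_k(t) + L e_k(t), where e_k = r - y_k is the trial error. The toy plant below is a static gain for clarity (a hypothetical simplification; a real ILC design handles plant dynamics and filtering of the learning update):

```python
# Minimal ILC sketch: learn the input that makes a static-gain plant
# track a reference over repeated trials of the same task.

def ilc(r, g=0.5, L=0.8, trials=50):
    u = [0.0] * len(r)
    for _ in range(trials):
        y = [g * ui for ui in u]                   # plant response this trial
        e = [ri - yi for ri, yi in zip(r, y)]      # trial error e_k = r - y_k
        u = [ui + L * ei for ui, ei in zip(u, e)]  # learning update
    return u

# The error contracts by |1 - g*L| = 0.6 per trial, so the output
# converges to the reference across trials.
r = [1.0, 2.0, 3.0]
u = ilc(r)
print([0.5 * ui for ui in u])   # ~[1.0, 2.0, 3.0]
```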
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.
Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei
2013-03-01
A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei
2013-01-01
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
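The structure of the EST iteration, alternating between real space (where physical constraints are applied) and Fourier space (where the measured data are enforced), can be sketched in a toy 1D form. A plain DFT stands in for EST's algebraically exact pseudopolar FFT, and nonnegativity stands in for the full set of constraints and regularization; this is a structural illustration only, not the EST implementation:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (stand-in for the PPFFT)."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[j] * cmath.exp(s * 2j * cmath.pi * k * j / n)
               for j in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def est_like(measured, known, iters=200):
    """Alternate between Fourier space (enforce measured coefficients at
    the indices in `known`) and real space (enforce nonnegativity)."""
    n = len(measured)
    x = [0.0] * n
    for _ in range(iters):
        X = dft(x)
        for k in known:                       # Fourier space: enforce data
            X[k] = measured[k]
        x = [max(v.real, 0.0)                 # real space: physical constraint
             for v in dft(X, inverse=True)]
    return x

# Demo: with all coefficients measured, the nonnegative signal is
# recovered directly; with partial data, the loop fills in the rest.
x_true = [1.0, 0.0, 2.0, 0.0]
F = dft(x_true)
print(est_like(F, known=range(len(F)), iters=3))   # ~[1, 0, 2, 0]
```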
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show that data acquisition was three times more expensive than the grid itself, largely because 70% of data-acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure sine-wave power for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportions in the SSSG compared with previous SG investigations, and changes for future iterations are recommended to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for incorporation into the SSSG. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage could also be researched on future iterations of the SSSG.
VA FitHeart, a Mobile App for Cardiac Rehabilitation: Usability Study
Beatty, Alexis L; Magnusson, Sara L; Fortney, John C; Sayre, George G; Whooley, Mary A
2018-01-15
Background Cardiac rehabilitation (CR) improves outcomes for patients with ischemic heart disease or heart failure but is underused. New strategies to improve access to and engagement in CR are needed. There is considerable interest in technology-facilitated home CR. However, little is known about patient acceptance and use of mobile technology for CR. Objective The aim of this study was to develop a mobile app for technology-facilitated home CR and to determine its usability. Methods We recruited patients eligible for CR who had access to a mobile phone, tablet, or computer with Internet access. The mobile app includes physical activity goal setting, logs for tracking physical activity and health metrics (eg, weight, blood pressure, and mood), health education, reminders, and feedback. Study staff demonstrated the mobile app to participants in person and then observed participants completing prespecified tasks with the mobile app. Participants completed the System Usability Scale (SUS, 0-100), rated likelihood to use the mobile app (0-100), completed questionnaires on mobile app use, and participated in a semistructured interview. The Unified Theory of Acceptance and Use of Technology and the Theory of Planned Behavior informed the analysis. On the basis of participant feedback, we made iterative revisions to the mobile app between users. Results We conducted usability testing in 13 participants. The first version of the mobile app was used by the first 5 participants, and revised versions were used by the final 8 participants. From the first version to revised versions, the task completion success rate improved from 44% (11/25 tasks) to 78% (31/40 tasks; P=.05), SUS improved from 54 to 76 (P=.04; scale 0-100, with 100 being the best usability), and self-reported likelihood of use remained high at 76 and 87 (P=.30; scale 0-100, with 100 being the highest likelihood).
In interviews, patients expressed interest in tracking health measures (“I think it’ll be good to track my exercise and to see what I’m doing”), a desire for introductory training (“Initially, training with a technical person, instead of me relying on myself”), and an expectation for sharing data with providers (“It would also be helpful to share with my doctor, it just being a matter of clicking a button and sharing it with my doctor”). Conclusions With participant feedback and iterative revisions, we significantly improved the usability of a mobile app for CR. Patient expectations for using a mobile app for CR include tracking health metrics, introductory training, and sharing data with providers. Iterative mixed-method evaluation may be useful for improving the usability of health technology. ©Alexis L Beatty, Sara L Magnusson, John C Fortney, George G Sayre, Mary A Whooley. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 15.01.2018. PMID:29335235
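The SUS scores quoted above (54 improving to 76) are on the instrument's standard 0-100 scale. As a reminder of how the ten raw 1-5 Likert responses map onto that scale (generic SUS scoring, not study-specific code):

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5.
    Odd-numbered items contribute (rating - 1); even-numbered items
    contribute (5 - rating); the total is scaled by 2.5 to give 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, strongly positive answers to the odd (positively worded) items and strongly negative answers to the even (negatively worded) items yield 100, while a uniformly neutral response sheet yields 50.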
Software Estimates Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Smith, C. L.
2003-01-01
Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.
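SiCM itself is a discrete event simulation built in Extend, but the cost logic described, per-phase labor head counts and rates, phase durations, material costs, and testing-facility maintenance, can be sketched as a toy Monte Carlo loop. All names and numbers below are invented for illustration; this is not the SiCM model:

```python
import random

random.seed(1)

# Hypothetical per-phase inputs (illustrative only):
# (engineers, technicians, eng_rate $/h, tech_rate $/h, nominal duration h, materials $)
phases = {
    "design":       (4, 2, 90.0, 55.0, 400, 20_000),
    "construction": (2, 6, 90.0, 55.0, 600, 150_000),
    "testing":      (3, 4, 90.0, 55.0, 300, 40_000),
}
facility_maintenance_rate = 120.0  # $/h, applied to the testing phase only

def simulate_once():
    """One iteration: labor + materials per phase, with +/-20% duration jitter."""
    costs = {}
    for name, (ne, nt, eng_rate, tech_rate, dur, mat) in phases.items():
        d = dur * random.uniform(0.8, 1.2)
        labor = (ne * eng_rate + nt * tech_rate) * d
        extra = facility_maintenance_rate * d if name == "testing" else 0.0
        costs[name] = labor + mat + extra
    return costs

runs = [simulate_once() for _ in range(1000)]
mean_total = sum(sum(r.values()) for r in runs) / len(runs)
```

Collecting `runs` per phase would give exactly the kind of outputs the abstract lists: a histogram of total costs and means/variances of cumulative cost per phase across iterations.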
Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Lin; Shao, Sihong; E, Weinan
2012-11-06
We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt₂, Au₂, TlF, and Bi₂Se₃ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
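The filtering step above is specific to the DKS operator, but the underlying idea, suppress the unwanted band so that iteration converges inside the wanted one, can be illustrated on a toy symmetric matrix. The sketch below is not the LOBPCG-F algorithm; it uses a simple spectral shift as the "filter" and plain subspace iteration in place of LOBPCG, with an invented spectrum mimicking a negative-energy continuum plus a small positive-energy band:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Toy symmetric spectrum: a dense "negative-energy continuum" and a
# 5-state "positive-energy band" (values are illustrative, not physical).
neg = rng.uniform(-10.0, -8.0, size=n - 5)
pos = rng.uniform(1.0, 2.0, size=5)
evals = np.concatenate([neg, pos])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * evals) @ Q.T

# "Filter": shift so the positive band dominates in magnitude
# (neg + 9 lies in [-1, 1]; pos + 9 lies in [10, 11]), then run
# subspace iteration with Rayleigh-Ritz. The negative band is never resolved.
shift = 9.0
X = rng.standard_normal((n, 5))
for _ in range(200):
    X, _ = np.linalg.qr((A + shift * np.eye(n)) @ X)
ritz = np.sort(np.linalg.eigvalsh(X.T @ A @ X))
```

After convergence the five Ritz values reproduce the positive-band eigenvalues, which is the qualitative behaviour the paper obtains with its purpose-built filter inside LOBPCG.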
Worrall, Graham; Chambers, Larry W.
1990-01-01
With the increasing expenditure on health care programs for seniors, there is an urgent need to evaluate such programs. The Measurement Iterative Loop is a tool that can provide both health administrators and health researchers with a method of evaluation of existing programs and identification of gaps in knowledge, and forms a rational basis for health-care policy decisions. In this article, the Loop is applied to one common problem of the elderly: dementia. PMID:21233998
Preconditioned MoM Solutions for Complex Planar Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Jackson, D; Champagne, N
2004-01-23
The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered-material Green's functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method has been reported previously; this contribution is limited to presenting new results.
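The acceleration FAIM inherits from AIM rests on the fact that interactions between points on a regular grid depend only on separation, so the grid matrix-vector product is a convolution that an FFT evaluates in O(n log n) instead of O(n²). A 1D toy demonstration of that equivalence (the kernel and sizes are invented; this is not the FAIM code):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
# Translation-invariant "grid interaction" kernel g(i - j): the dense matvec
# is O(n^2), but on a regular grid it is a convolution.
kernel = 1.0 / (1.0 + np.arange(-(n - 1), n) ** 2)   # illustrative kernel
x = rng.standard_normal(n)

# Direct O(n^2) product with the Toeplitz matrix G[i, j] = g(i - j):
G = np.array([[kernel[i - j + n - 1] for j in range(n)] for i in range(n)])
direct = G @ x

# FFT route: zero-pad (here to 4n, safely past the linear-convolution length)
# and multiply spectra, then slice out the n entries that match G @ x.
m = 4 * n
fast = np.fft.irfft(np.fft.rfft(kernel, m) * np.fft.rfft(x, m), m)[n - 1: 2 * n - 1]
```

The two products agree to machine precision; in AIM-type solvers this FFT matvec is what the iterative solver calls at each step, with only near-field corrections computed exactly.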
Vermeulen, Joan; Neyens, Jacques CL; Spreeuwenberg, Marieke D; van Rossum, Erik; Sipers, Walther; Habets, Herbert; Hewson, David J; de Witte, Luc P
2013-01-01
Purpose To involve elderly people during the development of a mobile interface of a monitoring system that provides feedback to them regarding changes in physical functioning and to test the system in a pilot study. Methods and participants The iterative user-centered development process consisted of the following phases: (1) selection of user representatives; (2) analysis of users and their context; (3) identification of user requirements; (4) development of the interface; and (5) evaluation of the interface in the lab. Subsequently, the monitoring and feedback system was tested in a pilot study by five patients who were recruited via a geriatric outpatient clinic. Participants used a bathroom scale to monitor weight and balance, and a mobile phone to monitor physical activity on a daily basis for six weeks. Personalized feedback was provided via the interface of the mobile phone. Usability was evaluated on a scale from 1 to 7 using a modified version of the Post-Study System Usability Questionnaire (PSSUQ); higher scores indicated better usability. Interviews were conducted to gain insight into the experiences of the participants with the system. Results The developed interface uses colors, emoticons, and written and/or spoken text messages to provide daily feedback regarding (changes in) weight, balance, and physical activity. The participants rated the usability of the monitoring and feedback system with a mean score of 5.2 (standard deviation 0.90) on the modified PSSUQ. The interviews revealed that most participants liked using the system and appreciated that it signaled changes in their physical functioning. However, usability was negatively influenced by a few technical errors. Conclusion Involvement of elderly users during the development process resulted in an interface with good usability. However, the technical functioning of the monitoring system needs to be optimized before it can be used to support elderly people in their self-management. 
PMID:24039407
Physics and Control of Locked Modes in the DIII-D Tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volpe, Francesco
This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; and the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.
MWR3C physical retrievals of precipitable water vapor and cloud liquid water path
Cadeddu, Maria
2016-10-12
The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosondes and a ceilometer. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] from the neural network retrieval fields in the netCDF file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
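The retrieval loop described, first guess, repeated forward-model evaluation to convergence, a posteriori covariance as the 1-sigma uncertainty, and an RMS consistency check, has the standard Gauss-Newton optimal-estimation form. A self-contained toy version with an invented two-parameter [PWV, LWP] state and a made-up three-channel linear-plus-weak-nonlinear forward model (not the MWR3C radiative-transfer code):

```python
import numpy as np

def forward(x):
    """Toy 3-channel 'brightness temperature' model (coefficients invented)."""
    pwv, lwp = x
    return np.array([
        15.0 + 2.0 * pwv + 0.5 * lwp + 0.01 * pwv * lwp,
        20.0 + 1.0 * pwv + 2.0 * lwp,
        10.0 + 0.5 * pwv + 3.0 * lwp + 0.02 * pwv ** 2,
    ])

def jacobian(x):
    pwv, lwp = x
    return np.array([
        [2.0 + 0.01 * lwp, 0.5 + 0.01 * pwv],
        [1.0, 2.0],
        [0.5 + 0.04 * pwv, 3.0],
    ])

x_true = np.array([2.5, 0.1])
y = forward(x_true)                  # synthetic "measured" brightness temperatures
xa = np.array([2.0, 0.2])            # first guess (e.g. a neural-network retrieval)
Sa = np.diag([1.0, 0.05])            # prior covariance
Se = np.diag([0.25, 0.25, 0.25])     # measurement-noise covariance

x = xa.copy()
for _ in range(20):
    K = jacobian(x)
    H = np.linalg.inv(Sa) + K.T @ np.linalg.inv(Se) @ K
    x_new = xa + np.linalg.solve(H, K.T @ np.linalg.inv(Se) @ (y - forward(x) + K @ (x - xa)))
    if np.linalg.norm(x_new - x) < 1e-8:     # predefined convergence criterion
        x = x_new
        break
    x = x_new

posterior_sigma = np.sqrt(np.diag(np.linalg.inv(H)))  # retrieval 1-sigma uncertainty
rms = np.sqrt(np.mean((forward(x) - y) ** 2))         # consistency check
```

The retrieved state lands close to the truth (pulled slightly toward the prior, as in any MAP retrieval), and `posterior_sigma` plays the role of the 1-sigma uncertainty written to the data files.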
Iterants, Fermions and Majorana Operators
NASA Astrophysics Data System (ADS)
Kauffman, Louis H.
Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra, and we end the essay with a discussion of the Dirac equation based on these principles.
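The iterant view can be made concrete with 2x2 matrices: the oscillation between +1 and -1 combined with a shift yields a matrix square root of minus one, and a pair of Majorana-type operators that each square to the identity and anticommute, with their product again a square root of -1. A small numerical check of those algebraic relations (standard matrices, not code from the essay):

```python
import numpy as np

I = np.eye(2, dtype=int)
e = np.array([[0, -1], [1, 0]])       # iterant representation of sqrt(-1)
a = np.array([[1, 0], [0, -1]])       # two Majorana-type operators:
b = np.array([[0, 1], [1, 0]])        # each squares to the identity

ee = e @ e                            # expect e^2 = -I
anticommute = a @ b + b @ a           # expect ab + ba = 0
product = a @ b                       # their product is again a sqrt(-1)
```

These are exactly the Clifford-algebra relations (a² = b² = 1, ab = -ba) underlying the Majorana fermion operator algebra discussed in the essay.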
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dul, F.A.; Arczewski, K.
1994-03-01
Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10³), as well as large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
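The outer loop of such a method, B-orthonormalized subspace iteration refined by Rayleigh-Ritz, can be sketched compactly on a small dense pencil. For brevity the inner solve below is a direct solve where the paper uses iterative conjugate gradients, and the matrices are small random stand-ins, so this illustrates the structure of the iteration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 100, 4, 8                   # problem size, wanted pairs, subspace size
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * np.arange(1.0, n + 1.0) ** 2) @ Q.T     # SPD with well-separated low end
d = rng.uniform(1.0, 2.0, size=n)
B = np.diag(d)                        # SPD "mass" matrix

X = rng.standard_normal((n, p))
for _ in range(100):
    X = np.linalg.solve(A, B @ X)     # inner solve (iterative CG in the paper)
    L = np.linalg.cholesky(X.T @ B @ X)
    X = np.linalg.solve(L, X.T).T     # B-orthonormalize the block
    w, V = np.linalg.eigh(X.T @ A @ X)
    X = X @ V                         # Rayleigh-Ritz rotation of the subspace

leftmost = w[:k]                      # four leftmost generalized eigenvalues
Bs = np.diag(1.0 / np.sqrt(d))        # reference answer via B^(-1/2) A B^(-1/2)
exact = np.sort(np.linalg.eigvalsh(Bs @ A @ Bs))[:k]
```

Keeping p > k guard vectors in the block is what lets the wanted Ritz values converge quickly even when neighbouring eigenvalues are close.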
Frank, Lawrence D; Saelens, Brian E; Chapman, James; Sallis, James F; Kerr, Jacqueline; Glanz, Karen; Couch, Sarah C; Learnihan, Vincent; Zhou, Chuan; Colburn, Trina; Cain, Kelli L
2012-05-01
GIS-based walkability measures designed to explain active travel fail to capture "playability" and proximity to healthy food. These constructs should be considered when measuring potential child obesogenic environments. The aim of this study was to describe the development of GIS-based multicomponent physical activity and nutrition environment indicators of child obesogenic environments in the San Diego and Seattle regions. Block group-level walkability (street connectivity, residential density, land-use mix, and retail floor area ratio) measures were constructed in each region. Multiple sources were used to enumerate parks (∼900-1600 per region) and food establishments (∼10,000 per region). Physical activity environments were evaluated on the basis of walkability and presence and quality of parks. Nutrition environments were evaluated based on presence and density of fast-food restaurants and distance to supermarkets. Four neighborhood types were defined using high/low cut points for physical activity and nutrition environments defined through an iterative process dependent on regional counts of fast-food outlets and overall distance to parks and grocery stores from census block groups where youth live. To identify sufficient numbers of children aged 6-11 years, high physical activity environment block groups had at least one high-quality park within 0.25 miles and were above median walkability, whereas low physical activity environment groups had no parks and were below median walkability. High nutrition environment block groups had a supermarket within 0.5 miles, and fewer than 16 (Seattle) and 31 (San Diego) fast-food restaurants within 0.5 miles. Low nutrition environments had either no supermarket, or a supermarket and more than 16 (Seattle) and 31 (San Diego) fast-food restaurants within 0.5 miles. Income, educational attainment, and ethnicity varied across physical activity and nutrition environments. 
These approaches to defining neighborhood environments can be used to study physical activity, nutrition, and obesity outcomes. Findings presented in a companion paper validate these GIS methods for measuring obesogenic environments. Copyright © 2012 American Journal of Preventive Medicine. All rights reserved.
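The high/low cut-point rules described above reduce to simple predicates evaluated per census block group. A schematic version (function names and argument forms are paraphrased from the abstract, using Seattle's fast-food cutoff of 16 as the example; block groups meeting neither rule fall outside the four study cells):

```python
def pa_environment(walkability, median_walkability, quality_park_within_quarter_mi, any_park):
    """High/low physical-activity environment per the rules in the abstract."""
    if quality_park_within_quarter_mi and walkability > median_walkability:
        return "high"
    if (not any_park) and walkability < median_walkability:
        return "low"
    return None

def nutrition_environment(supermarket_within_half_mi, fast_food_count, regional_cutoff=16):
    """High/low nutrition environment; the cutoff is region-specific
    (16 in Seattle, 31 in San Diego per the abstract)."""
    if supermarket_within_half_mi and fast_food_count < regional_cutoff:
        return "high"
    if (not supermarket_within_half_mi) or fast_food_count > regional_cutoff:
        return "low"
    return None
```

Crossing the two high/low labels yields the four neighborhood types used in the analysis.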
Status of the ITER Electron Cyclotron Heating and Current Drive System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio
2015-10-07
The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER is made of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond. The development of the EC system faces significant challenges, which include not only an advanced microwave system but also compliance with stringent requirements associated with nuclear safety, as ITER became the first fusion device licensed as a basic nuclear installation on 9 November 2012. Finally, since the conceptual design of the EC system was established in 2007, the EC system has progressed to a preliminary design stage in 2012 and is now moving toward a final design.
Modelling of steady state erosion of CFC actively water-cooled mock-up for the ITER divertor
NASA Astrophysics Data System (ADS)
Ogorodnikova, O. V.
2008-04-01
Calculations of the physical and chemical erosion of CFC (carbon fibre composite) monoblocks in the outer vertical target of the ITER divertor during normal operation regimes have been performed. Off-normal events and ELMs are not considered here. For a set of components under thermal and particle loads at glancing incidence, variations in the material properties and/or assembly defects could result in different erosion of actively cooled components and, thus, in temperature instabilities. Operation regimes where the temperature instability takes place are investigated. It is shown that the temperature and erosion instabilities are probably not a critical point for the present design of the ITER vertical target if a realistic variation of material properties is assumed, namely, a 20% difference in the thermal conductivities of neighbouring monoblocks and a maximum allowable defect size between the CFC armour and the cooling tube of ±90° in the circumferential direction from the apex.
Conceptual design of the DEMO neutral beam injectors: main developments and R&D achievements
NASA Astrophysics Data System (ADS)
Sonato, P.; Agostinetti, P.; Bolzonella, T.; Cismondi, F.; Fantz, U.; Fassina, A.; Franke, T.; Furno, I.; Hopf, C.; Jenkins, I.; Sartori, E.; Tran, M. Q.; Varje, J.; Vincenzi, P.; Zanotto, L.
2017-05-01
The objectives of the nuclear fusion power plant DEMO, to be built after the ITER experimental reactor, are usually understood to lie somewhere between those of ITER and a ‘first of a kind’ commercial plant. Hence, in DEMO the issues related to efficiency and RAMI (reliability, availability, maintainability and inspectability) are among the most important drivers for the design, as the cost of the electricity produced by this power plant will strongly depend on these aspects. In the framework of the EUROfusion Work Package Heating and Current Drive within the Power Plant Physics and Development activities, a conceptual design of the neutral beam injector (NBI) for the DEMO fusion reactor has been developed by Consorzio RFX in collaboration with other European research institutes. In order to improve efficiency and RAMI aspects, several innovative solutions have been introduced in comparison to the ITER NBI, mainly regarding the beam source, neutralizer and vacuum pumping systems.
Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.
A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE modes.
Development of the low-field side reflectometer for ITER
NASA Astrophysics Data System (ADS)
Muscatello, Christopher; Anderson, James; Gattuso, Anthony; Doyle, Edward; Peebles, William; Seraydarian, Raymond; Wang, Guiding; Kramer, Gerrit; Zolfaghari, Ali; General Atomics Team; University of California Los Angeles Team; Princeton Plasma Physics Laboratory Team
2017-10-01
The Low-Field Side Reflectometer (LFSR) for ITER will provide real-time edge density profiles every 10 ms for feedback control and every 24 μs for physics evaluation. The spatial resolution will be better than 5 mm over 30 - 165 GHz, probing the scrape-off layer to the top of the pedestal in H-mode plasmas. An antenna configuration has been selected for measurements covering anticipated plasma elevations. Laboratory validation of diagnostic performance is underway using a LFSR transmission line (TL) mockup. The 40-meter TL includes circular corrugated waveguide, length calibration feature, Gaussian telescope, vacuum windows, containment membranes, and expansion joint. Transceiver modules coupled to the input of the TL provide frequency-modulated (FM) data for evaluation of performance as a monostatic reflectometer. Results from the mockup tests are presented and show that, with some further optimization, the LFSR will meet or exceed the measurement requirements for ITER. An update of the LFSR instrumentation design status is also presented with preliminary test results. Work supported by PPPL under subcontract S013252-A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Liu, Fang; Martínez, Todd J., E-mail: toddjmartinez@gmail.com
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this “difference self-consistent field (dSCF)” picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TERACHEM SCF implementation.
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.; Goodyer, Michael J.
1988-01-01
Following the realization that a simple iterative strategy for bringing the flexible walls of two-dimensional test sections to streamline contours was too slow for practical use, Judd proposed, developed, and placed into service what was the first Predictive Strategy. The Predictive Strategy reduced by 75 percent or more the number of iterations of wall shapes, and therefore the tunnel run-time overhead attributable to the streamlining process, required to reach satisfactory streamlines. The procedures of the Strategy are embodied in the FORTRAN subroutine WAS (standing for Wall Adjustment Strategy) which is written in general form. The essentials of the test section hardware, followed by the underlying aerodynamic theory which forms the basis of the Strategy, are briefly described. The subroutine is then presented as the Appendix, broken down into segments with descriptions of the numerical operations underway in each, with definitions of variables.
Reed Solomon codes for error control in byte organized computer memory systems
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
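For the single-error case, "directly from the syndrome" means table lookups only: with syndromes S0 = Σ cᵢ and S1 = Σ cᵢαⁱ, the error location is log(S1/S0) and the error value is S0 itself. A toy GF(256) illustration (field polynomial 0x11d; the codeword construction and length are invented for the demo, and real RS memory codes use shortened or extended variants rather than this layout):

```python
# GF(256) log/antilog tables for the primitive polynomial 0x11d.
EXP = [0] * 512
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def syndromes(word):
    """S_k = sum_i c_i * alpha^(i*k) for k = 0, 1 (enough for one error)."""
    return [
        __import__("functools").reduce(
            lambda acc, ic: acc ^ gf_mul(ic[1], EXP[(ic[0] * k) % 255]),
            enumerate(word), 0)
        for k in (0, 1)
    ]

def correct_single_error(received):
    """Direct syndrome decoding: location = log(S1/S0), value = S0."""
    s0, s1 = syndromes(received)
    if s0 == 0 and s1 == 0:
        return list(received)          # no error detected
    pos = (LOG[s1] - LOG[s0]) % 255
    fixed = list(received)
    fixed[pos] ^= s0                   # the error value is S0 itself
    return fixed

# Build a toy length-10 codeword: 8 data symbols plus two "parity" symbols
# chosen so that both syndromes vanish.
data = [17, 42, 99, 7, 150, 201, 33, 64]
A = 0
B = 0
for i, c in enumerate(data, start=2):
    A ^= c
    B ^= gf_mul(c, EXP[i])
c1 = gf_mul(A ^ B, EXP[255 - LOG[3]])  # solves c1 * (1 + alpha) = A + B
c0 = A ^ c1
codeword = [c0, c1] + data
assert syndromes(codeword) == [0, 0]
```

Corrupting any single byte and running `correct_single_error` restores the codeword with a handful of XORs and table lookups, which is the kind of constant-time decoding the techniques above target.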
Menshikov, Ivan S; Shklover, Alexsandr V; Babkina, Tatiana S; Myagkov, Mikhail G
2017-01-01
In this research, the social behavior of the participants in a Prisoner's Dilemma laboratory game is explained on the basis of the quantal response equilibrium concept and the representation of the game in Markov strategies. In previous research, we demonstrated that social interaction during the experiment has a positive influence on cooperation, trust, and gratefulness. This research shows that the quantal response equilibrium concept agrees only with the results of experiments on cooperation in Prisoner's Dilemma prior to social interaction. However, quantal response equilibrium does not explain participants' behavior after social interaction. As an alternative theoretical approach, we examined the iterated Prisoner's Dilemma game in Markov strategies. We built a totally mixed Nash equilibrium in this game; the equilibrium agrees with the results of the experiments both before and after social interaction.
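A Markov strategy in the iterated Prisoner's Dilemma conditions the cooperation probability only on the previous round's outcome. A minimal simulator of such strategy pairs (the payoffs are the standard 3/0/5/1 values; the example strategies and round counts are illustrative and not the equilibrium constructed in the paper):

```python
import random

random.seed(0)

# A Markov strategy: P(cooperate | previous outcome), keyed by
# (my previous move, opponent's previous move). Values are illustrative.
tit_for_tat = {("C", "C"): 1.0, ("C", "D"): 0.0, ("D", "C"): 1.0, ("D", "D"): 0.0}
generous    = {("C", "C"): 1.0, ("C", "D"): 0.3, ("D", "C"): 1.0, ("D", "D"): 0.1}

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(s1, s2, rounds=10_000):
    """Simulate two Markov strategies; both open with cooperation.
    Returns (mean payoff 1, mean payoff 2, overall cooperation rate)."""
    m1, m2 = "C", "C"
    t1 = t2 = coop = 0
    for _ in range(rounds):
        n1 = "C" if random.random() < s1[(m1, m2)] else "D"
        n2 = "C" if random.random() < s2[(m2, m1)] else "D"
        p1, p2 = PAYOFF[(n1, n2)]
        t1 += p1
        t2 += p2
        coop += (n1 == "C") + (n2 == "C")
        m1, m2 = n1, n2
    return t1 / rounds, t2 / rounds, coop / (2 * rounds)
```

Long-run cooperation rates from such simulations are exactly the quantities that a totally mixed equilibrium in Markov strategies predicts and that the laboratory data can be compared against.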
Field tests of a participatory ergonomics toolkit for Total Worker Health
Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert
2018-01-01
Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two-committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants. PMID:28166897
A new approach to the human muscle model.
Baildon, R W; Chapman, A E
1983-01-01
Hill's (1938) two-component muscle model is used as the basis for digital computer simulation of human muscular contraction by means of an iterative process. The contractile (CC) and series elastic (SEC) components are lumped components of structures which produce and transmit torque to the external environment. The CC is described in angular terms along four dimensions as a series of non-planar torque-angle-angular velocity surfaces stacked on top of each other, each surface being appropriate to a given level of muscular activation. The SEC is described similarly along dimensions of torque, angular stretch, overall muscle angular displacement and activation. The iterative process introduces negligible error and allows the mechanical outcome of a variety of normal muscular contractions to be evaluated parsimoniously. The model allows analysis of many aspects of muscle behaviour as well as optimization studies. Definition of relevant relations should also allow reproduction and prediction of the outcome of contractions in individuals.
Knobology in use: an experimental evaluation of ergonomics recommendations.
Overgård, Kjell Ivar; Fostervold, Knut Inge; Bjelland, Hans Vanhauwaert; Hoff, Thomas
2007-05-01
The scientific basis for ergonomics recommendations for controls has usually not been related to active goal-directed use. The present experiment tests how different knob sizes and torques affect operator performance. The task employed is to control a pointer by means of a control knob, and is as such an experimentally defined goal-directed task relevant to machine systems in general. Duration of use, error associated with use (overshooting of the goal area) and movement reproduction were used as performance measures. Significant differences between knob sizes were found for movement reproduction. High torques led to less overshooting than low torques. The results for duration of use showed a tendency for the differences between knob sizes to be reduced from the first iteration to the second. The present results indicate that the ergonomically recommended ranges of knob sizes might affect operator performance differently.
Communication: A difference density picture for the self-consistent field ansatz.
Parrish, Robert M; Liu, Fang; Martínez, Todd J
2016-04-07
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting]
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error-locator polynomial.
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
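The iterative proxy scheme described can be sketched in miniature: fit a radial basis function (RBF) interpolant to the expensive function's samples, optimize the cheap proxy instead, evaluate the true function at the proposed point, and refit. A self-contained 1-D Python sketch; the Gaussian kernel, grid search, and toy objective are illustrative assumptions, whereas the paper's setting is a many-variable reservoir model maximizing NPV:

```python
import math

def phi(r, eps=0.5):
    """Gaussian RBF kernel (an illustrative choice of basis function)."""
    return math.exp(-(r / eps) ** 2)

def solve(A, b):
    """Dense linear solve by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf_fit(xs, ys):
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    return solve(A, ys)

def rbf_eval(xs, w, x):
    return sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

def proxy_minimize(f, lo, hi, n_init=5, n_iter=6):
    """Iterative proxy loop: fit RBF proxy, minimize it, evaluate f, refit."""
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        w = rbf_fit(xs, ys)
        grid = [lo + (hi - lo) * i / 200 for i in range(201)]
        x_new = min(grid, key=lambda x: rbf_eval(xs, w, x))
        if any(abs(x_new - x) < 1e-2 for x in xs):
            break                      # proposal duplicates an existing sample
        xs.append(x_new)
        ys.append(f(x_new))
    i = min(range(len(ys)), key=lambda i: ys[i])
    return xs[i], ys[i]
```

Each loop iteration spends only one true function evaluation, which is the point of the proxy: the expensive simulator is called sparingly while the cheap interpolant absorbs the search effort.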
Interaction potentials and transport properties of Ba, Ba+, and Ba2+ in rare gases from He to Xe
NASA Astrophysics Data System (ADS)
Buchachenko, Alexei A.; Viehland, Larry A.
2018-04-01
A highly accurate, consistent set of ab initio interaction potentials is obtained for the title systems at the coupled cluster with singles, doubles, and non-iterative triples level of theory with extrapolation to the complete basis set limit. These potentials are shown to be more reliable than the previous potentials based on their long-range behavior, equilibrium properties, collision cross sections, and transport properties.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
Iterative learning control with applications in energy generation, lasers and health care
Tutty, O. R.
2016-01-01
Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor, and then return for the next one, and so on. Iterative learning control was developed specifically for systems with this mode of operation, and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care as exemplars to demonstrate its applicability. PMID:27713654
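The core idea of iterative learning control, updating the input for the next repetition with the error recorded on the current one, can be shown on a toy first-order plant. The plant, horizon, and unit learning gain below are illustrative assumptions, not any of the paper's wind-turbine, laser, or health-care applications:

```python
def run_trial(u, a=0.9):
    """Toy repeated plant: x[t+1] = a*x[t] + u[t], output y[t] = x[t+1], x[0] = 0."""
    x, y = 0.0, []
    for ut in u:
        x = a * x + ut
        y.append(x)
    return y

def ilc(ref, trials=12, gamma=1.0):
    """P-type ILC: after each full trial, update u <- u + gamma * (ref - y)."""
    u = [0.0] * len(ref)
    errs = []
    for _ in range(trials):
        y = run_trial(u)
        e = [r - yt for r, yt in zip(ref, y)]
        errs.append(max(abs(et) for et in e))
        u = [ut + gamma * et for ut, et in zip(u, e)]
    return u, errs
```

Because this plant's first Markov parameter is 1, a unit learning gain makes the trial-to-trial error map strictly lower triangular (nilpotent), so tracking becomes essentially exact after at most as many trials as there are time steps; in general the gain must be chosen for trial-to-trial contraction.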
NASA Astrophysics Data System (ADS)
Pauldrach, A. W. A.; Hoffmann, T. L.; Hultzsch, P. J. N.
2014-09-01
Context. In type Ia supernova (SN Ia) envelopes a huge number of lines of different elements overlap within their thermal Doppler widths, and this problem is exacerbated by the circumstance that up to 20% of these lines can have a line optical depth higher than 1. The stagnation of the lambda iteration in such optically thick cases is one of the fundamental physical problems inherent in the iterative solution of the non-LTE problem, and the failure of a lambda iteration to converge is a point of crucial importance whose physical significance must be understood completely. Aims: We discuss a general problem related to radiative transfer under the physical conditions of supernova ejecta that involves a failure of the usual non-LTE iteration scheme to converge when multiple strong opacities belonging to different physical transitions come together, similar to the well-known situation where convergence is impaired even when only a single process attains high optical depths. The convergence problem is independent of the chosen frequency and depth grid spacing, independent of whether the radiative transfer is solved in the comoving or observer's frame, and independent of whether a common complete-linearization scheme or a conventional accelerated lambda iteration (ALI) is used. The problem appears when the millions of line transitions required for a realistic description of SN Ia envelopes are treated in the frame of a comprehensive non-LTE model. The only solution to this problem is a complete-linearization approach that considers all ions of all elements simultaneously, or an adequate generalization of the established ALI technique that accounts for the mutual interaction of the strong spectral lines of different elements and thereby unfreezes the "stuck" state of the iteration. Methods: The physics of the atmospheres of SNe Ia is strongly affected by the high-velocity expansion of the ejecta, which dominates the formation of the spectra at all wavelength ranges.
Thus, hydrodynamic explosion models and realistic model atmospheres that take into account the strong deviation from local thermodynamic equilibrium (LTE) are necessary for the synthesis and analysis of the spectra. In this regard one of the biggest challenges we have found in modeling the radiative transfer in SN Ia is the fact that the radiative energy in the UV has to be transferred only via spectral lines into the optical regime to be able to leave the ejecta. However, convergence of the model toward a state where this is possible is impaired when using the standard procedures. We report on improvements in our approach of computing synthetic spectra for SN Ia with respect to (i) an improved and sophisticated treatment of many thousands of strong lines that interact intricately with the "pseudo-continuum" formed entirely by Doppler-shifted spectral lines; (ii) an improved and expanded atomic database; and (iii) the inclusion of energy deposition within the ejecta arising from the radioactive decay of mostly 56Ni and 56Co. Results: We show that an ALI procedure we have developed for the mutual interaction of strong spectral lines appearing in the atmospheres of SNe Ia solves the long-standing problem of transferring the radiative energy from the UV into the optical regime. Our new method thus constitutes a foundation for more refined models, such as those including energy deposition. In this regard we furthermore show synthetic spectra obtained with various methods adopted for the released energy and compare them with observations. We discuss in detail applications of the diagnostic technique using the example of a standard type Ia supernova, where the comparison of calculated and observed spectra revealed that in the early phases the consideration of the energy deposition within the spectrum-forming regions of the ejecta does not qualitatively alter the shape of the emergent spectra.
Conclusions: The results of our investigation lead to an improved understanding of how the shape of the spectrum changes radically as function of depth in the ejecta, and show how different emergent spectra are formed as a result of the particular physical properties of SNe Ia ejecta and the resulting peculiarities in the radiative transfer. This knowledge provides an important insight into the process of extracting information from observed SN Ia spectra, since these spectra are a complex product of numerous unobservable SN Ia spectral features, which are thus analyzed in parallel to the observable SN Ia spectral features.
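The stagnation discussed above, and its cure by preconditioning with an approximate (here diagonal) lambda operator, can be demonstrated on the classical toy two-level-atom problem S = (1 - eps) * Lambda[S] + eps * B. The kernel, grid, and parameters in this Python sketch are illustrative assumptions, not the paper's supernova models:

```python
import math

N, eps, B = 20, 1e-2, 1.0   # depth points, photon destruction probability, Planck term

# Toy lambda operator: a sharply peaked kernel (mimicking optically thick,
# nearly local coupling), normalized so each row sums to 0.95 (5% escape).
L = [[math.exp(-abs(i - j) / 0.3) for j in range(N)] for i in range(N)]
for i in range(N):
    s = sum(L[i])
    L[i] = [0.95 * v / s for v in L[i]]

def formal_solution(S):
    """One formal-solution step: S_FS = (1 - eps) * Lambda[S] + eps * B."""
    return [(1 - eps) * sum(L[i][j] * S[j] for j in range(N)) + eps * B
            for i in range(N)]

def iterate(ali, tol=1e-8, max_it=100000):
    S = [eps * B] * N
    for n in range(1, max_it + 1):
        FS = formal_solution(S)
        if ali:
            # ALI: precondition the correction with the diagonal of Lambda
            S_new = [S[i] + (FS[i] - S[i]) / (1.0 - (1 - eps) * L[i][i])
                     for i in range(N)]
        else:
            S_new = FS   # ordinary lambda iteration: stagnates when diag(Lambda) ~ 1
        if max(abs(a - b) for a, b in zip(S_new, S)) < tol:
            return S_new, n
        S = S_new
    return S, max_it
```

The preconditioned iteration reaches the same fixed point in far fewer sweeps; the generalization developed in the paper extends this operator-splitting idea so that overlapping strong lines of different elements are accounted for jointly rather than one process at a time.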
DOE Office of Scientific and Technical Information (OSTI.GOV)
Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates.
Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
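The restricted-smoothing construction can be sketched in 1-D: start each prolongation column as the indicator of its coarse block, apply damped Jacobi smoothing restricted to a local support region, and rescale rows so the basis functions remain a partition of unity. The 1-D Poisson operator, grid sizes, and iteration count in this Python sketch are illustrative assumptions, a simplified stand-in for the actual MsRSB algorithm:

```python
n, nb = 20, 4   # fine cells, coarse blocks (5 cells per block)
block = [i // (n // nb) for i in range(n)]

def A_mul(v):
    """1-D Poisson-type fine-scale operator A = tridiag(-1, 2, -1)."""
    out = []
    for i in range(n):
        s = 2.0 * v[i]
        if i > 0:
            s -= v[i - 1]
        if i < n - 1:
            s -= v[i + 1]
        out.append(s)
    return out

# initial prolongation: indicator function of each coarse block
P = [[1.0 if block[i] == j else 0.0 for j in range(nb)] for i in range(n)]
# localization: each basis function restricted to its block plus neighbors
supp = [[abs(block[i] - j) <= 1 for j in range(nb)] for i in range(n)]

omega = 2.0 / 3.0                       # Jacobi damping, D = 2*I here
for _ in range(100):
    for j in range(nb):
        col = [P[i][j] for i in range(n)]
        Ac = A_mul(col)
        for i in range(n):
            P[i][j] = col[i] - omega * Ac[i] / 2.0 if supp[i][j] else 0.0
    for i in range(n):                  # restore the partition of unity
        s = sum(P[i])
        for j in range(nb):
            P[i][j] /= s
```

After smoothing, the columns of P are localized, smooth basis functions that still sum to one at every fine cell, which is what lets mobility updates reuse and merely re-smooth existing operators instead of rebuilding them.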
ITER Simulations Using the PEDESTAL Module in the PTRANSP Code
NASA Astrophysics Data System (ADS)
Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.
2006-10-01
PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that the fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide the boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A.H. Kritz, Plasma Physics and Controlled Fusion 45, 1939 (2003). [2] T. Onjun, G. Bateman, A.H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).
Takahashi, K; Kajiwara, K; Oda, Y; Kasugai, A; Kobayashi, N; Sakamoto, K; Doane, J; Olstad, R; Henderson, M
2011-06-01
High power, long pulse millimeter (mm) wave experiments at the RF test stand (RFTS) of the Japan Atomic Energy Agency (JAEA) were performed. The RFTS has an ITER-relevant configuration: a 1 MW/170 GHz gyrotron, a long- and short-distance transmission line (TL), and an equatorial launcher (EL) mock-up. The TL is composed of a matching optics unit, evacuated circular corrugated waveguides, six miter bends, an in-line waveguide switch, and an isolation valve. The EL mock-up is fabricated according to the current design of the ITER launcher. Gaussian-like beam radiation from the EL mock-up, with a steering capability of 20°-40°, was also successfully verified. The high power, long pulse power transmission test was conducted with the metallic load replaced by the EL mock-up, and transmission of 1 MW/800 s and 0.5 MW/1000 s was successfully demonstrated with no arcing and no damage. The transmission efficiency of the TL was 96%. The results prove the feasibility of the ITER electron cyclotron heating and current drive system. © 2011 American Institute of Physics
In-Vessel Tritium Retention and Removal in ITER-FEAT
NASA Astrophysics Data System (ADS)
Federici, G.; Brooks, J. N.; Iseli, M.; Wu, C. H.
Erosion of the divertor and first-wall plasma-facing components, tritium uptake in the re-deposited films, and direct implantation in the armour material surfaces surrounding the plasma, represent crucial physical issues that affect the design of future fusion devices. In this paper we present the derivation, and discuss the results, of current predictions of tritium inventory in ITER-FEAT due to co-deposition and implantation and their attendant uncertainties. The current armour materials proposed for ITER-FEAT are beryllium on the first-wall, carbon-fibre-composites on the divertor plate near the separatrix strike points, to withstand the high thermal loads expected during off-normal events, e.g., disruptions, and tungsten elsewhere in the divertor. Tritium co-deposition with chemically eroded carbon in the divertor, and possibly with some Be eroded from the first-wall, is expected to represent the dominant mechanism of in-vessel tritium retention in ITER-FEAT. This demands efficient in-situ methods of mitigation and retrieval to avoid frequent outages due to the reaching of precautionary operating limits set by safety considerations (e.g., ~350 g of in-vessel co-deposited tritium) and for fuel economy reasons. Priority areas where further R&D work is required to narrow the remaining uncertainties are also briefly discussed.
On the breakdown modes and parameter space of Ohmic Tokamak startup
NASA Astrophysics Data System (ADS)
Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni
2017-10-01
A tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into fully ionized plasma is called tokamak startup. Even after over 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulations but by trial and error. In recent years, however, the problem has drawn much attention owing to one of the challenges faced by ITER: the maximum electric field for startup cannot exceed 0.3 V/m, which narrows the parameter range for successful startup. Moreover, the physical mechanism is far from being understood either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, even for ITER. We have found three situations during the discharge, as a function of the initial parameters: no breakdown, breakdown, and runaway. Moreover, the breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown in ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.
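The breakdown condition behind such startup studies is often first estimated with the Townsend avalanche coefficient alpha = A*p*exp(-B*p/E) (p in Torr), for which the avalanche-maximizing pre-fill pressure at a given electric field is p = E/B. A small Python sketch; the hydrogen coefficients are values commonly quoted in the tokamak startup literature (e.g., Lloyd et al.), and the scan is a textbook-style estimate, not an output of the paper's particle-in-cell model:

```python
import math

# First Townsend coefficient for hydrogen: alpha = A*p*exp(-B*p/E),
# with p in Torr and E in V/m (coefficients commonly quoted in the
# tokamak startup literature; treat them as assumptions here).
A = 510.0     # ionizations per (m * Torr)
B = 1.25e4    # V per (m * Torr)

def alpha(p_torr, E):
    return A * p_torr * math.exp(-B * p_torr / E)

def optimal_pressure(E, lo=1e-7, hi=1e-3, steps=100000):
    """Brute-force scan for the pressure that maximizes the Townsend coefficient."""
    ps = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(ps, key=lambda p: alpha(p, E))

E = 0.3                      # V/m, the ITER limit quoted in the abstract
p_opt = optimal_pressure(E)  # analytic optimum: d(alpha)/dp = 0  =>  p = E/B
```

At 0.3 V/m this gives p_opt = 2.4e-5 Torr, roughly 3e-3 Pa, i.e. the same order of magnitude as the simulated optimum of 0.001 Pa quoted above; the kinetic simulation refines exactly this kind of zero-dimensional estimate.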
Kessel, C. E.; Poli, F. M.; Ghantous, K.; ...
2015-01-01
Here, the advanced physics and advanced technology tokamak power plant ARIES-ACT1 has a major radius of 6.25 m at an aspect ratio of 4.0, toroidal field of 6.0 T, strong shaping with elongation of 2.2, and triangularity of 0.63. The broadest pressure cases reached wall-stabilized βN ~ 5.75, limited by the n = 3 external kink mode requiring a conducting shell at b/a = 0.3, requiring plasma rotation, feedback, and/or kinetic stabilization. The medium pressure peaking case reaches βN = 5.28 with BT = 6.75, while the peaked pressure case reaches βN < 5.15. Fast particle magnetohydrodynamic stability shows that the alpha particles are unstable, but this leads to redistribution to larger minor radius rather than loss from the plasma. Edge and divertor plasma modeling shows that 75% of the power to the divertor can be radiated with an ITER-like divertor geometry, while >95% can be radiated in a stable detached mode with an orthogonal target and wide slot geometry. The bootstrap current fraction is 91% with a q95 of 4.5, requiring ~1.1 MA of external current drive. This current is supplied with 5 MW of ion cyclotron radio frequency/fast wave and 40 MW of lower hybrid current drive. Electron cyclotron is most effective for safety factor control over ρ ~ 0.2 to 0.6 with 20 MW. The pedestal density is ~0.9×10^20/m^3, and the temperature is ~4.4 keV. The H98 factor is 1.65, n/nGr = 1.0, and the ratio of net power to threshold power is 2.8 to 3.0 in the flattop.
Developing Validity Evidence for the Written Pediatric History and Physical Exam Evaluation Rubric.
King, Marta A; Phillipi, Carrie A; Buchanan, Paula M; Lewin, Linda O
The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of and validity evidence for the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps. Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients. Cronbach α was used to report consistency and Spearman rank-order correlations to determine relationships between rubric items. Raters provided a global assessment, recorded the time to review and score each H&P, and completed a rubric utility survey. The overall intraclass correlation was 0.85, indicating adequate IRR. Global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs. senior resident), note format (electronic health record vs. nonelectronic), or student diagnostic accuracy. Cronbach α was 0.93. The highest correlation between an individual item and the total score was for assessment (0.84); the highest interitem correlation was between assessment and differential diagnosis (0.78). Mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as "good" or "very good" and endorsed continued use. The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Covariate selection with iterative principal component analysis for predicting physical
USDA-ARS?s Scientific Manuscript database
Local and regional soil data can be improved by coupling new digital soil mapping techniques with high resolution remote sensing products to quantify both spatial and absolute variation of soil properties. The objective of this research was to advance data-driven digital soil mapping techniques for ...
A MOOC Based on Blended Pedagogy
ERIC Educational Resources Information Center
Rayyan, S.; Fredericks, C.; Colvin, K. F.; Liu, A.; Teodorescu, R.; Barrantes, A.; Pawl, A.; Seaton, D. T.; Pritchard, D. E.
2016-01-01
We describe three iterations of a Massive Open Online Course (MOOC) developed from online preparation materials for a reformed introductory physics classroom at the Massachusetts Institute of Technology, in which the teaching staff interact with small groups of students doing problems using an expert problem-solving pedagogy. The MOOC contains an…
Developing an Action Concept Inventory
ERIC Educational Resources Information Center
McGinness, Lachlan P.; Savage, C. M.
2016-01-01
We report on progress towards the development of an Action Concept Inventory (ACI), a test that measures student understanding of action principles in introductory mechanics and optics. The ACI also covers key concepts of many-paths quantum mechanics, from which classical action physics arises. We used a multistage iterative development cycle for…
Pulsed power accelerator for material physics experiments
Reisman, D. B.; Stoltzfus, B. S.; Stygar, W. A.; ...
2015-09-01
We have developed the design of Thor: a pulsed power accelerator that delivers a precisely shaped current pulse with a peak value as high as 7 MA to a strip-line load. The peak magnetic pressure achieved within a 1-cm-wide load is as high as 100 GPa. Thor is powered by as many as 288 decoupled and transit-time isolated bricks. Each brick consists of a single switch and two capacitors connected electrically in series. The bricks can be individually triggered to achieve a high degree of current pulse tailoring. Because the accelerator is impedance matched throughout, capacitor energy is delivered tomore » the strip-line load with an efficiency as high as 50%. We used an iterative finite element method (FEM), circuit, and magnetohydrodynamic simulations to develop an optimized accelerator design. When powered by 96 bricks, Thor delivers as much as 4.1 MA to a load, and achieves peak magnetic pressures as high as 65 GPa. When powered by 288 bricks, Thor delivers as much as 6.9 MA to a load, and achieves magnetic pressures as high as 170 GPa. We have developed an algebraic calculational procedure that uses the single brick basis function to determine the brick-triggering sequence necessary to generate a highly tailored current pulse time history for shockless loading of samples. Thor will drive a wide variety of magnetically driven shockless ramp compression, shockless flyer plate, shock-ramp, equation of state, material strength, phase transition, and other advanced material physics experiments.« less
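The "single brick basis function" superposition idea can be sketched as a greedy fit: each brick contributes one fixed unit waveform, and trigger times are chosen one brick at a time to best reduce the remaining mismatch with the desired current history. The waveform shape, target pulse, and greedy selection in this Python sketch are illustrative assumptions, not Thor's actual algebraic procedure:

```python
import math

def brick(t, T=1.0):
    """Hypothetical single-brick current waveform (critically damped rise/decay)."""
    return (t / T) * math.exp(1.0 - t / T) if t > 0 else 0.0

ts = [0.05 * k for k in range(200)]   # time grid, arbitrary units
# toy target: itself a superposition of three staggered brick waveforms
target = [brick(t) + brick(t - 1.0) + brick(t - 2.0) for t in ts]

def greedy_triggers(max_bricks, candidates):
    """Assign trigger times one brick at a time, each minimizing the residual."""
    resid = target[:]
    triggers = []
    for _ in range(max_bricks):
        cur = sum(r * r for r in resid)
        best, best_cost = None, cur
        for tau in candidates:
            cost = sum((r - brick(t - tau)) ** 2 for r, t in zip(resid, ts))
            if cost < best_cost:
                best, best_cost = tau, cost
        if best is None:              # no additional brick improves the fit
            break
        triggers.append(best)
        resid = [r - brick(t - best) for r, t in zip(resid, ts)]
    return triggers, resid

triggers, resid = greedy_triggers(6, [0.25 * k for k in range(20)])
```

Because each brick's amplitude is fixed by its hardware, only the firing times are free, which is exactly why a shaped pulse must be built from staggered copies of one basis waveform.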
NASA Astrophysics Data System (ADS)
Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong
2014-10-01
A methodology is described wherein a calibrated model-based 'Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/MDP/mask/silicon lithography flow. The important potential sources of variation we focus on here originate in VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood of costly long-loop iterations between OPC, MDP, and wafer fabrication flows. The paper moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification of the OPC, choice of mask technology, or judicious design of VSB shots and dose assignment.
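The "edge log-slope" analysis mentioned above carries over directly from optical aerial-image metrology: for a blurred edge, the log slope d(ln D)/dx of the deposited dose D at the printing threshold measures how steeply the dose crosses it, and hence how robust the printed edge is to dose variation. A minimal Python sketch with a Gaussian beam-blur model; the blur width and the 0.5 threshold are illustrative assumptions:

```python
import math

SIGMA = 0.025   # hypothetical e-beam blur (arbitrary length units)

def dose(x):
    """Deposited dose across an ideal mask edge at x = 0, blurred by a Gaussian."""
    return 0.5 * (1.0 + math.erf(x / (SIGMA * math.sqrt(2.0))))

def log_slope(x, h=1e-6):
    """Numerical d(ln dose)/dx: the edge log-slope used to grade edge robustness."""
    return (math.log(dose(x + h)) - math.log(dose(x - h))) / (2.0 * h)
```

A larger log-slope at the threshold means a given fractional dose error displaces the printed edge less, which is the quantity one would map across a layout to flag hotspot regions.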
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual-foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual-foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual-foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual-foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
Resistive edge mode instability in stellarator and tokamak geometries
NASA Astrophysics Data System (ADS)
Mahmood, M. Ansar; Rafiq, T.; Persson, M.; Weiland, J.
2008-09-01
Geometrical effects on the linear stability of electrostatic resistive edge modes are investigated in the three-dimensional Wendelstein 7-X stellarator [G. Grieger et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (International Atomic Energy Agency, Vienna, 1991), Vol. 3, p. 525] and in International Thermonuclear Experimental Reactor [Progress in the ITER Physics Basis, Nucl. Fusion 47, S1 (2007)]-like equilibria. An advanced fluid model is used for the ions together with the reduced Braginskii equations for the electrons. Using the ballooning mode representation, the drift wave problem is set up as an eigenvalue equation along a field line and is solved numerically using a standard shooting technique. A significantly larger magnetic shear and a less unfavorable normal curvature in the tokamak equilibrium are found to give a stronger finite-Larmor-radius stabilization and a narrower mode spectrum than in the stellarator. The effect of negative global magnetic shear in the tokamak is found to be stabilizing. The growth rate on a tokamak magnetic flux surface is found to be comparable to that on a stellarator surface with the same global magnetic shear, but the eigenfunction in the tokamak is broader than in the stellarator due to the presence of large negative local magnetic shear (LMS) on the tokamak surface. A large absolute value of the LMS in a region of unfavorable normal curvature is found to be stabilizing in the stellarator, while in the tokamak case, negative LMS is found to be stabilizing and positive LMS destabilizing.
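The shooting solution of an eigenvalue equation along the field line can be sketched with a model problem. Here the curvature and shear terms are replaced by a simple quadratic potential whose lowest eigenvalue is known to be 1, so the technique can be checked; none of the physical coefficients of the resistive edge-mode equation are included.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Model ballooning-type eigenvalue problem along the field line:
#   phi'' + (lam - theta**2) * phi = 0,  phi -> 0 as |theta| -> infinity.
# The quadratic "potential" stands in for curvature/shear terms; ground state lam = 1.
L = 6.0  # truncation of the infinite field-line domain

def shoot(lam):
    def rhs(theta, y):
        return [y[1], (theta**2 - lam) * y[0]]
    # Launch a tiny solution at theta = -L and integrate across the domain.
    sol = solve_ivp(rhs, [-L, L], [0.0, 1e-6], rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]          # value at theta = +L; zero at an eigenvalue

lam0 = brentq(shoot, 0.5, 1.5)   # bracket and refine the lowest eigenvalue
```

The sign of `shoot` flips as `lam` crosses an eigenvalue, which is what makes the bracketing root search work; the same structure carries over when the model potential is replaced by the geometry-dependent coefficients computed from the equilibrium.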
Scale-Up: Improving Large Enrollment Physics Courses
NASA Astrophysics Data System (ADS)
Beichner, Robert
1999-11-01
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.
WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Williamson, J; Siebers, J
2014-06-15
Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual-energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping-power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping-power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle, and bone, these errors translate into range prediction errors of less than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric-fit DECT models, BVM achieves comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
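A minimal sketch of the BVM chain (dual-energy measurements → mixing coefficients → electron density and mean excitation energy → Bethe stopping power) is below. The attenuation coefficients, basis-material properties, and I-values are illustrative numbers, not NIST/ICRU data, and the stopping power is computed only up to the constant Bethe prefactor.

```python
import numpy as np

# Illustrative BVM sketch with made-up attenuation coefficients (not NIST data):
# columns are the two basis materials (e.g. polystyrene, water) at 45 and 80 keV.
M = np.array([[0.28, 0.26],      # mu at 45 keV  [cm^-1]
              [0.21, 0.18]])     # mu at 80 keV
mu_meas = np.array([0.268, 0.192])          # "unknown" tissue at the two energies

c1, c2 = np.linalg.solve(M, mu_meas)        # BVM mixing coefficients

# Electron densities and mean excitation energies of the basis pair (assumed values).
rho_e = c1 * 3.4e23 + c2 * 3.34e23          # electrons / cm^3
lnI   = (c1 * 3.4e23 * np.log(68.7) + c2 * 3.34e23 * np.log(75.0)) / rho_e
I_ex  = np.exp(lnI)                          # eV, via Bragg additivity

# Bethe stopping power (leading term, constant prefactor omitted) for a 175 MeV proton.
beta2 = 1 - (938.272 / (938.272 + 175.0))**2         # (v/c)^2 from proton kinematics
S = rho_e * (np.log(2 * 511e3 * beta2 / (1 - beta2) / I_ex) - beta2) / beta2
```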
Medical image segmentation based on SLIC superpixels model
NASA Astrophysics Data System (ADS)
Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya
2017-01-01
Medical imaging has been widely used in clinical practice and is an important basis for medical experts to diagnose disease. However, medical images are affected by many unstable factors: complex imaging mechanisms, target displacement that causes reconstruction defects, partial volume effects that introduce error, and equipment wear, all of which greatly increase the complexity of subsequent image processing. A segmentation algorithm based on SLIC (Simple Linear Iterative Clustering) superpixels is used in the preprocessing stage to suppress reconstruction defects and noise by exploiting feature similarity. At the same time, the good clustering behaviour greatly reduces the complexity of the algorithm, providing an effective basis for rapid expert diagnosis.
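A minimal example of SLIC-based preprocessing with scikit-image (assuming `skimage` is available): cluster a synthetic noisy scan into superpixels and average intensities within each superpixel, which suppresses noise before segmentation proper. The image and parameter values are invented for illustration.

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic "scan": a bright disc (lesion stand-in) on a noisy background.
yy, xx = np.mgrid[:128, :128]
img = 0.3 + 0.5 * ((yy - 64)**2 + (xx - 64)**2 < 30**2)
img = img + 0.05 * np.random.default_rng(0).standard_normal(img.shape)
rgb = np.dstack([img, img, img])            # 3 channels so the default API applies

# SLIC clusters pixels by (color, position) similarity into compact superpixels;
# averaging within each superpixel suppresses noise while respecting boundaries.
labels = slic(rgb, n_segments=200, compactness=10, start_label=1)
means = np.array([img[labels == k].mean() for k in np.unique(labels)])
```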
NASA Astrophysics Data System (ADS)
Li, Jing; Singh, Chandralekha
2017-09-01
We discuss an investigation of the difficulties that students in a university introductory physics course have with the electric field and the superposition principle, and how that research was used as a guide in the development and evaluation of a research-validated tutorial on these topics to help students learn the concepts better. The tutorial uses a guided enquiry-based approach to learning and was refined through an iterative process of development and evaluation. During its development, we obtained feedback both from physics instructors who regularly teach introductory physics courses in which these concepts are taught and from students for whom the tutorial is intended. The iterative process continued, with the feedback incorporated into later versions of the tutorial, until the researchers were satisfied with the performance of a diverse group of introductory physics students on the post-test after they worked on the tutorial in an individual one-on-one interview situation. The final version of the tutorial was then administered in several sections of the university physics course after traditional instruction in the relevant concepts. We discuss the performance of students in the individual interviews, on the pre-test administered before the tutorial (but after traditional lecture-based instruction), and on the post-test administered after the tutorial. We also compare student performance in sections of the class in which students worked on the tutorial with other similar sections in which students learned only via traditional instruction. We find that students performed significantly better in the sections in which the tutorial was used than when the material was taught via lecture-based instruction alone.
SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Zhihong
Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EPs with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by the two concurrent SciDAC EP projects GSEP funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).
TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in increased dose as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections, preserving structural information. Methods: We first reconstruct a CT image from the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of the CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method, and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the CT reconstruction mean errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains decomposed material images of similar quality to FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan is sufficient to provide DECT images and electron density of similar quality to two full scans. Our future work includes more phantom studies to validate the performance of the method.
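The role of the edge prior can be illustrated with a 1D stand-in: a quadratic data term plus a gradient penalty whose weights are lowered at edge locations known from the full scan, so smoothing suppresses noise without blurring the known structures. This is a simplified surrogate for the paper's CS reconstruction, with a made-up signal and weights.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# 1D stand-in for the second, few-view scan: noisy piecewise-constant profile.
rng = np.random.default_rng(1)
truth = np.r_[np.zeros(50), np.ones(50), 0.4 * np.ones(50)]
noisy = truth + 0.1 * rng.standard_normal(truth.size)

# Edge weights from the "full scan": small weight where an edge is known,
# so the smoothing penalty is relaxed there and structures are preserved.
w = np.ones(truth.size - 1)
w[[49, 99]] = 1e-3                      # known edge locations (prior scan)

D = diags([-np.ones(truth.size - 1), np.ones(truth.size - 1)],
          [0, 1], shape=(truth.size - 1, truth.size))
lam = 20.0
# Solve (I + lam * D^T W D) x = noisy  --  edge-weighted regularized recon.
A = identity(truth.size) + lam * D.T @ diags(w) @ D
x = spsolve(A.tocsc(), noisy)
```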
Developing and validating advanced divertor solutions on DIII-D for next-step fusion devices
NASA Astrophysics Data System (ADS)
Guo, H. Y.; Hill, D. N.; Leonard, A. W.; Allen, S. L.; Stangeby, P. C.; Thomas, D.; Unterberg, E. A.; Abrams, T.; Boedo, J.; Briesemeister, A. R.; Buchenauer, D.; Bykov, I.; Canik, J. M.; Chrobak, C.; Covele, B.; Ding, R.; Doerner, R.; Donovan, D.; Du, H.; Elder, D.; Eldon, D.; Lasa, A.; Groth, M.; Guterl, J.; Jarvinen, A.; Hinson, E.; Kolemen, E.; Lasnier, C. J.; Lore, J.; Makowski, M. A.; McLean, A.; Meyer, B.; Moser, A. L.; Nygren, R.; Owen, L.; Petrie, T. W.; Porter, G. D.; Rognlien, T. D.; Rudakov, D.; Sang, C. F.; Samuell, C.; Si, H.; Schmitz, O.; Sontag, A.; Soukhanovskii, V.; Wampler, W.; Wang, H.; Watkins, J. G.
2016-12-01
A major challenge facing the design and operation of next-step high-power steady-state fusion devices is to develop a viable divertor solution with order-of-magnitude increases in power handling capability relative to present experience, while having acceptable divertor target plate erosion and being compatible with maintaining good core plasma confinement. A new initiative has been launched on DIII-D to develop the scientific basis for design, installation, and operation of an advanced divertor to evaluate boundary plasma solutions applicable to next-step fusion experiments beyond ITER. Developing the scientific basis for fusion reactor divertor solutions must necessarily follow three lines of research, which we plan to pursue in DIII-D: (1) Advance scientific understanding and predictive capability through development and comparison between state-of-the-art computational models and enhanced measurements using targeted parametric scans; (2) Develop and validate key divertor design concepts and codes through innovative variations in physical structure and magnetic geometry; (3) Assess candidate materials, determining the implications for core plasma operation and control, and develop mitigation techniques for any deleterious effects, incorporating development of plasma-material interaction models. These efforts will lead to design, installation, and evaluation of an advanced divertor for DIII-D to enable highly dissipative divertor operation at the core density (n_e/n_GW), neutral fueling, and impurity influx most compatible with high-performance plasma scenarios and reactor-relevant plasma facing components (PFCs). This paper highlights the current progress and near-term strategies of boundary/PMI research on DIII-D.
Use of LANDSAT imagery for wildlife habitat mapping in northeast and east central Alaska
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Two scenes were analyzed by applying an iterative cluster analysis to a 2% random data sample and then using the resulting clusters as a training-set basis for maximum likelihood classification. Twenty-six and twenty-seven categorical classes, respectively, resulted from this process. The majority of classes in each case were quite specific vegetation types; each of these types has specific value as moose habitat.
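The two-stage scheme (iterative clustering of a small random sample, then maximum-likelihood classification of the full scene) can be sketched with a toy two-band dataset. The spectral classes, sample size, and the equal-covariance (nearest-center) likelihood are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Toy 2-band "scene": two spectral classes, analogous to vegetation types.
rng = np.random.default_rng(0)
scene = np.vstack([rng.normal([0.2, 0.6], 0.05, (500, 2)),
                   rng.normal([0.7, 0.3], 0.05, (500, 2))])

# Step 1: iterative cluster analysis on a small random sample -> training set.
np.random.seed(1)  # makes the k-means++ initialization reproducible
sample = scene[rng.choice(len(scene), 40, replace=False)]
centers, _ = kmeans2(sample, 2, minit='++')

# Step 2: maximum-likelihood classification of every pixel; with equal Gaussian
# covariances this reduces to assigning each pixel to the nearest cluster center.
d2 = ((scene[:, None, :] - centers[None, :, :])**2).sum(-1)
labels = d2.argmin(1)
```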
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anzai, Chihaya; Hasselhuhn, Alexander; Höschele, Maik
We compute the contribution to the total cross section for the inclusive production of a Standard Model Higgs boson induced by two quarks with different flavour in the initial state. Our calculation is exact in the Higgs boson mass and the partonic center-of-mass energy. Here, we describe the reduction to master integrals, the construction of a canonical basis, and the solution of the corresponding differential equations. Our analytic result contains both Harmonic Polylogarithms and iterated integrals with additional letters in the alphabet.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter ...
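The iterative soft-thresholding idea can be sketched with plain ISTA on a synthetic underdetermined problem; the orthonormal wavelet transform is folded into the forward matrix here, so the unknown coefficient vector is thresholded directly. The problem sizes and the regularization parameter are illustrative choices.

```python
import numpy as np

# ISTA sketch: minimize 0.5*||A x - y||^2 + lam*||x||_1, with x the coefficient
# vector in an orthonormal (e.g. wavelet) basis folded into the matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # underdetermined "tomography"
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
y = A @ x_true

lam, step = 0.02, 1.0 / np.linalg.norm(A, 2)**2
x = np.zeros(100)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)    # soft-threshold
```

The soft-threshold step is exactly the proximal operator of the l1 penalty, which is why sparsity-promoting regularization reduces to repeated thresholding of the coefficients.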
2016-09-01
...iterations in that time for the student practitioners to work through. When possible, case studies will be selected from actual counter-radicalizations...justify participation in the learning organization. Those cases will be evaluated on a case-by-case basis and the need to expand the CVE mission...interested within the learning organization. The National Fire Academy Executive Fire Officer Program applied research pre-course is an example of
Orbital-Dependent Density Functionals for Chemical Catalysis
2011-02-16
E2 and SN2 Reactions: Effects of the Choice of Density Functional, Basis Set, and Self-Consistent Iterations," Y. Zhao and D. G. Truhlar, Journal...for the anti-E2, syn-E2, and SN2 pathways of the reactions of F- and Cl- with CH3CH2F and
Calculation of Moment Matrix Elements for Bilinear Quadrilaterals and Higher-Order Basis Functions
2016-01-06
methods are known as boundary integral equation (BIE) methods and the present study falls into this category. The numerical solution of the BIE is...iterated integrals. The inner integral involves the product of the free-space Green's function for the Helmholtz equation multiplied by an appropriate...Website: http://www.wipl-d.com/ 5. Y. Zhang and T. K. Sarkar, Parallel Solution of Integral Equation-Based EM Problems in the Frequency Domain. New
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
The large number of surface-related multiples in marine data seriously degrades the results of data processing and interpretation, and many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, the elimination remains unsatisfactory due to amplitude and phase errors, and although the subsequent curvelet-domain multiple-primary separation method achieves better results, its poor computational efficiency has prevented wide application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a small number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function to these basis points to reconstruct the matching array; third, build the constrained solving equation from the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, use the BFGS algorithm to iterate and realize fast solving of the sparsity-constrained multiple matching problem. Moreover, a soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure under the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
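The spline parameterization step can be sketched as follows: represent the slowly varying matching coefficient by a cubic spline over a few basis points and fit those few unknowns with a BFGS-family iteration. The toy multiple model and knot count are assumptions; the actual method also applies curvelet-domain sparsity and soft thresholding, which are omitted here.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

# Matching coefficients vary slowly along the trace, so parameterize them by a
# cubic spline over a few basis points instead of one unknown per sample.
n = 200
t = np.arange(n)
multiple = np.sin(0.07 * t)                        # "predicted multiple" (toy)
true_coeff = 0.8 + 0.3 * np.sin(0.01 * t)          # slowly varying amplitude error
data = true_coeff * multiple                       # "actual" multiple in the data

knots = np.linspace(0, n - 1, 8)                   # few unknowns -> fast solve

def misfit(c):
    coeff = CubicSpline(knots, c)(t)               # reconstruct dense matching array
    return np.sum((coeff * multiple - data)**2)

res = minimize(misfit, np.ones(8), method='L-BFGS-B')   # BFGS-family iteration
coeff_est = CubicSpline(knots, res.x)(t)
```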
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described, and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained using a commonly applied analysis procedure as a reference. In general, the results are in good agreement. A comparison of the computer times required by the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
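The linear Taylor-series reanalysis can be illustrated on a two-degree-of-freedom system: expand the stiffness about the base design and recompute natural frequencies from the expanded matrix instead of reassembling. The matrices are invented for illustration; static condensation and modal reduction are omitted.

```python
import numpy as np
from scipy.linalg import eigh

# 2-DOF illustration: stiffness depends on a design parameter p; the reanalysis
# expands K(p) ~ K0 + dK * (p - p0) and reuses that instead of reassembling K.
M = np.diag([1.0, 2.0])
def K(p):                                # toy stiffness "assembly"
    return np.array([[2 + p, -1.0], [-1.0, 1 + 0.5 * p**2]])

p0, dp = 1.0, 0.1
dK = (K(p0 + 1e-6) - K(p0 - 1e-6)) / 2e-6       # sensitivity about the base design

K_lin = K(p0) + dK * dp                          # linear Taylor reanalysis
w_lin  = np.sqrt(eigh(K_lin, M, eigvals_only=True))       # approximate frequencies
w_full = np.sqrt(eigh(K(p0 + dp), M, eigvals_only=True))  # exact reassembled result
```

For small design changes the linearized frequencies track the exact ones closely, which is the efficiency the abstract reports; the error grows with the curvature of K(p) and the step size dp.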
NASA Astrophysics Data System (ADS)
Matone, Marco
2016-11-01
Recently, an algorithm for the Baker-Campbell-Hausdorff (BCH) formula was introduced that extends recent results of Van-Brunt and Visser, leading to new closed forms of the BCH formula. More recently, it has been shown that there are 13 types of such commutator algebras. We show, by providing the explicit solutions, that these include the generators of the semisimple complex Lie algebras. More precisely, for any pair X, Y of the Cartan-Weyl basis, we find W, a linear combination of X and Y, such that exp(X)exp(Y) = exp(W). The derivation of such closed forms follows, in part, from the above-mentioned recent results. The complete derivation is provided by considering the structure of the root system. Furthermore, if X, Y, and Z are three generators of the Cartan-Weyl basis, we find, for a wide class of cases, W, a linear combination of X, Y, and Z, such that exp(X)exp(Y)exp(Z) = exp(W). It turns out that the relevant commutator algebras are type 1c-i, type 4, and type 5. A key result concerns an iterative application of the algorithm, leading to relevant extensions of the cases admitting closed forms of the BCH formula. Here we provide the main steps of such an iteration, which will be developed in a forthcoming paper.
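The sl(2) case can be checked numerically: for the Cartan-Weyl pair H = diag(1, -1) and raising operator E, the commutator algebra closes ([H, E] = 2E), and exp(E)exp(H) equals exp(W) for W a linear combination of H and E. The scalar coefficient below is worked out directly for this 2x2 representation rather than quoted from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Cartan-Weyl pair of sl(2): diagonal generator H and raising operator E.
H = np.diag([1.0, -1.0])
E = np.array([[0.0, 1.0], [0.0, 0.0]])

# Closed-form BCH product: exp(E) exp(H) = exp(H + b*E) with b = 2/(e^2 - 1),
# obtainable by matching the upper-triangular entries of both sides.
b = 2.0 / (np.e**2 - 1.0)
W = H + b * E
lhs = expm(E) @ expm(H)
rhs = expm(W)
```

The check works because the pair generates a closed two-dimensional algebra, which is exactly the situation in which the BCH series collapses to a finite closed form.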
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.
Time-dependent simulations are used to evolve plasma discharges in combination with a Modified Rutherford equation (MRE) for calculation of Neoclassical Tearing Mode (NTM) stability in response to Electron Cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. These simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the Upper Launcher during the entire flattop phase. Assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q=10.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Bruno; Carvalho, Paulo F.; Rodrigues, A.P.
The ATCA standard specifies a mandatory Shelf Manager (ShM) unit, which is a key element for system operation. It includes the Intelligent Platform Management Controller (IPMC), which monitors the system health, retrieves inventory information, and controls the Field Replaceable Units (FRUs). These elements enable intelligent health monitoring, providing high availability and safe operation and ensuring correct system operation. For critical systems such as those of the ITER tokamak, these features are mandatory to support long-pulse operation. The Nominal Device Support (NDS) was designed and developed for the ITER CODAC Core System (CCS), which will be responsible for plant Instrumentation and Control (I and C), supervising and monitoring on ITER. It generalizes the Experimental Physics and Industrial Control System (EPICS) device support interface for Data Acquisition (DAQ) and timing devices. However, support for health management features and the ATCA ShM is not yet provided. This paper presents the implementation and test of an NDS for the ATCA ShM, using the ITER Fast Plant System Controller (FPSC) prototype environment. This prototype is fully compatible with the ITER CCS and uses the EPICS Channel Access (CA) protocol as the interface to the Plant Operation Network (PON). The implemented solution, running in an EPICS Input/Output Controller (IOC), provides Process Variables (PVs) carrying the system information to the PON network. These PVs can be used for control and monitoring by all CA clients, such as EPICS user interface clients and alarm systems. The results are presented, demonstrating the full integration and usability of this solution. (authors)
Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.
2015-01-01
Purpose: The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCDs) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method: The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering and the attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results of the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results: The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the results of the penalized maximum likelihood iterative reconstruction show that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms.
Conclusion: It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, in particular by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019
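The first stage described above (material decomposition) can be illustrated at the level of a single ray. With ideal, noise-free energy bins, decomposing a measured log-transmission vector into basis-material line integrals reduces to a small linear solve; the per-bin sensitivities below are invented numbers, not EGSnrc results.

```python
import numpy as np

# Invented per-bin sensitivities of three basis functions
# (photoelectric, Compton, gold); rows are PCD energy bins.
M = np.array([[4.0, 1.0, 9.0],
              [1.5, 0.9, 2.0],
              [0.6, 0.8, 5.5]])

true_lengths = np.array([0.3, 1.2, 0.05])   # basis line integrals along one ray

b = M @ true_lengths         # ideal noise-free log-transmission per bin

est = np.linalg.solve(M, b)  # material decomposition for this ray
print(est)
```

In practice the forward model is nonlinear in the spectra and the data are noisy, so a statistical estimator replaces the exact solve; the decomposed gold channel then supplies the spatial prior used in the penalized maximum likelihood reconstruction.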
3D Printing: Exploring Capabilities
ERIC Educational Resources Information Center
Samuels, Kyle; Flowers, Jim
2015-01-01
As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…
Tablet PCs: A Physical Educator's New Clipboard
ERIC Educational Resources Information Center
Nye, Susan B.
2010-01-01
Computers in education have come a long way from the abacus of 5,000 years ago to the desktop and laptop computers of today. Computers have transformed the educational environment, and with each new iteration of smaller and more powerful machines come additional advantages for teaching practices. The Tablet PC is one. Tablet PCs are fully…
The Primary Physical Education Curriculum Process: More Complex Than You Might Think!!
ERIC Educational Resources Information Center
Jess, Mike; Carse, Nicola; Keay, Jeanne
2016-01-01
In this paper, we present the curriculum development process as a complex, iterative and integrated phenomenon. Building on the early work of Stenhouse [1975, "An Introduction to Curriculum Research and Development". London: Heinemann Educational], we position the teacher at the heart of this process and extend his ideas by exploring how…
Assemblage: Raising Awareness of Student Identity Formation through Art
ERIC Educational Resources Information Center
Drouin, Steven D.
2015-01-01
Asking students to physically construct manifestations of their identities is not necessarily a new technique, but the author wanted students to experience the iterative and fluid nature of identity formation with the hopes of beginning a longer discussion of how other individuals, groups, and varying contexts shape identities with and without…
A Control Algorithm for Chaotic Physical Systems
1991-10-01
revision expands the grid to cover the entire area of any attractor that is present. 5 Map Selection The final choices of the state-space mapping process...interval h?; overrange R0; control parameter interval AkO and range [kbro, khigh]; iteration depth. * State-space mapping: 1. Set up grid by expanding
Identification of spatially-localized initial conditions via sparse PCA
NASA Astrophysics Data System (ADS)
Dwivedi, Anubhav; Jovanovic, Mihailo
2017-11-01
Principal Component Analysis involves maximization of a quadratic form subject to a quadratic constraint on the initial flow perturbations and it is routinely used to identify the most energetic flow structures. For general flow configurations, principal components can be efficiently computed via power iteration of the forward and adjoint governing equations. However, the resulting flow structures typically have a large spatial support leading to a question of physical realizability. To obtain spatially-localized structures, we modify the quadratic constraint on the initial condition to include a convex combination with an additional regularization term which promotes sparsity in the physical domain. We formulate this constrained optimization problem as a nonlinear eigenvalue problem and employ an inverse power-iteration-based method to solve it. The resulting solution is guaranteed to converge to a nonlinear eigenvector which becomes increasingly localized as our emphasis on sparsity increases. We use several fluids examples to demonstrate that our method indeed identifies the most energetic initial perturbations that are spatially compact. This work was supported by Office of Naval Research through Grant Number N00014-15-1-2522.
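The sparsity-promoting modification described above can be illustrated with a generic sketch: power iteration interleaved with soft thresholding, a common sparse-PCA heuristic. This is an illustration of the idea only, not the authors' inverse-power nonlinear-eigenvalue solver, and the diagonal matrix below is a toy stand-in for the energy operator.

```python
import numpy as np

def soft_threshold(v, gamma):
    # prox operator of the l1 norm: promotes zeros in v
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def sparse_power_iteration(A, gamma, iters=200):
    """Leading component of a symmetric PSD matrix A via power
    iteration with a soft-thresholding step; gamma = 0 recovers
    ordinary power iteration."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        y = soft_threshold(A @ x, gamma)
        n = np.linalg.norm(y)
        if n == 0.0:           # threshold too aggressive: stop
            break
        x = y / n
    return x

A = np.diag([5.0, 0.5, 0.4, 0.3])              # toy "covariance"
dense  = sparse_power_iteration(A, gamma=0.0)  # ordinary power iteration
sparse = sparse_power_iteration(A, gamma=0.6)  # localized (sparse) solution
print(dense, sparse)
```

As the emphasis on sparsity (gamma) increases, the iterate's support shrinks, echoing the paper's observation that the nonlinear eigenvector becomes increasingly localized.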
Scalable Nonlinear Solvers for Fully Implicit Coupled Nuclear Fuel Modeling. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Xiao-Chuan; Keyes, David; Yang, Chao
2014-09-29
The focus of the project is on the development and customization of highly scalable domain-decomposition-based preconditioning techniques for the numerical solution of nonlinear, coupled systems of partial differential equations (PDEs) arising from nuclear fuel simulations. These high-order PDEs represent multiple interacting physical fields (for example, heat conduction, oxygen transport, solid deformation), each modeled by a certain type of Cahn-Hilliard and/or Allen-Cahn equation. Most existing approaches involve a careful splitting of the fields and the use of field-by-field iterations to obtain a solution of the coupled problem. Such approaches have advantages such as ease of implementation, since only single-field solvers are needed, but also exhibit disadvantages. For example, certain nonlinear interactions between the fields may not be fully captured, and for unsteady problems, stable time integration schemes are difficult to design. In addition, when implemented on large-scale parallel computers, the sequential nature of the field-by-field iterations substantially reduces the parallel efficiency. To overcome these disadvantages, fully coupled approaches have been investigated in order to obtain full physics simulations.
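The contrast drawn above between field-by-field iteration and a fully coupled solve can be sketched on a toy two-field system (the hypothetical algebraic equations below stand in for the PDEs): a Gauss-Seidel-style field-by-field loop converges linearly, while a fully coupled Newton step that retains the cross-field Jacobian terms converges quadratically.

```python
import numpy as np

def residual(z):
    u, v = z
    # two nonlinearly coupled "fields" (a toy stand-in for the PDE system)
    return np.array([u - 0.5 * np.cos(v), v - 0.5 * np.sin(u)])

def field_by_field(tol=1e-10, max_it=100):
    """Solve each field with the other frozen, sweeping until converged."""
    u = v = 0.0
    for k in range(1, max_it + 1):
        u = 0.5 * np.cos(v)           # field 1 solve, field 2 frozen
        v = 0.5 * np.sin(u)           # field 2 solve, field 1 frozen
        if np.linalg.norm(residual(np.array([u, v]))) < tol:
            return k
    return max_it

def fully_coupled_newton(tol=1e-10, max_it=100):
    """Newton on the coupled system, keeping the cross-field terms."""
    z = np.zeros(2)
    for k in range(1, max_it + 1):
        u, v = z
        J = np.array([[1.0, 0.5 * np.sin(v)],
                      [-0.5 * np.cos(u), 1.0]])   # full Jacobian
        z = z - np.linalg.solve(J, residual(z))
        if np.linalg.norm(residual(z)) < tol:
            return k
    return max_it

print(field_by_field(), fully_coupled_newton())
```

On this mild toy problem both converge quickly, but the coupled Newton solve already needs fewer iterations; for stiff multiphysics systems the gap widens, which is the motivation stated in the abstract.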
Plasma facing components: a conceptual design strategy for the first wall in FAST tokamak
NASA Astrophysics Data System (ADS)
Labate, C.; Di Gironimo, G.; Renno, F.
2015-09-01
Satellite tokamaks are conceived with the main purpose of developing new or alternative ITER- and DEMO-relevant technologies, able to contribute to resolving the pending issues of plasma operation. In particular, the design of plasma facing components, i.e. the first wall (FW) and divertor, is highly critical for physical, topological and thermo-structural reasons. It is in this context that the design of the FW of the FAST fusion plant, whose operational range is close to that of ITER, takes place. In keeping with the mission of experimental satellites, the FW design strategy presented in this paper relies on a series of innovative design choices and proposals, with particular attention to the typical key points of plasma facing component design. Such an approach, taking into account the physical constraints and functional requirements to be fulfilled, marks a clear borderline with the FW solution adopted in ITER, in terms of basic ideas, manufacturing aspects, remote maintenance procedure, manifold management, cooling cycle and support system configuration.
Computational and Physical Analysis of Catalytic Compounds
NASA Astrophysics Data System (ADS)
Wu, Richard; Sohn, Jung Jae; Kyung, Richard
2015-03-01
Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, the synthesis of nanoparticles with controlled shape and size is important for exploiting these unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as GAMESS and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding energy changes during optimization convergence. The results illustrate how the metal oxides are stabilized and the steps involved. The plot of energy (kcal/mol) versus computation step (N) shows that the energy of the titania converges by the 7th iteration, whereas the silica converges by the 9th iteration.
Technical Issues for the Fabrication of a CN-HCCB-TBM Based on RAFM Steel CLF-1
NASA Astrophysics Data System (ADS)
Wang, Pinghuai; Chen, Jiming; Fu, Haiying; Liu, Shi; Li, Xiongwei; Xu, Zengyu
2013-02-01
Reduced activation ferritic/martensitic (RAFM) steel is recognized as the primary candidate structural material for ITER's test blanket module (TBM). To provide a material and property database for the design and fabrication of the Chinese helium cooled ceramic breeding TBM (CN HCCB TBM), a type of RAFM steel named CLF-1 was developed and characterized at the Southwestern Institute of Physics (SWIP), China. In this paper, the R&D status of CLF-1 steel and the technical issues in using it to manufacture the CN HCCB TBM are reviewed, including steel manufacture and the different welding technologies. Several kinds of property data have been obtained for its application to the design of the ITER TBM.
Innovative diagnostics for ITER physics addressed in JET
NASA Astrophysics Data System (ADS)
Murari, A.; Edlington, T.; Alfier, A.; Alonso, A.; Andrew, Y.; Arnoux, G.; Beurskens, M.; Coad, P.; Crombe, C.; Gauthier, E.; Giroud, C.; Hidalgo, C.; Hong, S.; Kempenaars, M.; Kiptily, V.; Loarer, T.; Meigs, A.; Pasqualotto, R.; Tala, T.; Contributors, JET-EFDA
2008-12-01
In recent years, JET's diagnostic capability has been significantly improved to widen the range of physical phenomena that can be studied and thus contribute to the understanding of some ITER-relevant issues. The most significant results reported in this paper concern plasma-wall interactions, the interplay between core and edge physics, and fast particles. A synergy between new infrared cameras, visible cameras and spectroscopy diagnostics has allowed a series of new aspects of plasma-wall interactions to be investigated. The power loads on the plasma facing components of the JET main chamber have been assessed at steady state and during transient events like ELMs and disruptions. Evidence of filaments in the edge region of the plasma has been collected with a new fast visible camera and high resolution Thomson scattering. Particular attention has also been devoted to the physics of detached plasmas and to some new aspects of dust formation. The influence of the edge plasma on the core has been investigated with upgraded active spectroscopy, providing new information on momentum transport and on the effects of impurity injection on ELMs and ITBs and their interdependence. Given that JET is the only machine with a plasma volume large enough to confine the alphas, a coherent programme of diagnostic developments for energetic particles has been undertaken. With upgraded γ-ray spectroscopy and a new scintillator probe, it is now possible to study both the redistribution and the losses of fast particles in various plasma conditions.
Burger, Joanna
2014-01-01
Ecological evaluation is essential for remediation, restoration, and Natural Resource Damage Assessment (NRDA), and forms the basis for many management practices. These include determining status and trends of biological, physical, or chemical/radiological conditions, conducting environmental impact assessments, performing remedial actions should remediation fail, managing ecosystems and wildlife, and assessing the efficacy of remediation, restoration, and long-term stewardship. The objective of this paper is to explore the meanings of these assessments, examine the relationships among them, and suggest methods of integration that will move environmental management forward. While remediation, restoration, and NRDA, among others, are often conducted separately, it is important to integrate them for contaminated land where the risks to ecoreceptors (including humans) can be high, and the potential damage to functioning ecosystems great. Ecological evaluations can range from inventories of local plants and animals, determinations of reproductive success of particular species, levels of contaminants in organisms, kinds and levels of effects, and environmental impact assessments, to very formal ecological risk assessments for a chemical or other stressor. Such evaluations can range from the individual species to populations, communities, ecosystems or the landscape scale. Ecological evaluations serve as the basis for making decisions about the levels and kinds of remediation, the levels and kinds of restoration possible, and the degree and kinds of natural resource injuries that have occurred because of contamination. Many different disciplines are involved in ecological evaluation, including biologists, conservationists, foresters, restoration ecologists, ecological engineers, economists, hydrologists, and geologists.
Since ecological evaluation forms the basis for so many different types of environmental management, it seems reasonable to integrate management options to achieve economies of time, energy, and costs. Integration and iteration among these disciplines is possible only with continued interactions among practitioners, regulators, policy-makers, Native American Tribes, and the general public. PMID:18687455
Study of no-man's land physics in the total-f gyrokinetic code XGC1
NASA Astrophysics Data System (ADS)
Ku, Seung Hoe; Chang, C. S.; Lang, J.
2014-10-01
While the "transport shortfall" in the "no-man's land" has often been observed in delta-f codes, it has not yet been observed in the global total-f gyrokinetic particle code XGC1. Since understanding the interaction between edge and core transport appears to be a critical element in the prediction of ITER performance, the no-man's land issue is an important physics research topic. Simulation results using the Holland case will be presented and the physics causing the shortfall phenomenon will be discussed. The nonlinear, nonlocal interaction of turbulence, secondary flows, and transport appears to be the key.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-26
... Climate Change (IPCC), Climate Change 2013: The Physical Science Basis Summary: The United States Global... Panel on Climate Change (IPCC) Climate Change 2013: The Physical Science Basis. The United Nations..., and socio-economic information for understanding the scientific basis of climate change, potential...
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedure, based on finding the largest eigenvalue of the iteration operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element approach in space. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. This result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as these have been shown to produce greater stability than source iteration.
Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B. Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas
The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in the ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations have not been possible before due to a more than tenfold shortfall in the time-to-solution needed to complete one physics case in less than 5 days of wall-clock time. Frontier techniques are also utilized to dramatically improve the scalability and time-to-solution: nested OpenMP parallelism; adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions; dynamic repartitioning for balancing computational work in pushing particles and in grid-related work; scalable and accurate discretization algorithms for nonlinear Coulomb collisions; and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs. Together these enable the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
Engaging Students In Modeling Instruction for Introductory Physics
NASA Astrophysics Data System (ADS)
Brewe, Eric
2016-05-01
Teaching introductory physics is arguably one of the most important things that a physics department does. It is the primary way that students from other science disciplines engage with physics and it is the introduction to physics for majors. Modeling instruction is an active learning strategy for introductory physics built on the premise that science proceeds through the iterative process of model construction, development, deployment, and revision. We describe the role that participating in authentic modeling has in learning and then explore how students engage in this process in the classroom. In this presentation, we provide a theoretical background on models and modeling and describe how these theoretical elements are enacted in the introductory university physics classroom. We provide both quantitative and video data to link the development of a conceptual model to the design of the learning environment and to student outcomes. This work is supported in part by DUE #1140706.
NASA Astrophysics Data System (ADS)
Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.
2016-06-01
Measurement and control of the plasma in real time are critical for advanced tokamak operation and require high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) running a real-time system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real time before being passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, making the framework conform to ITER FPSC standard technology. With this framework, any FlexRIO FPGA data acquisition and processing program can be configured as part of an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate-frequency signal produced by the polarimeter-interferometer diagnostic and calculates the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
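As a rough illustration of the class of algorithm described (not the actual FPGA implementation), phase-shift extraction from an intermediate-frequency signal can be done by digital IQ demodulation. The sampling rate, IF frequency, and 0.7 rad shift below are assumed values for the example.

```python
import numpy as np

def extract_phase(sig, ref, f_if, fs):
    """Digital IQ demodulation: mix each channel to baseband with a
    numerical local oscillator, average over whole IF periods to
    low-pass filter, then difference the two phases."""
    n = np.arange(sig.size)
    lo = np.exp(-2j * np.pi * f_if / fs * n)     # local oscillator
    return np.angle(np.mean(sig * lo)) - np.angle(np.mean(ref * lo))

fs, f_if = 1.0e6, 1.0e5                  # assumed sampling and IF frequencies
n = np.arange(1000)                      # an integer number of IF periods
ref = np.cos(2 * np.pi * f_if / fs * n)
sig = np.cos(2 * np.pi * f_if / fs * n + 0.7)   # 0.7 rad "plasma" phase shift

print(extract_phase(sig, ref, f_if, fs))        # recovers ~0.7
```

On an FPGA the same structure maps naturally to a numerically controlled oscillator, multipliers, and accumulators, which is why this family of algorithms suits real-time density-profile computation.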
NASA Astrophysics Data System (ADS)
Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.
2016-04-01
Magnetic confinement fusion experiments require various real-time control applications such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this purpose and has provided hardware and software standards and guidelines for building an FPSC. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or algorithms; timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software, and can be configured and monitored using the Experimental Physics and Industrial Control System (EPICS). Moreover, JRTF can be ported to different platforms and integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
Status of Europe's contribution to the ITER EC system
NASA Astrophysics Data System (ADS)
Albajar, F.; Aiello, G.; Alberti, S.; Arnold, F.; Avramidis, K.; Bader, M.; Batista, R.; Bertizzolo, R.; Bonicelli, T.; Braunmueller, F.; Brescan, C.; Bruschi, A.; von Burg, B.; Camino, K.; Carannante, G.; Casarin, V.; Castillo, A.; Cauvard, F.; Cavalieri, C.; Cavinato, M.; Chavan, R.; Chelis, J.; Cismondi, F.; Combescure, D.; Darbos, C.; Farina, D.; Fasel, D.; Figini, L.; Gagliardi, M.; Gandini, F.; Gantenbein, G.; Gassmann, T.; Gessner, R.; Goodman, T. P.; Gracia, V.; Grossetti, G.; Heemskerk, C.; Henderson, M.; Hermann, V.; Hogge, J. P.; Illy, S.; Ioannidis, Z.; Jelonnek, J.; Jin, J.; Kasparek, W.; Koning, J.; Krause, A. S.; Landis, J. D.; Latsas, G.; Li, F.; Mazzocchi, F.; Meier, A.; Moro, A.; Nousiainen, R.; Purohit, D.; Nowak, S.; Omori, T.; van Oosterhout, J.; Pacheco, J.; Pagonakis, I.; Platania, P.; Poli, E.; Preis, A. K.; Ronden, D.; Rozier, Y.; Rzesnicki, T.; Saibene, G.; Sanchez, F.; Sartori, F.; Sauter, O.; Scherer, T.; Schlatter, C.; Schreck, S.; Serikov, A.; Siravo, U.; Sozzi, C.; Spaeh, P.; Spichiger, A.; Strauss, D.; Takahashi, K.; Thumm, M.; Tigelis, I.; Vaccaro, A.; Vomvoridis, J.; Tran, M. Q.; Weinhorst, B.
2015-03-01
The electron cyclotron (EC) system of ITER in its initial configuration is designed to provide 20 MW of RF power into the plasma for 3600 s with a duty cycle of up to 25%, for heating and (co- and counter-) non-inductive current drive, and is also used to control MHD plasma instabilities. The EC system is being procured by five domestic agencies plus the ITER Organization (IO). F4E has the largest fraction of the EC procurements, which includes 8 high voltage power supplies (HVPS), 6 gyrotrons, the ex-vessel waveguides (including isolation valves and diamond windows) for all launchers, 4 upper launchers and the main control system. F4E is working with the IO to improve the overall design of the EC system by integrating consolidated technological advances, simplifying the interfaces, and performing global engineering analyses and assessments of EC heating and current drive physics and technology capabilities. Examples are the optimization of the HVPS and gyrotron requirements and performance with respect to power modulation for MHD control, common qualification programs for diamond window procurements, assessment of the EC grounding system, and the optimization of the launcher steering angles for improved EC access. Here we provide an update on the status of Europe's contribution to the ITER EC system and a summary of the global activities underway by F4E in collaboration with the IO for the optimization of the subsystems.
Anderson acceleration and application to the three-temperature energy equations
NASA Astrophysics Data System (ADS)
An, Hengbin; Jia, Xiaowei; Walker, Homer F.
2017-10-01
The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and has for many years been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix; the method is thus easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strongly nonlinear radiation-diffusion equation. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. The other is to monitor and, if necessary, reduce the condition number of the matrix in the least-squares problem of the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is also conducted.
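A minimal sketch of the Anderson(m) scheme in its common difference form, applied to a toy linear fixed-point problem rather than the three-temperature equations, illustrates the reduction in iteration count over plain Picard. The problem size, matrix, and history depth below are arbitrary illustration choices.

```python
import numpy as np

def picard(g, x0, tol=1e-10, max_it=2000):
    """Plain fixed-point (Picard) iteration x <- g(x)."""
    x = x0.copy()
    for k in range(1, max_it + 1):
        gx = g(x)
        if np.linalg.norm(gx - x) < tol:
            return gx, k
        x = gx
    return x, max_it

def anderson(g, x0, m=3, tol=1e-10, max_it=2000):
    """Anderson(m)-accelerated Picard: correct g(x) with mixing weights
    from a small least-squares problem on recent residual differences."""
    x = x0.copy()
    gs, fs = [], []                       # history of g(x) and residuals
    for k in range(1, max_it + 1):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x, k
        gs.append(gx)
        fs.append(f)
        gs, fs = gs[-(m + 1):], fs[-(m + 1):]
        if len(fs) == 1:
            x = gx                        # plain Picard step to start
        else:
            dF = np.array([fs[i + 1] - fs[i] for i in range(len(fs) - 1)]).T
            dG = np.array([gs[i + 1] - gs[i] for i in range(len(gs) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma           # mixed update
    return x, max_it

# Slowly contracting linear fixed point x = A x + b (spectral radius 0.9)
rng = np.random.default_rng(1)
A = 0.9 * np.diag(np.linspace(0.1, 1.0, 20))
b = rng.standard_normal(20)
g = lambda x: A @ x + b

x_p, n_p = picard(g, np.zeros(20))
x_a, n_a = anderson(g, np.zeros(20))
print(n_p, n_a)   # Anderson converges in far fewer iterations
```

Note the absence of any Jacobian: only evaluations of g and a tiny least-squares solve are needed, which is the implementation advantage the abstract highlights.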
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
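The central stability result above (for source iteration without magnetic fields, a spectral radius equal to the scattering-to-total cross-section ratio) can be reproduced in a zero-dimensional caricature; the cross sections below are arbitrary illustration values.

```python
def source_iteration(sigma_s, sigma_t, q, iters=50):
    """Zero-dimensional caricature of source iteration: lagging the
    scattering source makes each sweep multiply the error by
    c = sigma_s / sigma_t, the spectral radius of the iteration."""
    phi_exact = q / (sigma_t - sigma_s)
    phi, errs = 0.0, []
    for _ in range(iters):
        phi = (sigma_s * phi + q) / sigma_t   # sweep with lagged scattering
        errs.append(abs(phi - phi_exact))
    rates = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
    return phi, rates

phi, rates = source_iteration(sigma_s=0.9, sigma_t=1.0, q=1.0)
print(rates[-1])   # observed rate equals the scattering ratio, 0.9
```

As the scattering ratio approaches one (highly scattering, weakly absorbing media), the spectral radius approaches one and convergence becomes arbitrarily slow, which is the motivation for the GMRES acceleration studied in the paper.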
An exploration of advanced X-divertor scenarios on ITER
NASA Astrophysics Data System (ADS)
Covele, B.; Valanju, P.; Kotschenreuther, M.; Mahajan, S.
2014-07-01
It is found that the X-divertor (XD) configuration (Kotschenreuther et al 2004 Proc. 20th Int. Conf. on Fusion Energy (Vilamoura, Portugal, 2004) (Vienna: IAEA) CD-ROM file [IC/P6-43] www-naweb.iaea.org/napc/physics/fec/fec2004/datasets/index.html, Kotschenreuther et al 2006 Proc. 21st Int. Conf. on Fusion Energy 2006 (Chengdu, China, 2006) (Vienna: IAEA), CD-ROM file [IC/P7-12] www-naweb.iaea.org/napc/physics/FEC/FEC2006/html/index.htm, Kotschenreuther et al 2007 Phys. Plasmas 14 072502) can be made with the conventional poloidal field (PF) coil set on ITER (Tomabechi et al and Team 1991 Nucl. Fusion 31 1135), where all PF coils are outside the TF coils. Starting from the standard divertor, a sequence of desirable XD configurations is possible where the PF currents are below the present maximum design limits on ITER, and where the baseline divertor cassette is used. This opens the possibility that the XD could be tested and used to assist in high-power operation on ITER, but some further issues need examination. Note that the increased major radius of the super-X-divertor (Kotschenreuther et al 2007 Bull. Am. Phys. Soc. 53 11, Valanju et al 2009 Phys. Plasmas 16 5, Kotschenreuther et al 2010 Nucl. Fusion 50 035003, Valanju et al 2010 Fusion Eng. Des. 85 46) is not a feature of the XD geometry. In addition, we present an XD configuration for K-DEMO (Kim et al 2013 Fusion Eng. Des. 88 123) to demonstrate that it is also possible to attain the XD configuration in advanced tokamak reactors with all PF coils outside the TF coils. The results given here for the XD are far more encouraging than recent calculations by Lackner and Zohm (2012 Fusion Sci. Technol. 63 43) for the Snowflake (Ryutov 2007 Phys. Plasmas 14 064502, Ryutov et al 2008 Phys. Plasmas 15 092501), where the required high PF currents represent a major technological challenge. The magnetic field structure in the outboard divertor SOL (Kotschenreuther 2013 Phys.
Plasmas 20 102507) in the recently created XD configurations reproduces what was presented in the earlier XD papers (Kotschenreuther et al 2004 Proc. 20th Int. Conf. on Fusion Energy (Vilamoura, Portugal, 2004) (Vienna: IAEA) CD-ROM file [IC/P6-43] www-naweb.iaea.org/napc/physics/fec/fec2004/datasets/index.html, Kotschenreuther et al 2006 Proc. 21st Int. Conf. on Fusion Energy 2006 (Chengdu, China, 2006) (Vienna: IAEA) CD-ROM file [IC/P7-12] www-naweb.iaea.org/napc/physics/FEC/FEC2006/html/index.htm, Kotschenreuther et al 2007 Phys. Plasmas 14 072502). Consequently, the same advantages accrue, but no close-in PF coils are employed.
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data using binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to do these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
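The extraction methods mentioned (maximum likelihood and binned least-squares fits to exponential decay) can be sketched in a few lines. The sketch below assumes NumPy and illustrative parameter values; it is not a listing of the report's own programs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a modest number of random decay events with true lifetime tau.
tau_true = 2.0
events = rng.exponential(tau_true, size=200)

# Maximum likelihood: for exponential decay, the MLE of the lifetime
# is simply the sample mean of the event times.
tau_mle = events.mean()

# Binned least squares: histogram the events and fit a straight line to
# log(counts) versus bin centre; the slope estimates -1/tau.
counts, edges = np.histogram(events, bins=20, range=(0.0, 10.0))
centres = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                              # skip empty bins (log(0))
slope, intercept = np.polyfit(centres[mask], np.log(counts[mask]), 1)
tau_lsq = -1.0 / slope

print(tau_mle, tau_lsq)   # both estimates should land near tau_true
```

With only 200 events, the binned estimate is visibly noisier than the MLE, which is exactly the small-sample behaviour the report studies.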
Exact N^3LO results for qq' → H + X
Anzai, Chihaya; Hasselhuhn, Alexander; Höschele, Maik; ...
2015-07-27
We compute the contribution to the total cross section for the inclusive production of a Standard Model Higgs boson induced by two quarks with different flavour in the initial state. Our calculation is exact in the Higgs boson mass and the partonic center-of-mass energy. Here, we describe the reduction to master integrals, the construction of a canonical basis, and the solution of the corresponding differential equations. Our analytic result contains both Harmonic Polylogarithms and iterated integrals with additional letters in the alphabet.
Computer programs for the solution of systems of linear algebraic equations
NASA Technical Reports Server (NTRS)
Sequi, W. T.
1973-01-01
FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both in-core methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
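As one illustration of the iterative in-core procedures surveyed, here is a minimal Gauss-Seidel solver applied to a small banded, diagonally dominant system; it is a modern Python sketch, not one of the report's FORTRAN subprograms.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Iteratively solve A x = b, sweeping through the unknowns in place.

    Convergence is guaranteed for diagonally dominant (or SPD) matrices.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use freshly updated entries x[:i] and previous-sweep x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# A small banded (tridiagonal), diagonally dominant test system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))
```

A banded solver would store only the diagonals rather than the full matrix; the dense form above keeps the sweep easy to read.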
Can We Afford Not to Evaluate Services for Elderly Persons with Dementia?
Worrall, Graham; Chambers, Larry W.
1989-01-01
With the increasing expenditure on health-care programs for seniors, there is an urgent need to evaluate such programs. The Measurement Iterative Loop is a tool that can provide both health administrators and health researchers with a method of evaluation of existing programs and identification of gaps in knowledge, and forms a rational basis for health-care policy decisions. In this article, the Loop is applied to one common problem of the elderly: dementia. PMID:21248993
NASA Astrophysics Data System (ADS)
Krafft, Fritz
2011-08-01
The use of modern terminology hinders the understanding of historical astronomical texts and often misleads the reader. This study therefore attempts to reconstruct, in a non-teleological manner (that is, in the original view and with the original terms), the ideas of how the planets appear to move against the sphere of fixed stars. The study proceeds historically and explains: (1) Aristotle's system of homocentric spheres, hollow spheres of ether turning uniformly around the Earth at the centre of the world, a number of which together form the apparatus of a planet's movement and produce its apparently unequal motion. (2) Ptolemy's reductionist system of geometric circles (eccentric deferents, epicycles, etc.), which are in fact great circles on non-concentric hollow spheres on which they turn uniformly; the space they occupy is bounded by an inner and an outer concentric spherical surface and constitutes the sphere of the planet. (3) John of Sacrobosco's transfer of this geometric astronomy into the Latin of the Middle Ages, and the commentators' sharpening of the Greek-Latin terms. (4) The tradition of the "Theorica planetarum", which turns this geometry into physics by allotting every partial motion to a partial material hollow sphere (with spherical surfaces of different centricity) or to a full epicycle sphere (orbes particulares or partiales), a number of which make up the entire sphere of each planet (orbis totalis or totus). Copernicus also stood within this tradition, except that his entire spheres differ from the earlier ones in size or thickness (because he eliminated the partly very large synodic epicycles and attributed their effect, as a merely parallactic one, to the yearly motion of the Earth) and in the large intervening spaces between them (a result of measuring the true distances of the planets on the basis of these parallactic effects).
(5) Tycho Brahe's refutation of the unchangeability and impermeability, and therefore the solidity, of all etherial spheres, which had been the fundamental precondition for constructing the indirect paths of the planets in all astronomical systems with partial or entire spheres engaging one another. It was Kepler in particular who recognized that, as a consequence, celestial physics required a complete change. (6) Kepler's replacement of celestial physics. He no longer held that the apparent (unequal) path of a planet results indirectly from the combination of several uniform movements of etherial partial and entire spheres. His planets move along their true and real path, caused directly by the joint effect of two corporeal forces that move them both around the Sun and towards and away from it, the latter making the planet's speed naturally unequal. For this "real way" he coined, in late 1604, the specific term "orbita" (the modern "orbit", the German "Bahn"). This term little by little replaced the earlier non-specific, general descriptions of the apparent or real way ("via, iter, ambitus, circulus, circuitus", etc.), and Kepler used it increasingly from its introduction (initially often joined to a defining description of this "way") up to its exclusive use in the fifth book of the "Epitome", after "orbita" had changed its shape from a perfect eccentric circle to an oval and finally to an elliptical form. In this way Kepler also marked terminologically the paradigm change in astronomy that he himself had brought about.
NASA Astrophysics Data System (ADS)
Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.
2014-03-01
Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ˜2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account. The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. 
Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.
Probing Majorana modes in the tunneling spectra of a resonant level.
Korytár, R; Schmitteckert, P
2013-11-27
Unambiguous identification of Majorana physics presents an outstanding problem whose solution could render topological quantum computing feasible. We develop a numerical approach to treat finite-size superconducting chains supporting Majorana modes, which is based on iterative application of a two-site Bogoliubov transformation. We demonstrate the applicability of the method by studying a resonant level attached to the superconductor subject to external perturbations. In the topological phase, we show that the spectrum of a single resonant level allows us to distinguish peaks coming from Majorana physics from the Kondo resonance.
Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph
2016-05-01
To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), an integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120 kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated circuit detector and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<0.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501), but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1 kV and 6.7±3.9 mSv with protocol 2, versus 120±0 kV and 11.5±5.2 mSv with protocol 1 (p<0.0001 for both). Aortic CTA performed with 3rd generation DSCT using ATVS, an integrated circuit detector and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison to 120 kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
19 CFR 133.2 - Application to record trademark.
Code of Federal Regulations, 2013 CFR
2013-04-01
... on the basis of physical and material differences (see Lever Bros. Co. v. United States, 981 F.2d 1330 (D.C. Cir. 1993)), a description of any physical and material difference between the specific... instance, owners who assert that physical and material differences exist must state the basis for such a...
19 CFR 133.2 - Application to record trademark.
Code of Federal Regulations, 2011 CFR
2011-04-01
... on the basis of physical and material differences (see Lever Bros. Co. v. United States, 981 F.2d 1330 (D.C. Cir. 1993)), a description of any physical and material difference between the specific... instance, owners who assert that physical and material differences exist must state the basis for such a...
19 CFR 133.2 - Application to record trademark.
Code of Federal Regulations, 2012 CFR
2012-04-01
... on the basis of physical and material differences (see Lever Bros. Co. v. United States, 981 F.2d 1330 (D.C. Cir. 1993)), a description of any physical and material difference between the specific... instance, owners who assert that physical and material differences exist must state the basis for such a...
19 CFR 133.2 - Application to record trademark.
Code of Federal Regulations, 2010 CFR
2010-04-01
... on the basis of physical and material differences (see Lever Bros. Co. v. United States, 981 F.2d 1330 (D.C. Cir. 1993)), a description of any physical and material difference between the specific... instance, owners who assert that physical and material differences exist must state the basis for such a...
19 CFR 133.2 - Application to record trademark.
Code of Federal Regulations, 2014 CFR
2014-04-01
... on the basis of physical and material differences (see Lever Bros. Co. v. United States, 981 F.2d 1330 (D.C. Cir. 1993)), a description of any physical and material difference between the specific... instance, owners who assert that physical and material differences exist must state the basis for such a...
The Iterative Design of a Virtual Design Studio
ERIC Educational Resources Information Center
Blevis, Eli; Lim, Youn-kyung; Stolterman, Erik; Makice, Kevin
2008-01-01
In this article, the authors explain how they implemented Design eXchange as a shared collaborative online and physical space for design for their students. Their notion for Design eXchange favors a complex mix of key elements namely: (1) a virtual online studio; (2) a forum for review of all things related to design, especially design with the…
Kinecting Physics: Conceptualization of Motion through Visualization and Embodiment
ERIC Educational Resources Information Center
Anderson, Janice L.; Wall, Steven D.
2016-01-01
The purpose of this work was to share our findings in using the Kinect technology to facilitate the understanding of basic kinematics with middle school science classrooms. This study marks the first three iterations of this design-based research that examines the pedagogical potential of using the Kinect technology. To this end, we explored the…
Application of microdosimetry on biological physics for ionizing radiation
NASA Astrophysics Data System (ADS)
Chen, Dandan; Sun, Liang
2018-02-01
Project supported by the National Natural Science Foundation of China (Grant Nos. 11304212 and 11575124), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20130279), the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and the International Thermonuclear Experimental Reactor (ITER) Special Program of China (Grant No. 2014GB112006).
MythBusters, Musicians, and MP3 Players: A Middle School Sound Study
ERIC Educational Resources Information Center
Putney, Ann
2011-01-01
Create your own speakers for an MP3 player while exploring the science of sound. Review of science notebooks, students' intriguing cabinet designs, and listening to students talk with a musician about the physics of an instrument show that complex concepts are being absorbed and extended with each new iteration. Science that matters to students…
Fantz, U; Franzen, P; Kraus, W; Falter, H D; Berger, M; Christ-Koch, S; Fröschle, M; Gutser, R; Heinemann, B; Martens, C; McNeely, P; Riedl, R; Speth, E; Wünderlich, D
2008-02-01
The international fusion experiment ITER requires for its plasma heating and current drive a neutral beam injection system based on negative hydrogen ion sources operating at 0.3 Pa. The ion source must deliver a current of 40 A of D⁻ for up to 1 h with an accelerated current density of 200 A m⁻² and a ratio of coextracted electrons to ions below 1. The extraction area is 0.2 m² from an aperture array with an envelope of 1.5 × 0.6 m². A high power rf-driven negative ion source has been successfully developed at the Max-Planck Institute for Plasma Physics (IPP) at three test facilities in parallel. Current densities of 330 and 230 A m⁻² have been achieved for hydrogen and deuterium, respectively, at a pressure of 0.3 Pa and an electron/ion ratio below 1 for a small extraction area (0.007 m²) and short pulses (<4 s). In the long pulse experiment, equipped with an extraction area of 0.02 m², the pulse length has been extended to 3600 s. A large rf source, with the width and half the height of the ITER source but without an extraction system, is intended to demonstrate the size scaling and plasma homogeneity of rf ion sources. The source now operates routinely. First results on plasma homogeneity obtained from optical emission spectroscopy and Langmuir probes are very promising. Based on the success of the IPP development program, the high power rf-driven negative ion source has recently been chosen for the ITER beam systems in the ITER design review process.
Results of high heat flux qualification tests of W monoblock components for WEST
NASA Astrophysics Data System (ADS)
Greuner, H.; Böswirth, B.; Lipa, M.; Missirlian, M.; Richou, M.
2017-12-01
One goal of the WEST project (W Environment in Steady-state Tokamak) is the manufacturing, quality assessment and operation of ITER-like actively water-cooled divertor plasma facing components made of tungsten. Six W monoblock plasma facing units (PFUs) from different suppliers have been successfully evaluated in the high heat flux test facility GLADIS at IPP. Each PFU is equipped with 35 W monoblocks of an ITER-like geometry. However, the W blocks are made of different tungsten grades and the suppliers applied different bonding techniques between tungsten and the inserted Cu-alloy cooling tubes. The intention of the HHF test campaign was to assess the manufacturing quality of the PFUs on the basis of a statistical analysis of the surface temperature evolution of the individual W monoblocks during thermal loading with 100 cycles at 10 MW m-2. These tests confirm the non-destructive examinations performed by the manufacturer and CEA prior to the installation of the WEST platform, and no defects of the components were detected.
Field tests of a participatory ergonomics toolkit for Total Worker Health.
Nobrega, Suzanne; Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert
2017-04-01
Growing interest in Total Worker Health ® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yan, Fei; Tian, Fuli; Shi, Zhongke
2016-10-01
Urban traffic flows are inherently repeated on a daily or weekly basis. This repeatability can help improve the traffic conditions if it is used properly by the control system. In this paper, we propose a novel iterative learning control (ILC) strategy for traffic signals of urban road networks using the repeatability feature of traffic flow. To improve the control robustness, the ILC strategy is further integrated with an error feedback control law in a complementary manner. Theoretical analysis indicates that the ILC-based traffic signal control methods can guarantee the asymptotic learning convergence, despite the presence of modeling uncertainties and exogenous disturbances. Finally, the impacts of the ILC-based signal control strategies on the network macroscopic fundamental diagram (MFD) are examined. The results show that the proposed ILC-based control strategies can homogenously distribute the network accumulation by controlling the vehicle numbers in each link to the desired levels under different traffic demands, which can result in the network with high capacity and mobility.
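The core mechanism exploited in the abstract above is the classical ILC update law u_{k+1} = u_k + L·e_k, applied trial after trial on a task that repeats (here, daily traffic). The scalar plant, gains and reference below are invented for illustration and are not the paper's network model.

```python
# Toy plant: a link's vehicle number responds linearly to the control input,
# with an unknown gain g and a disturbance d that repeats every day.
g, d = 0.8, 5.0
x_ref = 20.0             # desired vehicle number on the link

L = 0.9                  # learning gain; here |1 - L*g| < 1, so errors contract
u = 0.0                  # initial control input (e.g. a green-time adjustment)
for day in range(50):    # each "iteration" is one repetition of the daily task
    x = g * u + d        # run the repeated process for this day
    e = x_ref - x        # tracking error observed on this day
    u = u + L * e        # ILC: correct the next day's input using today's error

print(abs(x_ref - (g * u + d)))   # residual tracking error after learning
```

Each repetition multiplies the error by (1 − L·g), so the controller converges to perfect tracking without ever knowing g or d, which is exactly why repeatability of the demand pattern matters.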
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
An algorithm of improving speech emotional perception for hearing aid
NASA Astrophysics Data System (ADS)
Xi, Ji; Liang, Ruiyu; Fei, Xianju
2017-07-01
In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple kernel technology to overcome the main drawback of the SVM, its slow training speed. Firstly, to improve the adaptive performance of the Gaussian radial basis function (RBF) kernel, the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. The resulting kernel was then used as the basis kernel of multiple kernel learning (MKL) with a slack variable that mitigates over-fitting. However, the slack variable also introduces error into the result, so a soft-margin MKL was proposed to balance the margin against the error. An iterative algorithm was then used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five emotions: happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
PREFACE: Joint Varenna-Lausanne International Workshop 2014
NASA Astrophysics Data System (ADS)
2014-11-01
The 2014 joint Varenna-Lausanne international workshop on the theory of fusion plasmas was once more a great meeting. The programme covered a wide variety of topics, namely turbulence, MHD, edge physics and RF wave heating. The broad spectrum of skills involved in this meeting, from fundamental to applied physics, is striking. The works published in this special issue combine mathematics, numerics and physics at various levels, confirming the increasing integration of expertise in our community. As an incentive to read this cluster, let us mention a few outstanding results. Several papers address fundamental issues in turbulent transport, in particular the dynamics of structures. It is quite remarkable that this subject is now mature enough to propose signatures that can be tested by measurements. Linear and nonlinear MHD were also at the forefront. Several works illustrate the increasingly realistic description of a fusion device, in particular by implementing complicated wall geometries. Moreover, noticeable progress has been made in the understanding of reconnection processes in collisionless regimes. The activity on radio-frequency heating and current drive is well represented, driven by the future operation of W7-X, ITER and, on a longer time scale, DEMO. Finally, the development of innovative numerical techniques, an old tradition of the conference, has driven several nice articles. The programme committee is traditionally keen on promoting young scientists. A number of senior scientists also attend the meeting on a regular basis, so that the attendance was nicely balanced. We believe that these efforts have been particularly fruitful this year. The number of young (and less young) faces was particularly impressive and this special issue illustrates this feature.
The success of the 2014 edition brings evidence that the joint Varenna-Lausanne workshop is the right place for presenting theoretical fusion plasma research. The quality and size of the scientific production are illustrated by the 22 papers, all peer reviewed, which appear in the present volume of Journal of Physics: Conference Series. Let us mention another set of 19 papers to appear in Plasma Physics and Controlled Fusion. We hope the reader will enjoy this special issue and will find ideas for new bright achievements. Xavier Garbet, Olivier Sauter, October 23, 2014
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled approach or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach, the individual physics models are solved independently, and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Here, problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions, and as a result, are better solved using a tight coupling strategy.
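The loosely coupled approach described above can be reduced to a fixed-point iteration between two single-physics solves. The scalar "thermal" and "mechanical" models below are invented for illustration only; they stand in for the full finite-element solves of the paper.

```python
# Toy loose coupling: solve each physics alone with the other field frozen,
# then repeat until the pair is self-consistent (a fixed-point iteration).

def solve_thermal(u):
    """Thermal solve with the mechanical field frozen: T rises with expansion u."""
    return 100.0 + 50.0 * u

def solve_mechanical(T):
    """Mechanical solve with the thermal field frozen: linear thermal expansion."""
    return 1e-3 * (T - 90.0)

T, u = 100.0, 0.0
for k in range(100):
    T_new = solve_thermal(u)          # physics 1, other field held fixed
    u_new = solve_mechanical(T_new)   # physics 2, using the updated field
    converged = abs(u_new - u) < 1e-12
    T, u = T_new, u_new
    if converged:
        break                         # coupled system is self-consistent

print(k, T, u)
```

When the feedback between the two fields is weak (a contraction, as here), few fixed-point sweeps are needed; strong two-way feedback, as in thermomechanical contact, slows or breaks this loop, which is why the paper favours tight Newton coupling there.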
NASA Astrophysics Data System (ADS)
Giordano, Gerardo
2015-03-01
Recently, I was tasked with the creation and execution of a new themed general education physics class called The Physics of Warfare. In the past, I had used the theme of a class, such as the physics of sports medicine, as a way to create homework and in-class activities, generate discussions, and provide an application to demonstrate that physics isn't always abstract. It is true that the examples and applications in this warfare class practically wrote themselves, but I wanted more for my students. I wanted them to embrace the iterative nature of scientific understanding. I wanted them to yearn for the breakthroughs that lead to paradigm shifts. I wanted them to demand experimental verification of each novel idea. This paper discusses the formation and implementation of a conceptual physics course, full of in-class demonstrations and solidly rooted in the context of humankind's ever-evolving methods of waging war.
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid iteration, required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations that are presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that results in transfer coefficients as a function of Rib and roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with a good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the new proposed parametrizations in models are discussed.
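The iteration that such Rib-based parametrizations avoid can be illustrated with the simplest log-linear stable stability function, psi_m = psi_h = −5 z/L. This is a deliberately simplified sketch, not the SHEBA-based functions or the semi-analytic solution of the paper.

```python
import numpy as np

# Iterative MOST solution for the momentum transfer coefficient C_D(Ri_b),
# assuming identical roughness lengths for momentum and heat and the simple
# stable stability function psi = -5 z/L (illustrative values throughout).
kappa, z, z0 = 0.4, 10.0, 1e-3    # von Karman constant, height and roughness (m)
lam = np.log(z / z0)              # neutral log-profile term ln(z/z0)

def c_d(Rib, n_iter=100):
    zeta = 0.0                    # stability parameter zeta = z/L, start neutral
    for _ in range(n_iter):
        # With psi = -5*zeta, MOST gives Ri_b = zeta / (ln(z/z0) + 5*zeta);
        # iterate this relation as a fixed point for zeta (contracts for Ri_b < 0.2).
        zeta = Rib * (lam + 5.0 * zeta)
    return (kappa / (lam + 5.0 * zeta)) ** 2

print(c_d(0.0))    # neutral limit (kappa / ln(z/z0))^2
print(c_d(0.1))    # stable case: a markedly smaller coefficient
```

Even this toy version reproduces the qualitative point of the abstract: for larger Rib the transfer coefficient falls well below its neutral value, and a pre-tabulated or analytic C_D(Rib) spares the model this per-gridpoint iteration.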
State Transition Matrix for Perturbed Orbital Motion Using Modified Chebyshev Picard Iteration
NASA Astrophysics Data System (ADS)
Read, Julie L.; Younes, Ahmad Bani; Macomber, Brent; Turner, James; Junkins, John L.
2015-06-01
The Modified Chebyshev Picard Iteration (MCPI) method has recently proven to be highly efficient for a given accuracy compared to several commonly adopted numerical integration methods, as a means to solve for perturbed orbital motion. This method utilizes Picard iteration, which generates a sequence of path approximations, and Chebyshev polynomials, which are orthogonal and enable both efficient and accurate function approximation. The nodes consistent with discrete Chebyshev orthogonality are generated using cosine sampling; this strategy also reduces the Runge effect, and as a consequence of orthogonality, no matrix inversion is required to find the basis function coefficients. The MCPI algorithms considered herein are parallel-structured, so that they are immediately well-suited for massively parallel implementation with additional speedup. MCPI has a wide range of applications beyond ephemeris propagation, including the propagation of the State Transition Matrix (STM) for perturbed two-body motion. A solution is achieved for a spherical harmonic series representation of Earth gravity (EGM2008), although the methodology is suitable for application to any gravity model. The normalized associated Legendre functions used in this representation are given and verified numerically. Modifications of the classical algorithm techniques, such as rewriting the STM equations in a second-order cascade formulation, give rise to additional speedup. Timing results for the baseline formulation and this second-order formulation are given.
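The two ingredients of MCPI, Picard iteration on a path approximation and cosine-sampled (Chebyshev) nodes, can be shown in a stripped-down form. True MCPI represents the path in a Chebyshev basis so the Picard integral acts exactly on the coefficients; the sketch below substitutes trapezoidal quadrature on the same nodes for brevity and uses a simple test problem, not orbital dynamics.

```python
import numpy as np

N = 64
# Cosine sampling of [0, 1]: node clustering near the ends reduces the Runge effect.
t = 0.5 * (1.0 - np.cos(np.pi * np.arange(N + 1) / N))

f = lambda s, x: -x       # test problem dx/dt = -x, x(0) = 1 (exact: exp(-t))
x = np.ones_like(t)       # initial path approximation x_0(t) = x(0)
for _ in range(30):       # Picard: x_{k+1}(t) = x(0) + integral_0^t f(s, x_k(s)) ds
    integrand = f(t, x)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    x = 1.0 + np.concatenate(([0.0], np.cumsum(steps)))

err = np.max(np.abs(x - np.exp(-t)))
print(err)   # accuracy limited by the quadrature, not by the iteration
```

Each Picard sweep updates the entire path at once from the previous path, which is what makes the method so amenable to parallel evaluation across the nodes.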
Kleene Monads: Handling Iteration in a Framework of Generic Effects
NASA Astrophysics Data System (ADS)
Goncharov, Sergey; Schröder, Lutz; Mossakowski, Till
Monads are a well-established tool for modelling various computational effects. They form the semantic basis of Moggi’s computational metalanguage, the metalanguage of effects for short, which made its way into modern functional programming in the shape of Haskell’s do-notation. Standard computational idioms call for specific classes of monads that support additional control operations. Here, we introduce Kleene monads, which additionally feature nondeterministic choice and Kleene star, i.e. nondeterministic iteration, and we provide a metalanguage and a sound calculus for Kleene monads, the metalanguage of control and effects, which is the natural joint extension of Kleene algebra and the metalanguage of effects. This provides a framework for studying abstract program equality focussing on iteration and effects. These aspects are known to have decidable equational theories when studied in isolation. However, it is well known that decidability breaks easily; e.g. the Horn theory of continuous Kleene algebras fails to be recursively enumerable. Here, we prove several negative results for the metalanguage of control and effects; in particular, already the equational theory of the unrestricted metalanguage of control and effects over continuous Kleene monads fails to be recursively enumerable. We proceed to identify a fragment of this language which still contains both Kleene algebra and the metalanguage of effects and for which the natural axiomatisation is complete, and indeed the equational theory is decidable.
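For intuition, the finite powerset monad is a simple instance of a Kleene monad: nondeterministic choice is set union and Kleene star is a least fixed point (reflexive-transitive closure). A minimal sketch, not the paper's metalanguage:

```python
def kleene_star(step, start):
    """Reflexive-transitive closure ('Kleene star') of a nondeterministic
    step function in the finite powerset monad: all states reachable by
    iterating 'step' zero or more times, computed as a least fixed point."""
    reached = set(start)
    frontier = set(start)
    while frontier:
        # one more application of the step, keeping only newly seen states
        frontier = {t for s in frontier for t in step(s)} - reached
        reached |= frontier
    return reached
```

Termination here relies on the state space being finite; the continuity and completeness questions studied in the paper concern the general, possibly infinite, case.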
Implementing partnership-driven clinical federated electronic health record data sharing networks.
Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein
2016-09-01
Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and a consistent focus on end users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustain meaningful partnerships, and deliver clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and to support knowledge sharing between the two network development teams, which collaborated across teams to support and manage common stages. We found that using a spiral model of software development with multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment in which to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative, cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna
2013-08-01
Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity and with adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, such as our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material; both effects are far more significant in dense geological material than in human or small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS account for all nuclear physical processes involved in the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite, compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.
A New Map of Standardized Terrestrial Ecosystems of the Conterminous United States
Sayre, Roger G.; Comer, Patrick; Warner, Harumi; Cress, Jill
2009-01-01
A new map of standardized, mesoscale (tens to thousands of hectares) terrestrial ecosystems for the conterminous United States was developed by using a biophysical stratification approach. The ecosystems delineated in this top-down, deductive modeling effort are described in NatureServe's classification of terrestrial ecological systems of the United States. The ecosystems were mapped as physically distinct areas and were associated with known distributions of vegetation assemblages by using a standardized methodology first developed for South America. This approach follows the geoecosystems concept of R.J. Huggett and the ecosystem geography approach of R.G. Bailey. Unique physical environments were delineated through a geospatial combination of national data layers for biogeography, bioclimate, surficial materials lithology, land surface forms, and topographic moisture potential. Combining these layers resulted in a comprehensive biophysical stratification of the conterminous United States, which produced 13,482 unique biophysical areas. These were considered as fundamental units of ecosystem structure and were aggregated into 419 potential terrestrial ecosystems. The ecosystems classification effort preceded the mapping effort and involved the independent development of diagnostic criteria, descriptions, and nomenclature for describing expert-derived ecological systems. The aggregation and labeling of the mapped ecosystem structure units into the ecological systems classification was accomplished in an iterative, expert-knowledge-based process using automated rulesets for identifying ecosystems on the basis of their biophysical and biogeographic attributes. 
The mapped ecosystems, at a 30-meter base resolution, represent an improvement in spatial and thematic (class) resolution over existing ecoregionalizations and are useful for a variety of applications, including ecosystem services assessments, climate change impact studies, biodiversity conservation, and resource management.
NASA Astrophysics Data System (ADS)
Oliveira, Rui Jorge; Caldeira, Bento; Borges, José Fernando
2017-04-01
Obtaining three-dimensional models of the physical properties of buried structures in the subsurface by inversion of GPR data is appealing to archaeology and a challenge to geophysics. Along this line of research, two major problems stand out: 1) establishing the computational basis that allows the physical conditions under which the GPR wave was generated to be assigned numerically in synthetic radargrams; and 2) automatically comparing the computed synthetic radargrams with the corresponding observed ones. The influence of the pulse shape on GPR data processing was studied. The pulse shape emitted by the GPR antennas was acquired experimentally, and this information was used in the deconvolution operation, carried out by an iterative process similar to the approach used in seismology to obtain receiver functions. To compare real and synthetic radargrams, automatic image adjustment algorithms were tested that search for the best fit between two radargrams and quantify their differences through the Normalized Root Mean Square Deviation (NRMSD). After the last tests, the NRMSD between the synthetic and real data is about 19% (initially it was 29%). These procedures are essential for performing an inversion of GPR data obtained in the field. Acknowledgment: This work is co-funded by the European Union through the European Regional Development Fund, included in the COMPETE 2020 (Operational Program Competitiveness and Internationalization) through the ICT project (UID/GEO/04683/2013) with the reference POCI-01-0145-FEDER-007690.
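The NRMSD figure quoted above can be computed as follows. The abstract does not state the normalization convention, so normalization by the range of the observed data is an assumption here:

```python
import numpy as np

def nrmsd(observed, synthetic):
    """Root-mean-square deviation between two radargrams (2-D amplitude
    arrays), normalized by the range of the observed data.  Normalization
    conventions vary; range-normalization is assumed here."""
    obs = np.asarray(observed, dtype=float)
    syn = np.asarray(synthetic, dtype=float)
    rmsd = np.sqrt(np.mean((obs - syn) ** 2))
    return rmsd / (obs.max() - obs.min())
```

An automatic adjustment loop would then perturb the synthetic model and keep the change whenever this value decreases.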
NASA Astrophysics Data System (ADS)
Liu, Y.; Guo, Q.; Sun, Y.
2014-04-01
In map production and generalization, spatial conflicts inevitably arise, yet their detection and resolution still require manual operation. This has become a bottleneck hindering the development of automated cartographic generalization. Displacement is the most useful contextual operator for resolving conflicts between two or more map objects. Automated generalization research has reported many displacement approaches, including sequential approaches and optimization approaches. The elastic beams model, an excellent optimization approach based on energy minimization principles, has been used several times to solve displacement problems for roads and buildings. However, a complete displacement solution must also take conflict detection and spatial context analysis into consideration. We therefore propose in this paper a complete displacement solution based on the combined use of the elastic beams model and constrained Delaunay triangulation (CDT). The solution is designed as a cyclic, iterative process with two phases: a detection phase and a displacement phase. In the detection phase, the CDT of the map is used to detect proximity conflicts, identify spatial relationships and structures, and construct auxiliary structures, so as to support the beams-based displacement phase. In addition, to improve the displacement algorithm, a method for adaptive parameter setting and a new iterative strategy are put forward. Finally, we implemented our solution on a testing map generalization platform and successfully tested it against two hand-generated test datasets of roads and buildings.
NASA Astrophysics Data System (ADS)
Yu, Hua-Gen
2016-08-01
We report a new full-dimensional variational algorithm to calculate rovibrational spectra of polyatomic molecules using an exact quantum mechanical Hamiltonian. The rovibrational Hamiltonian of the system is derived in a set of orthogonal polyspherical coordinates in the body-fixed frame and is expressed in an explicitly Hermitian form. The Hamiltonian has a universal formulation regardless of the choice of orthogonal polyspherical coordinates and the number of atoms in the molecule, which makes it suitable for developing a general program to study the spectra of many polyatomic systems. An efficient coupled-state approach is also proposed to solve the eigenvalue problem of the Hamiltonian using a multi-layer Lanczos iterative diagonalization approach with a direct product basis spanning three coordinate groups: radial coordinates, angular variables, and overall rotational angles. A simple set of symmetric-top rotational functions is used for the overall rotation, whereas a potential-optimized discrete variable representation method is employed for the radial coordinates. A set of contracted, vibrationally diabatic basis functions is adopted for the internal angular variables. These diabatic functions are computed only once, using a neural network iterative diagonalization method based on a reduced-dimension Hamiltonian. The final rovibrational energies are computed using a modified Lanczos method for a given total angular momentum J, which is usually fast. Two numerical applications, to CH4 and H2CO, are given, together with a comparison with previous results.
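The basic Lanczos iteration underlying such diagonalization schemes can be sketched as follows. This is a toy dense-matrix version with full reorthogonalization, a stand-in for the multi-layer solver of the paper:

```python
import numpy as np

def lanczos_ritz(H, v0, m):
    """Plain Lanczos tridiagonalization of a symmetric matrix H, started from
    vector v0, returning the Ritz values (approximate eigenvalues) after m
    steps.  Full reorthogonalization keeps the Krylov basis orthogonal."""
    n = v0.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    v = v0 / np.linalg.norm(v0)
    V[:, 0] = v
    w = H @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        v = w / beta[j - 1]
        V[:, j] = v
        w = H @ v - beta[j - 1] * V[:, j - 1]
        alpha[j] = v @ w
        w = w - alpha[j] * v
        # full reorthogonalization against all previous Lanczos vectors
        w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
    # eigenvalues of the small tridiagonal matrix approximate those of H
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)
```

In practice only matrix-vector products H @ v are needed, which is what makes the method attractive for large direct-product bases.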
BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation
NASA Astrophysics Data System (ADS)
Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana
2006-01-01
Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactic basis: what an unsupervised segmentation algorithm can segment are only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, BlobContours, gradually updates it by recalculating every blob based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm through user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements, which are discussed in detail.
On the physical basis of a theory of human thermoregulation.
NASA Technical Reports Server (NTRS)
Iberall, A. S.; Schindler, A. M.
1973-01-01
Theoretical study of the physical factors which are responsible for thermoregulation in nude resting humans in a physical steady state. The behavior of oxidative metabolism, evaporative and convective thermal fluxes, fluid heat transfer, internal and surface temperatures, and evaporative phase transitions is studied by physiological/physical modeling techniques. The modeling is based on the theories that the body has a vital core with autothermoregulation, that the vital core contracts longitudinally, that the temperature of peripheral regions and extremities decreases towards the ambient, and that a significant portion of the evaporative heat may be lost underneath the skin. A theoretical basis is derived for a consistent modeling of steady-state thermoregulation on the basis of these theories.
The use of virtual fiducials in image-guided kidney surgery
NASA Astrophysics Data System (ADS)
Glisson, Courtenay; Ong, Rowena; Simpson, Amber; Clark, Peter; Herrell, S. D.; Galloway, Robert
2011-03-01
The alignment of image-space to physical-space lies at the heart of all image-guided procedures. In intracranial surgery, point-based registrations can be used with either skin-affixed or bone-implanted extrinsic objects called fiducial markers. The advantages of point-based registration techniques are that they are robust, fast, and have a well-developed mathematical foundation for the assessment of registration quality. In abdominal image-guided procedures such techniques have not been successful: it is difficult to accurately locate sufficient homologous intrinsic points in image-space and physical-space, and the implantation of extrinsic fiducial markers would constitute "surgery before the surgery." Image-space to physical-space registration for abdominal organs has therefore been dominated by surface-based registration techniques, which are iterative, prone to local minima, sensitive to initial pose, and sensitive to the percentage coverage of the physical surface. In our work in image-guided kidney surgery we have developed a composite approach using "virtual fiducials." In an open kidney surgery, the perirenal fat is removed and the surface of the kidney is dotted using a surgical marker. A laser range scanner (LRS) is used to obtain a surface representation and a matching high-definition photograph. A surface-to-surface registration is performed using a modified iterative closest point (ICP) algorithm. The dots are extracted from the high-definition image and assigned the three-dimensional values of the LRS pixels over which they lie. As the surgery proceeds, we can then use point-based registrations to re-register the spaces and track deformations due to vascular clamping and surgical traction.
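The point-based registration step has a closed-form solution via the SVD (Kabsch method). The sketch below is illustrative; function names are hypothetical and the authors' implementation is not specified:

```python
import numpy as np

def point_register(fixed, moving):
    """Closed-form rigid point-based registration (SVD/Kabsch method) mapping
    the 'moving' fiducial points onto the homologous 'fixed' points.
    Returns the rotation R and translation t minimizing the RMS distance."""
    fc = fixed.mean(axis=0)
    mc = moving.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

def fiducial_registration_error(fixed, moving, R, t):
    """RMS residual distance of the fiducials after registration (FRE)."""
    residual = fixed - (moving @ R.T + t)
    return np.sqrt(np.mean(np.sum(residual ** 2, axis=1)))
```

The FRE is one of the well-developed quality measures that make point-based registration attractive compared with iterative surface matching.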
Gupta, Sabrina S; Aroni, Rosalie; Teede, Helena
2017-02-01
Research indicates that there are worryingly low levels of physical activity among South Asians compared with Anglo-Australians with type 2 diabetes and/or cardiovascular disease (CVD). We compared perceptions, barriers, and enablers of physical activity in these groups. We used a qualitative design, conducting in-depth, semistructured iterative interviews in Victoria with 57 South Asian and Anglo-Australian participants with either type 2 diabetes or CVD. While both groups exhibited knowledge of the value of physical activity in health maintenance and disease management, they wished for more specific and culturally tailored advice from clinicians about the type, duration, and intensity of physical activity required. Physical activity identities were tied to ethnic identities, with members of each group aspiring to meet the norms of their culture regarding engagement with physical activity as specific exercise or as incidental exercise. Individual personal exercise was deemed important by Anglo-Australians whereas South Asians preferred family-based physical activity.
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Computer simulating observations of the Lunar physical libration for the Japanese Lunar project ILOM
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Hanada, Hideo
2010-05-01
In the frame of the second stage of the Japanese space mission SELENE-2 (Hanada et al. 2009), the ILOM project (In-situ Lunar Orientation Measurement), planned after 2017, is an instrument for positioning on the Moon. It will be set near the lunar pole and will determine the parameters of lunar physical libration by positioning several tens of stars in the field of view regularly for longer than one year. The presented work is dedicated to the analysis of computer simulation of the future observations. It is proposed that for every star crossing the lunar prime meridian its polar distance will be measured. Methods of optimal star observation are being developed for the future experiment. Equations are constructed to determine the libration angles τ(t), ρ(t), Iσ(t) on the basis of the observed polar distances p_obs: f_i(τ, ρ, Iσ, p_obs) = 0 for i = 1, 2, 3, or in vector form f(X) = 0, where f = (f_1, f_2, f_3)^T and X = (τ, ρ, Iσ)^T (1). At the present stage we have developed software for selecting stars for these future polar observations. Stars were taken from various stellar catalogues, such as UCAC2-BSS, Hipparcos, Tycho and FK6. The software reduces the ICRS coordinates of a star to the selenographic system at the epoch of observation (Petrova et al., 2009). For example, for the epochs 2017-2018 more than 50 stars brighter than m = 12 were selected for the northern pole. In total, these stars give about 600 crossings of the prime meridian during one year. Nevertheless, only a few stars (2-5) may be observed in the vicinity of any one moment, which is not a sufficient sample for excluding various kinds of errors. The software includes programmes that determine the moment of transit of a star across the meridian and the theoretical values of the libration angles at these moments. A serious problem arises when we try to solve equations (1) to determine the libration angles on the basis of the simulated p_obs.
Polar distances are calculated using the analytical theory of physical libration (Petrova et al., 2008; 2009). We cannot use Newton's method to solve the equations, because the Jacobian vanishes: J(X) = det(∂f_i/∂x_j) = 0. We therefore transformed the equations into the iteration form x_i = φ_i(X). The iteration methods used have unsatisfactory convergence: an inaccuracy of 1 millisecond of arc in polar distance causes an inaccuracy of 0.01 arcsec in ρ and in Iσ, and of 0.1 arcsec in τ. Our computer simulations showed that it is necessary to measure the polar distances of stars in several meridians simultaneously to increase the sample of stars, and to find additional relations between the observed parameters and the libration angles in order to obtain stable mathematical methods that yield solutions for lunar rotation with high accuracy. The research was supported by the Russian-Japanese grant RFFI-JSPS 09-02-92113 (2009-2010). References: Hanada H., Noda H., Kikuchi F. et al., 2009. Different kinds of observations of lunar rotation and gravity for SELENE-2. Proc. of conf. Astrokazan-2009, August 19-26, Kazan, Russia, p. 172-175. Petrova N., Gusev A., Kawano N., Hanada H., 2008. Free librations of the two-layer Moon and the possibilities of their detection. Advances in Space Res., v. 42, p. 1398-1404. Petrova N., Gusev A., Hanada H., Ivanova T., Akutina V., 2009. Application of the analytical theory of Lunar physical libration for simulating observations of stars for the future Japanese project ILOM. Proc. of conf. Astrokazan-2009, August 19-26, Kazan, Russia, p. 197-201.
Mathematical enhancement of data from scientific measuring instruments
NASA Technical Reports Server (NTRS)
Ioup, J. W.
1982-01-01
The accuracy of any physical measurement is limited by the instruments performing it. The proposed activities of this grant concern the study and application of mathematical deconvolution techniques. Two techniques are being investigated: an iterative method and a function-continuation Fourier method. This final status report describes the work performed during the period July 1 to December 31, 1982.
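One classical iterative deconvolution scheme of the kind under study is Van Cittert iteration. The report does not specify which iterative method was used, so this particular scheme is an assumption chosen for illustration:

```python
import numpy as np

def van_cittert(observed, psf, n_iter=400, relax=1.0):
    """Van Cittert iterative deconvolution: repeatedly correct the estimate
    by the residual between the data and the re-blurred estimate,
    x_{k+1} = x_k + relax * (y - psf * x_k).  Convergence requires a
    well-behaved instrument response (all spectral components positive)."""
    y = np.asarray(observed, dtype=float)
    x = y.copy()                       # start from the observed data
    for _ in range(n_iter):
        # residual between the data and the current estimate re-blurred
        x = x + relax * (y - np.convolve(x, psf, mode="same"))
    return x
```

Each iteration sharpens features that the instrument function smeared out, at the cost of amplifying noise if run too long.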
Guarded Motion for Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Idaho National Laboratory (INL) has created codes that ensure that a robot will come to a stop at a precise, specified distance from any obstacle regardless of the robot's initial speed, its physical characteristics, and the responsiveness of the low-level motor control schema. This Guarded Motion for Mobile Robots system iteratively adjusts the robot's action in response to information about the robot's environment.
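The guarded-motion guarantee can be illustrated with simple stopping-distance kinematics. Parameter names are hypothetical, and the real INL system additionally adapts to the measured responsiveness of the low-level motor controller:

```python
import math

def max_safe_speed(distance, decel, latency, margin=0.0):
    """Largest commanded speed from which the robot can still stop 'margin'
    meters short of an obstacle 'distance' meters away, given a worst-case
    command latency (s) and braking deceleration (m/s^2).  Illustrative
    kinematic sketch of the guarded-motion idea."""
    d = max(distance - margin, 0.0)  # distance available for reaction + braking
    # travel during latency plus braking distance must not exceed d:
    #   v*latency + v**2 / (2*decel) = d  ->  solve the quadratic for v
    return decel * (-latency + math.sqrt(latency ** 2 + 2.0 * d / decel))
```

A guarded-motion loop would clamp each commanded speed to this bound using the current range reading, so the stop point stays fixed regardless of the initial speed.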
ERIC Educational Resources Information Center
Markovic, Živorad; Kopas-Vukašinovic, Emina
2015-01-01
In their work the authors consider the significance of the organization of physical activities for the development of abilities in pre-school and school children. They are guided by the theoretical premise that children's physical development represents the basis of their whole development, and that "fine motor skills" are determined by the development of…
NASA Astrophysics Data System (ADS)
McCray, A.; Punjabi, A.; Ali, H.
2004-11-01
The unperturbed magnetic topology of DIII-D shot 115467 is described by the symmetric simple map (SSM) with map parameter k=0.2623 [1]; the last good surface then passes through x=0 and y=0.9995, with q_edge=6.48 (same as in shot 115467), if six iterations of the SSM are taken to be equivalent to a single toroidal circuit of DIII-D. The dipole map (DM) calculates the effects of localized, external, high-mode-number magnetic perturbations on the motion of field lines. We use the dipole map to describe the effects of the C-coils on field line trajectories in DIII-D. We apply the DM after each iteration of the SSM, with s=1.0021, x_dipole=1.5617, y_dipole=0 [1] for shot 115467. We study the changes in the last good surface and its destruction as a function of I_C-coil. This work is supported by the NASA SHARP program and DE-FG02-02ER54673. [1] H. Ali, A. Punjabi, A. Boozer, and T. Evans, presented at the 31st European Physical Society Plasma Physics Meeting, London, UK, June 29, 2004, paper P2-172.
Survival and in-vessel redistribution of beryllium droplets after ITER disruptions
NASA Astrophysics Data System (ADS)
Vignitchouk, L.; Ratynskaia, S.; Tolias, P.; Pitts, R. A.; De Temmerman, G.; Lehnen, M.; Kiramov, D.
2018-07-01
The motion and temperature evolution of beryllium droplets produced by first-wall surface melting after ITER major disruptions and vertical displacement events mitigated during the current quench are simulated with the MIGRAINe dust dynamics code. These simulations employ an updated physical model which addresses droplet-plasma interaction in ITER-relevant regimes characterized by magnetized electron collection and thin-sheath ion collection, as well as electron emission processes induced by electron and high-Z ion impacts. The disruption scenarios have been implemented from DINA simulations of the time-evolving plasma parameters, while the droplet injection points are set to the first-wall locations expected to receive the highest thermal quench heat flux according to field line tracing studies. The droplet size, speed and ejection angle are varied within the range of currently available experimental and theoretical constraints, and the final quantities of interest are obtained by weighting single-trajectory output with different size and speed distributions. Detailed estimates of droplet solidification into dust grains and their subsequent deposition in the vessel are obtained. For representative distributions of the droplet injection parameters, the results indicate that at most a few percent of the beryllium mass initially injected is converted into solid dust, while the remaining mass either vaporizes or forms liquid splashes on the wall. Simulated in-vessel spatial distributions are also provided for the surviving dust, with the aim of guiding the planned dust diagnostic, retrieval and clean-up systems on ITER.
NASA Astrophysics Data System (ADS)
Nocente, M.; Tardocchi, M.; Barnsley, R.; Bertalot, L.; Brichard, B.; Croci, G.; Brolatti, G.; Di Pace, L.; Fernandes, A.; Giacomelli, L.; Lengar, I.; Moszynski, M.; Krasilnikov, V.; Muraro, A.; Pereira, R. C.; Perelli Cippo, E.; Rigamonti, D.; Rebai, M.; Rzadkiewicz, J.; Salewski, M.; Santosh, P.; Sousa, J.; Zychor, I.; Gorini, G.
2017-07-01
We here present the principles and main physics capabilities behind the design of the radial gamma-ray spectrometers (RGRS) system for alpha particle and runaway electron measurements at ITER. The diagnostic benefits from recent advances in gamma-ray spectrometry for tokamak plasmas and combines spatial and high energy resolution in a single device. The RGRS system as designed can provide information on α particles on a time scale of 1/10 of the slowing-down time for the ITER 500 MW full-power DT scenario. Spectral observations of the 3.21 and 4.44 MeV peaks from the ⁹Be(α,nγ)¹²C reaction make the measurements sensitive to α particles at characteristic resonant energies and to possible anisotropies of their slowing-down distribution function. An independent assessment of the neutron rate by gamma-ray emission is also feasible. In the case of runaway electrons born in disruptions with a typical duration of 100 ms, a time resolution of at least 10 ms for runaway electron studies can be achieved, depending on the scenario, and down to a current of 40 kA with the use of external gas injection. We find that the bremsstrahlung spectrum in the MeV range from confined runaways is sensitive to the electron velocity space up to E ≈ 30-40 MeV, which allows for measurements of the energy distribution of the runaway electrons at ITER.
Identification of threshold concepts for biochemistry.
Loertscher, Jennifer; Green, David; Lewis, Jennifer E; Lin, Sara; Minderhout, Vicky
2014-01-01
Threshold concepts (TCs) are concepts that, when mastered, represent a transformed understanding of a discipline without which the learner cannot progress. We have undertaken a process involving more than 75 faculty members and 50 undergraduate students to identify a working list of TCs for biochemistry. The process of identifying TCs for biochemistry was modeled on extensive work related to TCs across a range of disciplines and included faculty workshops and student interviews. Using an iterative process, we prioritized five concepts on which to focus future development of instructional materials. Broadly defined, the concepts are steady state, biochemical pathway dynamics and regulation, the physical basis of interactions, thermodynamics of macromolecular structure formation, and free energy. The working list presented here is not intended to be exhaustive, but rather is meant to identify a subset of TCs for biochemistry for which instructional and assessment tools for undergraduate biochemistry will be developed. © 2014 J. Loertscher et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manoli, Gabriele, E-mail: manoli@dmsa.unipd.it; Nicholas School of the Environment, Duke University, Durham, NC 27708; Rossi, Matteo
The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
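The SIR update at the heart of this assimilation scheme can be sketched in a few lines. The scalar quadratic forward map and all numerical values below are illustrative stand-ins, not the Richards/ERT forward models of the paper: particles are weighted by the likelihood of the observation and resampled in proportion to those weights.

```python
import numpy as np

def sir_step(particles, forward_model, observation, obs_std, rng):
    """One Sequential Importance Resampling step: weight each particle by the
    Gaussian likelihood of the observation, then resample proportionally."""
    predicted = np.array([forward_model(p) for p in particles])
    weights = np.exp(-0.5 * ((predicted - observation) / obs_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Toy example: estimate a hydraulic-like parameter k from one noisy observation
rng = np.random.default_rng(0)
true_k = 2.0
forward = lambda k: k ** 2                  # stand-in for the coupled forward model
obs = forward(true_k) + rng.normal(0, 0.1)

particles = rng.uniform(0.5, 4.0, size=5000)
for _ in range(5):                          # iterate to sharpen the posterior
    particles = sir_step(particles, forward, obs, obs_std=0.1, rng=rng)
    particles += rng.normal(0, 0.02, size=particles.size)  # jitter against collapse

estimate = particles.mean()
```

The jitter step is a common practical safeguard against particle degeneracy when the same observation is assimilated repeatedly, as in the iterated variant described above.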
NASA Astrophysics Data System (ADS)
Di Sipio, Eloisa; Bertermann, David
2017-04-01
Nowadays renewable energy resources for heating/cooling residential and tertiary buildings and agricultural greenhouses are becoming increasingly important. In this framework, a possible, natural and valid alternative for thermal energy supply is represented by soils. In fact, since 1980 soils have been studied and used as heat reservoirs in geothermal applications, acting as a heat source (in winter) or sink (in summer) coupled mainly with heat pumps. Therefore, the knowledge of soil thermal properties and of heat and mass transfer in soils plays an important role in modeling the performance, reliability and environmental impact, in the short and long term, of engineering applications. However, soil thermal behavior varies with soil physical characteristics such as soil texture and water content. The available data are often scattered and incomplete for geothermal applications, especially very shallow geothermal systems (up to 10 m depth), so a better understanding of how the different soil typologies (e.g. sand, loamy sand, ...) affect and are affected by the heat exchange with very shallow geothermal installations (i.e. horizontal collector systems and special forms) is of interest. Taking these premises into consideration, the ITER Project (Improving Thermal Efficiency of horizontal ground heat exchangers, http://iter-geo.eu/), funded by the European Union, is presented here. An overview of physical-thermal property variations under different moisture and load conditions for different mixtures of natural material is shown, based on laboratory and field test data. The test site, located in Eltersdorf, near Erlangen (Germany), consists of 5 trenches, each filled with a different material, in which 5 helix collectors have been installed horizontally instead of in the traditional vertical orientation.
Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping
2015-09-15
Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
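The NAF construction reduces to a singular value decomposition of the three-center integral matrix. A minimal numerical sketch, in which the random matrix, its decay profile and the truncation threshold are all illustrative assumptions rather than real integrals:

```python
import numpy as np

rng = np.random.default_rng(1)
naux, nbf = 40, 8

# Stand-in for three-center two-electron integrals (P|mu nu), flattened over mu,nu;
# the column scaling mimics the rapid decay of significant integral directions.
J = rng.standard_normal((naux, nbf * nbf))
J = J @ np.diag(np.exp(-0.6 * np.arange(nbf * nbf)))

# Natural auxiliary functions = left singular vectors of the three-center matrix
U, s, _ = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-6 * s[0]          # truncation threshold (an assumption here)
naf = U[:, keep]                # columns: NAFs as combinations of original aux fns

# Integrals re-expressed in the smaller NAF basis, and the truncation error
J_naf = naf.T @ J
err = np.linalg.norm(naf @ J_naf - J) / np.linalg.norm(J)
```

The point of the exercise is that `naf.shape[1]` is well below `naux` while `err` stays negligible, which is what makes the systematic truncation of the fitting basis attractive for methods scaling quadratically in that dimension.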
Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing
2014-04-01
Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
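The restart idea behind re-L1-NCG can be sketched with an outer loop that periodically resets the conjugate direction. This is only a sketch under stated assumptions: a smoothed L1 penalty, Polak-Ribière updates and toy problem sizes, not the authors' exact re-L1-NCG algorithm or an FMT forward model.

```python
import numpy as np

def re_l1_ncg(A, b, lam, n_outer=5, n_inner=50, eps=1e-6):
    """Restarted nonlinear conjugate gradient for smoothed-L1-regularized
    least squares: min ||Ax - b||^2 + lam * sum(sqrt(x^2 + eps))."""
    cost = lambda x: np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(x * x + eps))
    grad = lambda x: 2 * A.T @ (A @ x - b) + lam * x / np.sqrt(x * x + eps)
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):              # outer loop: restart from steepest descent
        g = grad(x)
        d = -g
        for _ in range(n_inner):          # inner loop: Polak-Ribiere NCG
            if g @ d >= 0:                # safeguard: keep a descent direction
                d = -g
            t, c0 = 1.0, cost(x)
            while cost(x + t * d) > c0 + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5                  # backtracking (Armijo) line search
            x = x + t * d
            g_new = grad(x)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-300))
            d = -g_new + beta * d
            g = g_new
    return x

# Toy sparse-recovery problem standing in for the FMT inverse problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true + rng.normal(0, 0.01, size=50)
x_hat = re_l1_ncg(A, b, lam=0.1)
```

The restart (outer) loop discards stale conjugacy information, which is what accelerates convergence of the inner NCG in the scheme described above.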
Inversion of potential field data using the finite element method on parallel computers
NASA Astrophysics Data System (ADS)
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. Extending the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
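The forward/adjoint pattern inside each BFGS step (solve the PDE for the fields, solve the adjoint PDE for the gradient) can be illustrated on a toy problem in which a tridiagonal system stands in for the discretized potential-field PDE. SciPy's stock BFGS replaces the preconditioned, function-space implementation described above; everything below is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize

n = 20
K = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
obs = [3, 8, 14]                        # observation locations
m_true = np.zeros(n); m_true[10] = 1.0  # "density" model generating the data
d = np.linalg.solve(K, m_true)[obs]     # synthetic potential-field data

def cost_and_grad(m, alpha=1e-4):
    u = np.linalg.solve(K, m)           # forward PDE solve: K u = m
    r = u[obs] - d                      # data misfit
    rhs = np.zeros(n); rhs[obs] = r
    lam = np.linalg.solve(K.T, rhs)     # adjoint PDE solve
    J = 0.5 * r @ r + 0.5 * alpha * m @ m   # misfit + Tikhonov regularization
    g = lam + alpha * m                 # gradient assembled from the adjoint
    return J, g

res = minimize(cost_and_grad, np.zeros(n), jac=True, method='BFGS')
```

Each objective evaluation thus costs one forward and one adjoint solve, which is why the paper counts PDE solutions per BFGS iteration as the relevant measure of work.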
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Given the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered as being significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
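The core idea (extract embedded sine functions one at a time, in decreasing order of significance) can be sketched as a greedy least-squares fit over a frequency grid. The grid and the synthetic two-tone signal below are illustrative assumptions, not the Sunspot Number analysis itself:

```python
import numpy as np

def fit_sines(t, y, n_components=2, freqs=np.linspace(0.01, 0.5, 2000)):
    """Greedy iterative least-squares extraction of sine components.
    For a fixed frequency the amplitude and phase enter linearly
    (y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t)), so each candidate is a 2-column
    linear fit; the best frequency is subtracted and the search repeats."""
    residual = y.copy()
    components = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            X = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
            sse = np.sum((residual - X @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, f, coef, X)
        sse, f, coef, X = best
        components.append((f, np.hypot(*coef)))   # (frequency, amplitude)
        residual = residual - X @ coef
    return components

t = np.arange(200.0)
y = 3.0 * np.sin(2 * np.pi * 0.05 * t) + 1.0 * np.sin(2 * np.pi * 0.13 * t + 0.4)
comps = fit_sines(t, y)
```

Components come out sorted by explained variance, mirroring the paper's ordering from physically significant periodicities down to noise terms.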
Release phenomena and iterative activities in psychiatric geriatric patients
Villeneuve, A.; Turcotte, J.; Bouchard, M.; Côté, J. M.; Jus, A.
1974-01-01
This survey was undertaken to assess the frequency of some of the so-called release phenomena and iterative activities in an aged psychiatric population. Three groups of geriatric psychiatric patients with diagnoses of (I) organic brain syndrome, including senile dementia (56), (II) functional psychoses, predominantly schizophrenia (51) and (III) chronic schizophrenia never treated by neuroleptics or other biologic agents (16), were compared with (IV) a control group of 32 elderly people in good physical and mental health. In general, for the manifestations studied, the geriatric psychiatric patients suffering from an organic brain syndrome and treated with neuroleptics differed notably from the control group. This latter group, although older, had few neurological signs of senescence and the spontaneous oral movements usually associated with the use of neuroleptics were absent. Release phenomena such as the grasp and pouting reflexes, as well as the stereotyped activities, were encountered significantly more frequently in patients with an organic brain syndrome than in the two other groups of patients. Our survey has yielded limited results with regard to the possible influence of type of illness and neuroleptic treatment on the incidence of release phenomena and iterative activities. PMID:4810188
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquires data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived within the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
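The trade-off above (fewer views compensated by a regularizing prior) can be illustrated with a small linear toy problem, where a random matrix stands in for an undersampled projection geometry and a finite-difference smoothness prior stands in for the model-based prior:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x_true = np.sin(np.linspace(0, 3 * np.pi, n))     # smooth "object"
A = rng.standard_normal((40, n))                  # stand-in for a few-view system matrix
b = A @ x_true + rng.normal(0, 0.01, size=40)     # noisy, undersampled data

D = (np.eye(n) - np.eye(n, k=1))[:-1]             # finite-difference roughness operator

def reconstruct(lam):
    # Regularized normal equations: the prior supplies the information
    # the missing views cannot (the role it plays in model-based IR).
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

err_reg = np.linalg.norm(reconstruct(1.0) - x_true)
err_minnorm = np.linalg.norm(np.linalg.pinv(A) @ b - x_true)   # no prior
```

With 40 measurements for 100 unknowns, the unregularized minimum-norm solution misses most of the object, while the prior-regularized one recovers it, which is the mechanism that lets regularized iterative algorithms trade views for temporal resolution.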
Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution
NASA Astrophysics Data System (ADS)
van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.
2014-12-01
Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectory of plasma current, EC heating and current drive distribution is determined that minimizes a chosen cost function, while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.
Guidelines for internal optics optimization of the ITER EC H and CD upper launcher
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moro, A.; Bruschi, A.; Figini, L.
2014-02-12
The importance of localized injection of Electron Cyclotron waves to control magnetohydrodynamic (MHD) instabilities is well assessed in tokamak physics, and the set of four Electron Cyclotron (EC) Upper Launchers (UL) in ITER is mainly designed for this purpose. Each of the 4 ULs uses quasi-optical mirrors (shaped and plane, fixed and steerable) to redirect and focus 8 beams (in two rows, with power close to 1 MW per beam coming from the EC transmission lines) in the plasma region where the instability appears. Small beam dimensions and maximum beam superposition guarantee the necessary localization of the driven current. To achieve the goal of MHD stabilization with minimum EC power to preserve the energy confinement in the outer half of the plasma cross section, optimization of the quasi-optical design is required and guidelines for a strategy are presented. As a result of this process and following the guidelines indicated, modifications of the design (new mirror positions, rotation axes and/or focal properties) will be proposed for the next step of an iterative process, including the mandatory compatibility check with the mechanical constraints.
Yu, Hua-Gen
2002-01-01
We present a full dimensional variational algorithm to calculate vibrational energies of penta-atomic molecules. The quantum mechanical Hamiltonian of the system for J=0 is derived in a set of orthogonal polyspherical coordinates in the body-fixed frame without any dynamical approximation. Moreover, the vibrational Hamiltonian has been obtained in an explicitly Hermitian form. Variational calculations are performed in a direct product discrete variable representation basis set. The sine functions are used for the radial coordinates, whereas the Legendre polynomials are employed for the polar angles. For the azimuthal angles, the symmetrically adapted Fourier–Chebyshev basis functions are utilized. The eigenvalue problem is solved by a Lanczos iterative diagonalization algorithm. The preliminary application to methane is given. Ultimately, we made a comparison with previous results.
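The Lanczos iteration used for the diagonalization step needs only matrix-vector products, which is what makes it suitable for large product-basis Hamiltonians. A compact sketch, with a random symmetric matrix standing in for the vibrational Hamiltonian and no reorthogonalization (an assumption for brevity):

```python
import numpy as np

def lanczos_lowest(A, k=120, seed=0):
    """Plain Lanczos iteration approximating the lowest eigenvalue of a
    symmetric matrix via the k x k tridiagonal projection."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    q = rng.standard_normal(n); q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev     # only A @ v is ever needed
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        if beta < 1e-12:
            break
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)[0]   # lowest Ritz value

# Toy symmetric "Hamiltonian"
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
H = (M + M.T) / 2
approx = lanczos_lowest(H)
exact = np.linalg.eigvalsh(H)[0]
```

Extremal eigenvalues converge long before the Krylov space spans the full problem, which is why Lanczos is the method of choice for the large sparse eigenproblems arising from direct product DVR bases.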
Tensorial Basis Spline Collocation Method for Poisson's Equation
NASA Astrophysics Data System (ADS)
Plagne, Laurent; Berthou, Jean-Yves
2000-01-01
This paper aims to describe the tensorial basis spline collocation method applied to Poisson's equation. In the case of a localized 3D charge distribution in vacuum, this direct method based on a tensorial decomposition of the differential operator is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h⁴) and O(h⁶) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed memory parallel machine. Performance measurements on a Cray T3E are reported. Our code exhibits high performance and good scalability: As an example, a 27 Gflops performance is obtained when solving Poisson's equation on a 256³ non-uniform 3D Cartesian mesh by using 128 T3E-750 processors. This represents 215 Mflops per processor.
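The tensorial direct-solver idea can be illustrated in 2D with second-order finite differences in place of spline collocation (an illustrative simplification): diagonalize the 1D operator once, and the solve reduces to two dense transforms plus a pointwise division by sums of 1D eigenvalues.

```python
import numpy as np

def poisson2d_tensor_solve(f, h):
    """Direct tensor-decomposition solve of -Laplacian(u) = f with homogeneous
    Dirichlet boundaries: since the 2D operator is L (x) I + I (x) L, solving
    L U + U L = F in the eigenbasis of the 1D operator L is diagonal."""
    n = f.shape[0]
    L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2      # 1D -d2/dx2
    lam, Q = np.linalg.eigh(L)
    F = Q.T @ f @ Q                                  # transform to eigenbasis
    U = F / (lam[:, None] + lam[None, :])            # diagonal solve
    return Q @ U @ Q.T                               # transform back

n = 63; h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2 * np.pi**2 * u_exact                 # -Laplacian(u_exact) = f
u = poisson2d_tensor_solve(f, h)
err = np.max(np.abs(u - u_exact))
```

The same structure carries over to 3D and to higher-order spline collocation operators; only the 1D matrices and their eigen-decompositions change.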
Study on Parameter Identification of Assembly Robot based on Screw Theory
NASA Astrophysics Data System (ADS)
Yun, Shi; Xiaodong, Zhang
2017-11-01
The kinematic model of an assembly robot is one of the most important factors affecting repeatability. In order to improve positioning accuracy, this paper first establishes the exponential product model of the ER16-1600 assembly robot on the basis of screw theory, and then identifies the parameters of the ER16-1600 robot model using an iterative least squares method. Comparison of experiments before and after calibration proves that the method clearly improves the positioning accuracy of the assembly robot.
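The iterative least squares identification step can be sketched with a Gauss-Newton loop on a toy planar 2-link model; the model, the unknown joint offset and all numbers below are illustrative stand-ins for the full screw-theory exponential product model of the ER16-1600.

```python
import numpy as np

def fk(params, thetas):
    """Forward kinematics of a planar 2-link arm with an unknown offset on
    joint 1 (toy stand-in for the exponential product model)."""
    l1, l2, d = params
    t1 = thetas[:, 0] + d
    t2 = t1 + thetas[:, 1]
    return np.column_stack([l1 * np.cos(t1) + l2 * np.cos(t2),
                            l1 * np.sin(t1) + l2 * np.sin(t2)])

def identify(thetas, measured, p0, iters=20, eps=1e-6):
    """Iterative least squares (Gauss-Newton with a numeric Jacobian):
    repeatedly linearize the residual and solve the linear LS step."""
    p = np.array(p0, float)
    for _ in range(iters):
        r = (fk(p, thetas) - measured).ravel()
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p); dp[j] = eps
            J[:, j] = ((fk(p + dp, thetas) - fk(p, thetas)) / eps).ravel()
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += step
    return p

rng = np.random.default_rng(0)
true_p = np.array([0.8, 0.5, 0.03])                  # lengths + joint-1 offset
thetas = rng.uniform(-np.pi, np.pi, size=(25, 2))    # calibration poses
measured = fk(true_p, thetas) + rng.normal(0, 1e-4, size=(25, 2))
p_hat = identify(thetas, measured, p0=[1.0, 0.4, 0.0])
```

The calibration comparison in the paper corresponds to evaluating positioning error with `p0` (nominal parameters) versus `p_hat` (identified parameters).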
Preliminary design studies of an advanced general aviation aircraft
NASA Technical Reports Server (NTRS)
Barrett, Ron; Demoss, Shane; Dirkzwager, AB; Evans, Darryl; Gomer, Charles; Keiter, Jerry; Knipp, Darren; Seier, Glen; Smith, Steve; Wenninger, ED
1991-01-01
The preliminary design results of the advanced aircraft design project are presented. The goal was to take a revolutionary look at the design of a general aviation aircraft. Phase 1 of the project included the preliminary design of two configurations, a pusher and a tractor. Phase 2 included the selection of only one configuration for further study. The pusher configuration was selected on the basis of performance characteristics, cabin noise, natural laminar flow, and system layouts. The design was then iterated to achieve higher levels of performance.
Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley
NASA Astrophysics Data System (ADS)
Hodges, R. R.
1990-02-01
Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transport in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of the velocity distributions of five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto during its March 1986 encounter with Halley are compared with a model atmosphere.
Computational simulation of laser heat processing of materials
NASA Astrophysics Data System (ADS)
Shankar, Vijaya; Gnanamuthu, Daniel
1987-04-01
A computational model simulating the laser heat treatment of AISI 4140 steel plates with a CW CO2 laser beam has been developed on the basis of the three-dimensional, time-dependent heat equation (subject to the appropriate boundary conditions). The solution method is based on Newton iteration applied to a triple-approximate factorized form of the equation. The method is implicit and time-accurate; the maintenance of time-accuracy in the numerical formulation is noted to be critical for the simulation of finite length workpieces with a finite laser beam dwell time.
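The implicit, Newton-iterated time step described above can be sketched in 1D. The nonlinear term, grid and time step below are illustrative assumptions, not the AISI 4140 laser-heating model; the pattern shown (backward Euler, Newton solve of the implicit system at each step) is the same.

```python
import numpy as np

def implicit_step(u, dt, h):
    """One backward-Euler step of u_t = u_xx - u^3 (Dirichlet ends),
    solved by Newton iteration on F(v) = v - u - dt*(A v - v^3) = 0."""
    n = u.size
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    v = u.copy()
    for _ in range(20):                              # Newton iteration
        F = v - u - dt * (A @ v - v**3)
        Jac = np.eye(n) - dt * (A - np.diag(3 * v**2))
        dv = np.linalg.solve(Jac, -F)
        v += dv
        if np.linalg.norm(dv) < 1e-12:
            break
    return v

n = 50; h = 1.0 / (n + 1); dt = 1e-3
x = np.arange(1, n + 1) * h
u = np.sin(np.pi * x)            # initial temperature-like profile
for _ in range(100):
    u = implicit_step(u, dt, h)  # time-accurate implicit marching
```

Because the scheme is implicit, the time step is limited by accuracy rather than stability, which matters when resolving a finite beam dwell time on a finite workpiece.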
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Finite element analysis of periodic transonic flow problems
NASA Technical Reports Server (NTRS)
Fix, G. J.
1978-01-01
Flow about an oscillating thin airfoil in a transonic stream was considered. It was assumed that the flow field can be decomposed into a mean flow plus a periodic perturbation. On the surface of the airfoil the usual Neumann conditions are imposed. Two computer programs were written, both using linear basis functions over triangles for the finite element space. The first program uses a banded Gaussian elimination solver to solve the matrix problem, while the second uses an iterative technique, namely SOR. The only results obtained are for an oscillating flat plate.
Hruszkewycz, Stephan O; Holt, Martin V; Tripathi, Ash; Maser, Jörg; Fuoss, Paul H
2011-06-15
We present the framework for convergent beam Bragg ptychography, and, using simulations, we demonstrate that nanocrystals can be ptychographically reconstructed from highly convergent x-ray Bragg diffraction. The ptychographic iterative engine is extended to three dimensions and shown to successfully reconstruct a simulated nanocrystal using overlapping raster scans with a defocused curved beam, the diameter of which matches the crystal size. This object reconstruction strategy can serve as the basis for coherent diffraction imaging experiments at coherent scanning nanoprobe x-ray sources.
Intelligent process mapping through systematic improvement of heuristics
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
The present system for the automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new heuristic methods through postgame analysis, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
Studio Physics at the Colorado School of Mines: A model for iterative development and assessment
NASA Astrophysics Data System (ADS)
Kohl, Patrick; Kuo, Vincent
2009-05-01
The Colorado School of Mines (CSM) has taught its first-semester introductory physics course using a hybrid lecture/Studio Physics format for several years. Based on this previous success, over the past 18 months we have converted the second semester of our traditional calculus-based introductory physics course (Physics II) to a Studio Physics format. In this talk, we describe the recent history of the Physics II course and of Studio at Mines, discuss the PER-based improvements that we are implementing, and characterize our progress via several metrics, including pre/post Conceptual Survey of Electricity and Magnetism (CSEM) scores, Colorado Learning Attitudes about Science Survey (CLASS) scores, failure rates, and exam scores. We also report on recent attempts to involve students in the department's Senior Design program with our course. Our ultimate goal is to construct one possible model for a practical and successful transition from a lecture course to a Studio (or Studio-like) course.
Simulation of the hot rolling of steel with direct iteration
NASA Astrophysics Data System (ADS)
Hanoglu, Umut; Šarler, Božidar
2017-10-01
In this study a simulation system based on the meshless Local Radial Basis Function Collocation Method (LRBFCM) is applied for the hot rolling of steel. Rolling is a complex, 3D, thermo-mechanical problem; however, 2D cross-sectional slices are used as computational domains that are aligned with the rolling direction and no heat flow or strain is considered in the direction that is orthogonal to the slices. For each predefined position with respect to the rolling direction, the solution procedure is repeated until the slice reaches the final rolling position. Collocation nodes are initially distributed over the domain and boundaries of the initial slice. A local solution is achieved by considering the overlapping influence domains with either 5 or 7 nodes. Radial Basis Functions (RBFs) are used for the temperature discretization in the thermal model and displacement discretization in the mechanical model. The meshless solution procedure does not require a mesh-generation algorithm in the classic sense. Strong-form mechanical and thermal models are run for each slice regarding the contact with the roll's surface. Ideal plastic material behavior is considered for the mechanical results, where the nonlinear stress-strain relation is solved with a direct iteration. The majority of the Finite Element Model (FEM) simulations, including commercial software, use a conventional Newton-Raphson algorithm. However, direct iteration is chosen here due to its better compatibility with meshless methods. In order to overcome any unforeseen stability issues, the redistribution of the nodes by Elliptic Node Generation (ENG) is applied to one or more slices throughout the simulation. The rolling simulation presented here helps the user to design, test and optimize different rolling schedules. 
The results can be seen minutes after the simulation's start in terms of temperature, displacement, stress and strain fields as well as important technological parameters, like the roll-separating forces, roll torque, etc. An example of a rolling simulation, in which steel with an initial 110x110 mm cross section is rolled to a round bar with an 80 mm diameter, is shown in Fig. 3. A user-friendly computer application for industrial use is created by using the C# and .NET frameworks.
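The direct iteration used for the nonlinear stress-strain relation is a fixed-point (secant-stiffness) loop: solve the linear problem with the stiffness evaluated at the previous iterate, and repeat until the iterates stop changing. A scalar sketch with an assumed softening law (the softening form and constants are illustrative, not the paper's material model):

```python
def direct_iteration(F, k0=100.0, tol=1e-10, max_iter=200):
    """Direct (fixed-point) iteration for a nonlinear spring F = k(u)*u with
    softening stiffness k(u) = k0 / (1 + u^2): each pass solves the *linear*
    problem with the stiffness frozen at the previous displacement."""
    u = 0.0
    for _ in range(max_iter):
        k = k0 / (1.0 + u * u)   # stiffness from the previous iterate
        u_new = F / k            # linear "solve"
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

u = direct_iteration(10.0)
residual = 100.0 / (1 + u * u) * u - 10.0   # nonlinear equation residual
```

Unlike Newton-Raphson, no tangent stiffness (Jacobian) is assembled, which is what makes the scheme easy to combine with meshless discretizations where the tangent is awkward to form.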
NASA Astrophysics Data System (ADS)
Stambaugh, Ronald
2012-04-01
I am very pleased to join the outstanding leadership team for the journal Nuclear Fusion as Scientific Editor. The journal's high position in the field of fusion energy research derives in no small measure from the efforts of the IAEA team in Vienna, the production and marketing of IOP Publishing, the Board of Editors led by its chairman Mitsuru Kikuchi, the Associate Editor for Inertial Confinement Max Tabak and the outgoing Scientific Editor, Paul Thomas. During Paul's five year tenure submissions have grown by over 40%. The usage of the electronic journal has grown year by year with about 300 000 full text downloads of Nuclear Fusion articles in 2011, an impressive figure due in part to the launch of the full 50 year archive. High quality has been maintained while times for peer review and publishing have been reduced and the journal achieved some of the highest impact factors ever (as high as 4.27). The journal has contributed greatly to building the international scientific basis for fusion. I was privileged to serve from 2003 to 2010 as chairman of the Coordinating Committee for the International Tokamak Physics Activity (ITPA) which published in Nuclear Fusion the first ITER Physics Basis (1999) and its later update (2007). The scientific basis that has been developed to date for fusion has led to the construction of major facilities to demonstrate the production of power-plant relevant levels of fusion reactions. We look forward to the journal continuing to play a key role in the international effort toward fusion energy as these exciting major facilities and the various approaches to fusion continue to be developed. It is clear that Nuclear Fusion maintains its position in the field because of the perceived high quality of the submissions, the refereeing and the editorial processes, and the availability and utility of the online journal. 
The creation of the Nuclear Fusion Prize, led by the Board of Editors chairman Mitsuru Kikuchi, for the most outstanding paper published in the journal each year has furthered the submission and recognition of papers of the highest quality. The accomplishments of the journal's team over the last five years will be a tough act to follow but I look forward to working with this competent and dedicated group to continue the journal's high standards and ensure that Nuclear Fusion remains the journal of choice for authors and readers alike.
NASA Astrophysics Data System (ADS)
Gan, Chee Kwan; Challacombe, Matt
2003-05-01
Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may simply be divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations, due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid the memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center-of-mass partitioning, equal time partitioning exploits the smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 on 64 processors, while for a 110-molecule water cluster at standard density it achieved a speedup of 113 on 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26-atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors.
For more coarse-grained calculations, superlinear speedups are found to result from the reduced computational complexity associated with data parallelism.
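The load-balancing idea above — time each box during one SCF iteration, then repartition grid-work for the next — can be sketched with a greedy longest-processing-time heuristic. This is a simplification (the paper balances contiguous spatial sub-volumes within a Cartesian domain decomposition), and the function name and timings below are hypothetical:

```python
import heapq

def equal_time_partition(box_times, nproc):
    """Greedy LPT heuristic: hand each box, largest measured time first,
    to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(nproc)]  # (accumulated time, processor id)
    heapq.heapify(heap)
    assignment = [None] * len(box_times)
    for i in sorted(range(len(box_times)), key=lambda i: -box_times[i]):
        load, p = heapq.heappop(heap)
        assignment[i] = p
        heapq.heappush(heap, (load + box_times[i], p))
    loads = [0.0] * nproc
    for i, p in enumerate(assignment):
        loads[p] += box_times[i]
    return assignment, loads

# per-box timings measured in the previous SCF iteration (made-up values)
assignment, loads = equal_time_partition([5.0, 4.0, 3.0, 3.0, 2.0, 2.0, 1.0], 2)
```

Because box timings vary smoothly between SCF iterations, a partition balanced on last iteration's measurements remains nearly balanced for the next one.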
Iterative expansion microscopy.
Chang, Jae-Byum; Chen, Fei; Yoon, Young-Gyu; Jung, Erica E; Babcock, Hazen; Kang, Jeong Seuk; Asano, Shoh; Suk, Ho-Jun; Pak, Nikita; Tillberg, Paul W; Wassie, Asmamaw T; Cai, Dawen; Boyden, Edward S
2017-06-01
We recently developed a method called expansion microscopy, in which preserved biological specimens are physically magnified by embedding them in a densely crosslinked polyelectrolyte gel, anchoring key labels or biomolecules to the gel, mechanically homogenizing the specimen, and then swelling the gel-specimen composite by ∼4.5× in linear dimension. Here we describe iterative expansion microscopy (iExM), in which a sample is expanded ∼20×. After preliminary expansion a second swellable polymer mesh is formed in the space newly opened up by the first expansion, and the sample is expanded again. iExM expands biological specimens ∼4.5 × 4.5, or ∼20×, and enables ∼25-nm-resolution imaging of cells and tissues on conventional microscopes. We used iExM to visualize synaptic proteins, as well as the detailed architecture of dendritic spines, in mouse brain circuitry.
Iterative expansion microscopy
Chang, Jae-Byum; Chen, Fei; Yoon, Young-Gyu; Jung, Erica E.; Babcock, Hazen; Kang, Jeong Seuk; Asano, Shoh; Suk, Ho-Jun; Pak, Nikita; Tillberg, Paul W.; Wassie, Asmamaw; Cai, Dawen; Boyden, Edward S.
2017-01-01
We recently discovered it was possible to physically magnify preserved biological specimens by embedding them in a densely crosslinked polyelectrolyte gel, anchoring key labels or biomolecules to the gel, mechanically homogenizing the specimen, and then swelling the gel-specimen composite by ~4.5× in linear dimension, a process we call expansion microscopy (ExM). Here we describe iterative expansion microscopy (iExM), in which a sample is expanded, then a second swellable polymer mesh is formed in the space newly opened up by the first expansion, and finally the sample is expanded again. iExM expands biological specimens ~4.5 × 4.5, or ~20×, and enables ~25 nm resolution imaging of cells and tissues on conventional microscopes. We used iExM to visualize synaptic proteins, as well as the detailed architecture of dendritic spines, in mouse brain circuitry. PMID:28417997
Free iterative-complement-interaction calculations of the hydrogen molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurokawa, Yusaku; Nakashima, Hiroyuki; Nakatsuji, Hiroshi
2005-12-15
The free iterative-complement-interaction (ICI) method based on the scaled Schrödinger equation proposed previously has been applied to the calculation of very accurate wave functions of the hydrogen molecule in an analytical expansion form. All the variables were determined with the variational principle by calculating the necessary integrals analytically. The initial wave function and the scaling function were changed to examine their effects on the convergence speed of the ICI calculations. The free ICI wave functions that were generated automatically were different from the existing wave functions, and this difference was shown to be physically important. The best wave function reported in this paper appears to be the best in the literature from the variational point of view. The quality of the wave function was examined by calculating the nuclear and electron cusps.
Lattice dynamics and thermal conductivity of lithium fluoride via first-principles calculations
NASA Astrophysics Data System (ADS)
Liang, Ting; Chen, Wen-Qi; Hu, Cui-E.; Chen, Xiang-Rong; Chen, Qi-Feng
2018-04-01
The lattice thermal conductivity of lithium fluoride (LiF) is accurately computed from a first-principles approach based on an iterative solution of the Boltzmann transport equation (BTE). A real-space finite-difference supercell approach is employed to generate the second- and third-order interatomic force constants. The related physical quantities of LiF are calculated from the second- and third-order potential interactions over 30-1000 K. The calculated lattice thermal conductivity of 13.89 W/(m K) for LiF at room temperature agrees well with the experimental value, demonstrating that this parameter-free approach can furnish precise descriptions of the lattice thermal conductivity of this material. In addition, the Born effective charges, dielectric constants and phonon spectrum of LiF accord well with the existing data. The lattice thermal conductivities from the iterative solution of the BTE are also presented.
NASA Astrophysics Data System (ADS)
Klein, Andreas; Gerlach, Gerald
1998-09-01
This paper deals with the simulation of fluid-structure interaction phenomena in micropumps. The proposed solution approach is based on the external coupling of two different solvers, which are treated here as `black boxes'. Therefore, no intervention in the program code is necessary, and solvers can be exchanged arbitrarily. For the realization of the external iteration loop, two algorithms are considered: the relaxation-based Gauss-Seidel method and the computationally more expensive Newton method. It is demonstrated, using a simplified test case, that for rather weak coupling the Gauss-Seidel method is sufficient. However, simply changing the considered fluid from air to water makes the two physical domains strongly coupled, and the Gauss-Seidel method fails to converge in this case. The Newton iteration scheme must be used instead.
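The external Gauss-Seidel coupling can be sketched with scalar stand-ins for the two black-box solvers. The coefficients and function names below are hypothetical; convergence here hinges on the weak-coupling contraction |a·b| < 1, which is exactly what is lost when the fluid becomes much stiffer:

```python
def gauss_seidel_coupling(solve_fluid, solve_struct, w0, relax=1.0, tol=1e-10, max_iter=200):
    """Partitioned fixed-point (block Gauss-Seidel) iteration between two
    black-box solvers, with optional under-relaxation."""
    w = w0
    for k in range(1, max_iter + 1):
        p = solve_fluid(w)              # fluid solve for the current wall deflection
        w_new = solve_struct(p)         # structural solve under the new pressure load
        w_next = w + relax * (w_new - w)
        if abs(w_next - w) < tol:
            return w_next, k
        w = w_next
    raise RuntimeError("Gauss-Seidel coupling did not converge")

# scalar surrogates for the two domains (made-up coefficients); weak coupling: |a*b| < 1
a, b = 0.3, 0.5
fluid = lambda w: 2.0 + b * w    # pressure responds mildly to deflection
struct = lambda p: a * p         # deflection proportional to pressure
w_star, iters = gauss_seidel_coupling(fluid, struct, w0=0.0)
```

If the product a·b grows past 1 (the strongly coupled, water-filled regime), this map stops contracting and the loop diverges, which is where a Newton scheme on the coupled residual becomes necessary.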
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization can improve convergence speed, the likelihood that a global minimum is found, and the likelihood that spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
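To show how such an iterative factorization proceeds from a given starting point, here is a minimal sketch using the standard Lee-Seung multiplicative NMF updates. This is not the paper's Gauss-Seidel or penalty cPMF scheme, and all sizes and data below are synthetic:

```python
import numpy as np

def nmf_multiplicative(V, W0, H0, iters=300):
    """Lee-Seung multiplicative updates for V ~= W H with W, H >= 0.
    A standard nonnegative-factorization iteration, used only to
    illustrate iterative unmixing from an initialization; the cPMF in
    the paper uses Gauss-Seidel/penalty schemes instead."""
    W, H, eps = W0.copy(), H0.copy(), 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update endmember-like factors
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 30))                            # synthetic "image": 20 pixels, 30 bands
W0, H0 = rng.random((20, 4)), rng.random((4, 30))   # random initialization
W, H = nmf_multiplicative(V, W0, H0)
err0 = np.linalg.norm(V - W0 @ H0)
err = np.linalg.norm(V - W @ H)
```

Running the same loop from different initializations (random pixels, longest-norm pixels, endmember routines) and comparing the final errors is precisely the kind of comparison the abstract describes.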
BOOK REVIEW: Fusion: The Energy of the Universe
NASA Astrophysics Data System (ADS)
Lister, J.
2006-05-01
This book outlines the quest for fusion energy. It is presented in a form which is accessible to the interested layman, but which is precise and detailed for the specialist as well. The book contains 12 detailed chapters which cover the whole of the intended subject matter with copious illustrations and a balance between science and the scientific and political context. In addition, the book presents a useful glossary and a brief set of references for further non-specialist reading. Chapters 1 to 3 treat the underlying physics of nuclear energy and of the reactions in the sun and in the stars in considerable detail, including the creation of the matter in the universe. Chapter 4 presents the fusion reactions which can be harnessed on earth, and poses the fundamental problems of realising fusion energy as a source for our use, explaining the background to the Lawson criterion on the required quality of energy confinement, which 50 years later remains our fundamental milestone. Chapter 5 presents the basis for magnetic confinement, introducing some early attempts as well as some straightforward difficulties and treating linear and circular devices. The origins of the stellarator and of the tokamak are described. Chapter 6 is not essential to the mission of usefully harnessing fusion energy, but nonetheless explains to the layman the difference between fusion and fission in weapons, which should help the readers understand the differences as sources of peaceful energy as well, since this popular confusion remains a problem when proposing fusion with the `nuclear' label. Chapter 7 returns to energy sources with laser fusion, or inertial confinement fusion, which constitutes both military and civil research, depending on the country. The chapter provides a broad overview of the progress right up to today's hopes for fast ignition. 
The difficulty of harnessing fusion energy by magnetic or inertial confinement has created a breeding ground for what the authors call `false trails', since it is so tempting to produce a `backroom' solution to mankind's hunger for energy. Unfortunately, Chapter 8 can only regret that none of them has survived closer peer review. Chapters 9 and 10 concentrate on the `tokamak' concept for magnetic confinement, the basis for the JET and ITER projects, as well as for a wealth of smaller, national projects. The hopes and the disappointments are well and very frankly illustrated. The motivation for building a project of the size of ITER is made very clear. Present fusion research cannot forget that its mission is to develop an industrial reactor, not just a powerful research tool. Chapter 11 presents the major challenges between ITER and a reactor. Finally, Chapter 12 reminds us of why we need energy, why we do not have a credible solution in the mid-term (20 years) and why we have no solution in the longer term. The public awareness of this is growing, at last, even though the arguments were all on the table in the 1970s. This chapter therefore closes the book by bringing the reader back to earth rather suitably with the hard reality of energy needs and the absence of credible policies. This book has already received impressive approval among a wide range of people, since it so evidently succeeds in its goal of explaining fusion to many levels of reader. Gary McCracken and Peter Stott (one-time editor of Plasma Physics and Controlled Fusion) both dedicated their careers to magnetic confinement fusion, mostly at Culham working on UKAEA projects and later on the JET project. They were both deeply involved in international collaborations and both were working abroad when they retired. The mixture of ideas, developments and people is most successfully developed. They clearly underline the importance of the strong international collaboration on which this field depends.
This open background is tangible in their recently published work, in which they have tried to communicate their love and understanding of this exciting field to the non-specialist. Their attempt has resulted in a remarkable success, filling a hole in the available literature. The format of this book, with boxed technical details, allows the casual reader to browse without being trapped by excessive detail, whereas the information is still there for the more assiduous reader. The only technical fault is the marring of the presentation by some unresolved production details in Chapter 10. With the long-awaited decision to site ITER in Europe, there will inevitably be a strong demand for more information on fusion research for non-specialists, simply to understand what is behind this large project. This book fits the bill. It is written with technical accuracy but without resort to mathematics, a notably tricky target. The non-specialist wishing to find out about the field of fusion research, whether working as a journalist, administrator, secretary, politician, engineer or technician, will find a wealth of detail expressed in accessible language. The specialist will be surprised by the precision of the text, and by the depth of the historical basis of this research. He will learn much, even if he is already familiar with the current state of the art of fusion research. Younger researchers will find a clear history of their chosen field. The reviewer knows of no other book which has met this difficult goal with such ease, and strongly recommends it for the educated layman as well as for the ITER generation of younger physicists who did not live through the evolutionary period of fusion research, with its doubts, disappointments and successes.
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.
2011-09-01
Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions are discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.
2005-12-01
passive and active versions of each fiber designed under this task. Crystal Fibre shall provide characteristics of the fabricated fiber, to include core... passive version of multicore fiber iteration 2. SUBJECT TERMS: EOARD, laser physics, fibre lasers, photonic crystal, multicore, fiber laser. INTRODUCTION: This report describes the photonic crystal fibers developed under agreement No. FA8655-05-a-3046.
An evolving research agenda for human-coastal systems
NASA Astrophysics Data System (ADS)
Lazarus, Eli D.; Ellis, Michael A.; Brad Murray, A.; Hall, Damon M.
2016-03-01
Within the broad discourses of environmental change, sustainability science, and anthropogenic Earth-surface systems, a focused body of work involves the coupled economic and physical dynamics of developed shorelines. Rapid rates of change in coastal environments, from wetlands and deltas to inlets and dune systems, help researchers recognize, observe, and investigate coupling in natural (non-human) morphodynamics and biomorphodynamics. This same intrinsic quality of fast-paced change also makes developed coastal zones exemplars of observable coupling between physical processes and human activities. In many coastal communities, beach erosion is a natural hazard with economic costs that coastal management counters through a variety of mitigation strategies, including beach replenishment, groynes, revetments, and seawalls. As cycles of erosion and mitigation iterate, coastline change and economically driven interventions become mutually linked. Emergent dynamics of two-way economic-physical coupling is a recent research discovery. Having established a strong theoretical basis, research into coupled human-coastal systems has passed its early proof-of-concept phase. 
This paper frames three major challenges that need resolving in order to advance theoretical and empirical treatments of human-coastal systems: (1) codifying salient individual and social behaviors of decision-making in ways that capture societal actions across a range of scales (thus engaging economics, social science, and policy disciplines); (2) quantifying anthropogenic effects on alongshore and cross-shore sediment pathways and long-term landscape evolution in coastal zones through time, including direct measurement of cumulative changes to sediment cells resulting from coastal development and management practices (e.g., construction of buildings and artificial dunes, bulldozer removal of overwash after major storms); and (3) reciprocal knowledge and data exchange between researchers in coastal morphodynamics and practitioners of coastal management. Future research into human-coastal systems can benefit from decades of interdisciplinary work on the complex dynamics of common-pool resources, from computational efficiency and new techniques in numerical modeling, and from the growing catalog of high-resolution geospatial data for natural and developed coastlines around the world.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
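A rough Python analogue of the nested Iterator::Hash behaviour — yielding every combination of the embedded series, with plain values treated as one-element series — can be sketched as follows. This mirrors the idea, not the Perl API, and the keys below are invented:

```python
from itertools import product

def hash_iterator(spec):
    """Yield a dict for every combination of the embedded series,
    echoing Iterator::Hash's nested iteration of embedded iterators."""
    keys = list(spec)
    # plain values become one-element series; lists act as embedded iterators
    series = [v if isinstance(v, list) else [v] for v in spec.values()]
    for combo in product(*series):
        yield dict(zip(keys, combo))

results = list(hash_iterator({"host": ["a", "b"], "port": [80, 443], "proto": "http"}))
```

Each yielded dict is one permutation of the hash's iterated values, just as the Perl module returns successive hash states.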
NASA Astrophysics Data System (ADS)
Phillips, D.
1980-10-01
Currently on NOAA/NESS's VIRGS system at the World Weather Building star images are being ingested on a daily basis. The image coordinates of the star locations are measured and stored. Subsequently, the information is used to determine the attitude, the misalignment angles between the spin axis and the principal axis of the satellite, and the precession rate and direction. This is done for both the 'East' and 'West' operational geosynchronous satellites. This orientation information is then combined with image measurements of earth based landmarks to determine the orbit of each satellite. The method for determining the orbit is simple. For each landmark measurement one determines a nominal position vector for the satellite by extending a ray from the landmark's position towards the satellite and intersecting the ray with a sphere with center coinciding with the Earth's center and with radius equal to the nominal height for a geosynchronous satellite. The apparent motion of the satellite around the Earth's center is then approximated with a Keplerian model. In turn the variations of the satellite's height, as a function of time found by using this model, are used to redetermine the successive satellite positions by again using the Earth based landmark measurements and intersecting rays from these landmarks with the newly determined spheres. This process is performed iteratively until convergence is achieved. Only three iterations are required.
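The ray-sphere step at the heart of this procedure can be sketched as follows. The landmark position, ray direction, and height model below are hypothetical stand-ins (the real method fits a Keplerian model to many landmark measurements to update the heights):

```python
import numpy as np

def ray_sphere_intersect(origin, direction, radius):
    """Point on the ray from `origin` along `direction` at distance `radius`
    from the Earth's center (smallest positive root of |o + t d| = R)."""
    d = direction / np.linalg.norm(direction)
    b = 2.0 * origin.dot(d)
    c = origin.dot(origin) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        raise ValueError("ray misses sphere")
    t = (-b + np.sqrt(disc)) / 2.0   # origin is inside the sphere, so c < 0
    return origin + t * d

R_GEO = 42164.0                                # km, nominal geosynchronous radius
landmark = np.array([6378.0, 0.0, 0.0])        # made-up landmark on the equator
direction = np.array([0.9, 0.3, 0.0])          # made-up measured ray toward the satellite
# toy stand-in for the Keplerian height variation along the orbit
height_model = lambda pos: R_GEO + 50.0 * np.sin(np.arctan2(pos[1], pos[0]))

pos = ray_sphere_intersect(landmark, direction, R_GEO)   # start from nominal sphere
for _ in range(3):                                       # the abstract reports ~3 iterations
    pos = ray_sphere_intersect(landmark, direction, height_model(pos))
```

Each pass re-intersects the fixed landmark ray with a sphere whose radius comes from the previous pass's height estimate, which is the fixed-point structure the abstract describes.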
NASA Astrophysics Data System (ADS)
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
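Stripped of the RBF collocation machinery, the outer Newton loop described above reduces to linearize-and-solve. Here is a generic sketch on a toy algebraic system, with a finite-difference Jacobian; the paper instead solves the locally linearized elliptic PDEs at each step by multilevel RBF collocation, and the example system is invented:

```python
import numpy as np

def newton_solve(F, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration with a finite-difference Jacobian:
    at each step, linearize F about x and solve the linear system."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            return x
        n, h = x.size, 1e-7
        J = np.empty((n, n))
        for j in range(n):          # forward-difference Jacobian column by column
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (F(x + e) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# toy nonlinear system (hypothetical): x^2 + y^2 = 1 and y = x^3
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]**3])
root = newton_solve(F, [0.9, 0.5])
```

In the paper's setting the unknown is a vector of RBF coefficients and the "linear solve" is a collocation problem, but the outer structure is the same.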
High-order nonlinear susceptibilities of He
NASA Astrophysics Data System (ADS)
Liu, W.-C.; Clark, Charles W.
1996-05-01
High-order nonlinear optical response of noble gases to intense laser radiation is of considerable experimental interest, but is difficult to measure or calculate accurately. We have begun a set of calculations of frequency-dependent nonlinear susceptibilities of He 1s^2, within the framework of Rayleigh-Schrödinger perturbation theory at lowest applicable order, with the goal of providing critically evaluated atomic data for modelling high harmonic generation processes. The atomic Hamiltonian is decomposed in terms of Hylleraas coordinates and spherical harmonics using the formalism of Pont and Shakeshaft (M. Pont and R. Shakeshaft, Phys. Rev. A 51, 257 (1995)), and the hierarchy of inhomogeneous equations of perturbation theory is solved iteratively. A combination of Hylleraas and Frankowski basis functions is used (J. D. Baker, Master's thesis, U. Delaware (1988); J. D. Baker, R. N. Hill, and J. D. Morgan, AIP Conference Proceedings 189, 123 (1989)); the compact Hylleraas basis provides a highly accurate representation of the ground state wavefunction, whereas the diffuse Frankowski basis functions efficiently reproduce the correct asymptotic structure of the perturbed orbitals.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. On the basis of the local descriptor, shape contexts, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptors to control the random-walk probability matrix during the iteration: we calculate a bias matrix from the descriptors and use it in the iteration to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, and comparisons with other algorithms, confirm that the new method produces excellent results in graphic matching.
Enhanced Low-Enriched Uranium Fuel Element for the Advanced Test Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, M. A.; DeHart, M. D.; Morrell, S. R.
2015-03-01
Under the current US Department of Energy (DOE) policy and planning scenario, the Advanced Test Reactor (ATR) and its associated critical facility (ATRC) will be reconfigured to operate on low-enriched uranium (LEU) fuel. This effort has produced a conceptual design for an Enhanced LEU Fuel (ELF) element. This fuel features monolithic U-10Mo fuel foils and aluminum cladding separated by a thin zirconium barrier. As with previous iterations of the ELF design, radial power peaking is managed using different U-10Mo foil thicknesses in different plates of the element. The lead fuel element design, ELF Mk1A, features only three fuel meat thicknesses, a reduction from previous iterations intended to simplify manufacturing. Evaluation of the ELF Mk1A fuel design against reactor performance requirements is ongoing, as are investigations of the impact of manufacturing uncertainty on safety margins. The element design has been evaluated in what are expected to be the most demanding design basis accident scenarios and has met all initial thermal-hydraulic criteria.
NASA Astrophysics Data System (ADS)
Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric
2007-02-01
We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for the identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noise. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memoryless block is approximated by arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving-average noise as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters: a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noise. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies.
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
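The random-projection ingredient is easy to sketch: project data onto a random lower-dimensional subspace and check that pairwise distances survive, as the Johnson-Lindenstrauss lemma guarantees. This uses a Gaussian projection variant, and all dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 2000, 400          # n points, original dim d, projected dim k

X = rng.standard_normal((n, d))  # synthetic high-dimensional features
# non-adaptive, data-independent random projection (Gaussian variant)
P = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ P                        # compressed features, n x k

# pairwise distances are approximately preserved after projection
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
```

Because P never looks at the data, the projection can be fixed once and reused for every feature vector the policy-iteration loop encounters.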
Millimeter wave experiment of ITER equatorial EC launcher mock-up
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, K.; Oda, Y.; Kajiwara, K.
2014-02-12
The full-scale mock-up of the equatorial launcher was fabricated on the basis of the baseline design to investigate the mm-wave propagation properties of the launcher, its manufacturability, the cooling line management, the assembly of the components and so on. The mock-up comprises one of the three mm-wave transmission sets, and one of its eight waveguide lines can deliver mm-wave power. The mock-up was connected to an ITER-compatible transmission line and a 170 GHz gyrotron, and a high-power experiment was carried out. The measured radiation pattern of the beam 2.5 m from the mock-up demonstrates the successful steering capability of 20°-40°. It was also revealed that the radiated profiles at both the steering and the fixed focusing mirrors agreed with calculation. The results also suggest that some unwanted modes are included in the radiated beam. Transmission of 0.5 MW for 0.4 s and of 0.12 MW for 50 s was also demonstrated.
Error control for reliable digital data transmission and storage systems
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Deng, R. H.
1985-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32K×8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error-locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
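The "directly from the syndrome" idea can be sketched in a toy single-error-correcting RS code. The example below works over the prime field GF(929) rather than the byte-oriented GF(2^m) fields of the paper (an assumption made purely to keep the field arithmetic to built-in modular operations): with one symbol error of value e at position k, the two syndromes satisfy S1 = e·α^k and S2 = e·α^(2k), so the location and value fall out of two divisions, with no iterative error-locator computation.

```python
p, alpha = 929, 3   # prime field GF(929); powers of 3 are distinct over the code length

def poly_eval(c, x):
    """Evaluate the polynomial with coefficient list c (c[i] multiplies x**i), mod p."""
    return sum(ci * pow(x, i, p) for i, ci in enumerate(c)) % p

# Generator g(x) = (x - alpha)(x - alpha**2): every codeword m(x)*g(x) has
# roots at alpha and alpha**2, i.e. both syndromes are zero.
g = [pow(alpha, 3, p), (-(alpha + alpha ** 2)) % p, 1]
msg = [17, 42, 99, 250]
code = [0] * (len(msg) + 2)
for i, mi in enumerate(msg):            # polynomial multiplication m(x)*g(x)
    for j, gj in enumerate(g):
        code[i + j] = (code[i + j] + mi * gj) % p

recv = code[:]
recv[3] = (recv[3] + 555) % p           # inject a single symbol error

S1 = poly_eval(recv, alpha)
S2 = poly_eval(recv, pow(alpha, 2, p))
if S1 == 0 and S2 == 0:
    print("no error detected")
else:
    loc = S2 * pow(S1, p - 2, p) % p    # alpha**k = S2 / S1 (inverse via Fermat)
    k = next(i for i in range(len(recv)) if pow(alpha, i, p) == loc)
    e = S1 * pow(loc, p - 2, p) % p     # error value e = S1 / alpha**k
    recv[k] = (recv[k] - e) % p
print(recv == code)
```

The paper's contribution is doing the analogous direct computation for the d_min = 4 and d_min = 6 byte-oriented codes at hardware speed.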
Webb, J; Foster, J; Poulter, E
2016-04-01
Being physically active has multiple benefits for cancer patients. Despite this, only 23% are active at the nationally recommended level and 31% are completely inactive. A cancer diagnosis offers a teachable moment in which patients might be more receptive to lifestyle changes. Nurses are well placed to offer physical activity advice; however, only 9% of UK nurses involved in cancer care talk to all cancer patients about physical activity. A change in the behaviour of nurses is needed to routinely deliver physical activity advice to cancer patients. As recommended by the Medical Research Council, behavioural change interventions should be evidence-based and use a relevant and coherent theoretical framework to stand the best chance of success. This paper presents a case study on the development of an intervention to improve the frequency of delivery of very brief advice (VBA) on physical activity by nurses to cancer patients, using the Behaviour Change Wheel (BCW). The eight composite steps outlined by the BCW guided the intervention development process. An iterative approach was taken involving key stakeholders (n = 45), with four iterations completed in total. This was not defined a priori but emerged during the development process. A 60 min training intervention, delivered in either a face-to-face or online setting, with follow-up at eight weeks, was designed to improve the capability, opportunity and motivation of nurses to deliver VBA on physical activity to people living with cancer. This intervention incorporates seven behaviour change techniques: goal setting coupled with commitment; instructions on how to perform the behaviour; salience of the consequences of delivering VBA; and a demonstration of how to give VBA, all delivered via a credible source with objects added to the environment to support behavioural change.
Applying the BCW is a time-consuming process; however, it provides a useful and comprehensive framework for intervention development and greater control over intervention replication and evaluation. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Xu, J C; Wang, L; Xu, G S; Luo, G N; Yao, D M; Li, Q; Cao, L; Chen, L; Zhang, W; Liu, S C; Wang, H Q; Jia, M N; Feng, W; Deng, G Z; Hu, L Q; Wan, B N; Li, J; Sun, Y W; Guo, H Y
2016-08-01
In order to withstand the rapid increase in particle and power impact onto the divertor and to demonstrate the feasibility of the ITER design under long-pulse operation, the upper divertor of the EAST tokamak has been upgraded since the 2014 campaign to an actively water-cooled, ITER-like tungsten mono-block structure, the first such attempt for ITER on a tokamak device. A new divertor Langmuir probe diagnostic system (DivLP) was therefore designed and successfully installed on the tungsten divertor to obtain plasma parameters in the divertor region such as electron temperature, electron density, and particle and heat fluxes. More specifically, two identical triple-probe arrays have been installed at two ports at different toroidal positions (separated by 112.5° toroidally), which can provide fundamental data for studying the toroidal asymmetry of divertor power deposition and related 3-dimensional (3D) physics, as induced by resonant magnetic perturbations, lower hybrid waves, and so on. The shape of the graphite tips and the fixed structure of the probes are designed according to the structure of the upper tungsten divertor. The ceramic support, small graphite tips, and proper connectors make it possible to install the probes in the very narrow gap between the cassette body and the tungsten mono-blocks, i.e., 13.5 mm. The 2014 and 2015 commissioning campaigns demonstrated that the newly upgraded divertor Langmuir probe diagnostic system works successfully. Representative experimental data from the DivLP measurements are given and discussed, proving its availability and reliability.
An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging
Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.
2017-01-01
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
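One of the algorithm families such surveys cover can be sketched compactly: ISTA (iterative shrinkage-thresholding) solves the ℓ1-regularized least-squares problem by alternating a gradient step with soft-thresholding. The example below uses a random toy matrix as the acquisition model (an assumption; a real ultrasound model encodes transducer and medium physics), recovering a sparse reflectivity vector from fewer measurements than unknowns.

```python
import random

# min_x 0.5*||A x - y||^2 + lam*||x||_1 via ISTA on a toy compressed-sensing setup
random.seed(2)
m, n = 40, 80
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
for i in (5, 30, 62):            # a 3-sparse "reflectivity" vector
    x_true[i] = 1.0
y = [sum(A[r][c] * x_true[c] for c in range(n)) for r in range(m)]

lam, step = 0.01, 0.1            # step must be below 1/L, L = ||A^T A||
x = [0.0] * n
for _ in range(500):
    # gradient of the smooth term 0.5*||Ax - y||^2
    resid = [sum(A[r][c] * x[c] for c in range(n)) - y[r] for r in range(m)]
    grad = [sum(A[r][c] * resid[r] for r in range(m)) for c in range(n)]
    # gradient step followed by soft-thresholding (the l1 proximal operator)
    z = [x[c] - step * grad[c] for c in range(n)]
    x = [max(abs(v) - step * lam, 0.0) * (1 if v > 0 else -1) for v in z]

support = [c for c in range(n) if abs(x[c]) > 0.1]
print(support)
```

The regularization weight lam plays the role of the sparsity knob mentioned in the abstract: larger values give sparser, more biased solutions.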
NASA Astrophysics Data System (ADS)
Boozer, Allen H.
2017-05-01
The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive-gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. The physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.
Fractals, Coherence and Brain Dynamics
NASA Astrophysics Data System (ADS)
Vitiello, Giuseppe
2010-11-01
I show that the self-similarity property of deterministic fractals provides a direct connection with the space of the entire analytical functions. Fractals are thus described in terms of coherent states in the Fock-Bargmann representation. Conversely, my discussion also provides insights on the geometrical properties of coherent states: it allows one to recognize, in some specific sense, fractal properties of coherent states. In particular, the relation is exhibited between fractals and q-deformed coherent states. The connection with the squeezed coherent states is also displayed. In this connection, the non-commutative geometry arising from the fractal relation with squeezed coherent states is discussed and the fractal spectral properties are identified. I also briefly discuss the description of neuro-phenomenological data in terms of squeezed coherent states provided by the dissipative model of brain and consider the fact that laboratory observations have shown evidence that self-similarity characterizes the brain background activity. This suggests that a connection can be established between brain dynamics and the fractal self-similarity properties on the basis of the relation discussed in this report between fractals and squeezed coherent states. Finally, I do not consider in this paper the so-called random fractals, namely those fractals obtained by randomization processes introduced in their iterative generation. Since self-similarity is still a characterizing property in many such random fractals, my conjecture is that in such cases, too, there must exist a connection with the coherent state algebraic structure. In condensed matter physics, in many cases the generation by the microscopic dynamics of some kind of coherent states is involved in the process of the emergence of mesoscopic/macroscopic patterns.
The discussion presented in this paper suggests that fractal generation may also provide an example of the emergence of global features, namely long-range correlations at the mesoscopic/macroscopic level, from microscopic local deformation processes. In view of the wide spectrum of application of both fractal studies and coherent state physics, spanning from solid state physics to laser physics, quantum optics, complex dynamical systems and biological systems, the results presented in this report may lead to interesting practical developments in many research sectors.
An Application of Gröbner Basis in Differential Equations of Physics
NASA Astrophysics Data System (ADS)
Chaharbashloo, Mohammad Saleh; Basiri, Abdolali; Rahmany, Sajjad; Zarrinkamar, Saber
2013-11-01
We apply the Gröbner basis to the ansatz method in quantum mechanics to obtain the energy eigenvalues and the wave functions in a very simple manner. There are important physical potentials such as the Cornell interaction which play significant roles in particle physics and can be treated via this technique. As a typical example, the algorithm is applied to the semi-relativistic spinless Salpeter equation under the Cornell interaction. Many other applications of the idea in a wide range of physical fields are listed as well.
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf’s) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Measuring experimental cyclohexane-water distribution coefficients for the SAMPL5 challenge
NASA Astrophysics Data System (ADS)
Rustenburg, Ariën S.; Dancer, Justin; Lin, Baiwei; Feng, Jianwen A.; Ortwine, Daniel F.; Mobley, David L.; Chodera, John D.
2016-11-01
Small molecule distribution coefficients between immiscible nonaqueous and aqueous phases—such as cyclohexane and water—measure the degree to which small molecules prefer one phase over another at a given pH. As distribution coefficients capture both thermodynamic effects (the free energy of transfer between phases) and chemical effects (protonation state and tautomer effects in aqueous solution), they provide an exacting test of the thermodynamic and chemical accuracy of physical models without the long correlation times inherent to the prediction of more complex properties of relevance to drug discovery, such as protein-ligand binding affinities. For the SAMPL5 challenge, we carried out a blind prediction exercise in which participants were tasked with the prediction of distribution coefficients to assess its potential as a new route for the evaluation and systematic improvement of predictive physical models. These measurements are typically performed for octanol-water, but we opted to utilize cyclohexane for the nonpolar phase. Cyclohexane was suggested to avoid issues with the high water content and persistent heterogeneous structure of water-saturated octanol phases, since it has greatly reduced water content and a homogeneous liquid structure. Using a modified shake-flask LC-MS/MS protocol, we collected cyclohexane/water distribution coefficients for a set of 53 druglike compounds at pH 7.4. These measurements were used as the basis for the SAMPL5 Distribution Coefficient Challenge, where 18 research groups predicted these measurements before the experimental values reported here were released. In this work, we describe the experimental protocol we utilized for measurement of cyclohexane-water distribution coefficients, report the measured data, propose a new bootstrap-based data analysis procedure to incorporate multiple sources of experimental error, and provide insights to help guide future iterations of this valuable exercise in predictive modeling.
Progress toward commissioning and plasma operation in NSTX-U
NASA Astrophysics Data System (ADS)
Ono, M.; Chrzanowski, J.; Dudek, L.; Gerhardt, S.; Heitzenroeder, P.; Kaita, R.; Menard, J. E.; Perry, E.; Stevenson, T.; Strykowsky, R.; Titus, P.; von Halle, A.; Williams, M.; Atnafu, N. D.; Blanchard, W.; Cropper, M.; Diallo, A.; Gates, D. A.; Ellis, R.; Erickson, K.; Hosea, J.; Hatcher, R.; Jurczynski, S. Z.; Kaye, S.; Labik, G.; Lawson, J.; LeBlanc, B.; Maingi, R.; Neumeyer, C.; Raman, R.; Raftopoulos, S.; Ramakrishnan, R.; Roquemore, A. L.; Sabbagh, S. A.; Sichta, P.; Schneider, H.; Smith, M.; Stratton, B.; Soukhanovskii, V.; Taylor, G.; Tresemer, K.; Zolfaghari, A.; The NSTX-U Team
2015-07-01
The National Spherical Torus Experiment-Upgrade (NSTX-U) is the most powerful spherical torus facility at PPPL, Princeton, USA. The major mission of NSTX-U is to develop the physics basis for an ST-based Fusion Nuclear Science Facility (FNSF). The ST-based FNSF has the promise of achieving the high neutron fluence needed for reactor component testing with relatively modest tritium consumption. At the same time, the unique operating regimes of NSTX-U can contribute to several important issues in the physics of burning plasmas to optimize the performance of ITER. NSTX-U further aims to determine the attractiveness of the compact ST for addressing key research needs on the path toward a fusion demonstration power plant (DEMO). The upgrade will nearly double the toroidal magnetic field B_T to 1 T at a major radius of R_0 = 0.93 m, the plasma current I_p to 2 MA and the neutral beam injection (NBI) heating power to 14 MW. The anticipated plasma performance enhancement is a quadrupling of the plasma stored energy and a near doubling of the plasma confinement time, which would result in a 5-10 fold increase in the fusion performance parameter nτT. A much more tangential 2nd NBI system, with 2-3 times higher current drive efficiency compared to the 1st NBI system, is installed to attain the 100% non-inductive operation needed for a compact FNSF design. With higher fields and heating powers, the NSTX-U plasma collisionality will be reduced by a factor of 3-6 to help explore the favourable trend in transport towards the low-collisionality FNSF regime. The NSTX-U first plasma is planned for the summer of 2015, at which time the transition to plasma operations will occur.
Higher Order Bases in a 2D Hybrid BEM/FEM Formulation
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.
2002-01-01
The advantages of using higher order, interpolatory basis functions are examined in the analysis of transverse electric (TE) plane wave scattering by homogeneous, dielectric cylinders. A boundary-element/finite-element (BEM/FEM) hybrid formulation is employed in which the interior dielectric region is modeled with the vector Helmholtz equation, and a radiation boundary condition is supplied by an Electric Field Integral Equation (EFIE). An efficient method of handling the singular self-term arising in the EFIE is presented. The iterative solution of the partially dense system of equations is obtained using the Quasi-Minimal Residual (QMR) algorithm with an Incomplete LU Threshold (ILUT) preconditioner. Numerical results are shown for the case of an incident wave impinging upon a square dielectric cylinder. The convergence of the solution is shown versus the number of unknowns as a function of the completeness order of the basis functions.
NASA Astrophysics Data System (ADS)
Shinoda, Yukio; Yabe, Kuniaki; Tanaka, Hideo; Akisawa, Atsushi; Kashiwagi, Takao
In this paper we consider two economical social behaviors that arise when new technologies are introduced: one on a short-term economic basis, the other on a long-term economic basis. If the technology follows a learning curve, it is more economical in the long run to accelerate its introduction in the early terms beyond what short-term economics would justify. The costs in the accelerated terms are higher, but the introduction costs in the later terms are lower because of the learning curve. This paper focuses on plug-in hybrid electric vehicles (PHEVs). We show how to derive results on both the short-term and the long-term economic basis. The short-term result can be derived by an iteration method in which the battery cost in every term is adjusted to the learning curve. The long-term result can be derived by searching over trajectories in which the amount of battery capacity is increased. We also estimate how much subsidy is needed to approach the long-term result when social behavior follows the short-term economic basis. We assume a subsidy for PHEVs' initial costs, financed by a fee on petroleum consumption, so that no additional cost enters the system. We show that the greater the total subsidy, the lower both CO2 emissions and system costs.
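The short-term iteration described above can be sketched with invented numbers (the cost, demand, and learning-curve parameters below are illustrative assumptions, not the paper's PHEV data): within each term, adoption responds to the current battery cost, and the cost is re-adjusted to the learning curve until the two agree; cumulative capacity then carries over to the next term.

```python
c0 = 500.0   # assumed initial battery cost per kWh (illustrative)
b = 0.15     # assumed learning-curve exponent: cost ~ (cumulative capacity)**-b

def adoption(cost):
    """Assumed short-term demand response: cheaper batteries -> more capacity bought."""
    return 1000.0 * (c0 / cost)

cum, cost, path = 1.0, c0, []
for term in range(10):
    # fixed-point iteration within the term: adoption and learning-curve cost must agree
    for _ in range(100):
        new_cum = cum + adoption(cost)
        new_cost = c0 * new_cum ** (-b)
        if abs(new_cost - cost) < 1e-9:
            break
        cost = new_cost
    cum = new_cum
    path.append(round(cost, 1))
print(path)   # cost falls term by term as cumulative capacity grows
```

The long-term (socially optimal) trajectory would instead be found by optimizing early-term capacity directly; the gap between the two trajectories is what the subsidy in the paper is sized to close.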
Assessing performance of flaw characterization methods through uncertainty propagation
NASA Astrophysics Data System (ADS)
Miorelli, R.; Le Bourdais, F.; Artusi, X.
2018-04-01
In this work, we assess inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy-current physics. More precisely, two standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the test data) and simulations. Furthermore, to speed up the computation and avoid the burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties to a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of the lack of knowledge of some parameters used to describe the inspection scenario, a situation commonly encountered in the industrial NDE context.
NASA Astrophysics Data System (ADS)
Smyth, R. T.; Ballance, C. P.; Ramsbottom, C. A.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.
2018-05-01
Neutral tungsten is the primary candidate as a wall material in the divertor region of the International Thermonuclear Experimental Reactor (ITER). The efficient operation of ITER depends heavily on precise atomic physics calculations for the determination of reliable erosion diagnostics, helping to characterize the influx of tungsten impurities into the core plasma. This paper presents detailed calculations of the atomic structure of neutral tungsten using the multiconfigurational Dirac-Fock method, drawing comparisons with experimental measurements where available, and includes a critical assessment of existing atomic structure data. We investigate the electron-impact excitation of neutral tungsten using the Dirac R-matrix method, and by employing collisional-radiative models, we benchmark our results against recent Compact Toroidal Hybrid measurements. The resulting comparisons highlight alternative diagnostic lines to the widely used 400.88-nm line.
Intelligent cooperation: A framework of pedagogic practice in the operating room.
Sutkin, Gary; Littleton, Eliza B; Kanter, Steven L
2018-04-01
Surgeons who work with trainees must address their learning needs without compromising patient safety. We used a constructivist grounded theory approach to examine videos of five teaching surgeries. Attending surgeons were interviewed afterward while watching cued videos of their cases. Codes were iteratively refined into major themes, and then constructed into a larger framework. We present a novel framework, Intelligent Cooperation, which accounts for the highly adaptive, iterative features of surgical teaching in the operating room. Specifically, we define Intelligent Cooperation as a sequence of coordinated exchanges between attending and trainee that accomplishes small surgical steps while simultaneously uncovering the trainee's learning needs. Intelligent Cooperation requires the attending to accurately determine learning needs, perform real-time needs assessment, provide critical scaffolding, and work with the learner to accomplish the next step in the surgery. This is achieved through intense, coordinated verbal and physical cooperation. Copyright © 2017 Elsevier Inc. All rights reserved.
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
NASA Astrophysics Data System (ADS)
Altsybeyev, V. V.; Ponomarev, V. A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within this method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and fast numerical simulations for different vacuum sources with curved emitting surfaces, including in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and of a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
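In the planar limit, the analytical solution such simulations are benchmarked against is the classical Child-Langmuir space-charge-limited current density, J = (4ε0/9)·sqrt(2e/m)·V^(3/2)/d². A minimal evaluation (illustrative geometry, not the paper's diode configurations):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e = 1.602176634e-19       # elementary charge, C
m = 9.1093837015e-31      # electron mass, kg

def child_langmuir(V, d):
    """Space-charge-limited current density (A/m^2) for a planar gap d (m) at voltage V (V)."""
    return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m) * V ** 1.5 / d ** 2

J = child_langmuir(1000.0, 0.01)   # 1 kV across a 1 cm planar gap
print(J)                           # on the order of 7e2 A/m^2
```

A Gauss's-law emission model generalizes this limit to curved emitters, where no closed-form current density exists and the iterative particle tracking of the paper takes over.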
Implicit flux-split schemes for the Euler equations
NASA Technical Reports Server (NTRS)
Thomas, J. L.; Walters, R. W.; Van Leer, B.
1985-01-01
Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.
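The mechanics of flux-vector splitting are easiest to see on a scalar analogue (an assumed illustration, far simpler than the Euler-equation schemes of the paper): the flux f(u) = a·u is split into right-going and left-going parts, and each part is differenced one-sidedly in its own upwind direction.

```python
a, dx, dt, n = 1.0, 0.1, 0.05, 100   # CFL = a*dt/dx = 0.5 (stable)
u = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]   # square pulse

fp = lambda v: max(a, 0.0) * v       # flux carried by right-going waves
fm = lambda v: min(a, 0.0) * v       # flux carried by left-going waves (zero for a > 0)

for _ in range(40):                  # 40 explicit upwind steps
    un = u[:]
    for i in range(1, n - 1):
        un[i] = u[i] - dt / dx * (fp(u[i]) - fp(u[i - 1])) \
                     - dt / dx * (fm(u[i + 1]) - fm(u[i]))
    u = un

peak = max(range(n), key=lambda i: u[i])
print(peak)   # the pulse, initially centered near cell 30, has moved right
```

For the Euler equations the same splitting is applied to the flux Jacobian's positive and negative eigenvalue parts, which is what makes the fully-upwind relaxation schemes in the abstract match the physical domain of dependence in supersonic flow.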
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer, and extrapolation techniques, based upon the previous behavior of iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
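The idea of accelerating an iteration by extrapolating from the behavior of previous iterants can be sketched, for a dominant-eigenvalue problem, as power iteration with Aitken delta-squared extrapolation (a generic illustration, not the report's exact scheme):

```python
import numpy as np

def power_iteration_aitken(A, tol=1e-10, max_iter=500):
    """Estimate the dominant eigenvalue of A by power iteration,
    accelerated by Aitken delta-squared extrapolation applied to the
    sequence of Rayleigh-quotient estimates."""
    x = np.ones(A.shape[0])
    lams = []
    for _ in range(max_iter):
        y = A @ x
        lam = x @ y / (x @ x)      # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        lams.append(lam)
        if len(lams) >= 3:
            l0, l1, l2 = lams[-3:]
            denom = l2 - 2.0 * l1 + l0
            if abs(denom) > 1e-30:
                # Aitken extrapolant of the last three estimates
                accel = l2 - (l2 - l1) ** 2 / denom
                if abs(accel - lam) < tol:
                    return accel
    return lams[-1]
```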
NASA Astrophysics Data System (ADS)
Raburn, Daniel Louis
We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and, (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of a bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths on several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurement.
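The core of a JFNK scheme, Newton steps whose linear systems are solved by a Krylov method using only finite-difference Jacobian-vector products, can be sketched as follows. This is a generic illustration using SciPy's GMRES, without the backtracking globalization and adaptive preconditioning described above; the function names are ours:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, x0, tol=1e-10, max_newton=50, eps=1e-7):
    """Jacobian-free Newton-Krylov: at each Newton step solve
    J(x) dx = -F(x) with GMRES, approximating the Jacobian-vector
    product by the finite difference J v ~ (F(x + eps*v) - F(x)) / eps,
    so the Jacobian matrix is never formed explicitly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            return x
        def jv(v):
            return (F(x + eps * v) - fx) / eps
        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, _ = gmres(J, -fx)
        x = x + dx
    return x
```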
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing the time transition from step to step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We previously proposed an iterative denoising method for DECT; using a linear decomposition function, however, that method does not gain the full benefit of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, formulated as a least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of the two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss.
In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
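The penalized least-squares formulation described in the Methods, a data-fidelity term weighted by inverse variances plus a smoothness penalty, can be illustrated in one dimension. This is a toy sketch with a diagonal inverse-variance weight and a first-difference penalty, not the authors' DECT implementation:

```python
import numpy as np

def penalized_wls_denoise(y, weights, lam=1.0):
    """Denoise a 1-D signal y by minimizing
        (x - y)^T W (x - y) + lam * ||D x||^2,
    where W = diag(weights) plays the role of the inverse-variance
    penalty weight and D is the first-difference operator.
    Solved directly via the normal equations."""
    n = y.size
    W = np.diag(weights)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference matrix
    A = W + lam * D.T @ D
    return np.linalg.solve(A, W @ y)
```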
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with direct inversion of the Jacobian matrix, lead to computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. 
We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
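A residual-norm-minimizing acceleration of a black-box fixed-point iteration, in the same spirit as Boostconv, can be sketched with an Anderson-type update that minimizes the residual over the span of recent residual differences. The details of Boostconv itself differ; this is a generic illustration:

```python
import numpy as np

def accelerated_fixed_point(g, x0, m=5, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- g(x) by minimizing the
    residual norm over the last m stored residuals (Anderson-type
    residual minimization; Boostconv differs in its details)."""
    x = np.asarray(x0, dtype=float)
    X, R = [], []                      # histories of iterates and residuals
    for _ in range(max_iter):
        gx = g(x)
        r = gx - x                     # residual of the fixed-point map
        if np.linalg.norm(r) < tol:
            return x
        X.append(x)
        R.append(r)
        X, R = X[-m:], R[-m:]
        if len(R) > 1:
            dR = np.array([R[i + 1] - R[i] for i in range(len(R) - 1)]).T
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            # coefficients minimizing || r - dR c ||
            c, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = x + r - (dX + dR) @ c  # extrapolated update
        else:
            x = gx                     # plain fixed-point step at start
    return x
```

Like Boostconv, this wraps an existing iteration through a single call, here the black-box map `g`, without modifying the underlying solver.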
Linear tearing mode stability equations for a low collisionality toroidal plasma
NASA Astrophysics Data System (ADS)
Connor, J. W.; Hastie, R. J.; Helander, P.
2009-01-01
Tearing mode stability is normally analysed using MHD or two-fluid Braginskii plasma models. However for present, or future, large hot tokamaks like JET or ITER the collisionality is such as to place them in the banana regime. Here we develop a linear stability theory for the resonant layer physics appropriate to such a regime. The outcome is a set of 'fluid' equations whose coefficients encapsulate all neoclassical physics: the neoclassical Ohm's law, enhanced ion inertia, cross-field transport of particles, heat and momentum all play a role. While earlier treatments have also addressed this type of neoclassical physics we differ in incorporating the more physically relevant 'semi-collisional fluid' regime previously considered in cylindrical geometry; semi-collisional effects tend to screen the resonant surface from the perturbed magnetic field, preventing reconnection. Furthermore we also include thermal physics, which may modify the results. While this electron description is of wide relevance and validity, the fluid treatment of the ions requires the ion banana orbit width to be less than the semi-collisional electron layer. This limits the application of the present theory to low magnetic shear—however, this is highly relevant to the sawtooth instability—or to colder ions. The outcome of the calculation is a set of one-dimensional radial differential equations of rather high order. However, various simplifications that reduce the computational task of solving these are discussed. In the collisional regime, when the set reduces to a single second-order differential equation, the theory extends previous work by Hahm et al (1988 Phys. Fluids 31 3709) to include diamagnetic-type effects arising from plasma gradients, both in Ohm's law and the ion inertia term of the vorticity equation. 
The more relevant semi-collisional regime pertaining to JET or ITER is described by a pair of second-order differential equations, extending the cylindrical equations of Drake et al (1983 Phys. Fluids 26 2509) to toroidal geometry.
NASA Astrophysics Data System (ADS)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-01
We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates of the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half of that in the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented to solve the equations, with partitioned temporal discretization.
ERIC Educational Resources Information Center
Campbell, Rebecca; Greeson, Megan R.; Bybee, Deborah; Raja, Sheela
2008-01-01
This study examined the co-occurrence of childhood sexual abuse, adult sexual assault, intimate partner violence, and sexual harassment in a predominantly African American sample of 268 female veterans, randomly sampled from an urban Veterans Affairs hospital women's clinic. A combination of hierarchical and iterative cluster analysis was used to…
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
NASA Astrophysics Data System (ADS)
Ongena, J.; Koch, R.; Wolf, R.; Zohm, H.
2016-05-01
Our modern society requires environmentally friendly solutions for energy production. Energy can be released not only from the fission of heavy nuclei but also from the fusion of light nuclei. Nuclear fusion is an important option for a clean and safe solution for our long-term energy needs. The extremely high temperatures required for the fusion reaction are routinely realized in several magnetic-fusion machines. Since the early 1990s, up to 16 MW of fusion power has been released in pulses of a few seconds, corresponding to a power multiplication close to break-even. Our understanding of the very complex behaviour of a magnetized plasma at temperatures between 150 and 200 million °C surrounded by cold walls has also advanced substantially. This steady progress has resulted in the construction of ITER, a fusion device with a planned fusion power output of 500 MW in pulses of 400 s. ITER should provide answers to remaining important questions on the integration of physics and technology, through a full-size demonstration of a tenfold power multiplication, and on nuclear safety aspects. Here we review the basic physics underlying magnetic fusion: past achievements, present efforts and the prospects for future production of electrical energy. We also discuss questions related to the safety, waste management and decommissioning of a future fusion power plant.
The Joint European Torus (JET)
NASA Astrophysics Data System (ADS)
Rebut, Paul-Henri
2017-02-01
This paper addresses the history of JET, the tokamak that reached the highest performance and the experiment that so far has come closest to the eventual goal of a fusion reactor. The reader must be warned, however, that this document is not a comprehensive study of controlled thermonuclear fusion or even of JET. The next step on this road, the ITER project, is an experimental reactor; in fact, several prototypes will be required before a commercial reactor can be built. Fusion history is far from being finalised. JET is still in operation some 32 years after its first plasma and still has to provide answers to many questions before ITER takes the lead in research. Some physical interpretations of the observed phenomena, although coherent, are still under discussion. This paper also recalls some basic physics concepts necessary to the understanding of confinement; a knowledgeable reader can skip these background sections. This fascinating story, comprising successes and failures, is embedded in the complexities of the twentieth and early twenty-first centuries, at a time when world globalization is evolving and the future seems loaded with questions. The views expressed here on plasma confinement are solely those of the author. This is especially the case for magnetic turbulence, for which other scientists may have different views.
Gyrokinetic simulation of edge blobs and divertor heat-load footprint
NASA Astrophysics Data System (ADS)
Chang, C. S.; Ku, S.; Hager, R.; Churchill, M.; D'Azevedo, E.; Worley, P.
2015-11-01
A gyrokinetic study of the divertor heat-load width Lq has been performed using the edge gyrokinetic code XGC1. Both neoclassical and electrostatic turbulence physics are self-consistently included in the simulation, with a fully nonlinear Fokker-Planck collision operator and neutral recycling. Gyrokinetic ions and drift-kinetic electrons constitute the plasma in realistic magnetic separatrix geometry. The electron density fluctuations from nonlinear turbulence form blobs, similar to those seen in experiments. DIII-D and NSTX geometries have been used to represent today's conventional and tight-aspect-ratio tokamaks. XGC1 shows that the ion neoclassical orbit dynamics dominates over the blob physics in setting Lq in the sample DIII-D and NSTX plasmas, recovering the experimentally observed 1/Ip-type scaling. The magnitude of Lq is also in the right ballpark in comparison with experimental data. In an ITER standard plasma, however, XGC1 shows that the negligible neoclassical orbit excursion causes the blob dynamics to dominate Lq. In contrast to the Lq of about 1 mm (when mapped back to the outboard midplane) predicted by simple extrapolation from present-day data, XGC1 shows that Lq in ITER is about 1 cm, somewhat smaller than the average blob size. Supported by US DOE and the INCITE program.
Cognitive Styles and Socialized Attitudes of Men Who Batter: Where Should We Intervene?
ERIC Educational Resources Information Center
Eisikovits, Zvi C.; And Others
1991-01-01
Attempted to differentiate among violent and nonviolent Israeli men (n=120) and predict their physical violence. Violent and nonviolent men could be differentiated primarily on basis of their attitudes and, to lesser degree, on basis of cognitions. Batterers' physical violence was significantly predicted by men's negative attitudes toward battered…
NASA Astrophysics Data System (ADS)
Chen, Lei; Liu, Xiang; Lian, Youyun; Cai, Laizhong
2015-09-01
The hypervapotron (HV), an enhanced heat transfer technique, will be used for ITER divertor components in the dome region as well as for the enhanced heat flux first wall panels. W-Cu brazing technology has been developed at SWIP (Southwestern Institute of Physics), and one W/CuCrZr/316LN component of 450 mm×52 mm×166 mm with HV cooling channels will be fabricated for high heat flux (HHF) tests. Beforehand, an analysis was carried out to optimize the structure of the divertor component elements. ANSYS-CFX was used for the CFD analysis and ABAQUS was adopted for the thermal-mechanical calculations. The commercial code FE-SAFE was adopted to compute the fatigue life of the component. The tile size, the thickness of the tungsten tiles and the slit width between tungsten tiles were optimized, and the HHF performance under International Thermonuclear Experimental Reactor (ITER) loading conditions was simulated. A brand-new tokamak, HL-2M, with an advanced divertor configuration is under construction at SWIP, where ITER-like flat-tile divertor components are adopted. This optimized design is expected to supply valuable data for the HL-2M tokamak. This work was supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2011GB110001 and 2011GB110004).
Drifts, currents, and power scrape-off width in SOLPS-ITER modeling of DIII-D
Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; ...
2016-12-27
The effects of drifts and associated flows and currents on the width of the parallel heat flux channel (λq) in the tokamak scrape-off layer (SOL) are analyzed using the SOLPS-ITER 2D fluid transport code. Motivation is supplied by Goldston's heuristic drift (HD) model for λq, which yields the same approximately inverse poloidal magnetic field dependence seen in multi-machine regression. The analysis, focusing on a DIII-D H-mode discharge, reveals HD-like features, including comparable density and temperature fall-off lengths in the SOL, and an up-down ion pressure asymmetry that allows the net cross-separatrix ion magnetic drift flux to exceed the net anomalous ion flux. In experimentally relevant high-recycling cases, scans of both toroidal and poloidal magnetic field (Btor and Bpol) are conducted, showing minimal λq dependence on either component of the field. Insensitivity to Btor is expected and suggests that SOLPS-ITER is effectively capturing some aspects of HD physics. The absence of λq dependence on Bpol, however, is inconsistent with both the HD model and experimental results. The inconsistency is attributed to strong variation in the parallel Mach number, which violates one of the premises of the HD model.
Using AORSA to simulate helicon waves in DIIID and ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Cornwall H; Jaeger, E. F.; Berry, Lee Alan
2014-01-01
Recent efforts by Vdovin [1] and Prater [2] have shown that helicon waves (fast waves at roughly the 30th ion cyclotron harmonic) may be an attractive option for driving efficient off-axis current during non-inductive tokamak operation in DIII-D, ITER and DEMO. For DIII-D scenarios, the ray-tracing code GENRAY has been used extensively to study helicon current drive efficiency and location as a function of many plasma parameters. GENRAY has some limitations on absorption at high cyclotron harmonics, so the full-wave code AORSA, which is applicable to arbitrary Larmor radius and can therefore resolve high ion cyclotron harmonics, has recently been used to validate the GENRAY model. It will be shown that the GENRAY and AORSA driven-current profiles are comparable for the envisioned high-temperature, high-density advanced scenarios for DIII-D, where there is high single-pass absorption due to electron Landau damping. AORSA results will be shown for various plasma parameters for DIII-D and for ITER. Computational difficulties in achieving these AORSA results will also be discussed. Work supported by USDOE Contract No. DE-AC05-00OR22725. [1] V. L. Vdovin, Plasma Physics Reports, V.39, No.2, 2013 [2] R. Prater et al, Nucl. Fusion, 52, 083024, 2014
Physics and Diplomacy: A True Story
NASA Astrophysics Data System (ADS)
Sessoms, Allen
2017-01-01
Physics has played a prominent role in U.S. diplomacy since the development of nuclear weapons during World War II. The discipline expanded its reach during the Atoms for Peace initiative of president Eisenhower and continued through the Cold War with the Soviet Union. Physics maintains a prominent role in the diplomatic dialogue through efforts in the nuclear non-proliferation arena and in major international science collaborations such as the experiments at CERN, ITER and the International Space Station. Physics has also served as the template for the much broader impact of science on diplomacy. For example, climate change, energy efficiency and ocean science have all benefitted from the path blazed by physicists. But how effective have physicists been in steering clear of political dynamics while trying to infuse scientific facts into policy debates? This talk considers that question through the eyes of a physicist who has spent many years providing advice to policy makers, both inside and outside of government.
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
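The multigrid idea, cheap smoothing on the fine scale combined with a correction computed on a coarser scale, can be illustrated with a two-grid cycle for the 1-D Poisson equation. A minimal sketch of the textbook scheme, not any specific solver from the survey:

```python
import numpy as np

def weighted_jacobi(u, f, h, iters=3, omega=2.0 / 3.0):
    """Smooth A u = f for the 1-D Poisson operator -u'' with
    zero Dirichlet boundaries, using weighted Jacobi sweeps."""
    for _ in range(iters):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
        u = u_new
    return u

def two_grid_cycle(u, f, h):
    """One two-grid correction cycle: pre-smooth, restrict the residual
    (full weighting), solve the coarse problem exactly, interpolate the
    correction back, and post-smooth."""
    u = weighted_jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                          # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2.0 * h
    # assemble and solve the coarse Poisson system exactly
    Ac = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    # linear-interpolation prolongation back to the fine grid
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
    return weighted_jacobi(u + e, f, h)
```

In a full multigrid solver the exact coarse solve is itself replaced by recursion, so the work per cycle stays proportional to the number of fine-grid unknowns.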
NASA Astrophysics Data System (ADS)
Krantz, Richard; Douthett, Jack
2009-05-01
Although it is common practice to borrow tools from mathematics to apply to physics or music, it is unusual to use tools developed in music theory to mathematically describe physical phenomena. So called ``Maximally Even Set'' theory fits this unusual case. In this poster, we summarize, by example, the theory of Maximally Even (ME) sets and show how this formalism leads to the distribution of black and white keys on the piano keyboard. We then show how ME sets lead to a generalization of the well-known ``Cycle-of-Fifths'' in music theory. Subsequently, we describe ordering in one-dimensional spin-1/2 anti-ferromagnets using ME sets showing that this description leads to a fractal ``Devil's Staircase'' magnetic phase diagram. Finally, we examine an extension of ME sets, ``Iterated Maximally Even Sets'' that describes chord structure in music.
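The maximally even distribution of d elements among c chromatic positions is given by the Clough-Douthett J-function. A minimal sketch, assuming the standard formulation with an offset m selecting the mode:

```python
def maximally_even_set(c, d, m=0):
    """Clough-Douthett J-function: the maximally even placement of
    d elements among c positions, J(i) = floor((i*c + m)/d).
    For c=12, d=7 this reproduces a diatonic (white-key) collection."""
    return sorted({(i * c + m) // d for i in range(d)})
```

For example, c=12, d=7, m=5 yields the C-major scale, and c=12, d=5 yields a pentatonic (black-key-shaped) collection.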
Mutual information, neural networks and the renormalization group
NASA Astrophysics Data System (ADS)
Koch-Janusz, Maciej; Ringel, Zohar
2018-06-01
Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.
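The classic example of an exact real-space RG step, decimation of every other spin in the 1-D Ising chain, can be sketched as follows (a textbook illustration of the RG procedure the paper automates, not its machine-learning scheme):

```python
import math

def ising1d_rg_step(K):
    """Exact decimation RG for the 1-D Ising chain: summing out every
    other spin gives a renormalized coupling K' with
    tanh(K') = tanh(K)^2."""
    return math.atanh(math.tanh(K) ** 2)

def rg_flow(K, steps):
    """Iterate the RG map. In 1-D every finite coupling flows to the
    disordered fixed point K* = 0, reflecting the absence of a
    finite-temperature phase transition."""
    for _ in range(steps):
        K = ising1d_rg_step(K)
    return K
```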