Sample records for experiments simulating iter

  1. Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team

    Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that (ICRF and LH) heating in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile relative to Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), using the Coppi-Tang energy transport model with settings modified to provide the best fit to the experimental electron temperature profile. Other transport models have been examined as well.

  2. Validation of the thermal transport model used for ITER startup scenario predictions with DIII-D experimental data

    DOE PAGES

    Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...

    2010-12-08

    We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D Tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution and comparisons with data from our similarity experiments.

  3. An iterative forward analysis technique to determine the equation of state of dynamically compressed materials

    DOE PAGES

    Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...

    2017-05-18

    Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
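
The core of such an IFA loop can be sketched as follows. The one-parameter forward model, parameter name, and golden-section optimizer below are illustrative stand-ins for the paper's hydrocode and fitting machinery, not its actual implementation.

```python
# Toy iterative forward analysis (IFA) loop: repeatedly run a parameterized
# forward simulation and adjust its parameter until the simulated velocity
# profile matches the measured one. The linear one-parameter "hydrocode"
# below is a hypothetical stand-in for a real simulation.

def forward_model(bulk_modulus, times):
    """Stand-in for a hydrocode run: returns a velocity profile."""
    return [bulk_modulus * t / (1.0 + t) for t in times]

def misfit(param, times, measured):
    """Sum-of-squares mismatch between simulated and measured profiles."""
    sim = forward_model(param, times)
    return sum((s - m) ** 2 for s, m in zip(sim, measured))

def ifa_fit(times, measured, lo=1.0, hi=400.0, iterations=80):
    """Golden-section search over the single model parameter."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iterations):
        c, d = b - g * (b - a), a + g * (b - a)
        if misfit(c, times, measured) < misfit(d, times, measured):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

times = [0.1 * i for i in range(1, 50)]
measured = forward_model(137.0, times)  # synthetic "experiment", true value 137
best = ifa_fit(times, measured)
```

Recovering the known parameter from synthetic data generated by the same forward model mirrors the first-order validation step described in the abstract.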

  4. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    A cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially in a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation is able to achieve plausible cloth drapes as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of constraint enforcement by eliminating the iterative procedure.
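
The single-pass position correction can be sketched as below; the function name, the 10% stretch limit, and the 3-D point tuples are illustrative assumptions, not the authors' implementation.

```python
import math

# Sketch of a non-iterative distance constraint: after the spring-mass
# integration step, an over-stretched spring is repaired by one position
# correction applied at its free end (no iterative relaxation pass).

def enforce_distance(fixed, free, rest_length, max_stretch=1.1):
    """Pull `free` back toward `fixed` when the spring exceeds its limit."""
    dx = [f - a for f, a in zip(free, fixed)]
    dist = math.sqrt(sum(c * c for c in dx))
    limit = rest_length * max_stretch
    if dist <= limit or dist == 0.0:
        return list(free)            # within tolerance: no correction needed
    scale = limit / dist
    return [a + c * scale for a, c in zip(fixed, dx)]

# a spring fixed at the origin, stretched to 3x its rest length by gravity
corrected = enforce_distance((0.0, 0.0, 0.0), (0.0, -3.0, 0.0), rest_length=1.0)
```

Because the correction is a single projection rather than a relaxation loop, its cost is constant per spring, which is the source of the claimed savings.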

  5. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
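
The role of the Kramers–Kronig relation here, recovering a phase from measured magnitudes under an analyticity assumption, can be illustrated with its discrete analogue: cepstral reconstruction of a minimum-phase signal from its DFT magnitude alone. The code below is a generic sketch of that idea under a minimum-phase assumption, not the paper's crystal truncation rod algorithm.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (adequate for small n)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def min_phase_from_magnitude(mag):
    """Recover a minimum-phase signal from |DFT| via cepstral folding,
    a discrete analogue of the Kramers-Kronig log-magnitude/phase relation."""
    c = [v.real for v in idft([cmath.log(m) for m in mag])]  # real cepstrum
    n = len(c)
    folded = [0.0] * n            # fold onto causal part (minimum phase)
    folded[0] = c[0]
    for i in range(1, (n + 1) // 2):
        folded[i] = 2.0 * c[i]
    if n % 2 == 0:
        folded[n // 2] = c[n // 2]
    return idft([cmath.exp(v) for v in dft(folded)])

truth = [1.0, 0.5] + [0.0] * 30      # minimum-phase (zero inside unit circle)
mag = [abs(v) for v in dft(truth)]   # keep only magnitudes; "lose" the phase
recovered = min_phase_from_magnitude(mag)
```

The phase is recovered directly from the magnitudes in a single pass, which is the sense in which such direct methods avoid the initial-model and convergence ambiguities of iterative phase retrieval.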

  6. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  7. Progress in preparing scenarios for operation of the International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.

    2015-02-01

    The development of operating scenarios is one of the key issues in the research for ITER which aims to achieve a fusion gain (Q) of ˜10, while producing 500 MW of fusion power for ≥300 s. The ITER Research plan proposes a success oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ˜ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ˜ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ˜ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ˜ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted maintaining H-mode together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ˜ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ˜ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). 
H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes, consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range for Q = 6.5-8.3, using 30 MW neutral beam injection and 20 MW ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ˜ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (˜3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ˜20 MW off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations and (burn) control simulations in ITER scenarios.

  8. EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granucci, G.; Ricci, D.; Farina, D.

    The breakdown and plasma start-up in ITER are well-known issues studied in recent years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron power to assist plasma formation and current ramp up has been foreseen. This has drawn attention to the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed so far, due to the complexity of the problem. A simplified approach, extended from that proposed in ref [1], has been developed, including a multispecies impurity distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against Ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependence on impurity content and neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) appears sufficient to significantly extend the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.

  9. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
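
The piecewise idea can be illustrated on an idealized converter waveform: if the grid current over one period is a sum of simple pieces (here, constant segments, as for an idealized six-pulse bridge), each Fourier coefficient becomes a closed-form sum over the pieces rather than a numerical integral. The segment angles and amplitudes below are textbook values for a six-pulse bridge, not the actual ITER PF waveform.

```python
import math

def harmonic_amplitude(pieces, n):
    """Amplitude of the n-th harmonic of a piecewise-constant waveform.
    pieces: list of (theta_start, theta_end, value) over one period [0, 2*pi).
    Each integral of value*cos/sin over a piece has a closed form."""
    a = sum(v * (math.sin(n * t1) - math.sin(n * t0)) / n
            for t0, t1, v in pieces) / math.pi
    b = sum(v * (math.cos(n * t0) - math.cos(n * t1)) / n
            for t0, t1, v in pieces) / math.pi
    return math.hypot(a, b)

# idealized six-pulse line current: +I for 120 deg, 0, -I for 120 deg, 0
I = 1.0
pieces = [(math.pi / 6, 5 * math.pi / 6, I),
          (7 * math.pi / 6, 11 * math.pi / 6, -I)]
h = {n: harmonic_amplitude(pieces, n) for n in (1, 5, 7, 11, 13)}
```

For this waveform the closed-form sums reproduce the classical result: a fundamental of (2*sqrt(3)/pi)*I and characteristic harmonics at n = 6k±1 with amplitude falling as 1/n.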

  10. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
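
A minimal sketch of iterative repair with simulated annealing, on a toy constraint problem rather than the Shuttle application: a conflicted schedule is repeatedly "repaired" by moving one task, and worse repairs are occasionally accepted with probability exp(-delta/T). The problem size and cooling schedule are illustrative assumptions.

```python
import math
import random

def conflicts(slots, clashes):
    """Number of violated constraints: clashing tasks sharing a time slot."""
    return sum(1 for a, b in clashes if slots[a] == slots[b])

def anneal_repair(n_tasks, n_slots, clashes, steps=3000, t0=2.0, seed=1):
    rng = random.Random(seed)
    slots = [rng.randrange(n_slots) for _ in range(n_tasks)]
    cost = conflicts(slots, clashes)
    best, best_cost = list(slots), cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6   # linear cooling schedule
        task = rng.randrange(n_tasks)             # repair move: reassign a task
        old = slots[task]
        slots[task] = rng.randrange(n_slots)
        new_cost = conflicts(slots, clashes)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                       # accept (possibly worse) repair
            if cost < best_cost:
                best, best_cost = list(slots), cost
        else:
            slots[task] = old                     # reject the repair
    return best, best_cost

# 8 tasks, 3 slots; clashing pairs may not share a slot (a 3-colorable graph)
clashes = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3),
           (5, 6), (6, 7), (7, 0)]
best, best_cost = anneal_repair(8, 3, clashes)
```

The "anytime" character mentioned in the abstract shows up here as the best-so-far schedule, which can be returned whenever the search is interrupted.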

  11. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector.

    PubMed

    Barbisan, M; Zaniol, B; Pasqualotto, R

    2014-11-01

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H(-)/D(-) ion RF source, and MITICA, a prototype of the full-performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor operation and allow the performance of the two prototypes to be optimized. In particular, beam emission spectroscopy will measure the uniformity and divergence of the fast particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model underlying the simulations and presents the modeled Hα spectra for the MITICA experiment.
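
A back-of-the-envelope version of the physics behind such a diagnostic: the Hα light emitted by a fast hydrogen beam particle is Doppler shifted according to the beam velocity and the viewing angle, which is how beam energy and divergence leave a signature in the spectrum. The 60-degree viewing angle is an illustrative assumption, not the MITICA geometry.

```python
import math

C = 2.998e8                  # speed of light, m/s
LAMBDA0 = 656.28e-9          # unshifted H-alpha wavelength, m
M_H = 1.673e-27              # hydrogen atom mass, kg
E_BEAM = 100e3 * 1.602e-19   # 100 keV beam energy in joules

v_beam = math.sqrt(2 * E_BEAM / M_H)   # non-relativistic beam speed

def doppler_shift(angle_deg):
    """First-order Doppler shift of H-alpha for a given viewing angle."""
    return LAMBDA0 * (v_beam / C) * math.cos(math.radians(angle_deg))

shift_m = doppler_shift(60.0)          # shift in metres for a 60 deg view
```

At 100 keV the beam moves at about 1.5% of the speed of light, so the Doppler-shifted beam emission sits several nanometres away from the unshifted Hα line and is readily separated spectroscopically.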

  12. Examination of the Entry to Burn and Burn Control for the ITER 15 MA Baseline and Other Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessel, Charles E.; Kim, S-H.; Koechl, F.

    2014-09-01

    The entry to burn and flattop burn control in ITER will be a critical need from the first DT experiments. Simulations are used to address time-dependent behavior under a range of possible conditions that include injected power level, impurity content (W, Ar, Be), density evolution, H-mode regimes, controlled parameter (Wth, Pnet, Pfusion), and actuator (Paux, fueling, fAr), with a range of transport models. A number of physics issues at the L-H transition require better understanding to project to ITER; however, simulations indicate viable control with sufficient auxiliary power (up to 73 MW), while lower powers (as low as 43 MW) become marginal.

  13. Modeling and simulation of a beam emission spectroscopy diagnostic for the ITER prototype neutral beam injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbisan, M., E-mail: marco.barbisan@igi.cnr.it; Zaniol, B.; Pasqualotto, R.

    2014-11-15

    A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H−/D− ion RF source, and MITICA, a prototype of the full-performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor operation and allow the performance of the two prototypes to be optimized. In particular, beam emission spectroscopy will measure the uniformity and divergence of the fast particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model underlying the simulations and presents the modeled Hα spectra for the MITICA experiment.

  14. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
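
The two steps of such a decomposition can be sketched on synthetic data: (1) a Langley regression of ln(V) against airmass m gives the total optical depth via ln(V) = ln(V0) - m*tau; (2) a log-log fit of the aerosol optical depth versus wavelength gives the Junge-law exponent. The numbers below are illustrative, and the single-pass fit stands in for the paper's iterated version.

```python
import math

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def langley_tau(airmass, ln_v):
    """Total optical depth from a Langley plot: minus the regression slope."""
    return -linfit(airmass, ln_v)[0]

# synthetic Langley data at one wavelength: V0 = 1.0, tau = 0.25
airmass = [1.0, 1.5, 2.0, 2.5, 3.0, 4.0]
ln_v = [-0.25 * m for m in airmass]
tau_total = langley_tau(airmass, ln_v)

# synthetic aerosol (Junge-law) optical depths: tau_a = 0.1 * lambda**-1.3
wavelengths = [0.44, 0.55, 0.67, 0.87]   # micrometres
tau_aer = [0.1 * w ** -1.3 for w in wavelengths]
alpha = -linfit([math.log(w) for w in wavelengths],
                [math.log(t) for t in tau_aer])[0]
```

Both fits recover the parameters used to generate the synthetic data; the paper's contribution is iterating this decomposition so the aerosol exponent and the component optical depths refine each other.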

  15. Physics and technology in the ion-cyclotron range of frequency on Tore Supra and TITAN test facility: implication for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X; Bernard, J. M.; Colas, L.

    2013-01-01

    To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate the risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra and test-bed facilities, together with a significant modelling effort. The paper summarizes the recent results in the following areas: comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna. A new model is developed for calculating the ICRH sheath rectification in the antenna vicinity; the model is applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas. Full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code: with 20 MW of power, a current of 400 kA could be driven on axis in the DT scenario, and a comparison between the DT and DT(3He) scenarios is given for heating and current drive efficiencies. First operation of the CW test-bed facility TITAN, designed for testing ITER ICRH components, which can host up to a quarter of an ITER antenna. R&D on high-permittivity materials to improve the load of test facilities so as to better simulate ITER plasma antenna loading conditions.

  16. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum-likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions support the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum-likelihood based derivation of this algorithm will address some of the limitations experienced with the nonextended form of the algorithm presented here.
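
The Richardson-Lucy update has a compact multiplicative form: the current estimate is multiplied by the blurred-data ratio correlated with the mirrored PSF. The 1-D toy below (hypothetical PSF and signal, noiseless data) illustrates the iteration; real microscopy uses 2-D or 3-D PSFs.

```python
# 1-D Richardson-Lucy deconvolution sketch:
#   estimate <- estimate * correlate(data / convolve(estimate, psf), psf)

def convolve(signal, kernel):
    """Direct convolution with zero-padded (truncated) boundaries."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(data, psf, iterations=200):
    est = [1.0] * len(data)          # flat, non-negative starting estimate
    mirrored = psf[::-1]             # correlation = convolution with mirror
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [d / b if b > 1e-12 else 0.0 for d, b in zip(data, blurred)]
        corr = convolve(ratio, mirrored)
        est = [e * c for e, c in zip(est, corr)]   # multiplicative update
    return est

psf = [0.25, 0.5, 0.25]                            # toy normalized blur kernel
truth = [0.0, 0.0, 5.0, 0.0, 0.0, 3.0, 0.0, 0.0]   # two point sources
data = convolve(truth, psf)                        # noiseless blurred "image"
restored = richardson_lucy(data, psf)
```

On noiseless data the iteration sharpens the blurred peaks back toward the two point sources while keeping the estimate non-negative, the property that makes the method attractive for photon-count imaging.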

  17. Neutron streaming studies along JET shielding penetrations

    NASA Astrophysics Data System (ADS)

    Stamatelatos, Ion E.; Vasilopoulou, Theodora; Batistoni, Paola; Obryk, Barbara; Popovichev, Sergey; Naish, Jonathan

    2017-09-01

    Neutronic benchmark experiments are carried out at JET aiming to assess the neutronic codes and data used in ITER analysis. Among other activities, experiments are performed in order to validate neutron streaming simulations along long penetrations in the JET shielding configuration. In this work, neutron streaming calculations along the JET personnel entrance maze are presented. Simulations were performed using the MCNP code for Deuterium-Deuterium and Deuterium-Tritium plasma sources. The results of the simulations were compared against experimental data obtained using thermoluminescence detectors and activation foils.

  18. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  19. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.
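
The "iterative measurement update" idea can be shown on a scalar toy example: the standard EKF update is repeated, relinearizing h(x) about the latest iterate, which reduces linearization error for strongly nonlinear measurements. This is a generic iterated-EKF sketch of the underlying idea, not the sparse information-form ISEIF algorithm itself; the quadratic measurement model is illustrative.

```python
def iterated_update(x0, p0, z, h, h_jac, r, iterations=20):
    """Iterated measurement update for a scalar state (Gauss-Newton form)."""
    x = x0
    for _ in range(iterations):
        hj = h_jac(x)                    # relinearize about current iterate
        s = hj * p0 * hj + r             # innovation variance
        k = p0 * hj / s                  # Kalman gain
        x = x0 + k * (z - h(x) - hj * (x0 - x))
    p = (1.0 - k * hj) * p0              # posterior variance at final gain
    return x, p

h = lambda x: x * x                      # nonlinear measurement z = x^2
h_jac = lambda x: 2.0 * x
# prior mean 1.6 with variance 1.0; precise measurement z = 4 implies x ~ 2
x_post, p_post = iterated_update(1.6, 1.0, 4.0, h, h_jac, r=1e-6)
```

A single EKF update linearized at the prior mean would leave a noticeable bias here; the relinearization loop drives the estimate to the maximum a posteriori value.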

  20. ITER Baseline Scenario with ECCD Applied to Neoclassical Tearing Modes in DIII-D

    NASA Astrophysics Data System (ADS)

    Welander, A. G.; La Haye, R. J.; Lohr, J. M.; Humphreys, D. A.; Prater, R.; Paz-Soldan, C.; Kolemen, E.; Turco, F.; Olofsson, E.

    2015-11-01

    The neoclassical tearing mode (NTM) is a magnetic island that can occur on flux surfaces where the safety factor q is a rational number. Both m/n=3/2 and 2/1 NTMs degrade confinement, and the 2/1 mode often locks to the wall and disrupts the plasma. An NTM can be suppressed by depositing electron cyclotron current drive (ECCD) on the q-surface by injecting microwave beams into the plasma from gyrotrons. Recent DIII-D experiments have studied the application of ECCD/ECRH in the ITER Baseline Scenario. The power required from the gyrotrons can be significant enough to impact the fusion gain Q in ITER. However, if gyrotron power could be minimized or turned off in ITER when not needed, this impact would be small. In fact, tearing-stable operation at low torque has been achieved previously in DIII-D without EC power. A vision for NTM control in ITER will be described together with results obtained from simulations and experiments in DIII-D under ITER-like conditions. Work supported by the US DOE under DE-FC02-04ER54698, DE-AC02-09CH11466, DE-FG02-04ER54761.

  1. Inductive flux usage and its optimization in tokamak operation

    DOE PAGES

    Luce, Timothy C.; Humphreys, David A.; Jackson, Gary L.; ...

    2014-07-30

    The energy flow from the poloidal field coils of a tokamak to the electromagnetic and kinetic stored energy of the plasma is considered in the context of optimizing the operation of ITER. The goal is to optimize the flux usage in order to allow the longest possible burn in ITER at the desired conditions to meet the physics objectives (500 MW fusion power with an energy gain of 10). A mathematical formulation of the energy flow is derived and applied to experiments in the DIII-D tokamak that simulate the ITER design shape and relevant normalized current and pressure. The rate of rise of the plasma current was varied, and the fastest stable current rise is found to be the optimum for flux usage in DIII-D. A method to project the results to ITER is formulated. The constraints of the ITER poloidal field coil set yield an optimum at ramp rates slower than the maximum stable rate for plasmas similar to the DIII-D plasmas. Finally, experiments in present-day tokamaks for further optimization of the current rise and validation of the projections are suggested.
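
The flux bookkeeping the paper formalizes can be illustrated with the widely used Ejima scaling for the resistive poloidal flux consumed during the current ramp, psi_res = C_E * mu0 * R0 * Ip. The coefficient value 0.45 and the ITER-like numbers below are illustrative assumptions, not the paper's results.

```python
import math

MU0 = 4.0e-7 * math.pi      # vacuum permeability, H/m

def ejima_flux(c_ejima, major_radius_m, plasma_current_a):
    """Resistive poloidal flux (V-s) consumed during rampup, Ejima scaling."""
    return c_ejima * MU0 * major_radius_m * plasma_current_a

# ITER-like numbers: R0 = 6.2 m, Ip = 15 MA, C_E ~ 0.45 for an ohmic ramp
psi_res = ejima_flux(0.45, 6.2, 15.0e6)
```

Since heating during the ramp lowers the effective Ejima coefficient, every reduction in C_E translates directly into volt-seconds saved for a longer flattop burn, which is the optimization target described above.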

  2. A Huygens immersed-finite-element particle-in-cell method for modeling plasma-surface interactions with moving interface

    NASA Astrophysics Data System (ADS)

    Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao

    2018-06-01

    Surface evolution is an unavoidable issue in engineering plasma applications. In this article, an iterative method for modeling plasma-surface interactions with a moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method, which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications such as the erosion of a Hall thruster channel wall, are presented to demonstrate the features of this Huygens IFE-PIC method for simulating dynamic plasma-surface interactions.

  3. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  4. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
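
The non-iterative clustering step can be sketched with a generic density-peak approach: each point gets a local density rho (neighbors within a cutoff d_c) and a distance delta to the nearest denser point, and cluster centers are the points where both are large, found in a single pass with no iteration. The 2-D toy points below stand in for Stokes-space samples; this is an illustrative sketch, not the paper's exact algorithm.

```python
import math

def density_peak_centers(points, d_c, n_centers):
    """Rank points by rho * delta and return the indices of the top centers."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(1 for d in row if 0.0 < d < d_c) for row in dist]   # local density
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    ranked = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    return ranked[:n_centers]

# two tight 5-point constellations centered at (0, 0) and (5, 5)
pts = [(0.0, 0.0), (0.3, 0.0), (-0.3, 0.0), (0.0, 0.3), (0.0, -0.3),
       (5.0, 5.0), (5.3, 5.0), (4.7, 5.0), (5.0, 5.3), (5.0, 4.7)]
centers = density_peak_centers(pts, d_c=0.45, n_centers=2)
```

Because only densities and one nearest-denser distance per point are computed, the cost is fixed in advance, in contrast to iterative schemes such as k-means whose run time depends on convergence.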

  5. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins.

  6. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing a small amount of material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a measurement-free iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three measurement-free iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfvén modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles is specified by TRANSP simulations of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are performed to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). The effects of both alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  8. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques (classical ray-tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  9. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedure, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial patterns of the multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted, and the iterative procedure is repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of the Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
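    The EOF machinery itself is standard: decomposing a set of storage hydrographs into spatial patterns (EOFs) and temporal expansion coefficients is a singular value decomposition of the anomaly matrix. A minimal sketch follows; the multi-class weighting and the MODFLOW coupling of the paper are not reproduced, and the synthetic data are illustrative.

```python
import numpy as np

def eof_decompose(hydrographs):
    """EOF analysis of storage hydrographs.

    hydrographs: (n_times, n_wells) array. Returns spatial EOF patterns,
    temporal expansion coefficients, and fraction of variance explained.
    """
    anomalies = hydrographs - hydrographs.mean(axis=0)  # remove temporal mean per well
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                       # rows: spatial patterns
    pcs = u * s                     # columns: expansion coefficients over time
    variance = s ** 2 / np.sum(s ** 2)
    return eofs, pcs, variance

# Synthetic 3-well record driven by one seasonal signal plus noise
t = np.linspace(0.0, 1.0, 100)[:, None]
data = np.sin(2 * np.pi * t) @ np.array([[1.0, 0.5, -0.3]])
data += 0.01 * np.random.default_rng(0).normal(size=(100, 3))
eofs, pcs, var = eof_decompose(data)
```

Because the synthetic record is driven by a single signal, the leading EOF captures nearly all the variance, which is the property the calibration method exploits when selecting the best EOF combination.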

  10. Comparisons of NIF convergent ablation simulations with radiograph data.

    PubMed

    Olson, R E; Hicks, D G; Meezan, N B; Koch, J A; Landen, O L

    2012-10-01

    A technique for comparing simulation results directly with radiograph data from backlit capsule implosion experiments will be discussed. Forward Abel transforms are applied to the kappa*rho profiles of the simulation. These provide the transmission ratio (optical depth) profiles of the simulation. Gaussian and top hat blurs are applied to the simulated transmission ratio profiles in order to account for the motion blurring and imaging slit resolution of the experimental measurement. Comparisons between the simulated transmission ratios and the radiograph data lineouts are iterated until a reasonable backlighter profile is obtained. This backlighter profile is combined with the blurred, simulated transmission ratios to obtain simulated intensity profiles that can be directly compared with the radiograph data. Examples will be shown from recent convergent ablation (backlit implosion) experiments at the NIF.
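    The forward Abel transform used above is just a line-of-sight integral of a radially symmetric profile. A minimal numerical sketch is given below; function names are illustrative, and the paper's kappa*rho profiles, blurring, and backlighter steps are not reproduced.

```python
import numpy as np

def forward_abel(f, y, x_max=10.0, n=4001):
    """Forward Abel transform as a chord integral:
    F(y) = integral of f(sqrt(x^2 + y^2)) dx along the line of sight.
    """
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    r = np.sqrt(x[None, :] ** 2 + np.asarray(y, dtype=float)[:, None] ** 2)
    return f(r).sum(axis=1) * dx  # simple quadrature; f must decay by x_max

# A Gaussian radial profile has the analytic projection sqrt(pi) * exp(-y^2)
y = np.linspace(0.0, 2.0, 5)
proj = forward_abel(lambda r: np.exp(-r ** 2), y)
```

Writing the transform as a chord integral avoids the integrable singularity of the textbook form 2∫ f(r) r / sqrt(r² − y²) dr at r = y.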

  11. Suppression of tritium retention in remote areas of ITER by nonperturbative reactive gas injection.

    PubMed

    Tabarés, F L; Ferreira, J A; Ramos, A; van Rooij, G; Westerhout, J; Al, R; Rapp, J; Drenik, A; Mozetic, M

    2010-10-22

    A technique based on reactive gas injection in the afterglow region of the divertor plasma is proposed for the suppression of tritium-carbon codeposits in remote areas of ITER when operated with carbon-based divertor targets. Experiments in a divertor simulator plasma device indicate that a 4 nm/min deposition can be suppressed by addition of a 1 Pa·m³ s⁻¹ ammonia flow at 10 cm from the plasma. These results bolster the concept of nonperturbative scavenger injection for tritium inventory control in carbon-based fusion plasma devices, thus paving the way for ITER operation in the active phase with carbon-dominated plasma-facing components.

  12. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

    The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown of the accelerating electrodes of the ITER NBI. In order to design the ITER core snubber, the control parameters of the peak arc current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used to design the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, both of which are neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. Simulations and experiments show dramatically larger arc shorting currents due to the parallel inductance effect. This case shows that a core snubber designed using the FBO method gives a more compact design.

  13. Progress in Development of the ITER Plasma Control System Simulation Platform

    NASA Astrophysics Data System (ADS)

    Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel

    2017-10-01

    We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  14. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts, at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.

  15. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts, at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  16. Simulation of Fusion Plasmas

    ScienceCinema

    Holland, Chris [UC San Diego, San Diego, California, United States

    2017-12-09

    The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the “burning plasma” regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.

  17. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently in each iteration, the new weight for each point is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.

  18. Runaway electrons and ITER

    NASA Astrophysics Data System (ADS)

    Boozer, Allen H.

    2017-05-01

    The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive-gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. The physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.

  19. Runaway electrons and ITER

    DOE PAGES

    Boozer, Allen H.

    2017-03-24

    The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. Thus, the physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.

  20. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective for analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved at each timestep. Therefore, we propose a coupling method that decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared with the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
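    The payoff of a good predictor is fewer solver iterations per timestep. As an illustrative stand-in (a linear extrapolation rather than a Kalman filter, and a scalar toy equation rather than a cardiovascular model), warm-starting Newton's method with a predicted value reduces the total iteration count over a sequence of timesteps:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method; returns (root, iterations used)."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

# Toy per-timestep nonlinear equation: x^3 + x = s(t), with s varying over time.
roots, naive_iters, warm_iters = [], 0, 0
x_prev2 = x_prev1 = 0.0
for t in range(1, 50):
    s = math.sin(0.1 * t)
    f = lambda x: x ** 3 + x - s
    df = lambda x: 3 * x ** 2 + 1
    _, k0 = newton(f, df, 0.0)            # cold start at every timestep
    guess = 2 * x_prev1 - x_prev2         # predictor supplies the warm start
    x, k1 = newton(f, df, guess)
    naive_iters += k0
    warm_iters += k1
    x_prev2, x_prev1 = x_prev1, x
    roots.append(x)
```

The warm-started solver begins close to the root at each step, so its cumulative iteration count never exceeds the cold-started one; a Kalman filter plays the same role, but with a statistically principled prediction.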

  1. Comparison of simulator fidelity model predictions with in-simulator evaluation data

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.

    1983-01-01

    A full-factorial in-simulator experiment on a single-axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.

  2. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method.
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
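    The back-and-forth structure of such algorithms is easy to sketch. The toy version below uses a Cartesian FFT with a random sampling mask and a non-negativity constraint, instead of the pseudopolar PPFFT and the paper's regularization and termination criterion, to show the alternating enforcement of measured Fourier data and real-space physical constraints:

```python
import numpy as np

def fourier_iterative_recon(measured_F, mask, n_iter=200):
    """Toy real/Fourier alternating reconstruction.

    measured_F: full-grid FFT samples, trusted only where mask is True.
    Fourier constraint: replace masked coefficients with the measured data.
    Real-space constraint: non-negativity.
    """
    img = np.zeros(measured_F.shape)
    for _ in range(n_iter):
        F = np.fft.fft2(img)
        F[mask] = measured_F[mask]    # enforce measured data in Fourier space
        img = np.real(np.fft.ifft2(F))
        img[img < 0] = 0              # physical constraint in real space
    return img

# Recover a non-negative test object from 60% of its Fourier samples
rng = np.random.default_rng(1)
truth = np.zeros((32, 32))
truth[10:20, 12:22] = 1.0
mask = rng.random((32, 32)) < 0.6
recon = fourier_iterative_recon(np.fft.fft2(truth), mask)
```

Each cycle projects onto two constraint sets that both contain the true object, so the reconstruction error shrinks as the iteration proceeds.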

  3. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  4. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  5. High energy flux thermo-mechanical test of 1D-carbon-carbon fibre composite prototypes for the SPIDER diagnostic calorimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Muri, M., E-mail: michela.demuri@igi.cnr.it; Pasqualotto, R.; Dalla Palma, M.

    2014-02-15

Operation of the thermonuclear fusion experiment ITER requires additional heating via injection of neutral beams from accelerated negative ions. In the SPIDER test facility, under construction in Padova, the production of negative ions will be studied and optimised. STRIKE (Short-Time Retractable Instrumented Kalorimeter Experiment) is a diagnostic used to characterise the SPIDER beam during short pulse operation (several seconds) and to verify whether the beam meets the ITER requirement on the maximum allowed beam non-uniformity (below ±10%). The major components of STRIKE are 16 1D-CFC (Carbon-Carbon Fibre Composite) tiles, observed on the rear side by a thermal camera. This contribution gives an overview of tests under high energy particle flux, aimed at verifying the thermo-mechanical behaviour of several CFC prototype tiles. The tests were performed in the GLADIS facility at IPP (Max-Planck-Institut für Plasmaphysik), Garching. Dedicated linear and nonlinear simulations were carried out to interpret the experiments, and a comparison of the experimental data with the simulation results is presented. The results of some morphological and structural studies on the material after exposure to the GLADIS beam are also given.

  6. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
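
    As a rough illustration of where a stopping rule plugs into MLEM, here is a minimal sketch with a generic heuristic (stop when the Poisson log-likelihood stalls). The paper's statistical criterion is more elaborate; the function and threshold below are hypothetical stand-ins:

```python
import numpy as np

def mlem(A, y, n_max=500, tol=1e-6):
    """Basic MLEM for y ~ Poisson(A @ x), stopped by a simple heuristic:
    quit when the relative gain of the Poisson log-likelihood stalls.
    Assumes strictly positive data y for simplicity."""
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image
    x = np.ones(A.shape[1])                     # uniform initial estimate
    ll_old = -np.inf
    for k in range(n_max):
        proj = A @ x
        x = x * (A.T @ (y / proj)) / sens       # multiplicative EM update
        proj = A @ x
        ll = np.sum(y * np.log(proj) - proj)    # Poisson log-likelihood
        if ll - ll_old < tol * abs(ll):         # heuristic stopping test
            return x, k + 1
        ll_old = ll
    return x, n_max
```

In practice the point of a good stopping rule is to halt before noise amplification sets in, rather than waiting for likelihood convergence as this sketch does.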

  7. Modelling of edge localised modes and edge localised mode control

    DOE PAGES

    Huijsmans, G. T. A.; Chang, C. S.; Ferraro, N.; ...

    2015-02-07

Edge Localised Modes (ELMs) in ITER Q = 10 H-mode plasmas are likely to lead to large transient heat loads to the divertor. In order to avoid an ELM induced reduction of the divertor lifetime, the large ELM energy losses need to be controlled. In ITER, ELM control is foreseen using magnetic field perturbations created by in-vessel coils and the injection of small D2 pellets. ITER plasmas are characterised by low collisionality at a high density (high fraction of the Greenwald density limit). These parameters cannot simultaneously be achieved in current experiments. Thus, the extrapolation of the ELM properties and the requirements for ELM control in ITER relies on the development of validated physics models and numerical simulations. Here, we describe the modelling of ELMs and ELM control methods in ITER. The aim of this paper is not a complete review of the subject of ELM and ELM control modelling but rather to describe the current status and discuss open issues.

  8. Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.

    PubMed

    Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A

    2007-01-01

    CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.

  9. Design of the DEMO Fusion Reactor Following ITER.

    PubMed

    Garabedian, Paul R; McFadden, Geoffrey B

    2009-01-01

Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations based on mathematical equations in conservation form predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELM) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required which is comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER, with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task.

  11. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
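
    The communication advantage of a Chebyshev-type solver comes from its inner-product-free recurrence: unlike CG, no global reductions are needed inside the loop. A minimal textbook sketch (the classical Chebyshev iteration in Saad's formulation, not the P-CSI code itself, and without its block preconditioner):

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, n_iter=50):
    """Chebyshev iteration for an SPD matrix A with known eigenvalue
    bounds [lam_min, lam_max].  The recurrence uses only matrix-vector
    products and AXPYs -- no inner products, hence no global reductions
    in a distributed setting."""
    theta = 0.5 * (lam_max + lam_min)     # center of the spectrum
    delta = 0.5 * (lam_max - lam_min)     # half-width of the spectrum
    sigma1 = theta / delta
    x = np.zeros_like(b)
    r = b - A @ x
    rho = 1.0 / sigma1
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```

The price of dropping inner products is that the eigenvalue bounds must be estimated in advance; a poor estimate slows or breaks convergence, which is why an effective preconditioner matters in the P-CSI setting.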

  12. Isotope and fast ions turbulence suppression effects: Consequences for high-β ITER plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Görler, T.; Jenko, F.

    2018-05-01

    The impact of isotope effects and fast ions on microturbulence is analyzed by means of non-linear gyrokinetic simulations for an ITER hybrid scenario at high beta obtained from previous integrated modelling simulations with simplified assumptions. Simulations show that ITER might work very close to threshold, and in these conditions, significant turbulence suppression is found from DD to DT plasmas. Electromagnetic effects are shown to play an important role in the onset of this isotope effect. Additionally, even external ExB flow shear, which is expected to be low in ITER, has a stronger impact on DT than on DD. The fast ions generated by fusion reactions can additionally reduce turbulence even more although the impact in ITER seems weaker than in present-day tokamaks.

  13. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
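
    The "iterative" scheme compared above, implicit Newmark with a fixed number of Newton iterations per step, can be sketched for an SDOF system as follows (a generic average-acceleration implementation under assumed interfaces, not the authors' hybrid-simulation code; fs and kt are user-supplied restoring-force and tangent-stiffness models):

```python
import numpy as np

def newmark_fixed_iter(m, c, fs, kt, p, dt, n_steps, n_iter=3,
                       beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) with a FIXED number of
    Newton iterations per step, as used in hybrid simulation where the
    iteration count must be known in advance.  fs(u): restoring force,
    kt(u): tangent stiffness, p(t): external load."""
    u = v = 0.0
    a = (p(0.0) - c * v - fs(u)) / m        # consistent initial accel.
    hist = [u]
    for n in range(1, n_steps + 1):
        t = n * dt
        u_new = u + dt * v + dt**2 * (0.5 - beta) * a   # predictor
        for _ in range(n_iter):                          # fixed count
            a_new = (u_new - u - dt*v - dt**2*(0.5 - beta)*a) / (beta*dt**2)
            v_new = v + dt * ((1.0 - gamma)*a + gamma*a_new)
            R = p(t) - m*a_new - c*v_new - fs(u_new)     # residual force
            keff = kt(u_new) + gamma*c/(beta*dt) + m/(beta*dt**2)
            u_new += R / keff                            # Newton correction
        # state update consistent with the final displacement
        a_new = (u_new - u - dt*v - dt**2*(0.5 - beta)*a) / (beta*dt**2)
        v_new = v + dt * ((1.0 - gamma)*a + gamma*a_new)
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)
```

For a linear spring the Newton correction is exact after one iteration, so the fixed count mainly matters once the specimen behaves nonlinearly.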

  14. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
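
    A stripped-down version of the identify-then-redesign cycle can be sketched as below, substituting full-state least-squares identification and an LQR redesign for the paper's Kalman-filter-based closed-loop identification; all function names and the example system are illustrative assumptions:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, n_iter=500):
    """State-feedback gain by iterating the discrete Riccati difference
    equation to (approximate) convergence."""
    P = Q.copy()
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def identify_and_redesign(A_true, B_true, Q, R, K0, n_cycles=3, T=200, seed=0):
    """Identification/redesign cycles: run the closed loop with the
    current gain plus probing noise, least-squares-fit (A, B) from the
    recorded (x, u, x_next) data, then redesign the gain from the
    identified model."""
    rng = np.random.default_rng(seed)
    n, m = B_true.shape
    K = K0
    for _ in range(n_cycles):
        X, U, Xn = [], [], []
        x = rng.normal(size=n)
        for _ in range(T):
            u = -K @ x + 0.1 * rng.normal(size=m)   # probing excitation
            xn = A_true @ x + B_true @ u            # "plant" response
            X.append(x); U.append(u); Xn.append(xn)
            x = xn
        Z = np.hstack([np.array(X), np.array(U)])
        Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
        A_id, B_id = Theta[:n].T, Theta[n:].T       # identified model
        K = dlqr_gain(A_id, B_id, Q, R)             # controller redesign
    return K
```

The real method identifies an open-loop model and Kalman gain from input/output data only; the full-state regression here just makes the loop structure of the cycles concrete.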

  15. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
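
    The building block of LDA-ITER is the Fisher discriminant between the two frame ensembles; a one-shot version (without the iterative refinement that gives the method its name, and with an assumed ridge term for invertibility) might look like:

```python
import numpy as np

def lda_axis(X1, X2, ridge=1e-6):
    """Fisher discriminant axis separating two trajectory ensembles,
    each given as a (frames x coordinates) array.  A one-shot LDA shown
    purely for illustration; LDA-ITER refines such axes iteratively so
    the projected distributions do not overlap."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-class scatter, ridge-regularized for invertibility
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    Sw += ridge * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu2)
    return w / np.linalg.norm(w)
```

Projecting both trajectories onto the axis (`X1 @ w`, `X2 @ w`) then gives one-dimensional distributions whose separation can be inspected directly.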

  16. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  17. An Experimental Examination of the Loss-of-Flow Accident Phenomenon for Prototypical ITER Divertor Channels of Y = 0 and Y = 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Theron D.; McDonald, Jimmie M.; Cadwallader, Lee C.

    2000-01-15

This paper discusses the thermal response of two prototypical International Thermonuclear Experimental Reactor (ITER) divertor channels during simulated loss-of-flow-accident (LOFA) experiments. The thermal response was characterized by the time-to-burnout (TBO), which is a figure of merit on the mockups' survivability. Data from the LOFA experiments illustrate that (a) the pre-LOFA inlet velocity does not significantly influence the TBO, (b) the incident heat flux (IHF) does influence the TBO, and (c) a swirl tape insert significantly improves the TBO and promotes the initiation of natural circulation. This natural circulation enabled the mockup to absorb steady-state IHFs after the coolant circulation pump was disabled. Several methodologies for thermal-hydraulic modeling of the LOFA were attempted.

  19. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  20. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT), based on the radiative transfer equation (RTE), which models light propagation more accurately than diffusion approximation (DA). We investigate the reconstruction of absorption coefficient and scattering coefficient of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its cheap computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of original and adjoint RTEs in the framework of optimization, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
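
    The structure of the fixed-point iteration for the absorption coefficient can be illustrated in a deliberately simplified 1D setting, replacing the RTE fluence solve with plain Beer-Lambert attenuation (an assumption for illustration only; the paper solves the full radiative transfer equation):

```python
import numpy as np

def fluence(mu_a, dx, phi0=1.0):
    """Toy 1D slab fluence: pure Beer-Lambert attenuation of a beam
    entering at x = 0 (stand-in for the RTE solve)."""
    att = np.concatenate(([0.0],
                          np.cumsum(0.5 * (mu_a[1:] + mu_a[:-1]) * dx)))
    return phi0 * np.exp(-att)

def recover_mu_a(H, dx, n_iter=50):
    """Fixed-point iteration mu <- H / Phi(mu), where the photoacoustic
    data is the absorbed energy density H = mu_a * Phi(mu_a)."""
    mu = np.full_like(H, 0.1)        # arbitrary positive initial guess
    for _ in range(n_iter):
        mu = H / fluence(mu, dx)
    return mu
```

For moderate optical depths the map is a contraction, which is the intuition behind the convergence proof referenced in the abstract.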

  1. Quantum-Inspired Multidirectional Associative Memory With a Self-Convergent Iterative Learning.

    PubMed

    Masuyama, Naoki; Loo, Chu Kiong; Seera, Manjeevan; Kubota, Naoyuki

    2018-04-01

Quantum-inspired computing is an emerging research area, which has significantly improved the capabilities of conventional algorithms. In general, quantum-inspired Hopfield associative memory (QHAM) has demonstrated quantum information processing in neural structures. This has resulted in an exponential increase in storage capacity while explaining the extensive memory, and it has the potential to illustrate the dynamics of neurons in the human brain when viewed from a quantum mechanics perspective, although the application of QHAM is limited to autoassociation. In this paper, we introduce a quantum-inspired multidirectional associative memory (QMAM) with a one-shot learning model, and QMAM with a self-convergent iterative learning model (IQMAM), based on QHAM. The self-convergent iterative learning enables the network to progressively develop a resonance state, from inputs to outputs. The simulation experiments demonstrate the advantages of QMAM and IQMAM, especially their stability and recall reliability.

  2. Integrated modeling of plasma ramp-up in DIII-D ITER-like and high bootstrap current scenario discharges

    NASA Astrophysics Data System (ADS)

    Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team

    2018-04-01

Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model has been demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than other proposed transport models. For the high bootstrap current case, the predicted electron and ion temperature profiles agree better with experiment when the VX model is used. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.

  3. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on the above observations, the Karhunen-Loève (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses the Gauss-Seidel iterative calculation to minimize the PWLS objective function in the image domain. We also compared the KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show a comparable performance of these three PWLS methods in suppressing the noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS noise reduction may have an advantage in computation for low-dose CT imaging, especially for dynamic high-resolution studies.
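
    Gauss-Seidel minimization of a PWLS objective has a simple closed-form coordinate update. A 1D scalar analogue (illustrative only; the paper works in sinogram/image space with KL-decorrelated components, and the weights play the role of the inverse noise variance):

```python
import numpy as np

def pwls_gauss_seidel(y, w, beta, n_sweeps=100):
    """Penalized weighted least-squares smoothing of a 1D signal by
    Gauss-Seidel sweeps: minimize
        sum_i w_i * (y_i - x_i)^2  +  beta * sum_i (x_i - x_{i-1})^2.
    Each coordinate update below is the exact minimizer of the
    objective with all other coordinates held fixed."""
    x = y.astype(float).copy()
    n = len(x)
    for _ in range(n_sweeps):
        for i in range(n):
            num = w[i] * y[i]
            den = w[i]
            if i > 0:
                num += beta * x[i - 1]; den += beta
            if i < n - 1:
                num += beta * x[i + 1]; den += beta
            x[i] = num / den          # exact 1D coordinate minimizer
    return x
```

Setting beta = 0 returns the data unchanged; increasing beta trades fidelity for smoothness, which is the same noise-resolution tradeoff discussed in the abstract.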

  4. Progress of IRSN R&D on ITER Safety Assessment

    NASA Astrophysics Data System (ADS)

    Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.

    2012-08-01

    The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support to the French "Autorité de Sûreté Nucléaire", is analysing the safety of ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixtures explosion; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influent factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks have been used in 2011 for the analysis of the ITER safety file. In the near future, this R&D global programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.

  5. Demonstrating the Physics Basis for the ITER 15 MA Inductive Discharge on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Wolfe, S. M.; Hutchinson, I. H.; Hughes, J. W.; Lin, Y.; Ma, Y.; Mikkelsen, D. R.; Poli, F.; Reinke, M. L.; Wukitch, S. J.

    2012-10-01

Rampup discharges in C-Mod matching ITER's current diffusion times show that ICRF heating can save V-s but has only a weak effect on the current profile, despite strong modifications of the central electron temperature. Simulations of these discharges with TSC, and TORIC for the ICRF, using multiple transport models, miss the temperature profile evolution and the experimental internal self-inductance li by amounts large enough to be unacceptable for projections to ITER operation. For the flattop phase, experiments with EDA H-modes approach the ITER parameter targets of q95 = 3, H98 = 1, n/nGr = 0.85, betaN = 1.7, and k = 1.8, and sustain them for a duration similar to a normalized ITER flattop time. The discharges show a degradation of energy confinement at higher densities, but increasing H98 with increasing net power to the plasma. For these discharges, intrinsic impurities (B, Mo) provided radiated power fractions of 25-37%. Experiments show that the plasma can remain in H-mode during the rampdown with ICRF injection, that the density decreases with Ip while in H-mode, and that the back transition occurs when the net power reaches about half the L-H transition power. C-Mod indicates that faster rampdowns are preferable. Work supported by US Dept of Energy under DE-AC02-09CH11466 and DE-FC02-99ER54512.

  6. SciDAC GSEP: Gyrokinetic Simulation of Energetic Particle Turbulence and Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhihong

Energetic particle (EP) confinement is a key physics issue for the burning plasma experiment ITER, the crucial next step in the quest for clean and abundant energy, since ignition relies on self-heating by energetic fusion products (α-particles). Due to the strong coupling of EP with burning thermal plasmas, plasma confinement in the ignition regime is one of the most uncertain factors when extrapolating from existing fusion devices to the ITER tokamak. EP populations in current tokamaks are mostly produced by auxiliary heating such as neutral beam injection (NBI) and radio frequency (RF) heating. Remarkable progress in developing comprehensive EP simulation codes and understanding basic EP physics has been made by two concurrent SciDAC EP projects (GSEP) funded by the Department of Energy (DOE) Office of Fusion Energy Science (OFES), which have successfully established gyrokinetic turbulence simulation as a necessary paradigm shift for studying EP confinement in burning plasmas. Verification and validation have rapidly advanced through close collaborations between simulation, theory, and experiment. Furthermore, productive collaborations with computational scientists have enabled EP simulation codes to effectively utilize current petascale computers and emerging exascale computers. We review here key physics progress in the GSEP projects regarding verification and validation of gyrokinetic simulations, nonlinear EP physics, EP coupling with thermal plasmas, and reduced EP transport models. Advances in high performance computing through collaborations with computational scientists that enable these large scale electromagnetic simulations are also highlighted. These results have been widely disseminated in numerous peer-reviewed publications, including many Phys. Rev. Lett. papers, and in many invited presentations at prominent fusion conferences such as the biennial International Atomic Energy Agency (IAEA) Fusion Energy Conference and the annual meeting of the American Physical Society, Division of Plasma Physics (APS-DPP).

  7. Tungsten impurity transport experiments in Alcator C-Mod to address high priority research and development for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loarte, A.; Polevoi, A. R.; Hosokawa, M.

    2015-05-15

Experiments in Alcator C-Mod tokamak plasmas in the Enhanced D-alpha H-mode regime with ITER-like mid-radius plasma density peaking and Ion Cyclotron Resonant heating, in which tungsten is introduced by the laser blow-off technique, have demonstrated that accumulation of tungsten in the central region of the plasma does not take place in these conditions. The measurements obtained are consistent with anomalous transport dominating tungsten transport except in the central region of the plasma, where tungsten transport is neoclassical, as previously observed in other devices with dominant neutral beam injection heating, such as JET and ASDEX Upgrade. In contrast to such results, however, the measured scale lengths for plasma temperature and density in the central region of these Alcator C-Mod plasmas, with density profiles relatively flat in the core region due to the lack of core fuelling, are favourable for preventing inter- and intra-sawtooth tungsten accumulation in this region under dominance of neoclassical transport. Simulations of ITER H-mode plasmas, including both anomalous (modelled by the Gyro-Landau-Fluid code GLF23) and neoclassical transport for main ions and tungsten, and with density profiles of similar peaking to those obtained in Alcator C-Mod, show that accumulation of tungsten in the central plasma region is also unlikely to occur in stationary ITER H-mode plasmas due to the low fuelling source by the neutral beam injection (injection energy ∼ 1 MeV), which is in good agreement with findings in the Alcator C-Mod experiments.

  8. Computer Simulation and Experiments on the Quasi-Static Mechanics and Transport Properties of Granular Materials

    DTIC Science & Technology

    1993-10-01

  9. Post-game analysis: An initial experiment for heuristic-based resource management in concurrent systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.

    1987-01-01

    In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis' that is based on data gathered during program execution is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. These heuristics are applied to predict execution time changes in response to small perturbations applied to the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.

  10. Fast Ion Effects During Test Blanket Module Simulation Experiments in DIII-D

    NASA Astrophysics Data System (ADS)

    Kramer, G. J.; Budny, R.; Nazikian, R.; Heidbrink, W. W.; Kurki-Suonio, T.; Salmi, A.; Schaffer, M. J.; van Zeeland, M. A.; Shinohara, K.; Snipes, J. A.; Spong, D.

    2010-11-01

    The fast beam-ion confinement in the presence of a scaled mock-up of two Test Blanket Modules (TBM) for ITER was studied in DIII-D. The TBM on DIII-D has four vertically arranged protective carbon tiles with thermocouples placed at the back of each tile. Temperature increases of up to 200 °C were measured for the two tiles closest to the midplane when the TBM fields were present. These measurements agree qualitatively with results from the full orbit-following beam-ion code SPIRAL, which predicts beam-ion losses localized on the central two carbon tiles when the TBM fields are present. Within the experimental uncertainties, no significant change in the fast-ion population was found in the core of these plasmas, which is consistent with SPIRAL analysis. These experiments indicate that the TBM fields do not affect the fast-ion confinement in a harmful way, which is good news for ITER.

  11. Research on error control and compensation in magnetorheological finishing.

    PubMed

    Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng

    2011-07-01

    Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of simulation precision in the actual process, and the precision requirements cannot be met in a single treatment but only after several iterations. We investigate the reasons for this problem through simulations and experiments. By controlling and compensating for the chief errors in the manufacturing procedure, such as the removal function calculation error, the positioning error of the removal function, and the dynamic performance limitations of the CNC machine, the residual error convergence ratio (the ratio of figure error before and after processing) in a single process is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and a verification experiment was carried out on the experimental device we developed. The part is a circular plane mirror of fused silica, and the surface figure error is improved from the initial λ/5 [peak-to-valley (PV), λ=632.8 nm] and λ/30 [root-mean-square (rms)] to the final λ/40 (PV) and λ/330 (rms) in a single 4.4 min iteration. The results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.
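    The reported single-pass improvement implies the convergence ratio directly; a quick check of the arithmetic, using the stated reference wavelength λ = 632.8 nm:

```python
# Residual error convergence ratio = figure error before / after one MRF pass,
# using the values reported in the abstract (HeNe wavelength, in nm).
wavelength = 632.8

pv_before, pv_after = wavelength / 5, wavelength / 40      # peak-to-valley
rms_before, rms_after = wavelength / 30, wavelength / 330  # root-mean-square

pv_ratio = pv_before / pv_after     # 40/5   = 8-fold PV improvement
rms_ratio = rms_before / rms_after  # 330/30 = 11-fold rms improvement
```

    So the single 4.4 min pass improved the figure roughly 8-fold in PV and 11-fold in rms terms.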

  12. Cannibalism, Kuru, and Mad Cows: Prion Disease As a "Choose-Your-Own-Experiment" Case Study to Simulate Scientific Inquiry in Large Lectures.

    PubMed

    Serrano, Antonio; Liebner, Jeffrey; Hines, Justin K

    2016-01-01

    Despite significant efforts to reform undergraduate science education, students often perform worse on assessments of perceptions of science after introductory courses, demonstrating a need for new educational interventions to reverse this trend. To address this need, we created An Inexplicable Disease, an engaging, active-learning case study that is unusual because it aims to simulate scientific inquiry by allowing students to iteratively investigate the Kuru epidemic of 1957 in a choose-your-own-experiment format in large lectures. The case emphasizes the importance of specialization and communication in science and is broadly applicable to courses of any size and sub-discipline of the life sciences.

  13. Time-to-burnout data for a prototypical ITER divertor tube during a simulated loss of flow accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, T.D.; Watson, R.D.; McDonald, J.M.

    The Loss of Flow Accident (LOFA) is a serious safety concern for the International Thermonuclear Experimental Reactor (ITER), as it has been suggested that more than 100 seconds are necessary to safely shut down the plasma when ITER is operating at full power. In this experiment, the thermal response of a prototypical ITER divertor tube during a simulated LOFA was studied. The divertor tube was fabricated from oxygen-free high-conductivity copper with a square geometry and a circular coolant channel. The coolant channel inner diameter was 0.77 cm, the heated length was 4.0 cm, and the heated width was 1.6 cm. The mockup did not feature any flow enhancement techniques, i.e., swirl tape, helical coils, or internal fins. One-sided surface heating of the mockup was accomplished with the 30 kW Sandia Electron Beam Test System. After steady-state temperatures were reached in the mockup, as determined by two Type-K thermocouples installed 0.5 mm beneath the heated surface, the coolant pump was manually tripped off and the coolant flow allowed to coast down naturally. Electron beam heating continued after the pump trip until the divertor tube's heated surface exhibited the high-temperature transient normally indicative of rapidly approaching burnout. The experimental data showed that time-to-burnout increases proportionally with increasing inlet velocity and decreases proportionally with increasing incident heat flux.

  14. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

    Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature for obtaining the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups similar to digital holographic interferometry but omitting the reference wave, displacement, deformation, or shape measurement is theoretically possible. To achieve this, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources on the object surface, but about their relative phase as well. Not only do experiments require strict mechanical precision to record useful data; even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequence, speckle field characteristics, and sampling. Experiments were also done to demonstrate this principle with a deformable object as large as 5×5 cm. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
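    The classic two-plane ancestor of such algorithms is the Gerchberg-Saxton loop, which alternately enforces the measured moduli in two planes. A minimal sketch follows, with an FFT pair standing in for the paper's longitudinally displaced planes and made-up test data; it is an illustration of the iteration pattern, not the authors' method:

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, iters=50, seed=0):
    """Two-plane Gerchberg-Saxton phase retrieval: alternately impose the
    measured object-plane and Fourier-plane moduli. A textbook sketch; the
    paper's setup uses several longitudinally displaced intensity patterns
    and free-space propagation rather than a single FFT pair."""
    rng = np.random.default_rng(seed)
    # Start from the object modulus with a random phase guess
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    errors = []
    for _ in range(iters):
        F = np.fft.fft2(field)
        errors.append(np.linalg.norm(np.abs(F) - amp_fourier))
        F = amp_fourier * np.exp(1j * np.angle(F))      # impose Fourier modulus
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))  # impose object modulus
    return field, errors

# Consistent pair of modulus constraints built from a known diffuse test field
rng = np.random.default_rng(1)
true_field = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))  # unit-amplitude speckle
rec, errors = gerchberg_saxton(np.abs(true_field), np.abs(np.fft.fft2(true_field)))
```

    By the error-reduction property, the Fourier-plane residual is non-increasing from one iteration to the next, though for diffuse objects the loop typically stagnates well before recovering the true phase.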

  15. Simulation and Analysis of Launch Teams (SALT)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.

  16. Framework for three-dimensional coherent diffraction imaging by focused beam x-ray Bragg ptychography.

    PubMed

    Hruszkewycz, Stephan O; Holt, Martin V; Tripathi, Ash; Maser, Jörg; Fuoss, Paul H

    2011-06-15

    We present the framework for convergent beam Bragg ptychography, and, using simulations, we demonstrate that nanocrystals can be ptychographically reconstructed from highly convergent x-ray Bragg diffraction. The ptychographic iterative engine is extended to three dimensions and shown to successfully reconstruct a simulated nanocrystal using overlapping raster scans with a defocused curved beam, the diameter of which matches the crystal size. This object reconstruction strategy can serve as the basis for coherent diffraction imaging experiments at coherent scanning nanoprobe x-ray sources.

  17. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
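    For contrast with the local variant studied here, plain value iteration updates the value function over the whole state space on every sweep. A minimal sketch on a hypothetical two-state, two-action MDP (all numbers illustrative, not from the paper):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=1000):
    """Textbook (global) value iteration for a finite MDP.

    P[a][s, s'] is the probability of moving s -> s' under action a;
    R[a][s] is the immediate reward. The paper's *local* algorithm would
    update V only on a chosen subset of states per iteration; here every
    state is swept, so this is the baseline the local variant relaxes.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)          # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=0)         # optimal values and a greedy policy

P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.1, 0.9], [0.9, 0.1]])]   # action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
V, policy = value_iteration(P, R)
```

    The returned V satisfies the Bellman optimality equation to within the tolerance, which is exactly the admissibility-style check a termination criterion must certify.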

  18. Comparison of JET AVDE disruption data with M3D simulations and implications for ITER

    DOE PAGES

    Strauss, H.; Joffrin, E.; Riccardo, V.; ...

    2017-10-02

    Nonlinear 3D MHD asymmetric vertical displacement disruption simulations have been performed using JET equilibrium reconstruction initial data. Several experimentally measured quantities were compared with the simulation, including vertical displacement, halo current, toroidal current asymmetry, and toroidal rotation. The experimental data and the simulations are in reasonable agreement. Also compared was the correlation of the toroidal current asymmetry and the vertical displacement asymmetry. The Noll relation between asymmetric wall force and vertical current moment is verified in the simulations, as is the toroidal flux asymmetry. Though JET is a good predictor of ITER disruption behavior, JET and ITER can be in different parameter regimes, and extrapolating from JET data can overestimate the ITER wall force.

  19. Comparison of JET AVDE disruption data with M3D simulations and implications for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, H.; Joffrin, E.; Riccardo, V.

    Nonlinear 3D MHD asymmetric vertical displacement disruption simulations have been performed using JET equilibrium reconstruction initial data. Several experimentally measured quantities were compared with the simulation, including vertical displacement, halo current, toroidal current asymmetry, and toroidal rotation. The experimental data and the simulations are in reasonable agreement. Also compared was the correlation of the toroidal current asymmetry and the vertical displacement asymmetry. The Noll relation between asymmetric wall force and vertical current moment is verified in the simulations, as is the toroidal flux asymmetry. Though JET is a good predictor of ITER disruption behavior, JET and ITER can be in different parameter regimes, and extrapolating from JET data can overestimate the ITER wall force.

  20. Material Surface Damage under High Pulse Loads Typical for ELM Bursts and Disruptions in ITER

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Pestchanyi, S. E.; Safronov, V. M.; Bazylev, B. N.; Garkusha, I. E.

    The divertor armour material for the tokamak ITER will probably be carbon manufactured as fibre composites (CFC) and tungsten as either brush-like structures or thin plates. Disruptive pulse loads, where the heat deposition Q may reach 10² MJ/m² on a time scale τ of 3 ms, or operation in the ELMy H-mode at repetitive loads with Q ≈ 3 MJ/m² and τ ≈ 0.3 ms, deteriorate armour performance. This work surveys recent numerical and experimental investigations of erosion mechanisms in these off-normal regimes carried out at FZK, TRINITI, and IPP-Kharkov. The modelling uses the anisotropic thermomechanics code PEGASUS-3D for the simulation of CFC brittle destruction, the surface melt motion code MEMOS-1.5D for tungsten targets, and the radiation-magnetohydrodynamics code FOREV-2D for calculating the plasma impact and simulating the heat loads for the ITER regime. Experiments aimed at validating these codes are being carried out at the plasma gun facilities MK-200UG, QSPA-T, and QSPA-Kh50, which produce powerful streams of hydrogen plasma with Q = 10–30 MJ/m² and τ = 0.03–0.5 ms. Essential results are, for CFC targets, the experiments at high heat loads and the development of a local overheating model incorporated in PEGASUS-3D, and for tungsten targets the analysis of evaporation and melt motion erosion on the basis of MEMOS-1.5D calculations for repetitive ELMs.

  1. Advances in simulation of wave interactions with extended MHD phenomena

    NASA Astrophysics Data System (ADS)

    Batchelor, D.; Abla, G.; D'Azevedo, E.; Bateman, G.; Bernholdt, D. E.; Berry, L.; Bonoli, P.; Bramley, R.; Breslau, J.; Chance, M.; Chen, J.; Choi, M.; Elwasif, W.; Foley, S.; Fu, G.; Harvey, R.; Jaeger, E.; Jardin, S.; Jenkins, T.; Keyes, D.; Klasky, S.; Kruger, S.; Ku, L.; Lynch, V.; McCune, D.; Ramos, J.; Schissel, D.; Schnack, D.; Wright, J.

    2009-07-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks, with initial evaluations of optimized locations for RF power deposition.

  2. Simulation of High Power Lasers (Preprint)

    DTIC Science & Technology

    2010-06-01

    integration, which requires communication of zonal boundary information after each inner iteration of the Gauss-Seidel or Jacobi matrix solver. Each...experiment consisting of a supersonic (M~2.2) converging-diverging nozzle section with secondary mass injection in the nozzle expansion downstream of...consists of a section of a supersonic (M~2.2) converging-diverging slit nozzle with one large and two small orifices that inject reactants into the

  3. The Volcanic Hazards Simulation: Students behaving expert-like when faced with challenging, authentic tasks during a simulated Volcanic Crisis

    NASA Astrophysics Data System (ADS)

    Dohaney, J. A.; Kennedy, B.; Brogt, E.; Gravley, D.; Wilson, T.; O'Steen, B.

    2011-12-01

    This qualitative study investigates behaviors and experiences of upper-year geosciences undergraduate students during an intensive role-play simulation, in which the students interpret geological data streams and manage a volcanic crisis event. We present the development of the simulation, its academic tasks, (group) role assignment strategies and planned facilitator interventions over three iterations. We aim to develop and balance an authentic, intensive and highly engaging capstone activity for volcanology and geo-hazard courses. Interview data were collected from academic and professional experts in the fields of Volcanology and Hazard Management (n=11) in order to characterize expertise in the field, characteristics of key roles in the simulation, and to validate the authenticity of tasks and scenarios. In each iteration, observations and student artifacts were collected (total student participants: 68) along with interviews (n=36) and semi-structured, open-ended questionnaires (n=26). Our analysis of these data indicates that increasing the structure (i.e. organization, role-specific tasks and responsibilities) lessens non-productive group dynamics, which allows for an increase in difficulty of academic tasks within the simulation without increasing the cognitive load on students. Under these conditions, students exhibit professional expert-like behaviours, in particular in the quality of decision-making, communication skills and task-efficiency. In addition to illustrating the value of using this simulation to teach geosciences concepts, this study has implications for many complex situated-learning activities.

  4. Incoherent beam combining based on the momentum SPGD algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng

    2018-05-01

    Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration but also preserves the stability of the combining process, demonstrating its feasibility in a beam combining system.
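    The momentum modification can be sketched in a few lines: the plain SPGD gradient estimate from paired random perturbations is accumulated into a velocity term rather than applied directly. The metric, gain, and momentum factor below are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def spgd_momentum(metric, u0, gain=0.05, perturb=0.1, beta=0.8, iters=300, seed=0):
    """Maximize metric(u) by SPGD with a momentum (heavy-ball) term.

    Plain SPGD applies gain * grad_est directly; the momentum variant
    accumulates it into a velocity v, which smooths the stochastic
    estimates and speeds convergence. All parameter values here are
    illustrative, not those used in the paper.
    """
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    v = np.zeros_like(u)                                       # momentum accumulator
    history = [metric(u)]
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)   # Bernoulli dither
        dJ = metric(u + du) - metric(u - du)                   # two-sided metric change
        grad_est = dJ * du / (2 * perturb ** 2)                # SPGD gradient estimate
        v = beta * v + gain * grad_est
        u = u + v
        history.append(metric(u))
    return u, history

# Toy combining metric: on-axis intensity of two beams with phase offsets u
metric = lambda u: abs(np.exp(1j * u[0]) + np.exp(1j * u[1])) ** 2
u_opt, hist = spgd_momentum(metric, [1.0, -1.0])
```

    Setting beta = 0 recovers plain SPGD, which makes the speed-up easy to compare on the same metric.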

  5. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    DOE PAGES

    de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...

    2017-11-22

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built from a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramps down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. As a result, ITER terminations will remain longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  6. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-Mod Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; JET Contributors; the KSTAR Team; the NSTX-U Team; the TCV Team; the ITPA IOS members and experts

    2018-02-01

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built from a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramps down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. As a result, ITER terminations will remain longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  7. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on conventional filtered back projection, and because experimental limitations leave only an incomplete set of measurements, the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  8. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on conventional filtered back projection, and because experimental limitations leave only an incomplete set of measurements, the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
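    The MAP formulation is easiest to see with a toy linear forward model and a quadratic smoothness prior, where iterative minimization of the cost can be checked against a closed-form solution. This is an illustrative stand-in: the paper's TEM forward model and prior are considerably more elaborate, and A, b, and lam below are made up:

```python
import numpy as np

def map_reconstruct(A, b, lam=0.5, iters=2000):
    """Toy MAP estimation in the spirit of MBIR: minimize the cost
    ||A x - b||^2 + lam ||D x||^2, i.e. a Gaussian-noise forward model plus
    a quadratic smoothness prior, by gradient descent. The paper's forward
    model and prior are more elaborate; this sketch only shows the
    cost-minimization structure.
    """
    n = A.shape[1]
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first-difference prior operator
    x = np.zeros(n)
    # Step size from a crude Lipschitz bound on the cost gradient
    L = 2 * (np.linalg.norm(A, 2) ** 2 + 4 * lam)
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * D.T @ (D @ x)
        x -= grad / L
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))   # hypothetical forward model
b = rng.standard_normal(30)         # hypothetical measurements
x_map = map_reconstruct(A, b)
```

    Because the prior here is quadratic, the iterate can be validated against the normal-equation solution, a check that is unavailable for the non-quadratic priors typically used in MBIR.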

  9. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent and yields a lower overall χ². In the single pencil-beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.

  10. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other effects; baseline correction is therefore a necessary and crucial step before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method adaptively determines the structuring element first and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible in handling different kinds of baselines in various practical situations. Comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be used for baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
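    The core of such a scheme can be sketched with a fixed structuring element (the paper determines it adaptively): opening clips peaks narrower than the window, and iterating a smoothing step with an element-wise minimum relaxes the estimate onto the slowly varying background. Window size, iteration count, and the synthetic spectrum below are all illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening, uniform_filter1d

def morphological_baseline(y, window=31, max_iter=30, tol=1e-8):
    """Baseline estimation by iterated morphological opening.

    Opening (erosion followed by dilation) removes peaks narrower than
    `window`; smoothing the opened signal and taking the element-wise
    minimum then gradually lowers the estimate onto the background.
    The fixed `window` stands in for the paper's adaptively chosen
    structuring element.
    """
    baseline = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        opened = grey_opening(baseline, size=window)
        smoothed = uniform_filter1d(opened, size=window)
        new_baseline = np.minimum(baseline, smoothed)
        if np.max(np.abs(new_baseline - baseline)) < tol:
            break
        baseline = new_baseline
    return baseline

# Synthetic Raman-like spectrum: smooth drift plus two narrow peaks
x = np.linspace(0.0, 1.0, 500)
drift = 0.5 * x + 0.2 * x ** 2
peaks = np.exp(-((x - 0.3) / 0.005) ** 2) + 0.7 * np.exp(-((x - 0.7) / 0.008) ** 2)
estimate = morphological_baseline(drift + peaks)
corrected = drift + peaks - estimate
```

    Because opening is anti-extensive and the update takes a minimum, the estimate never rises above the measured spectrum, so peak heights in the corrected signal are preserved.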

  11. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (LTEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on conventional filtered back projection, and because experimental limitations leave only an incomplete set of measurements, the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  12. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems via iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than conventional adiabatic passage techniques. The influences of various decoherence processes are also examined numerically, and the results show that the scheme is fast and robust against decoherence and operational imperfection.

  13. Heritability of decisions and outcomes of public goods games

    PubMed Central

    Hiraishi, Kai; Shikishima, Chizuru; Yamagata, Shinji; Ando, Juko

    2015-01-01

    Prosociality is one of the most distinctive features of human beings but there are individual differences in cooperative behavior. Employing the twin method, we examined the heritability of cooperativeness and its outcomes on public goods games using a strategy method. In two experiments (Study 1 and Study 2), twin participants were asked to indicate (1) how much they would contribute to a group when they did not know how much the other group members were contributing, and (2) how much they would contribute if they knew the contributions of others. Overall, the heritability estimates were relatively small for each type of decision, but heritability was greater when participants knew that the others had made larger contributions. Using registered decisions in Study 2, we conducted seven Monte Carlo simulations to examine genetic and environmental influences on the expected game payoffs. For the simulated one-shot game, the heritability estimates were small, comparable to those of game decisions. For the simulated iterated games, we found that the genetic influences first decreased, then increased as the numbers of iterations grew. The implication for the evolution of individual differences in prosociality is discussed. PMID:25954213

  14. Three-dimensional inverse modelling of damped elastic wave propagation in the Fourier domain

    NASA Astrophysics Data System (ADS)

    Petrov, Petr V.; Newman, Gregory A.

    2014-09-01

    3-D full waveform inversion (FWI) of seismic wavefields is routinely implemented with explicit time-stepping simulators. A clear advantage of explicit time stepping is the avoidance of solving the large-scale implicit linear systems that arise with frequency-domain formulations. However, FWI using explicit time stepping may require a very fine time step and, as a consequence, significant computational resources and run times. If the computational challenges of wavefield simulation can be handled effectively, an FWI scheme implemented in the frequency domain utilizing only a few frequencies offers a cost-effective alternative to FWI in the time domain. We have therefore implemented a 3-D FWI scheme for elastic wave propagation in the Fourier domain. To overcome the computational bottleneck in wavefield simulation, we have exploited an efficient Krylov iterative solver for the elastic wave equations approximated with second- and fourth-order finite differences. The solver does not exploit multilevel preconditioning for wavefield simulation, but is coupled efficiently to the inversion iteration workflow to reduce computational cost. The workflow is best described as a series of sequential inversion experiments where, in the case of seismic reflection acquisition geometries, the data are laddered such that we first image highly damped data, followed by data in which damping is systematically reduced. The key to our modelling approach is its ability to take advantage of solver efficiency when the elastic wavefields are damped. As the inversion experiment progresses, damping is significantly reduced, effectively simulating non-damped wavefields in the Fourier domain. While the cost of the forward simulation increases as damping is reduced, this is counterbalanced by a reduced cost of the outer inversion iteration, because a better starting model is obtained from the more heavily damped wavefield used in the previous inversion experiment.
For cross-well data, it is also possible to launch a successful inversion experiment without laddering the damping constants. With this type of acquisition geometry, the solver is still quite effective using a small fixed damping constant. To avoid cycle skipping, we also employ a multiscale imaging approach in which the frequency content of the data is likewise laddered (with the data now including both reflection and cross-well acquisition geometries). The inversion process is thus launched using low-frequency data to first recover the long spatial wavelengths of the image. With this image as a new starting model, adding higher-frequency data refines and enhances the resolution of the image. FWI using laddered frequencies with an efficient damping scheme enables reconstructing elastic attributes of the subsurface at a resolution that approaches half the smallest wavelength utilized to image the subsurface. We show the possibility of effectively carrying out such reconstructions using two to six frequencies, depending upon the application. Using the proposed FWI scheme, massively parallel computing resources are essential for reasonable execution times.

  15. Applying the scientific method to small catchment studies: A review of the Panola Mountain experience

    USGS Publications Warehouse

    Hooper, R.P.

    2001-01-01

    A hallmark of the scientific method is its iterative application to a problem to increase and refine understanding of the underlying processes controlling it. Successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been: model evaluation often has gone only as far as goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.

  16. Advances in the high bootstrap fraction regime on DIII-D towards the Q = 5 mission of ITER steady state

    DOE PAGES

    Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.; ...

    2017-03-20

    Recent EAST/DIII-D joint experiments on the high poloidal beta (βP) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high-βP discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at βN ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, the results reported in this paper suggest that the DIII-D high-βP scenario could be a candidate for ITER steady-state operation.

  17. An iterative particle filter approach for coupled hydro-geophysical inversion of a controlled infiltration experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manoli, Gabriele, E-mail: manoli@dmsa.unipd.it; Nicholas School of the Environment, Duke University, Durham, NC 27708; Rossi, Matteo

    The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using the Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first applied to the ERT measurements, the second to the Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate more physical simulation constraints into the process, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator, combined with Archie's law, as the measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated, and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
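A toy sketch of one SIR update on parameter particles, with a linear forward model standing in for the coupled Richards/ERT simulators (all names, the likelihood, and the jitter step are illustrative assumptions, not the authors' assimilation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(k, t):
    # toy stand-in for the Richards + ERT forward models: the
    # "observation" grows linearly with a conductivity-like parameter k
    return k * t

def sir_update(particles, t, obs, obs_std, jitter=0.02):
    """One SIR step on parameter particles: Gaussian likelihood weights,
    systematic resampling, then a small jitter to avoid degeneracy."""
    w = np.exp(-0.5 * ((forward(particles, t) - obs) / obs_std) ** 2)
    w /= w.sum()
    u = (rng.random() + np.arange(len(particles))) / len(particles)
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(particles) - 1)
    return particles[idx] + rng.normal(0.0, jitter, size=len(particles))
```

Repeating the update over a sequence of observations concentrates the particle cloud around the parameter value that best explains the data, which is the mechanism the abstract's iterated variant refines.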

  18. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not possible for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed by the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
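One common iterative linear-unmixing scheme, projected-gradient non-negative least squares, can be sketched as follows (a generic illustration under assumed names, not necessarily one of the specific methods the paper tested):

```python
import numpy as np

def unmix(E, y, n_iter=2000):
    """Estimate non-negative abundances a with y ≈ E @ a by projected
    gradient descent, then normalize to sum to one. E holds one endmember
    spectrum per column (illustrative setup)."""
    step = 1.0 / np.linalg.norm(E.T @ E, 2)  # 1/Lipschitz constant
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        a -= step * (E.T @ (E @ a - y))  # gradient of ½‖y − Ea‖²
        a = np.clip(a, 0.0, None)        # project onto a ≥ 0
    return a / a.sum()
```

With a noiseless mixture of known endmembers this recovers the true abundance vector, which is the kind of single-pixel test the abstract describes.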

  19. Test of 1D carbon-carbon composite prototype tiles for the SPIDER diagnostic calorimeter

    NASA Astrophysics Data System (ADS)

    Serianni, G.; Pimazzoni, A.; Canton, A.; Palma, M. Dalla; Delogu, R.; Fasolo, D.; Franchin, L.; Pasqualotto, R.; Tollin, M.

    2017-08-01

    Additional heating will be provided to the thermonuclear fusion experiment ITER by injection of neutral beams from accelerated negative ions. In the SPIDER test facility, under construction at Consorzio RFX in Padova (Italy), the production of negative ions will be studied and optimised. To this purpose, the STRIKE (Short-Time Retractable Instrumented Kalorimeter Experiment) diagnostic will be used to characterise the SPIDER beam during short operation (several seconds) and to verify whether the beam meets the ITER requirement on the maximum allowed beam non-uniformity (below ±10%). The most important measurements performed by STRIKE are beam uniformity, beamlet divergence and stripping losses. The major components of STRIKE are 16 1D-CFC (Carbon matrix-Carbon Fibre reinforced Composite) tiles, observed on the rear side by a thermal camera. The requirements on the 1D CFC material include a large thermal conductivity along the tile thickness (at least 10 times larger than in the other directions); low specific heat and density; uniform parameters over the tile surface; and the capability to withstand localised heat loads resulting in steep temperature gradients. 1D CFC is therefore a very anisotropic and delicate material, not commercially available, and prototypes are being specifically realised. This contribution gives an overview of the tests performed on the CFC prototype tiles, aimed at verifying their thermal behaviour. The spatial uniformity of the parameters and the ratio between the thermal conductivities are assessed by means of a power laser at Consorzio RFX. Dedicated linear and non-linear simulations are carried out to interpret the experiments and to estimate the thermal conductivities; these simulations are described, and a comparison of the experimental data with the simulation results is presented.

  20. Initial results from divertor heat-flux instrumentation on Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Labombard, B.; Brunner, D.; Payne, J.; Reinke, M.; Terry, J. L.; Hughes, J. W.; Lipschultz, B.; Whyte, D.

    2009-11-01

    Physics-based plasma transport models that can accurately simulate the heat-flux power widths observed in the tokamak boundary are lacking at the present time. Yet this quantity is of fundamental importance for ITER and most critically important for DEMO, a reactor similar to ITER but with ~4 times the power exhaust. In order to improve our understanding, C-Mod, DIII-D and NSTX will aim experiments in FY10 towards characterizing the divertor ``footprint'' and its connection to conditions ``upstream'' in the boundary and core plasmas [2]. Standard IR-based heat-flux measurements are particularly difficult in C-Mod, due to its vertically oriented divertor targets. To overcome this, a suite of embedded heat-flux sensor probes (tile thermocouples, calorimeters, surface thermocouples) combined with IR thermography was installed during the FY09 opening, along with a new divertor bolometer system. This paper will report on initial experiments aimed at unfolding the heat-flux dependencies on plasma operating conditions. [2] A proposed US DoE Joint Facilities Milestone.

  1. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of image quality superior to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties: the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2, while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method seems effective at considerably reducing computational cost in iterative DTS reconstruction while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).
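The 2 × 2 × 2 binning itself is straightforward; a sketch assuming a numpy volume (the helper names and the ROI handling are illustrative assumptions, and the paper embeds the scheme inside a CS iterative solver rather than as a standalone step):

```python
import numpy as np

def bin2(vol):
    """2 × 2 × 2 binning: average each non-overlapping 2-voxel block,
    cutting the voxel count (and memory) by a factor of 8."""
    z, y, x = (s - s % 2 for s in vol.shape)
    v = vol[:z, :y, :x]
    return v.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

def dual_resolution(vol, roi):
    """Coarse volume everywhere plus an unbinned patch for the ROI
    (hypothetical helper; the ROI is a tuple of slices)."""
    return bin2(vol), vol[roi]
```

The coarse grid carries the reconstruction outside the ROI while the unbinned patch preserves full resolution around the diagnostic target.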

  2. Chain Pooling modeling selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent; (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less; and (3) a security-regret strategy to be used in the absence of such prior knowledge.

  3. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to limited spatial resolution, the partial volume effect has been a major degrading factor for quantitative accuracy in emission tomography systems. This study investigates the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM); pGTM followed by multi-target correction (MTC); pGTM with known concentration in the blood pool, the former followed by MTC; and our newly proposed methods, which perform the MTC method iteratively, with the mean values in all regions estimated and updated from the MTC-corrected images at each step of the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large-animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide performance superior to other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood-pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood-pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low-count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.

  4. An Iterative Method for Problems with Multiscale Conductivity

    PubMed Central

    Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je

    2012-01-01

    A model whose conductivity varies greatly across a very thin layer will be considered. It is related to a stable phantom model, invented to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and the outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high-contrast conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed that employs a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that of the reference solution for a sufficiently small mesh size h.
The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238

  5. Integrated tokamak modeling: when physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca

    2017-10-01

    Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial will walk through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion; thermal, momentum and particle transport; external heating and current drive sources; and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims to demonstrate how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help validate them and inform research planning. Work supported by DOE under DE-AC02-09CH1146.

  6. Simulation of Forward and Inverse X-ray Scattering From Shocked Materials

    NASA Astrophysics Data System (ADS)

    Barber, John; Marksteiner, Quinn; Barnes, Cris

    2012-02-01

    The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially coherent intensity profiles of self-amplified spontaneous emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
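Generic iterative phase retrieval of the error-reduction type can be sketched as below; this is a textbook Fienup-style loop under assumed constraints (known support, real non-negative object), not the paper's CXDI reconstruction pipeline:

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=0):
    """Alternately enforce the measured Fourier magnitudes and a real,
    non-negative object confined to the support mask."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))       # Fourier-magnitude constraint
        g = np.real(np.fft.ifft2(G))
        g = np.where(support & (g > 0), g, 0.0)  # support + positivity
    return g
```

Each pass projects onto the two constraint sets in turn, so the Fourier-magnitude mismatch shrinks from the random start toward a consistent image.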

  7. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) was introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the ``warm-start'' strategy in adaptive optics. To our knowledge, this particular way of using the warm start has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Through simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally establish the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
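The value of a warm start is easy to demonstrate with any iterative solver; the plain conjugate-gradient toy below (not FRiM-3D, and with an invented test system) needs fewer iterations when started from the previous frame's solution than from zero:

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, max_iter=500):
    """Plain conjugate gradient for SPD A; returns the solution and the
    number of iterations needed to reach a relative residual below tol."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter
```

In an AO loop consecutive wavefronts are strongly correlated, so the previous command vector is an excellent starting point; the abstract's contribution pushes this idea to the point of removing the iterations entirely.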

  8. ITER Simulations Using the PEDESTAL Module in the PTRANSP Code

    NASA Astrophysics Data System (ADS)

    Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.

    2006-10-01

    PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios, including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that the fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide the boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and the width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A.H. Kritz, Plasma Physics and Controlled Fusion 45, 1939 (2003). [2] T. Onjun, G. Bateman, A.H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).

  9. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms in the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iteratively reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ2), and GCV (which does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. The theoretical derivations in this work related to Jacobian matrix evaluation can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
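For the linear special case (Tikhonov/ridge reconstruction) GCV has a closed form, which conveys the idea without the Jacobian machinery the paper develops for nonlinear algorithms (the function names and the grid search are illustrative):

```python
import numpy as np

def gcv_score(A, y, lam):
    """GCV(λ) = (‖(I − H)y‖²/n) / (1 − tr(H)/n)² for the linear ridge
    reconstruction x = (AᵀA + λI)⁻¹Aᵀy with hat matrix H."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    resid = y - H @ y
    return (resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2

def pick_lambda(A, y, grid):
    # choose the regularization parameter minimizing GCV on a grid
    return min(grid, key=lambda lam: gcv_score(A, y, lam))
```

Note that this linear GCV needs no noise variance, mirroring the abstract's point that GCV (unlike the SURE-based measures) does not require σ².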

  10. NASA UAS Integration into the NAS Project Detect and Avoid Display Evaluations

    NASA Technical Reports Server (NTRS)

    Shively, Jay

    2016-01-01

    As part of the Air Force - NASA Bi-Annual Research Council Meeting, slides will be presented on phase 1 Detect and Avoid (DAA) display evaluations. A series of iterative human-in-the-loop (HITL) experiments was conducted with different display configurations to objectively measure pilot performance in maintaining well clear. To date, four simulations and two mini-HITLs have been conducted. Data from these experiments have been incorporated into a revised alerting structure and included in the RTCA SC 228 Phase 1 Minimum Operational Performance Standards (MOPS) proposal. Plans for phase 2 are briefly discussed.

  11. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves much lower cost function and reconstruction error and higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule and is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
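    A schematic sketch of the idea for a generic scalar cost function: repeated fast exponential quenches, with the restart temperature rescaled from the energy statistics of the previous quench so the system leaves its frozen state. The function name and the standard-deviation rescaling rule here are our illustrative stand-ins, not the authors' exact ensemble-statistics formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulated_quenching(cost, x0, step, t0=1.0, alpha=0.9, n_steps=200, n_quench=5):
        """Repeated fast anneals ("quenches"); after each frozen state the
        temperature is rescaled from the observed energy spread, returning
        the system to a dynamic state before the next quench."""
        x = np.asarray(x0, dtype=float)
        t = t0
        for _ in range(n_quench):
            e = cost(x)
            energies = []
            for _ in range(n_steps):
                cand = x + step * rng.standard_normal(x.shape)
                ec = cost(cand)
                # Metropolis acceptance at the current (rapidly cooled) temperature
                if ec < e or rng.random() < np.exp(-(ec - e) / t):
                    x, e = cand, ec
                energies.append(e)
                t *= alpha                 # fast exponential cooling schedule
            t = np.std(energies) + 1e-12   # rescale temperature for the next quench
        return x
    ```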

  12. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  13. Anderson Acceleration for Fixed-Point Iterations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Homer F.

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
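    As a concrete illustration of the method class this grant concerns, here is a minimal Anderson acceleration loop for a fixed-point iteration x = g(x); the function name and the depth parameter m are illustrative, not from the report.

    ```python
    import numpy as np

    def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
        """Anderson acceleration of x = g(x): combine the last m iterates so
        that the linearized residual is minimized in the least-squares sense."""
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        X, F = [], []                      # histories of iterates and residuals
        for _ in range(max_iter):
            gx = np.atleast_1d(g(x))
            f = gx - x                     # fixed-point residual
            if np.linalg.norm(f) < tol:
                return x
            X.append(x)
            F.append(f)
            X, F = X[-(m + 1):], F[-(m + 1):]
            if len(F) > 1:
                # Least-squares coefficients from the residual differences
                dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
                gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
                dG = np.stack([(X[i + 1] + F[i + 1]) - (X[i] + F[i])
                               for i in range(len(X) - 1)], axis=1)
                x = (X[-1] + F[-1]) - dG @ gamma
            else:
                x = gx                     # plain fixed-point step to start
        return x
    ```

    On a contractive map such as g(x) = cos(x), this typically converges in far fewer iterations than the plain Picard iteration.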

  14. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian-coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full-algorithm, a computationally-cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286

  15. Performance of spectral MSE diagnostic on C-Mod and ITER

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team

    2015-11-01

    The magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made at close to ITER values of Stark splitting (~ Bv⊥) with similar background levels to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in |B| and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG03-96ER-54373 and DE-FC02-99ER54512.

  16. Predicting rotation for ITER via studies of intrinsic torque and momentum transport in DIII-D

    DOE PAGES

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.; ...

    2017-03-30

    Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of Te/Ti and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E x B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E x B shear.

  17. Theory of runaway electrons in ITER: Equations, important parameters, and implications for mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boozer, Allen H., E-mail: ahb17@columbia.edu

    2015-03-15

    The plasma current in ITER cannot be allowed to transfer from thermal to relativistic electron carriers. The potential for damage is too great. Before the final design is chosen for the mitigation system to prevent such a transfer, it is important that the parameters that control the physics be understood. Equations that determine these parameters and their characteristic values are derived. The mitigation benefits of the injection of impurities with the highest possible atomic number Z, and of slowing plasma cooling during halo current mitigation to ≳40 ms in ITER, are discussed. The highest possible Z increases the poloidal flux consumption required for each e-fold in the number of relativistic electrons and reduces the number of high energy seed electrons from which exponentiation builds. Slow cooling of the plasma during halo current mitigation also reduces the electron seed. Existing experiments could test physics elements required for mitigation but cannot carry out an integrated demonstration. ITER itself cannot carry out an integrated demonstration without excessive danger of damage unless the probability of successful mitigation is extremely high. The probability of success depends on the reliability of the theory. Equations required for a reliable Monte Carlo simulation are derived.

  18. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  19. Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.

    2003-01-01

    This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.

  20. On the efficient and reliable numerical solution of rate-and-state friction problems

    NASA Astrophysics Data System (ADS)

    Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno

    2016-03-01

    We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.

  1. Aircraft Rollout Iterative Energy Simulation

    NASA Technical Reports Server (NTRS)

    Kinoshita, L.

    1986-01-01

    Aircraft Rollout Iterative Energy Simulation (ARIES) program analyzes aircraft-brake performance during rollout. Simulates three-degree-of-freedom rollout after nose-gear touchdown. Amount of brake energy dissipated during aircraft landing determines life expectancy of brake pads. ARIES incorporates brake pressure, actual flight data, crosswinds, and runway characteristics to calculate following: brake energy during rollout for up to four independent brake systems; time profiles of rollout distance, velocity, deceleration, and lateral runway position; and all aerodynamic moments on aircraft. ARIES written in FORTRAN 77 for batch execution.

  2. LIDAR TS for ITER core plasma. Part II: simultaneous two wavelength LIDAR TS

    NASA Astrophysics Data System (ADS)

    Gowers, C.; Nielsen, P.; Salzmann, H.

    2017-12-01

    We have shown recently, and in more detail at this conference (Salzmann et al.), that the LIDAR approach to ITER core TS measurements requires only two mirrors in the inaccessible port plug area of the machine. This leads to simplified and robust alignment, lower risk of mirror damage by plasma contamination, and much simpler calibration, compared with the awkward and vulnerable optical geometry of the conventional imaging TS approach currently under development by ITER. In the present work we have extended the simulation code used previously to include the case of launching two laser pulses, of different wavelengths, simultaneously in LIDAR geometry. The aim of this approach is to broaden the choice of lasers available for the diagnostic. In the simulation code it is assumed that two short-duration (300 ps) laser pulses of different wavelengths, from an Nd:YAG laser, are launched through the plasma simultaneously. The temperature and density profiles are deduced in the usual way but from the resulting combined scattered signals in the different spectral channels of the single spectrometer. The spectral response and quantum efficiencies of the detectors used in the simulation are taken from catalogue data for commercially available Hamamatsu MCP-PMTs. The response times, gateability and tolerance to stray light levels of this type of photomultiplier have already been demonstrated in the JET LIDAR system and give sufficient spatial resolution to meet the ITER specification. Here we present the new simulation results from the code. They demonstrate that when the detectors are combined with this two-laser LIDAR approach, the full range of the specified ITER core plasma Te and ne can be measured with sufficient accuracy. So, with commercially available detectors and a simple modification of an Nd:YAG laser similar to that currently being used in the conventional ITER core TS design mentioned above, the ITER requirements can be met.

  3. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification inspired by previous work. The primary contributions are robustness improvements from incorporation of the following four elements: (1) utilize multi-modal and repeated scans, (2) incorporate highly deformable registration, (3) use an extended set of tissue definitions, and (4) use multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessment through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  4. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner.
Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
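    The core JFNK loop described above can be sketched in a few lines: each Newton step solves J dx = -F(x) with a Krylov solver (here SciPy's GMRES), and the Jacobian-vector product is replaced by a finite-difference Frechet derivative so the Jacobian is never formed. The perturbation-size heuristic h below is one common illustrative choice; selecting it well is exactly the difficulty the abstract flags.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk(residual, x0, tol=1e-10, max_newton=50, eps=1e-7):
        """Jacobian-free Newton-Krylov: only residual evaluations are needed."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_newton):
            f = residual(x)
            if np.linalg.norm(f) < tol:
                break
            def jv(v):
                # Finite-difference approximation of the Frechet derivative J v
                h = eps * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
                return (residual(x + h * v) - f) / h
            J = LinearOperator((x.size, x.size), matvec=jv)
            dx, _ = gmres(J, -f)           # inexact inner (Krylov) solve
            x = x + dx
        return x
    ```

    Because only `residual` is called, an existing simulator's residual routine can be wrapped directly, which is the reuse argument made in the abstract.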

  5. Nonuniform update for sparse target recovery in fluorescence molecular tomography accelerated by ordered subsets.

    PubMed

    Zhu, Dianwen; Li, Changqing

    2014-12-01

    Fluorescence molecular tomography (FMT) is a promising imaging modality and has been actively studied in the past two decades since it can locate the specific tumor position three-dimensionally in small animals. However, it remains a challenging task to obtain fast, robust and accurate reconstruction of fluorescent probe distribution in small animals due to the large computational burden, the noisy measurement and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method in combination with L1 regularization and an ordered subsets technique (NUMOS) to take care of the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart by obtaining a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subsets (OS) technique provided additional 5-fold and 3-fold speed enhancements for the simulation and phantom experiments, respectively, without degrading image quality. When compared with the popular L1 algorithms, the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them by obtaining a better image in a much shorter period of time.
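    For reference, the baseline ISTA update that NUMOS is compared against fits in a few lines: a gradient step on the quadratic data term followed by soft thresholding (FISTA adds a Nesterov momentum step on top of the same update). This is a generic sketch, not the authors' implementation.

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=200):
        """ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - b) / L  # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x
    ```

    The uniform step 1/L for every pixel is precisely what the paper's nonuniform preconditioning replaces with per-pixel updates.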

  6. M-estimator for the 3D symmetric Helmert coordinate transformation

    NASA Astrophysics Data System (ADS)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2018-01-01

    The M-estimator for the 3D symmetric Helmert coordinate transformation problem is developed. The small-angle rotation assumption is abandoned. The direction cosine matrix or the quaternion is used to represent the rotation. A 3 × 1 multiplicative error vector is defined to represent the rotation estimation error. An analytical solution can be employed to provide the initial approximation for the iteration, if the outliers are not large. The iteration is carried out using the iteratively reweighted least-squares scheme. In each iteration after the first one, the measurement equation is linearized using the available parameter estimates, the reweighting matrix is constructed using the residuals obtained in the previous iteration, and then the parameter estimates with their variance-covariance matrix are calculated. The influence functions of a single pseudo-measurement on the least-squares estimator and on the M-estimator are derived to theoretically show the robustness. In the solution process, the parameter is rescaled in order to improve the numerical stability. Monte Carlo experiments are conducted to check the developed method. Different cases are considered to investigate whether the assumed stochastic model is correct. The results with the simulated data slightly deviating from the true model are used to show the developed method's statistical efficacy at the assumed stochastic model, its robustness against deviations from the assumed stochastic model, and the validity of the estimated variance-covariance matrix regardless of whether the assumed stochastic model is correct.
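    The iteratively reweighted least-squares scheme described above can be sketched for the simpler linear-regression case: each pass solves a weighted least-squares problem whose weights are rebuilt from the previous pass's residuals, downweighting outliers. The Huber weight function and MAD scale estimate here are common illustrative choices, not necessarily the ones used in the paper.

    ```python
    import numpy as np

    def irls_huber(A, b, c=1.345, n_iter=50):
        """M-estimation by iteratively reweighted least squares (Huber weights)."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS as initial guess
        for _ in range(n_iter):
            r = b - A @ x
            # Robust scale estimate from the median absolute deviation (MAD)
            s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
            u = np.abs(r / s)
            w = np.where(u <= c, 1.0, c / u)       # Huber weight function
            W = np.sqrt(w)
            x = np.linalg.lstsq(A * W[:, None], b * W, rcond=None)[0]
        return x
    ```

    On data with a single gross outlier, the weight of the outlying observation shrinks toward zero over the iterations and the fit locks onto the inliers.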

  7. Software Estimates Costs of Testing Rocket Engines

    NASA Technical Reports Server (NTRS)

    Smith, C. L.

    2003-01-01

    Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.

  8. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
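    The paper's solver is a truncated generalized-conjugate-gradient method for nonsymmetric systems with block ILU/SGS preconditioning; the preconditioned-Krylov skeleton is easiest to see in the symmetric-positive-definite case, sketched here with a simple Jacobi (diagonal) preconditioner standing in for the block preconditioners.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for SPD A; M_inv applies the
        preconditioner to a residual vector."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)                       # preconditioned residual
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:    # residual-norm-reducing stop test
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
    ```

    As in the paper, the choice of preconditioner (here the diagonal; there block ILU or block SGS) dominates the convergence rate, and the stopping tolerance of this inner loop can be tied to the outer Newton iteration's tolerance.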

  9. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
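    The bare value-iteration recursion underlying the ADP algorithm can be illustrated in tabular form. The paper's setting is undiscounted and uses neural networks to approximate the value function and control law; this discounted tabular sketch shows only the core update V_{k+1}(s) = min_a [R(s,a) + gamma * E[V_k(s')]].

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-10):
        """Tabular value iteration for a discounted MDP.

        P has shape (A, S, S): P[a, s, s'] is the transition probability;
        R has shape (S, A): the stage cost."""
        V = np.zeros(R.shape[0])           # arbitrary initialization
        while True:
            Q = R + gamma * (P @ V).T      # (S, A) state-action cost-to-go
            V_new = Q.min(axis=1)          # greedy (optimal) action choice
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
    ```

    The monotone convergence behavior the paper proves (nonincreasing, nondecreasing, or nonmonotonic depending on the initialization) is about exactly this iterate sequence V_k.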

  10. A methodology to determine the elastic moduli of crystals by matching experimental and simulated lattice strain pole figures using discrete harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wielewski, Euan; Boyce, Donald E.; Park, Jun-Sang

    Determining reliable single crystal material parameters for complex polycrystalline materials is a significant challenge for the materials community. In this work, a novel methodology for determining those parameters is outlined and successfully applied to the titanium alloy Ti-6Al-4V. Utilizing the results from a lattice strain pole figure experiment conducted at the Cornell High Energy Synchrotron Source, an iterative approach is used to optimize the single crystal elastic moduli by comparing experimental and simulated lattice strain pole figures at discrete load steps during a uniaxial tensile test. Due to the large number of unique measurements taken during the experiments, comparisons were made by using the discrete spherical harmonic modes of both the experimental and simulated lattice strain pole figures, allowing the complete pole figures to be used to determine the single crystal elastic moduli.

  11. Transport simulations of linear plasma generators with the B2.5-Eirene and EMC3-Eirene codes

    DOE PAGES

    Rapp, Juergen; Owen, Larry W.; Bonnin, X.; ...

    2014-12-20

    Linear plasma generators are cost-effective facilities for simulating the divertor plasma conditions of present and future fusion reactors. For this research, the codes B2.5-Eirene and EMC3-Eirene were extensively used for design studies of the planned Material Plasma Exposure eXperiment (MPEX). Effects on the target plasma of the gas fueling and pumping locations, heating power, device length, magnetic configuration and transport model were studied with B2.5-Eirene. Effects of tilted or vertical targets were calculated with EMC3-Eirene and showed that spreading the incident flux over a larger area leads to lower density, higher temperature and off-axis profile peaking in front of the target. In conclusion, the simulations indicate that with sufficient heating power MPEX can reach target plasma conditions that are similar to those expected in the ITER divertor. B2.5-Eirene simulations of the MAGPIE experiment have been carried out in order to establish an additional benchmark with experimental data from a linear device with helicon wave heating.

  12. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

    Excerpts: "…because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC iterations…" "…Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC results…" Report contents include: A Priori Estimate of Number of MC Iterations; MC Result Accuracy; Using Percentage Error of the Mean to Estimate Number of MC Iterations.
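    The report's "percentage error of the mean" idea can be sketched generically: choose the iteration count n so that the relative confidence-interval half-width z·s/(√n·x̄) from a pilot sample falls below a target percentage. The function below is our illustration, not the report's WM or WSM formulas, and it assumes a nonzero sample mean.

    ```python
    import numpy as np

    def mc_iterations_for_error(samples, pct_err=1.0, z=1.96):
        """From a pilot MC sample, estimate n such that
        100 * z * s / (sqrt(n) * mean) <= pct_err percent."""
        s = np.std(samples, ddof=1)        # sample standard deviation
        m = np.mean(samples)               # assumed nonzero
        return int(np.ceil((100.0 * z * s / (pct_err * m)) ** 2))
    ```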

  13. Iterating between lessons on concepts and procedures can improve mathematics knowledge.

    PubMed

    Rittle-Johnson, Bethany; Koedinger, Kenneth

    2009-09-01

    Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. The purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures. In two classroom experiments, sixth-grade students from two schools participated (N=77 and 26). Students completed six decimal lessons on an intelligent-tutoring system. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons. In both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including the ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments. An iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective on the development of knowledge of concepts and procedures.

  14. Overview of LH experiments in JET with an ITER-like wall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirov, K. K.; Baranov, Yu.; Brix, M.

    2014-02-12

    An overview of the recent results of Lower Hybrid (LH) experiments at JET with the ITER-like wall (ILW) is presented. Topics relevant to LH wave coupling are addressed, as well as issues related to ILW and LH system protections. LH wave coupling was studied in conditions determined by ILW recycling and operational constraints. It was concluded that LH wave coupling was not significantly affected and that the pre-ILW performance could be recovered after optimising the launcher position and local gas puffing. SOL density measurements were performed using a Li-beam diagnostic. Dependencies on the D2 injection rate from the dedicated gas valve, the LH power and the LH launcher position were analysed. SOL density modifications due to LH were modelled by the EDGE2D code assuming SOL heating by collisional dissipation of the LH wave and/or possible ExB drifts in the SOL. The simulations matched the measured SOL profiles reasonably well. Observations of arcs and hotspots with visible and IR cameras viewing the LH launcher are presented.

  15. EC power management and NTM control in ITER

    NASA Astrophysics Data System (ADS)

    Poli, Francesca; Fredrickson, E.; Henderson, M.; Bertelli, N.; Farina, D.; Figini, L.; Nowak, S.; Poli, E.; Sauter, O.

    2016-10-01

    The suppression of Neoclassical Tearing Modes (NTMs) is an essential requirement for the achievement of the demonstration baseline in ITER. The Electron Cyclotron upper launcher is specifically designed to provide highly localized heating and current drive for NTM stabilization. In order to assess the power management for shared applications, we have performed time-dependent simulations for ITER scenarios covering operation from half to full field. The free-boundary TRANSP simulations evolve the magnetic equilibrium and the pressure profiles in response to the heating and current drive sources and are interfaced with a generalized Rutherford equation (GRE) for the evolution of the size and frequency of the magnetic islands. Combined with a feedback control of the EC power and the steering angle, these simulations are used to model the plasma response to NTM control, accounting for the misalignment of the EC deposition with the resonant surfaces, uncertainties in the magnetic equilibrium reconstruction and in the magnetic island detection threshold. Simulations indicate that the threshold for detection of the island should not exceed 2-3 cm, that pre-emptive control is a preferable option, and that for safe operation the power needed for NTM control should be reserved, rather than shared with other applications. Work supported by ITER under IO/RFQ/13/9550/JTR and by DOE under DE-AC02-09CH11466.
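
    The island-width dynamics that such feedback control acts on can be illustrated with a minimal modified-Rutherford-type model. The sketch below is ours: all coefficients are invented for illustration and are not ITER or TRANSP values.

```python
# Toy modified-Rutherford-type equation for the island width w:
#   dw/dt = c * ( Delta' + c_bs / w - c_ec * P_ec / w**2 )
# Delta' < 0 is classically stable, the bootstrap term c_bs/w drives the
# island, and the EC current-drive term suppresses it. Illustrative numbers.

def evolve_island(w0, p_ec, delta_p=-2.0, c_bs=5.0, c_ec=3.0,
                  c=1.0, dt=0.01, steps=4000):
    w = w0
    for _ in range(steps):
        w += dt * c * (delta_p + c_bs / w - c_ec * p_ec / w ** 2)
        w = max(w, 1e-3)  # floor representing full suppression
    return w

# Without EC drive the island saturates near c_bs/|Delta'| = 2.5;
# with EC power it settles at the smaller stable root, 1.5.
w_no_ec = evolve_island(w0=2.0, p_ec=0.0)
w_ec = evolve_island(w0=2.0, p_ec=1.0)
```

With these toy coefficients the saturated widths can be read off analytically (roots of the right-hand side), which makes the example easy to check.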

  16. Three dimensional iterative beam propagation method for optical waveguide devices

    NASA Astrophysics Data System (ADS)

    Ma, Changbao; Van Keuren, Edward

    2006-10-01

    The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicolson scheme, which in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated as a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problems of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared to analytical results to confirm their effectiveness and applicability.
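
    The core numerical step described above, solving the Crank-Nicolson propagation system with an iterative solver rather than the direct Thomas algorithm, can be sketched in 1-D (the paper's method is 3-D and wide-angle; the grid size, wavelength, and step sizes below are illustrative only):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# One Crank-Nicolson paraxial BPM step:
#   (I - i*dz/(4k) * D2) E_new = (I + i*dz/(4k) * D2) E_old
# where D2 is the second-difference operator. Illustrative parameters.
N, dx, dz, k0 = 256, 0.1, 0.5, 2 * np.pi / 1.55

x = (np.arange(N) - N / 2) * dx
E = np.exp(-(x / 3.0) ** 2).astype(complex)  # Gaussian input field

off = np.ones(N - 1)
D2 = sp.diags([off, -2.0 * np.ones(N), off], [-1, 0, 1]) / dx**2

coef = 1j * dz / (4 * k0)
A = sp.identity(N, format="csc", dtype=complex) - coef * D2
B = sp.identity(N, dtype=complex) + coef * D2

rhs = B @ E
E_new, info = spla.bicgstab(A, rhs, x0=E)  # iterative sparse solve
```

Because the Crank-Nicolson step is unitary for a Hermitian D2, the field norm should be conserved to within the solver tolerance, which gives a cheap sanity check.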

  17. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, reduced CPU time by up to 28% compared to the fully implicit method.
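
    The trade-off the paper quantifies, cheaper linear work per Newton iteration at the risk of extra Newton iterations, is the generic full-versus-inexact Newton trade-off. Here is a toy sketch on a small algebraic system (not a reservoir model; the frozen-Jacobian "chord" variant stands in for the inexact-Newton idea):

```python
import numpy as np

def F(x):  # toy nonlinear residual with root (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):  # analytic Jacobian
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

def newton(x0, refresh_jacobian, tol=1e-10, max_iter=100):
    """Full Newton refactors J every step; the frozen-Jacobian (chord)
    variant saves linear-algebra work per iteration at the cost of
    more, linearly-converging iterations."""
    x = np.array(x0, dtype=float)
    J_fixed = J(x)
    for it in range(1, max_iter + 1):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, it
        Jk = J(x) if refresh_jacobian else J_fixed
        x = x - np.linalg.solve(Jk, r)
    return x, max_iter

x_full, n_full = newton([1.0, 1.5], refresh_jacobian=True)
x_chord, n_chord = newton([1.0, 1.5], refresh_jacobian=False)
```

Both variants converge to the same root, but the chord iteration needs noticeably more steps, mirroring the work-count-versus-iteration-count balance the paper measures.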

  18. Two-wavelength LIDAR Thomson scattering for ITER core plasma

    NASA Astrophysics Data System (ADS)

    Nielsen, P.; Gowers, C.; Salzmann, H.

    2017-07-01

    Our proposal for a LIDAR Thomson scattering system to measure Te and ne profiles in the ITER core plasma is based on experience with the LIDAR system on JET, which is still operational after 30 years. The design uses currently available technology and complies with the measurement requirements given by ITER. In addition, it offers the following advantages over the conventional imaging approach currently being adopted by ITER: 1) no gas fill of the vessel is required for absolute calibration; 2) easier alignment; 3) measurements over almost the complete plasma diameter; 4) only two mirrors as front optics. For a given laser wavelength, the dynamic range of the Te measurements is mainly limited by the collection optics' transmission roll-off in the blue and the range of spectral sensitivity of the required fast photomultipliers. With the originally proposed Ti:Sapphire laser, measurements of the envisaged maximum temperature of 40 keV are marginally possible. Here we present encouraging simulation results on the use of other laser systems and on the use of two lasers with different wavelengths. Alternating two wavelengths was proposed as early as 1997 as a method for calibrating the transmission of the collection system. In the present analysis, the two laser pulses are injected simultaneously. We find that the use of Nd:YAG lasers operated at the fundamental and second harmonic, respectively, yields excellent results and preserves the spectral recalibration feature.

  19. Curvilinear Immersed Boundary Method for Simulating Fluid Structure Interaction with Complex 3D Rigid Bodies

    PubMed Central

    Borazjani, Iman; Ge, Liang; Sotiropoulos, Fotis

    2010-01-01

    The sharp-interface CURVIB approach of Ge and Sotiropoulos [L. Ge, F. Sotiropoulos, A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries, Journal of Computational Physics 225 (2007) 1782–1809] is extended to simulate fluid structure interaction (FSI) problems involving complex 3D rigid bodies undergoing large structural displacements. The FSI solver adopts the partitioned FSI solution approach and both loose and strong coupling strategies are implemented. The interfaces between immersed bodies and the fluid are discretized with a Lagrangian grid and tracked with an explicit front-tracking approach. An efficient ray-tracing algorithm is developed to quickly identify the relationship between the background grid and the moving bodies. Numerical experiments are carried out for two FSI problems: vortex induced vibration of elastically mounted cylinders and flow through a bileaflet mechanical heart valve at physiologic conditions. For both cases the computed results are in excellent agreement with benchmark simulations and experimental measurements. The numerical experiments suggest that both the properties of the structure (mass, geometry) and the local flow conditions can play an important role in determining the stability of the FSI algorithm. Under certain conditions unconditionally unstable iteration schemes result even when strong coupling FSI is employed. For such cases, however, combining the strong-coupling iteration with under-relaxation in conjunction with the Aitken’s acceleration technique is shown to effectively resolve the stability problems. A theoretical analysis is presented to explain the findings of the numerical experiments. 
It is shown that the ratio of the added mass to the mass of the structure, as well as the sign of the local time rate of change of the force or moment imparted on the structure by the fluid, determine the stability and convergence of the FSI algorithm. The stabilizing role of under-relaxation is also clarified, and an upper bound on the under-relaxation coefficient required for stability is derived. PMID:20981246
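
    The Aitken-accelerated under-relaxation discussed above can be demonstrated on a scalar fixed-point problem, a toy stand-in for the partitioned FSI coupling iteration (the function and coefficients below are ours, purely illustrative):

```python
def solve_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=50):
    """Under-relaxed fixed-point iteration with Aitken's dynamic relaxation:
        x_{k+1} = x_k + omega_k * r_k,   r_k = g(x_k) - x_k,
        omega_k = -omega_{k-1} * r_{k-1} / (r_k - r_{k-1})   (scalar form)
    """
    x, omega = x0, omega0
    r_old = None
    for _ in range(max_iter):
        r = g(x) - x
        if abs(r) < tol:
            return x
        if r_old is not None:
            omega = -omega * r_old / (r - r_old)  # Aitken update
        x, r_old = x + omega * r, r
    return x

# |g'| = 2 > 1, so the plain (omega = 1) iteration diverges, while the
# Aitken-relaxed iteration converges to the fixed point x* = 1.
x_star = solve_fixed_point(lambda x: -2.0 * x + 3.0, x0=0.0)
```

For a linear map the Aitken update recovers the exact fixed point after two relaxed steps, which is why the scheme is so effective at stabilizing otherwise divergent coupling iterations.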

  20. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality. Copyright © 2017 Elsevier Inc. All rights reserved.
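
    The dictionary-matching step that MRF reconstruction, accelerated or not, is built on can be sketched as a normalized inner-product search. The toy mono-exponential dictionary below is illustrative; real MRF dictionaries are simulated from the pulse sequence:

```python
import numpy as np

t = np.linspace(0.01, 0.3, 50)          # sampling times (s), illustrative
T2_grid = np.linspace(0.02, 0.2, 181)   # candidate T2 values (s)

# Dictionary of unit-norm fingerprints, one column per candidate T2.
D = np.exp(-t[:, None] / T2_grid[None, :])
D /= np.linalg.norm(D, axis=0)

# Measured signal (noiseless here), normalized the same way.
T2_true = 0.085
sig = np.exp(-t / T2_true)
sig /= np.linalg.norm(sig)

# Exhaustive inner-product match; accelerated searches (trees, compression)
# replace this argmax with a faster approximate lookup.
best = np.argmax(D.T @ sig)
T2_est = T2_grid[best]
```

Since the test signal coincides with a dictionary atom, the match recovers the true parameter exactly; with noise the estimate degrades gracefully to the nearest atom.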

  1. Optimized up-down asymmetry to drive fast intrinsic rotation in tokamaks

    NASA Astrophysics Data System (ADS)

    Ball, Justin; Parra, Felix I.; Landreman, Matt; Barnes, Michael

    2018-02-01

    Breaking the up-down symmetry of the tokamak poloidal cross-section can significantly increase the spontaneous rotation due to turbulent momentum transport. In this work, we optimize the shape of flux surfaces with both tilted elongation and tilted triangularity in order to maximize this drive of intrinsic rotation. Nonlinear gyrokinetic simulations demonstrate that adding optimally-tilted triangularity can double the momentum transport of a tilted elliptical shape. This work indicates that tilting the elongation and triangularity in an ITER-like device can reduce the energy transport and drive intrinsic rotation with an Alfvén Mach number of roughly 1%. This rotation is four times larger than the rotation expected in ITER and is approximately what is needed to stabilize MHD instabilities. It is shown that this optimal shape can be created using the shaping coils of several present-day experiments.

  2. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Zuo, Chao; Idir, Mourad

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulation with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verifies the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which makes the TIE technique with a hard aperture a more flexible phase retrieval tool in practical measurements.
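
    The building block of such methods, a Poisson-type solve with Neumann boundaries via discrete cosine transforms, can be sketched as follows. This is a plain rectangular-domain DCT solve; the paper's iterative compensation for arbitrary apertures and non-uniform intensity is not reproduced here:

```python
import numpy as np
from scipy.fft import dctn, idctn

def poisson_neumann_dct(f, h=1.0):
    """Solve the 5-point discrete Poisson equation with Neumann (reflective)
    boundaries via DCT-II; the solution is defined up to an additive constant."""
    ny, nx = f.shape
    f_hat = dctn(f, type=2, norm='ortho')
    lam_y = (2 * np.cos(np.pi * np.arange(ny) / ny) - 2) / h**2
    lam_x = (2 * np.cos(np.pi * np.arange(nx) / nx) - 2) / h**2
    lam = lam_y[:, None] + lam_x[None, :]
    lam[0, 0] = 1.0                 # zero mode: fix the free constant
    phi_hat = f_hat / lam
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, type=2, norm='ortho')

# Self-consistency check: apply the reflective 5-point Laplacian to a random
# field, then recover the field (up to its mean) from its Laplacian.
rng = np.random.default_rng(1)
phi = rng.standard_normal((16, 16))
p = np.pad(phi, 1, mode='symmetric')
f = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * phi
phi_rec = poisson_neumann_dct(f)
```

The DCT-II basis exactly diagonalizes the 5-point Laplacian under half-sample reflective boundaries, so the recovery is exact to machine precision on a plain rectangle; the aperture and non-uniformity handling is where the paper's iterative compensation comes in.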

  3. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE PAGES

    Huang, Lei; Zuo, Chao; Idir, Mourad; ...

    2015-04-21

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulation with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verifies the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which makes the TIE technique with a hard aperture a more flexible phase retrieval tool in practical measurements.

  4. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
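
    A minimal noiseless sketch of the matrix-pencil pole estimate (our own illustrative implementation, not the authors' code; the pencil parameter and test signal are made up):

```python
import numpy as np

def matrix_pencil_poles(y, M, L=None):
    """Estimate M signal poles of y[n] = sum_i a_i * z_i**n via the matrix pencil.

    Builds shifted Hankel matrices Y1, Y2 from y and returns the M
    largest-magnitude eigenvalues of pinv(Y1) @ Y2, which approximate z_i.
    """
    N = len(y)
    L = L or N // 3  # pencil parameter, M <= L <= N - M
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel, (N-L) x (L+1)
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    eigs = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return eigs[np.argsort(-np.abs(eigs))][:M]

# Two damped complex exponentials with known ground-truth poles.
z_true = [0.90 * np.exp(1j * 0.3), 0.95 * np.exp(-1j * 0.5)]
n = np.arange(40)
y = sum(z ** n for z in z_true)
z_est = matrix_pencil_poles(y, M=2)
```

In the noiseless case the nonzero eigenvalues of the pencil equal the poles essentially exactly; the paper's point is that this estimate degrades more gracefully with noise than root-MUSIC.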

  5. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194

  6. Deblurring in digital tomosynthesis by iterative self-layer subtraction

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung

    2010-04-01

    Recent developments in large-area flat-panel detectors have renewed interest in tomosynthesis for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers notably from a lack of sharpness in the reconstructed images because of blur artifacts, which are the superposition of out-of-plane objects. In this study, we have devised an intuitively simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward and backward projection procedure to determine the blur artifact affecting the plane-of-interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is addressed.

  7. Multiplicative noise removal through fractional order tv-based model and fast numerical schemes for its approximation

    NASA Astrophysics Data System (ADS)

    Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad

    2017-07-01

    This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used, namely an iterative scheme based on duality theory and a majorization-minimization algorithm (MMA). To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying a trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.

  8. Critical current density measurement of striated multifilament-coated conductors using a scanning Hall probe microscope

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Fen; Kochat, Mehdi; Majkic, Goran; Selvamanickam, Venkat

    2016-08-01

    In this paper the authors succeed in measuring the critical current density (Jc) of multifilament coated conductors (CCs) with filaments as narrow as 0.25 mm using the scanning Hall probe microscope (SHPM) technique. A new iterative method of data analysis is developed to make the calculation of Jc for thin filaments possible, even without a very small scan distance. The authors also discuss in detail the advantages and limitations of the iterative method using both simulation and experimental results. The results of the new method correspond well with the traditional fast Fourier transform method where the latter is still applicable. However, the new method is applicable to filamentized CCs under much wider measurement conditions, such as thin filaments and a large scan distance, thus overcoming the barrier to applying the SHPM technique to Jc measurement of long filamentized CCs with narrow filaments.

  9. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows even large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
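
    The complexity argument can be made concrete with a matrix-free operator: instead of forming and eigendecomposing an N x N matrix, apply the operator in O(N log N) via FFT inside an iterative Krylov solver. The circulant operator below is purely illustrative and not the FRIM formulation itself:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

N = 512
kernel = np.zeros(N)
kernel[0], kernel[1], kernel[-1] = 2.5, -1.0, -1.0  # SPD circulant stencil
k_hat = np.fft.fft(kernel)                          # operator spectrum

def matvec(x):
    # O(N log N) application of the circulant operator via FFT,
    # instead of an O(N^2) dense matvec or O(N^3) eigendecomposition.
    return np.real(np.fft.ifft(k_hat * np.fft.fft(x)))

A = LinearOperator((N, N), matvec=matvec, dtype=float)
b = np.random.default_rng(2).standard_normal(N)
x, info = gmres(A, b)  # Krylov iteration; only matvec is ever needed
```

The solver touches the operator exclusively through `matvec`, which is the structural point: iterative methods let the fast transform replace the explicit matrix entirely.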

  10. An evaluation of different setups for simulating lighting characteristics

    NASA Astrophysics Data System (ADS)

    Salters, Bart; Murdoch, Michael; Sekulovksi, Dragan; Chen, Shih-Han; Seuntiens, Pieter

    2012-03-01

    The advance of technology continuously enables new luminaire designs and concepts. Evaluating such designs has traditionally been done using actual prototypes, in a real environment. The iterations needed to build, verify, and improve luminaire designs incur substantial costs and slow down the design process. A more attractive way is to evaluate designs using simulations, as they can be made cheaper and quicker for a wider variety of prototypes. However, the value of such simulations is determined by how closely they predict the outcome of actual perception experiments. In this paper, we discuss an actual perception experiment including several lighting settings in a normal office environment. The same office environment also has been modeled using different software tools, and photo-realistic renderings have been created of these models. These renderings were subsequently processed using various tonemapping operators in preparation for display. The total imaging chain can be considered a simulation setup, and we have executed several perception experiments on different setups. Our real interest is in finding which imaging chain gives us the best result, or in other words, which of them yields the closest match between virtual and real experiment. To answer this question, first of all an answer has to be found to the question, "which simulation setup matches the real world best?" As there is no unique, widely accepted measure to describe the performance of a certain setup, we consider a number of options and discuss the reasoning behind them along with their advantages and disadvantages.

  11. 3-D Analysis of Flanged Joints Through Various Preload Methods Using ANSYS

    NASA Astrophysics Data System (ADS)

    Murugan, Jeyaraj Paul; Kurian, Thomas; Jayaprakash, Janardhan; Sreedharapanickar, Somanath

    2015-10-01

    Flanged joints are employed in aerospace solid rocket motor hardware for the integration of various systems or subsystems. Hence, the design of flanged joints is very important in ensuring the integrity of the motor while functioning. As these joints are subjected to high loads due to internal pressure acting inside the motor chamber, an appropriate preload is required to be applied to the joint before subjecting it to external load. Preload, also known as clamp load, is applied on the fastener and helps to hold the mating flanges together. Generally, preload is simulated as a thermal load and the exact preload is obtained through a number of iterations. In fact, more iterations are required when considering the material nonlinearity of the bolt. This way of simulation takes more computational time to generate the required preload. Nowadays, most commercial software packages use pretension elements for simulating the preload. This element does not require iterations for inducing the preload, and the model can be solved with a single iteration. This approach takes less computational time, and thus one can easily study the characteristics of the joint by varying the preload. When the structure contains a larger number of joints with different sizes of fasteners, pretension elements can be used instead of the thermal load approach for simulating each size of fastener. This paper covers the details of analyses carried out simulating the preload through various options, viz., thermal load, the initial state command, and pretension elements, using the ANSYS finite element package.

  12. TMAP-7 simulation of D2 thermal release data from Be co-deposited layers

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Schwarz-Selinger, T.; Yu, J. H.; Doerner, R. P.

    2013-07-01

    The efficacy of (1) bake-out at 513 K and 623 K, and (2) thermal transient (10 ms) loading up to 1000 K, is explored for reducing D inventory in 1 μm thick Be-D (D/Be ˜0.1) co-deposited layers formed at 323 K for experiment (1) and ˜500 K for experiment (2). D release data from co-deposits are obtained by thermal desorption and used to validate a model input into the Tritium Migration & Analysis Program 7 (TMAP). In (1), good agreement with experiment is found for a TMAP model incorporating traps with activation energies of 0.80 eV and 0.98 eV, whereas an additional 2 eV trap was required to model experiment (2). Thermal release is found to be trap limited, but simulations are optimal when surface recombination is taken into account. Results suggest that thick built-up co-deposited layers will hinder ITER inventory control, and that bake periods (˜1 day) will be more effective in inventory reduction than transient thermal loading.
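
    The trap-limited release picture can be illustrated with a simple first-order detrapping model during a linear temperature ramp. The two trap energies are taken from the paper's fit; the attempt frequency, ramp rate, and temperature window are illustrative assumptions of ours:

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU = 1.0e13      # attempt frequency, 1/s (assumed, typical order of magnitude)

def release_spectrum(energies, occupancies, beta=1.0, T0=250.0, T1=500.0, dt=0.01):
    """First-order detrapping dN/dt = -nu*exp(-E/(kB*T))*N during a linear
    ramp T = T0 + beta*t; returns the temperature axis and total release flux."""
    energies = np.asarray(energies, dtype=float)
    N = np.asarray(occupancies, dtype=float).copy()
    T_axis = np.arange(T0, T1, beta * dt)
    flux = np.empty_like(T_axis)
    for i, T in enumerate(T_axis):
        k = NU * np.exp(-energies / (K_B * T))
        released = N * (1.0 - np.exp(-k * dt))  # exact one-step decay, stable
        N -= released
        flux[i] = released.sum() / dt
    return T_axis, flux

# Two traps (0.80 eV and 0.98 eV, as in the TMAP fit) give two release peaks,
# the deeper trap releasing at a higher temperature.
T_axis, flux = release_spectrum([0.80, 0.98], [1.0, 1.0])
```

Mass balance provides a simple check: integrating the flux over the ramp recovers the initial trapped inventory.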

  13. Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol

    PubMed Central

    Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude

    2016-01-01

    Background Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback. Objective The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. Methods This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one SID or classical 30-minute, audio-video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. Results This study is in its preliminary stages and the results are expected to be made available by April, 2016. 
Conclusions This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students. PMID:26888076

  14. Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol.

    PubMed

    Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude

    2016-02-17

    Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback. The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one SID or classical 30-minute, audio-video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. 
This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.

  15. Advances in the high bootstrap fraction regime on DIII-D towards the Q  =  5 mission of ITER steady state

    NASA Astrophysics Data System (ADS)

    Qian, J. P.; Garofalo, A. M.; Gong, X. Z.; Ren, Q. L.; Ding, S. Y.; Solomon, W. M.; Xu, G. S.; Grierson, B. A.; Guo, W. F.; Holcomb, C. T.; McClenaghan, J.; McKee, G. R.; Pan, C. K.; Huang, J.; Staebler, G. M.; Wan, B. N.

    2017-05-01

    Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ⩽ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results (Garofalo et al, IAEA 2014; Gong et al 2014 IAEA Int. Conf. on Fusion Energy). Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q95 ~ 7. With optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Results reported in this paper suggest that the DIII-D high β_P scenario could be a candidate for ITER steady state operation.

  16. Nonlinear MHD simulations of QH-mode DIII-D plasmas and implications for ITER high Q scenarios

    NASA Astrophysics Data System (ADS)

    Liu, F.; Huijsmans, G. T. A.; Loarte, A.; Garofalo, A. M.; Solomon, W. M.; Hoelzl, M.; Nkonga, B.; Pamela, S.; Becoulet, M.; Orain, F.; Van Vugt, D.

    2018-01-01

    In nonlinear MHD simulations of DIII-D QH-mode plasmas it has been found that low-n kink/peeling modes (KPMs) are unstable and grow to a saturated kink-peeling mode. The dominant saturated KPMs are localised toroidally by nonlinear coupling of harmonics, and their features, such as mode frequencies, density fluctuations and their effect on pedestal particle and energy transport, are in good agreement with observations of the edge harmonic oscillation typically present in DIII-D QH-mode experiments. The nonlinear evolution of MHD modes, including both kink-peeling modes and ballooning modes, is investigated through MHD simulations by varying the pedestal current and pressure relative to the initial conditions of the DIII-D QH-mode plasma. The edge current and pressure at the pedestal are the key parameters determining whether the plasma saturates to a QH-mode regime or to a ballooning-mode-dominated regime. The influence of E × B flow and its shear on the QH-mode plasma has been investigated: E × B flow shear has a strong stabilisation effect on the medium- to high-n modes but is destabilising for the n = 2 mode. The QH-mode extrapolation results for an ITER Q = 10 plasma show that the pedestal currents are large enough to destabilise n = 1-5 KPMs, leading to a stationary saturated kink-peeling mode.

  17. Phantom experiments to improve parathyroid lesion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.

    2007-12-15

    This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using 99mTc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 99mTc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two Plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75±50 counts/pixel) and target-to-background (T:B) ratios (range = 2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus filtered backprojection (FB), but visual readings were significantly more accurate than T:B (88±6% versus 68±6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88±6% versus 58±6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.
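
    The ordered subset expectation maximization reconstruction referred to above can be sketched in miniature: each sub-iteration multiplicatively updates the image estimate using only a subset of the projections. The tiny system matrix and noiseless data below are invented for illustration and are not a SPECT model.

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=10):
    """Ordered-subset EM sketch for y ~ A @ x with nonnegative x."""
    x = np.ones(A.shape[1])                    # flat initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:                      # one update per subset
            As, ys = A[s], y[s]
            ratio = ys / (As @ x + 1e-12)      # measured / estimated projections
            x *= (As.T @ ratio) / (As.T @ np.ones(len(s)) + 1e-12)
    return x

A = np.array([[1., 0.], [0., 1.], [1., 1.]])   # toy projection matrix
x_true = np.array([2., 3.])
x = osem(A, A @ x_true)                        # reconstruct from noiseless data
```

    With noiseless, consistent data the multiplicative update leaves the true image fixed, so the estimate converges to it.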

  18. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.

  19. Radiofrequency pulse design using nonlinear gradient magnetic fields.

    PubMed

    Kopanoglu, Emre; Constable, R Todd

    2015-09-01

    An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest.
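
    The matching-pursuit selection step mentioned above, greedily picking at each iteration the encoding function most correlated with the current residual, can be sketched generically. The random unit-norm dictionary and the two-atom target below are invented stand-ins for SEFs and a target excitation profile.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
target = 2.0 * D[:, 7] + 1.0 * D[:, 42]   # "profile" built from two atoms

residual = target.copy()
chosen, coeffs = [], []
for _ in range(5):
    corr = D.T @ residual                 # correlate atoms with residual
    k = int(np.argmax(np.abs(corr)))      # best-matching atom
    chosen.append(k)
    coeffs.append(corr[k])                # keep coefficient for reconstruction
    residual = residual - corr[k] * D[:, k]
```

    Each pass strictly shrinks the residual, so the selected atoms progressively explain the target profile.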

  20. Long-term fuel retention and release in JET ITER-Like Wall at ITER-relevant baking temperatures

    NASA Astrophysics Data System (ADS)

    Heinola, K.; Likonen, J.; Ahlgren, T.; Brezinsek, S.; De Temmerman, G.; Jepu, I.; Matthews, G. F.; Pitts, R. A.; Widdowson, A.; Contributors, JET

    2017-08-01

    The fuel outgassing efficiency from plasma-facing components exposed in JET-ILW has been studied at ITER-relevant baking temperatures. Samples retrieved from the W divertor and the Be main chamber were annealed at 350 and 240 °C, respectively. Annealing was performed with thermal desorption spectrometry (TDS) for 0, 5 and 15 h to study the deuterium removal effectiveness at the nominal baking temperatures. The remaining fraction was determined by emptying the samples fully of deuterium, heating the W and Be samples up to 1000 and 775 °C, respectively. Results showed that deposits in the divertor increase the retention remaining at temperatures above baking. The highest remaining fractions, 54 and 87%, were observed for deposit thicknesses of 10 and 40 μm, respectively. Substantially high fractions were also obtained in the main chamber samples from the deposit-free erosion zone of the limiter midplane, in which the dominant fuel retention mechanism is via implantation: 15 h of annealing left more than 90% of the deuterium retained. TDS results from the divertor were simulated with TMAP7 calculations. The spectra were modelled with three deuterium activation energies, resulting in good agreement with the experiments.

  1. Estimation of Longitudinal Force and Sideslip Angle for Intelligent Four-Wheel Independent Drive Electric Vehicles by Observer Iteration and Information Fusion.

    PubMed

    Chen, Te; Chen, Long; Xu, Xing; Cai, Yingfeng; Jiang, Haobin; Sun, Xiaoqiang

    2018-04-20

    Exact estimation of the longitudinal force and sideslip angle is important for lateral stability and path-following control of four-wheel independent drive electric vehicles. This paper presents an effective method for longitudinal force and sideslip angle estimation by observer iteration and information fusion for four-wheel independent drive electric vehicles. The electric driving wheel model is introduced into the vehicle modeling process and used for longitudinal force estimation; the longitudinal force reconstruction equation is obtained via model decoupling; a Luenberger observer and a high-order sliding mode observer are combined for longitudinal force observer design; and a Kalman filter is applied to restrain the influence of noise. Using the estimated longitudinal force, an estimation strategy is then proposed based on observer iteration and information fusion, in which the Luenberger observer is applied to achieve an a priori estimation using fewer sensor measurements, the extended Kalman filter is used for a posteriori estimation with higher accuracy, and a fuzzy weight controller is used to enhance the adaptive ability of the observer system. Simulations and experiments are carried out, and the effectiveness of the proposed estimation method is verified.
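
    The Luenberger observer at the core of such schemes corrects a model prediction with the measured output error. A minimal discrete-time sketch follows; the two-state toy model and the hand-picked gain are invented for illustration and are not the vehicle model of the paper.

```python
import numpy as np

A = np.array([[1.0, 0.1],      # state transition (position, velocity toy model)
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])     # only the first state is measured
L = np.array([[0.5],           # observer gain, chosen so A - L C is stable
              [1.0]])

x = np.array([0.0, 1.0])       # true state (unknown to the observer)
x_hat = np.zeros(2)            # observer estimate
for _ in range(100):
    y = C @ x                              # sensor measurement
    x_hat = A @ x_hat + L @ (y - C @ x_hat)  # predict + output-error correction
    x = A @ x                              # true plant advances

# The estimation error obeys e+ = (A - L C) e, which decays here.
```

    The second, unmeasured state is reconstructed purely through the gain acting on the output error.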

  2. Estimation of Longitudinal Force and Sideslip Angle for Intelligent Four-Wheel Independent Drive Electric Vehicles by Observer Iteration and Information Fusion

    PubMed Central

    Chen, Long; Xu, Xing; Cai, Yingfeng; Jiang, Haobin; Sun, Xiaoqiang

    2018-01-01

    Exact estimation of the longitudinal force and sideslip angle is important for lateral stability and path-following control of four-wheel independent drive electric vehicles. This paper presents an effective method for longitudinal force and sideslip angle estimation by observer iteration and information fusion for four-wheel independent drive electric vehicles. The electric driving wheel model is introduced into the vehicle modeling process and used for longitudinal force estimation; the longitudinal force reconstruction equation is obtained via model decoupling; a Luenberger observer and a high-order sliding mode observer are combined for longitudinal force observer design; and a Kalman filter is applied to restrain the influence of noise. Using the estimated longitudinal force, an estimation strategy is then proposed based on observer iteration and information fusion, in which the Luenberger observer is applied to achieve an a priori estimation using fewer sensor measurements, the extended Kalman filter is used for a posteriori estimation with higher accuracy, and a fuzzy weight controller is used to enhance the adaptive ability of the observer system. Simulations and experiments are carried out, and the effectiveness of the proposed estimation method is verified. PMID:29677124

  3. Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber

    NASA Astrophysics Data System (ADS)

    Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit

    2007-10-01

    This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
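
    The iterative weighted least-squares idea, refit the track and then reweight hits by their residuals so that contaminated clusters lose influence, can be sketched for a straight-line track. The inverse-squared-residual weighting and the toy hits below are illustrative; the paper derives its weights from the clean/dirty cluster classification instead.

```python
import numpy as np

def irls_line_fit(z, y, n_iter=5, eps=1e-6):
    """Iteratively re-weighted least squares for y ~ a*z + b.
    Hits with large residuals (e.g. background-contaminated clusters)
    receive smaller weights on each pass."""
    w = np.ones_like(y)
    X = np.column_stack([z, np.ones_like(z)])
    for _ in range(n_iter):
        W = np.diag(w)
        a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted LS fit
        r = y - (a * z + b)                               # residuals
        w = 1.0 / (r**2 + eps)                            # downweight outliers
    return a, b

z = np.array([0., 1., 2., 3., 4.])    # measurement planes
y = 0.5 * z + 1.0                     # true track: slope 0.5, intercept 1.0
y[2] += 2.0                           # one "dirty" cluster pulls a hit off track
a, b = irls_line_fit(z, y)
```

    After a few passes the contaminated hit carries negligible weight and the fit recovers the clean track parameters.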

  4. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in a column to differ for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.
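
    The core idea, treating the work imbalance between neighbouring sub-domains as a residual and relaxing boundary positions until it vanishes, can be sketched in one dimension. This is simple relaxation only (no multigrid smoother), and the synthetic work profile is invented for illustration.

```python
import numpy as np

def balance_1d(work, n_dom, n_iter=200, relax=0.5):
    """Relax interior sub-domain boundaries until each domain carries
    roughly equal work: a residual-minimisation view of load balancing."""
    cum = np.concatenate([[0.0], np.cumsum(work)])   # cumulative work
    grid = np.arange(len(cum), dtype=float)
    centers = np.arange(len(work)) + 0.5
    x = np.linspace(0.0, len(work), n_dom + 1)       # boundary positions
    for _ in range(n_iter):
        loads = np.diff(np.interp(x, grid, cum))     # work per domain
        for i in range(1, n_dom):                    # move interior boundaries
            dens = np.interp(x[i], centers, work)    # local work density
            # shift against the imbalance of the two neighbouring domains
            x[i] += relax * (loads[i] - loads[i - 1]) / (2.0 * dens)
        x[1:-1] = np.clip(x[1:-1], 0.0, float(len(work)))
    return x, np.diff(np.interp(x, grid, cum))

work = np.ones(100)
work[:20] = 5.0                       # hot region: 5x the work per cell
x, loads = balance_1d(work, 4)
```

    Starting from a uniform split (loads of 105, 25, 25, 25), the boundaries drift into the hot region until every domain carries close to the mean load.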

  5. Sensing resonant objects in the presence of noise and clutter using iterative, single-channel acoustic time reversal

    NASA Astrophysics Data System (ADS)

    Waters, Zachary John

    The presence of noise and coherent returns from clutter often confounds efforts to acoustically detect and identify target objects buried in inhomogeneous media. Using iterative time reversal with a single channel transducer, returns from resonant targets are enhanced, yielding convergence to a narrowband waveform characteristic of the dominant mode in a target's elastic scattering response. The procedure consists of exciting the target with a broadband acoustic pulse, sampling the return using a finite time window, reversing the signal in time, and using this reversed signal as the source waveform for the next interrogation. Scaled laboratory experiments (0.4-2 MHz) are performed employing a piston transducer and spherical targets suspended in the free field and buried in a sediment phantom. In conjunction with numerical simulations, these experiments provide an inexpensive and highly controlled means with which to examine the efficacy of the technique. Signal-to-noise enhancement of target echoes is demonstrated. The methodology reported provides a means to extract both time and frequency information for surface waves that propagate on an elastic target. Methods developed in the laboratory are then applied in medium scale (20-200 kHz) pond experiments for the detection of a steel shell buried in sandy sediment.
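
    The iterate-and-reverse loop described above can be sketched in discrete time: convolve the outgoing waveform with the target's impulse response, keep a finite receive window, time-reverse, normalise, and repeat. The two-mode damped response below is an invented stand-in for a real elastic scattering response; the loop converges toward a narrowband waveform at the dominant mode.

```python
import numpy as np

fs = 10_000.0
t = np.arange(0, 0.2, 1 / fs)              # 2000-sample receive window
# Invented impulse response: dominant 1.0 kHz mode plus a weaker 1.6 kHz mode
h = np.exp(-100 * t) * (np.sin(2 * np.pi * 1000 * t)
                        + 0.5 * np.sin(2 * np.pi * 1600 * t))

s = np.zeros_like(t)
s[0] = 1.0                                 # broadband probe pulse
for _ in range(15):
    echo = np.convolve(s, h)[: len(t)]     # scattering + finite time window
    s = echo[::-1] / np.max(np.abs(echo))  # time-reverse and renormalise

spectrum = np.abs(np.fft.rfft(s))
f_dominant = np.argmax(spectrum) * fs / len(t)
```

    Each pass multiplies the waveform's spectral magnitude by that of the response, so energy piles up at the strongest resonance.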

  6. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  7. Prediction, experimental results and analysis of the ITER TF insert coil quench propagation tests, using the 4C code

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.

    2018-07-01

    The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (former JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant for the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code capability to accurately predict the quench propagation, and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, as well as in the early evolution of the inlet and outlet He mass flow rate. Based on the lessons learned in the predictive exercise, the model is then refined to try and improve a posteriori (i.e. in interpretive, as opposed to predictive mode) the agreement between simulation and experiment.

  8. Performance analysis of improved iterated cubature Kalman filter and its application to GNSS/INS.

    PubMed

    Cui, Bingbo; Chen, Xiyuan; Xu, Yuan; Huang, Haoqian; Liu, Xiao

    2017-01-01

    In order to improve the accuracy and robustness of GNSS/INS navigation systems, an improved iterated cubature Kalman filter (IICKF) is proposed by considering state-dependent noise and system uncertainty. First, a simplified framework for the iterated Gaussian filter is derived by using a damped Newton-Raphson algorithm and an online noise estimator. Then the effect of state-dependent noise arising from the iterated update is analyzed theoretically, and an augmented form of the CKF algorithm is applied to improve the estimation accuracy. The performance of IICKF is verified by field test and numerical simulation, and the results reveal that, compared with the non-iterated filter, the iterated filter is less sensitive to system uncertainty, and that IICKF improves the accuracy of yaw, roll and pitch by 48.9%, 73.1% and 83.3%, respectively, compared with the traditional iterated KF.
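
    The iterated, damped measurement update at the heart of such filters can be sketched for a generic nonlinear measurement: re-linearise about the current iterate and take a damped Newton-style step. The range-only measurement model, damping factor and all numbers below are illustrative, and the cubature machinery of the paper is omitted.

```python
import numpy as np

def h(x):                      # nonlinear measurement: range from the origin
    return np.array([np.hypot(x[0], x[1])])

def H_jac(x):                  # Jacobian of h
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r]])

def iterated_update(x_prior, P, z, R, n_iter=10, damp=0.8):
    """Damped iterated measurement update (IEKF-style): the gain and
    innovation are re-linearised at each iterate, not just at the prior."""
    x = x_prior.copy()
    for _ in range(n_iter):
        H = H_jac(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - h(x) - H @ (x_prior - x)   # relinearised innovation
        x = x + damp * ((x_prior - x) + K @ innov)  # damped Newton step
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x, P_post

x_prior = np.array([1.0, 1.0])          # prior mean (range ~ 1.41)
P = 0.5 * np.eye(2)
z = np.array([2.0])                     # measured range
x_post, P_post = iterated_update(x_prior, P, z, np.array([[0.01]]))
```

    With an accurate range measurement, the posterior mean moves out to nearly the measured range, slightly tempered by the prior.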

  9. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  10. Quantized Iterative Learning Consensus Tracking of Digital Networks With Limited Information Communication.

    PubMed

    Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie

    2017-06-01

    This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
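
    The encode-transmit-decode step can be sketched with a simple difference quantiser: only an integer symbol crosses the network, and the receiver's reconstruction stays within half a quantisation step of the sender's state. The scheme and step size below are illustrative, not the paper's encoder.

```python
def encode(x, estimate, step):
    """Sender: quantise the gap between its state and the shared estimate."""
    return round((x - estimate) / step)      # integer symbol to transmit

def decode(symbol, estimate, step):
    """Receiver: update its estimate of the sender's state."""
    return estimate + symbol * step

est = 0.0                                    # shared initial estimate
step = 0.05                                  # quantisation step
for x in [0.3, 0.7, 1.4, 1.337]:             # sender's state over time
    est = decode(encode(x, est, step), est, step)
    # reconstruction error is bounded by half the quantisation step
    assert abs(est - x) <= step / 2 + 1e-12
```

    Shrinking the step trades communication load (larger symbols) for a tighter tracking bound.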

  11. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.
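
    The solver structure, a Krylov iteration whose preconditioner is inverted with the FFT so that each step costs a few FFTs plus pointwise multiplications, can be sketched in 1D. For a runnable toy we use the positive-definite operator -u'' + c(x)u with periodic boundaries and plain preconditioned CG; the indefinite Helmholtz operator of the paper needs the special preconditioner and solver described there.

```python
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
c = 1.0 + 0.5 * np.sin(x)                   # variable coefficient
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers

def A(u):                                   # spectral -u'' + c(x) u
    return np.real(np.fft.ifft(k**2 * np.fft.fft(u))) + c * u

def M_inv(r):                               # constant-coefficient preconditioner,
    return np.real(np.fft.ifft(np.fft.fft(r) / (k**2 + 1.0)))  # inverted via FFT

def pcg(b, n_iter=50, tol=1e-10):
    """Preconditioned conjugate gradients: each step is FFTs + pointwise ops."""
    u = np.zeros_like(b)
    r = b - A(u); z = M_inv(r); p = z.copy()
    for _ in range(n_iter):
        Ap = A(p)
        alpha = (r @ z) / (p @ Ap)
        u += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return u

u_true = np.sin(3 * x)
u = pcg(A(u_true))                          # recover u_true from its image
```

    Because the preconditioner captures the constant-coefficient part of the operator exactly, the Krylov iteration converges in a handful of steps.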

  12. Adapting an Agent-Based Model of Socio-Technical Systems to Analyze System and Security Failures

    DTIC Science & Technology

    2016-05-09

    statistically significant amount, which it did with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 1 column of...Blackout metric to a statistically significant amount, with a p-value < 0.0003 on a simulation of 3125 iterations; the data is shown in the Delegation 2...Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-Volume 1, pp. 1007-1014. International Foundation

  13. Controlling Air Traffic (Simulated) in the Presence of Automation (CATS PAu) 1995: A Study of Measurement Techniques for Situation Awareness in Air Traffic Control

    NASA Technical Reports Server (NTRS)

    French, Jennifer R.

    1995-01-01

    As automated systems proliferate in aviation systems, human operators are taking on less and less of an active role in the jobs they once performed, often reducing what should be important jobs to tasks barely more complex than monitoring machines. When operators are forced into these roles, they risk slipping into hazardous states of awareness, which can lead to reduced skills, lack of vigilance, and the inability to react quickly and competently when there is a machine failure. Using Air Traffic Control (ATC) as a model, the present study developed tools for conducting tests focusing on levels of automation as they relate to situation awareness. Subjects participated in a two-and-a-half hour experiment that consisted of a training period followed by a simulation of air traffic control similar to the system presently used by the FAA, then an additional simulation employing automated assistance. Through an iterative design process utilizing numerous revisions and three experimental sessions, several measures for situational awareness in a simulated Air Traffic Control System were developed and are prepared for use in future experiments.

  14. DENSITY-DEPENDENT FLOW IN ONE-DIMENSIONAL VARIABLY-SATURATED MEDIA

    EPA Science Inventory

    A one-dimensional finite element model is developed to simulate density-dependent flow of saltwater in variably saturated media. The flow and solute equations were solved in a coupled mode (iterative), in a partially coupled mode (non-iterative), and in a completely decoupled mode. P...
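
    The difference between the coupled (iterative) and partially coupled (non-iterative) modes can be sketched with two scalar stand-in equations: the non-iterative mode makes one lagged pass per step, while the iterative (Picard) mode sweeps until the pair is self-consistent. The update functions below are invented for illustration, not the flow and solute equations of the report.

```python
def solve_flow(c):
    """Stand-in 'flow' solve: head depends on fluid density, i.e. on c."""
    return 1.0 / (1.0 + 0.5 * c)

def solve_solute(h):
    """Stand-in 'solute' solve: concentration depends on the flow field."""
    return 0.8 * h

# Partially coupled (non-iterative): a single lagged pass
h = solve_flow(0.0)
c_lagged = solve_solute(h)

# Fully coupled (iterative): Picard sweeps until self-consistent
c = c_lagged
for _ in range(100):
    h = solve_flow(c)
    c_new = solve_solute(h)
    if abs(c_new - c) < 1e-12:      # flow and solute now agree
        c = c_new
        break
    c = c_new
```

    The Picard loop converges to the self-consistent fixed point c = 0.8/(1 + 0.5c), which the single lagged pass only approximates.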

  15. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
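
    The bivariate version of the fitting loop is easy to sketch: alternately rescale a joint probability table so that its row and column sums match the imposed marginals. This 2 × 2 toy omits the sparse-matrix machinery and higher-order constraints of the article.

```python
import numpy as np

def ipf(joint, row_marg, col_marg, n_iter=100, tol=1e-10):
    """Iterative proportional fitting: adjust an initial joint probability
    table until its marginals match the imposed row/column targets."""
    p = joint.copy()
    for _ in range(n_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]   # match row marginals
        p *= (col_marg / p.sum(axis=0))[None, :]   # match column marginals
        if np.allclose(p.sum(axis=1), row_marg, atol=tol):
            break                                  # both sets of marginals fit
    return p

joint0 = np.full((2, 2), 0.25)                     # uninformative initial table
p = ipf(joint0, np.array([0.6, 0.4]), np.array([0.7, 0.3]))
```

    The fitted table keeps as much of the initial table's interaction structure as the marginal constraints allow.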

  16. Overview of the JET results in support to ITER

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. 
A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. 
T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. 
S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. 
J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. 
G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. 
J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. 
R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. 
W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors

    2017-10-01

    The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. A new multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.

  17. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  18. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE PAGES

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    2017-06-09

    SNAP is a proxy application that simulates the computational motion of a neutral particle transport code, PARTISN. In this work, we have adapted parts of SNAP separately: we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement over the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying performance similar to that of the native Fortran. We then translated our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  19. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
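
    The two building blocks of such an estimator, entrywise soft-thresholding for sparsity and an eigenvalue floor for positive definiteness, can be sketched directly. The following is a naive alternation of the two operators on a synthetic sample covariance, not the paper's ADMM algorithm; the function names and the values of `lam` and `delta` are invented for the example:

```python
import numpy as np

def soft_threshold(M, lam):
    """Entrywise soft-thresholding; diagonal left untouched."""
    S = np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)
    np.fill_diagonal(S, np.diag(M))
    return S

def eig_project(M, delta):
    """Project a symmetric matrix onto {eigenvalues >= delta}."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, delta)) @ V.T

# Synthetic data: sample covariance of 200 draws in 5 dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
S = np.cov(X, rowvar=False)

# Alternate the two operators: sparsify, then restore positive definiteness.
est = S.copy()
for _ in range(50):
    est = eig_project(soft_threshold(est, lam=0.05), delta=1e-3)
```

    The final `est` is symmetric, has small off-diagonal entries shrunk toward zero, and has all eigenvalues at or above the floor `delta`.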

  20. Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl

    SNAP is a proxy application that simulates the computational motion of a neutral particle transport code, PARTISN. In this work, we have adapted parts of SNAP separately: we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement over the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying performance similar to that of the native Fortran. We then translated our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.

  1. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  2. An actuator extension transformation for a motion simulator and an inverse transformation applying Newton-Raphson's method

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1972-01-01

    A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.
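
    The one-iteration-per-step scheme relies on standard Newton-Raphson root finding for a vector-valued function. A minimal sketch with a finite-difference Jacobian (the function names and the toy system are invented, not the simulator's actual equations):

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=20):
    """Solve f(x) = 0 by Newton-Raphson with a numerical Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        J = np.zeros((len(fx), len(x)))  # finite-difference Jacobian
        h = 1e-7
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)   # Newton update
    return x

# Toy 2D system: intersection of a circle and a line.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
sol = newton_raphson(f, [1.0, 0.5])
```

    In a real-time loop, as in the abstract, a single update per integration step is often enough because the previous step's solution is an excellent starting guess.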

  3. Improving absolute gravity estimates by the L p -norm approximation of the ballistic trajectory

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Svitlov, S.; Araya, A.

    2016-04-01

    Iteratively re-weighted least squares (IRLS) were used to simulate the L p -norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For the platykurtic distributions of the perturbations, the L p -approximation with 3  <  p  <  4 was found to yield several times more precise gravity estimates compared to the standard least-squares. The simulation results were confirmed by processing real gravity observations performed at the excessive noise conditions.
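
    The IRLS construction can be sketched on a simple quadratic trajectory model z(t) = z0 + v0·t + g·t²/2. The r^(p-2) re-weighting and the two-iteration count follow the abstract; the synthetic data and function names are invented:

```python
import numpy as np

def irls_lp(A, b, p=3.5, n_iter=2, eps=1e-8):
    """L_p-norm approximation of A x ~ b by iteratively
    re-weighted least squares (two iterations, per the abstract)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from L_2 fit
    for _ in range(n_iter):
        r = np.abs(A @ x - b) + eps            # residual magnitudes
        w = np.sqrt(r ** (p - 2.0))            # IRLS weights for L_p
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

# Synthetic noiseless free-fall trajectory, g = 9.81 m/s^2.
t = np.linspace(0.0, 0.2, 50)
A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
b = 0.1 + 0.5 * t + 0.5 * 9.81 * t**2
x = irls_lp(A, b, p=3.5)   # x[2] recovers g
```

    With perturbed trajectories, as in the simulations above, the 3 < p < 4 weighting down-weights platykurtic noise relative to the standard least-squares fit.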

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.

    Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, results reported in this paper suggest that the DIII-D high β_P scenario could be a candidate for ITER steady-state operation.

  5. Combining wet and dry research: experience with model development for cardiac mechano-electric structure-function studies

    PubMed Central

    Quinn, T. Alexander; Kohl, Peter

    2013-01-01

    Since the development of the first mathematical cardiac cell model 50 years ago, computational modelling has become an increasingly powerful tool for the analysis of data and for the integration of information related to complex cardiac behaviour. Current models build on decades of iteration between experiment and theory, representing a collective understanding of cardiac function. All models, whether computational, experimental, or conceptual, are simplified representations of reality and, like tools in a toolbox, suitable for specific applications. Their range of applicability can be explored (and expanded) by iterative combination of ‘wet’ and ‘dry’ investigation, where experimental or clinical data are used to first build and then validate computational models (allowing integration of previous findings, quantitative assessment of conceptual models, and projection across relevant spatial and temporal scales), while computational simulations are utilized for plausibility assessment, hypothesis generation, and prediction (thereby defining further experimental research targets). When implemented effectively, this combined wet/dry research approach can support the development of a more complete and cohesive understanding of integrated biological function. This review illustrates the utility of such an approach, based on recent examples of multi-scale studies of cardiac structure and mechano-electric function. PMID:23334215

  6. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
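
    The continuous-time, constrained-input setting above generalizes classical policy iteration, which alternates policy evaluation and greedy improvement. A discrete, tabular sketch on an invented two-state MDP (not the paper's actor-critic NN scheme):

```python
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: rewards (illustrative).
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.1, 0.9], [0.8, 0.2]],   # action 1
])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma = 0.9

policy = np.zeros(2, dtype=int)
for _ in range(50):
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = P[policy, np.arange(2)]
    r_pi = R[policy, np.arange(2)]
    v = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    # Policy improvement: act greedily with respect to Q(a, s).
    Q = R + gamma * np.einsum('ast,t->as', P, v)
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break                    # greedy policy reproduces itself
    policy = new_policy
```

    Convergence is reached when the improvement step returns the same policy; the PI algorithm in the abstract performs these two phases online, with the critic and actor NNs standing in for the value table and the greedy policy.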

  7. Monte Carlo simulation of single accident airport risk profile

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A computer simulation model was developed for estimating the potential economic impacts of a carbon fiber release upon facilities within an 80 kilometer radius of a major airport. The model simulated the possible range of release conditions and the resulting dispersion of the carbon fibers. Each iteration of the model generated a specific release scenario, which would cause a specific amount of dollar loss to the surrounding community. By repeated iterations, a risk profile was generated, showing the probability distribution of losses from one accident. Using accident probability estimates, the risk profile for annual losses was derived. The mechanics of the simulation model, the required input data, and the risk profiles generated for the 26 large hub airports are described.
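
    The repeated-sampling construction of a risk profile is straightforward to sketch. The release-condition distributions and cost figure below are purely hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_release_loss():
    """One model iteration: sample release and dispersion conditions,
    return the resulting dollar loss (illustrative distributions)."""
    mass = rng.lognormal(mean=0.0, sigma=1.0)    # kg of fiber released
    fraction = rng.uniform(0.1, 1.0)             # fraction reaching facilities
    cost_per_kg = 5e4                            # assumed $ loss per effective kg
    return mass * fraction * cost_per_kg

# Repeated iterations build the loss distribution for one accident.
losses = np.array([one_release_loss() for _ in range(10_000)])

# Risk profile: exceedance probability at, e.g., the 95th-percentile loss.
x95 = np.percentile(losses, 95)
p_exceed = (losses > x95).mean()
```

    Scaling this single-accident profile by accident-rate estimates yields the annual-loss profile described in the abstract.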

  8. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  9. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-09

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods relative to other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be required to accelerate convergence.

  11. SIMPSON: A General Simulation Program for Solid-State NMR Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2000-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  12. SIMPSON: A general simulation program for solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Bak, Mads; Rasmussen, Jimmy T.; Nielsen, Niels Chr.

    2011-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects for REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments.

  13. Improving the iterative Linear Interaction Energy approach using automated recognition of configurational transitions.

    PubMed

    Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P

    2016-01-01

Recently, an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of the prediction error by combining interaction energies of simulations starting from different conformations; thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space and do not show transitions to parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw interaction-energy data, transitions between different parts of phase space during a simulation are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained by monitoring simulations with the proposed filtering method and prematurely terminating them accordingly.
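The detection step can be illustrated with a toy version: smooth the raw interaction-energy time series, flag the first large jump as a configurational transition, and keep only the preceding frames. A sketch under simplified assumptions (a moving average stands in for the noise-canceling/spline fit, and the jump threshold is arbitrary):

```python
def smooth(series, w=5):
    """Centered moving average: a simple stand-in for the spline/noise-canceling fit."""
    half = w // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def stable_mask(series, jump=5.0, w=5):
    """Boolean selection: keep frames until the smoothed energy jumps by more than `jump`."""
    s = smooth(series, w)
    mask, ok = [], True
    for i in range(len(s)):
        if i > 0 and abs(s[i] - s[i - 1]) > jump:
            ok = False   # transition detected; drop everything afterwards
        mask.append(ok)
    return mask

# synthetic interaction-energy trace with a transition halfway through
trace = [-50.0] * 20 + [-20.0] * 20
mask = stable_mask(trace)
kept = [e for e, m in zip(trace, mask) if m]
```

On this synthetic trace the first 18 frames pass the filter and the post-transition frames are discarded.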

  14. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and thereby to overcome the high cost and ionizing radiation of the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture for solving the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy. PMID:29082094
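The non-prior baseline, iterative Tikhonov regularization, can be sketched generically as a Landweber-style gradient iteration for min ||Ax − y||² + λ||x||²; this is not the authors' implementation, and the 2×2 system is purely illustrative:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def tikhonov_iterative(A, y, lam=1e-3, step=0.1, iters=2000):
    """Gradient (Landweber-style) iteration for min ||Ax - y||^2 + lam*||x||^2.
    A simplified stand-in for an iterative Tikhonov (ITKR-like) baseline."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]   # residual y - Ax
        g = matvec(At, r)                                  # gradient direction A^T r
        x = [xi + step * (gi - lam * xi) for xi, gi in zip(x, g)]
    return x

A = [[1.0, 0.0], [1.0, 1.0]]   # toy forward model
x_true = [2.0, -1.0]
y = matvec(A, x_true)          # noiseless measurements
x_rec = tikhonov_iterative(A, y)
```

With a small λ and noiseless data, the iteration converges to a lightly biased estimate of x_true; the fixed point satisfies (AᵀA + λI)x = Aᵀy.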

  15. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and thereby to overcome the high cost and ionizing radiation of the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture for solving the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy.

  16. Conceptual design of the ITER fast-ion loss detector.

    PubMed

    Garcia-Munoz, M; Kocan, M; Ayllon-Guerola, J; Bertalot, L; Bonnet, Y; Casal, N; Galdon, J; Garcia Lopez, J; Giacomin, T; Gonzalez-Martin, J; Gunn, J P; Jimenez-Ramos, M C; Kiptily, V; Pinches, S D; Rodriguez-Ramos, M; Reichle, R; Rivero-Rodriguez, J F; Sanchis-Sanchez, L; Snicker, A; Vayakis, G; Veshchev, E; Vorpahl, Ch; Walsh, M; Walton, R

    2016-11-01

A conceptual design of a reciprocating fast-ion loss detector for ITER has been developed and is presented here. Fast-ion orbit simulations in a 3D magnetic equilibrium and an up-to-date first wall have been carried out to revise the measurement requirements for the lost alpha monitor in ITER. In agreement with recent observations, the simulations presented here suggest that a pitch-angle resolution of ∼5° might be necessary to identify the loss mechanisms. Synthetic measurements including realistic lost alpha-particle as well as neutron and gamma fluxes predict scintillator signal-to-noise levels measurable with standard light acquisition systems with the detector aperture at ∼11 cm outside of the diagnostic first wall. At the measurement position, the heat load on the detector head is comparable to that in present devices.

  17. Erosion of tungsten armor after multiple intense transient events in ITER

    NASA Astrophysics Data System (ADS)

    Bazylev, B. N.; Janeschitz, G.; Landman, I. S.; Pestchanyi, S. E.

    2005-03-01

Macroscopic erosion by melt motion is the dominant damage mechanism for tungsten armour under high heat loads with energy deposition W > 1 MJ/m² and τ > 0.1 ms. For the ITER divertor armour, the results of a fluid dynamics simulation of melt motion erosion after repetitive, stochastically varying plasma heat loads of consecutive disruptions interspaced by ELMs are presented. The heat loads for particular single transient events are numerically simulated using the two-dimensional MHD code FOREV-2D. The overall melt motion is calculated by the fluid dynamics code MEMOS-1.5D. In addition, for the ITER dome, the melt motion erosion of tungsten armour caused by the lateral radiation impact from the plasma shield under disruption and ELM heat loads is estimated.

  18. Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei

    2016-09-01

For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piecewise constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.

  19. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. R.; Carmichael, J. R.; Gebhart, T. E.

Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them, and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.
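The gas-dynamic stepping can be caricatured with an ideal, loss-free single-stage model: the propellant gas expands adiabatically behind the pellet and the equation of motion is integrated down the barrel. All parameter values below are illustrative placeholders, not the ORNL code's inputs or ITER design values:

```python
import math

def gas_gun_speed(p0=40e5, v0=1e-4, bore_d=0.012, barrel_len=1.0,
                  pellet_mass=1e-3, gamma=1.4, dt=1e-6):
    """Stepwise single-stage gas gun: reservoir gas (initial pressure p0 [Pa],
    volume v0 [m^3]) expands adiabatically behind the pellet; back-pressure
    and friction are neglected. Returns the muzzle speed in m/s."""
    area = math.pi * (bore_d / 2) ** 2
    x, v = 0.0, 0.0
    while x < barrel_len:
        p = p0 * (v0 / (v0 + area * x)) ** gamma   # adiabatic expansion
        a = p * area / pellet_mass                 # F = p*A on the pellet
        v += a * dt
        x += v * dt
    return v
```

With the placeholder 40 bar reservoir and 1 g pellet this lands in the several-hundred-m/s range quoted in the abstract; an energy-balance check (work of adiabatic expansion ≈ kinetic energy) gives the same order.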

  20. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S. K.; Reed, J. R.; Lyttle, M. S.

    2016-01-01

Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them, and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.

  1. On the breakdown modes and parameter space of Ohmic Tokamak startup

    NASA Astrophysics Data System (ADS)

    Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni

    2017-10-01

Tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into fully ionized plasma is called tokamak startup. Even after over 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulations but by trial and error. In recent years, however, the problem has drawn much attention because of one of the challenges faced by ITER: the maximum electric field available for startup cannot exceed 0.3 V/m, which narrows the parameter range for successful startup. Moreover, the underlying physical mechanism is far from being understood, either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, including ITER. We have found three situations during the discharge, as a function of the initial parameters: no breakdown, breakdown, and runaway. Moreover, the breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown in ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.
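That an optimal pre-fill pressure exists follows already from simple Townsend avalanche arguments: the ionization rate per unit length α = A·p·exp(−B·p/E) is maximized at p* = E/B, so for a fixed loop electric field there is a pressure sweet spot. A sketch with placeholder coefficients (A and B below are not hydrogen data, and this toy is not calibrated to reproduce the 0.001 Pa quoted above):

```python
import math

# Townsend first ionization coefficient alpha = A*p*exp(-B*p/E).
# A [1/(m*Pa)] and B [V/(m*Pa)] are gas-dependent constants; the values
# here are illustrative placeholders only.
A, B = 510.0, 1.4e4

def alpha(p, E):
    """Avalanche growth rate per unit length at pressure p and field E."""
    return A * p * math.exp(-B * p / E)

def optimal_pressure(E, p_grid):
    """Pressure maximizing alpha on a grid; analytically the maximum is at p* = E/B."""
    return max(p_grid, key=lambda p: alpha(p, E))

E = 0.3                                    # V/m, the ITER limit from the abstract
grid = [i * 1e-7 for i in range(1, 1001)]  # 1e-7 .. 1e-4 Pa
p_star = optimal_pressure(E, grid)
```

Setting dα/dp = 0 gives p* = E/B exactly, which the grid search recovers to within a grid step.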

  2. TGLF Recalibration for ITER Standard Case Parameters FY2015: Theory and Simulation Performance Target Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.

    2015-12-01

This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted. 
Although we were able to construct an ad hoc model of the zonal flows that fit the GYRO simulations, the parameters of the model had to be tuned to each case; a physics basis for the zonal flow model was lacking. Electron energy transport at short wavelength: A secondary issue – the high-k electron energy flux – was initially assumed to be independent of the zonal flow effect. However, detailed studies of the fluctuation spectra from recent multiscale (electron and ion scale) GYRO simulations provided a critical new insight into the role of zonal flows. The multiscale simulations suggested that advection by the zonal flows strongly suppressed electron-scale turbulence. Radial shear of the zonal E×B fluctuation could not compete with the large electron-scale linear growth rate, but the kx-mixing rate of the E×B advection could. This insight led to a preliminary new model for the way zonal flows saturate both electron- and ion-scale turbulence. It was also discovered that the strength of the zonal E×B velocity could be computed from the linear growth rate spectrum. The new saturation model (SAT1), which replaces the original model (SAT0), was fit to the multiscale GYRO simulations as well as the ion-scale GYRO simulations used to calibrate the original SAT0 model. Thus, SAT1 captures the physics of both multiscale electron transport and zonal-flow stabilization. In future work, the SAT1 model will require significant further testing and (expensive) calibration with nonlinear multiscale gyrokinetic simulations over a wider variety of plasma conditions – certainly more than the small set of scans about a single C-Mod L-mode discharge. We believe the SAT1 model holds great promise as a physics-based model of multiscale turbulent transport in fusion devices. Correction to ITER performance predictions: Finally, the impact of the SAT1 model on the ITER hybrid case is mixed. 
Without the electron-scale contribution to the fluxes, the Dimits shift makes a significant improvement in the predicted fusion power, as originally posited. Alas, including the high-k electron transport reduces the improvement, yielding a modest net increase in predicted fusion power compared to the TGLF prediction with the original SAT0 model.

  3. Survival and in-vessel redistribution of beryllium droplets after ITER disruptions

    NASA Astrophysics Data System (ADS)

    Vignitchouk, L.; Ratynskaia, S.; Tolias, P.; Pitts, R. A.; De Temmerman, G.; Lehnen, M.; Kiramov, D.

    2018-07-01

The motion and temperature evolution of beryllium droplets produced by first wall surface melting after ITER major disruptions and vertical displacement events mitigated during the current quench are simulated by the MIGRAINe dust dynamics code. These simulations employ an updated physical model which addresses droplet-plasma interaction in ITER-relevant regimes characterized by magnetized electron collection and thin-sheath ion collection, as well as electron emission processes induced by electron and high-Z ion impacts. The disruption scenarios have been implemented from DINA simulations of the time-evolving plasma parameters, while the droplet injection points are set to the first-wall locations expected to receive the highest thermal quench heat flux according to field line tracing studies. The droplet size, speed and ejection angle are varied within the range of currently available experimental and theoretical constraints, and the final quantities of interest are obtained by weighting single-trajectory output with different size and speed distributions. Detailed estimates of droplet solidification into dust grains and their subsequent deposition in the vessel are obtained. For representative distributions of the droplet injection parameters, the results indicate that at most a few percent of the beryllium mass initially injected is converted into solid dust, while the remaining mass either vaporizes or forms liquid splashes on the wall. Simulated in-vessel spatial distributions are also provided for the surviving dust, with the aim of guiding planned dust diagnostic, retrieval and clean-up systems on ITER.

  4. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1977-01-01

    Substitutional methods for digitization, input signal-dependent integrator approximations, and digital autopilot design were developed. The software framework of a simulator design package is described. Included are subroutines for iterative designs of simulation models and a rudimentary graphics package.

  5. Evidences of trapping in tungsten and implications for plasma-facing components

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.; Anderl, R. A.; Holland, D. F.

    Trapping effects that include significant delays in permeation saturation, abrupt changes in permeation rate associated with temperature changes, and larger than expected inventories of hydrogen isotopes in the material, were seen in implantation-driven permeation experiments using 25- and 50-micron thick tungsten foils at temperatures of 638 to 825 K. Computer models that simulate permeation transients reproduce the steady-state permeation and reemission behavior of these experiments with expected values of material parameters. However, the transient time characteristics were not successfully simulated without the assumption of traps of substantial trap energy and concentration. An analytical model based on the assumptions of thermodynamic equilibrium between trapped hydrogen atoms and a comparatively low mobile atom concentration successfully accounts for the observed behavior. Using steady-state and transient permeation data from experiments at different temperatures, the effective trap binding energy may be inferred. We analyze a tungsten coated divertor plate design representative of those proposed for ITER and ARIES and consider the implications for tritium permeation and retention if the same trapping we observed was present in that tungsten. Inventory increases of several orders of magnitude may result.

  6. Stokes-Doppler coherence imaging for ITER boundary tomography.

    PubMed

    Howard, J; Kocan, M; Lisgo, S; Reichle, R

    2016-11-01

An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  7. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

In order to solve large scale linear systems, backward and Jacobi iteration algorithms are employed; convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important contribution is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and retain the merits of backward methods. PMID:24991640
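For reference, the Jacobi iteration itself is only a few lines; strict diagonal dominance is the textbook sufficient condition for the spectral radius of the iteration matrix to be below 1 (the example system is illustrative):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k).
    Converges when the spectral radius of -D^{-1}(A - D) is below 1,
    e.g. for strictly diagonally dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        # each component uses only the previous iterate (true Jacobi sweep)
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant
b = [6.0, 12.0]                # exact solution is x = [1, 2]
x = jacobi(A, b)
```

Here the iteration matrix has spectral radius √0.1 ≈ 0.32, so 100 sweeps reduce the error to machine precision.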

  8. Computer simulation of supersonic rarefied gas flow in the transition region, about a spherical probe; a Monte Carlo approach with application to rocket-borne ion probe experiments

    NASA Technical Reports Server (NTRS)

    Horton, B. E.; Bowhill, S. A.

    1971-01-01

    This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.

  9. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided.
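The bracketing idea is easy to see in 1D with flat (window) structuring elements, where erosion and dilation reduce to min and max filters: the median over the same window always lies between them, and the opening and closing bracket the signal itself. A minimal sketch (illustrative only; the paper's actual bounds relate openings and closings to order-statistics filters under stated conditions):

```python
def erode(f, k=1):
    """Grayscale erosion by a flat window of half-width k (min filter)."""
    n = len(f)
    return [min(f[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def dilate(f, k=1):
    """Grayscale dilation by a flat window of half-width k (max filter)."""
    n = len(f)
    return [max(f[max(0, i - k):min(n, i + k + 1)]) for i in range(n)]

def opening(f, k=1):
    return dilate(erode(f, k), k)   # anti-extensive: opening(f) <= f

def closing(f, k=1):
    return erode(dilate(f, k), k)   # extensive: closing(f) >= f

def median_filter(f, k=1):
    n = len(f)
    return [sorted(f[max(0, i - k):min(n, i + k + 1)])[len(f[max(0, i - k):min(n, i + k + 1)]) // 2]
            for i in range(n)]

f = [3, 7, 2, 9, 4, 4, 8, 1, 5, 6]
med = median_filter(f)
```

Pointwise, erosion ≤ median ≤ dilation over a shared window, and opening ≤ f ≤ closing.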

  10. Gyrokinetic simulation of edge blobs and divertor heat-load footprint

    NASA Astrophysics Data System (ADS)

    Chang, C. S.; Ku, S.; Hager, R.; Churchill, M.; D'Azevedo, E.; Worley, P.

    2015-11-01

Gyrokinetic study of the divertor heat-load width Lq has been performed using the edge gyrokinetic code XGC1. Both neoclassical and electrostatic turbulence physics are self-consistently included in the simulation, with a fully nonlinear Fokker-Planck collision operator and neutral recycling. Gyrokinetic ions and drift-kinetic electrons constitute the plasma in realistic magnetic separatrix geometry. The electron density fluctuations from nonlinear turbulence form blobs, similar to those seen in experiments. DIII-D and NSTX geometries have been used to represent today's conventional and tight-aspect-ratio tokamaks. XGC1 shows that ion neoclassical orbit dynamics dominates over the blob physics in setting Lq in the sample DIII-D and NSTX plasmas, re-discovering the experimentally observed 1/Ip-type scaling. The magnitude of Lq is also in the right ballpark in comparison with experimental data. However, in an ITER standard plasma, XGC1 shows that the negligible neoclassical orbit excursion effect lets the blob dynamics dominate Lq. In contrast to the Lq ~ 1 mm (when mapped back to the outboard midplane) predicted by simple extrapolation from present-day data, XGC1 shows that Lq in ITER is about 1 cm, which is somewhat smaller than the average blob size. Supported by US DOE and the INCITE program.

  11. Overview of the JET results in support to ITER

    DOE PAGES

    Litaudon, X.; Abduallev, S.; Abhangi, M.; ...

    2017-06-15

Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations, together with recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA during 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  12. Overview of the JET results in support to ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X.; Abduallev, S.; Abhangi, M.

Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in Deuterium and Hydrogen are given, stressing the importance of the magnetic configurations, together with recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA during 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

  13. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

    2017-05-01

Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as the laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and the ray tracing method, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is likewise cut into many thin slices, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice, decreasing the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, which resolves the unknown material parameters arising from the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, has lower time complexity, and can numerically simulate the self-focusing process in systems containing both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and optical paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.

  14. Mechanical Characterization of the Iter Mock-Up Insulation after Reactor Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2010-04-01

    The ITER mock-up project was launched in order to demonstrate the feasibility of an industrial impregnation process using the new cyanate ester/epoxy blend. The mock-up simulates the TF winding pack cross section by a stainless steel structure with the same dimensions as the TF winding pack and a length of 1 m. It consists of 7 plates simulating the double pancakes, each of which is wrapped with glass fiber/Kapton sandwich tapes. After stacking the 7 plates, additional insulation layers are wrapped around them to simulate the ground insulation. This paper presents the results of the mechanical quality tests on the mock-up pancake insulation. Tensile and short beam shear specimens were cut from the plates extracted from the mock-up and tested at 77 K using a servo-hydraulic material testing device. All tests were repeated after reactor irradiation to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV). In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in the load-controlled mode. Initial results show a high mechanical strength, as expected from the high number of thin glass fiber layers, and an excellent homogeneity of the material.

  15. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL with quadratic prior algorithms. OSMAPOSL reconstruction was assessed by using fixed subsets and various iterations, as well as by using various beta (hyper) parameter values. MTF values were found to increase with increasing iterations. MTF also improves by using lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.

  16. Estimation of carbon fibre composites as ITER divertor armour

    NASA Astrophysics Data System (ADS)

    Pestchanyi, S.; Safronov, V.; Landman, I.

    2004-08-01

    Exposure of the carbon fibre composites (CFC) NB31 and NS31 to multiple plasma pulses has been performed at the plasma guns MK-200UG and QSPA. Numerical simulation for the same CFCs under heat loads typical of ITER type I ELMs has been carried out using the code PEGASUS-3D. Comparative analysis of the numerical and experimental results allowed the erosion mechanism of CFC to be understood on the basis of the simulation results. A modification of the CFC structure has been proposed in order to decrease the armour erosion rate.

  17. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254

  18. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.

    Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of Te/Ti and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E × B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E × B shear.

  20. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.

  1. RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H

    2010-06-01

    ITER inductive power operation is modeled and simulated using a system level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation. The plasma does not burn continuously, but the power is pulsed with large periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time dependent power forcing functions which are used as input in the RELAP5 calculations.

  2. Efficient generation of low-energy folded states of a model protein

    NASA Astrophysics Data System (ADS)

    Gordon, Heather L.; Kwan, Wai Kei; Gong, Chunhang; Larrass, Stefan; Rothstein, Stuart M.

    2003-01-01

    A number of short simulated annealing runs are performed on a highly-frustrated 46-"residue" off-lattice model protein. We perform, in an iterative fashion, a principal component analysis of the 946 nonbonded interbead distances, followed by two varieties of cluster analyses: hierarchical and k-means clustering. We identify several distinct sets of conformations with reasonably consistent cluster membership. Nonbonded distance constraints are derived for each cluster and are employed within a distance geometry approach to generate many new conformations, previously unidentified by the simulated annealing experiments. Subsequent analyses suggest that these new conformations are members of the parent clusters from which they were generated. Furthermore, several novel, previously unobserved structures with low energy were uncovered, augmenting the ensemble of simulated annealing results, and providing a complete distribution of low-energy states. The computational cost of this approach to generating low-energy conformations is small when compared to the expense of further Monte Carlo simulated annealing runs.

  3. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  4. Traffic Aware Planner for Cockpit-Based Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.

    2016-01-01

    The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.

  5. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  6. Progress with MGI and CHI Research on NSTX-U

    NASA Astrophysics Data System (ADS)

    Raman, R.; Lay, W.-S.; Jarboe, T. R.; Nelson, B. A.; Mueller, D.; Gerhardt, S. P.; Ebrahimi, F.; Jardin, S. C.; Taylor, G.

    2016-10-01

    NSTX-U experiments on Massive Gas Injection (MGI) will add new insight to the MGI database by studying gas assimilation efficiencies for MGI gas injection from different poloidal locations. In support of this research, two ITER-type MGI valves have been successfully commissioned on NSTX-U. Results from the planned experiment 'Comparison of Private Flux Region with Conventional Mid-plane MGI on NSTX-U' will be reported. In support of planned Coaxial Helicity Injection (CHI) research on NSTX-U, a new high-resolution grid has been generated for TSC simulations of CHI. This improves the resolution of the CHI injector region, and better models the closely-spaced divertor coils on NSTX-U. These new simulations support previous analysis that suggests a solenoid-free plasma current initiation capability of more than 400 kA on NSTX-U. This work is supported by U.S. DOE Contracts: DE-AC02-09CH11466, DE-FG02-99ER54519 AM08, and DE-SC0006757.

  7. Ostomate-for-a-Day: A Novel Pedagogy for Teaching Ostomy Care to Baccalaureate Nursing Students.

    PubMed

    Kerr, Noël

    2015-08-01

    The literature describing successful pedagogies for teaching ostomy care to baccalaureate nursing students is limited. This qualitative study investigated the potential benefits of participating in an immersive simulation that allowed baccalaureate nursing students to explore the physical and psychosocial impact of ostomy surgery. Junior-level nursing students attended a 2-hour interactive session during which they learned about preoperative stoma site marking and practiced the maneuvers on a peer. Students then wore an ostomy appliance for the next 24 hours, completed tasks simulating ostomy self-care, and submitted a three- to four-page reflection on the experience. These data were coded using the iterative process of constant comparison described by Glaser. Six major themes were identified: Accommodation for Activities of Daily Living, Coping with Annoyances, Body Image and Feelings, Disclosure, Insights for Teaching, and Empathy. Each participant affirmed the value of the experience. Suggestions for future research studies are discussed. Copyright 2015, SLACK Incorporated.

  8. Conceptual design of the ITER fast-ion loss detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Munoz, M., E-mail: mgm@us.es; Ayllon-Guerola, J.; Galdon, J.

    2016-11-15

    A conceptual design of a reciprocating fast-ion loss detector for ITER has been developed and is presented here. Fast-ion orbit simulations in a 3D magnetic equilibrium and an up-to-date first wall have been carried out to revise the measurement requirements for the lost alpha monitor in ITER. In agreement with recent observations, the simulations presented here suggest that a pitch-angle resolution of ∼5° might be necessary to identify the loss mechanisms. Synthetic measurements including realistic lost alpha-particle as well as neutron and gamma fluxes predict scintillator signal-to-noise levels measurable with standard light acquisition systems with the detector aperture at ∼11 cm outside of the diagnostic first wall. At the measurement position, the heat load on the detector head is comparable to that in present devices.

  9. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.

  10. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric, inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first-order upwind differences are replaced by second-order schemes, the line relaxation procedure (with a linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
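    For readers weighing the two solution strategies contrasted above, a minimal scalar Newton iteration (the quadratically convergent kind that replaced the linearly converging line relaxation) looks like the following. It is a textbook sketch, not the paper's banded Gaussian elimination implementation.

    ```python
    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Scalar Newton iteration: x <- x - f(x)/df(x).

        Converges quadratically near a simple root, in contrast with the
        linear convergence rate of relaxation sweeps.
        """
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x
    ```

    In the multidimensional setting of the paper, f and df become the residual vector and its Jacobian, and each step solves a banded linear system rather than a scalar division.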

  11. Examinations for leak tightness of actively cooled components in ITER and fusion devices

    NASA Astrophysics Data System (ADS)

    Hirai, T.; Barabash, V.; Carrat, R.; Chappuis, Ph; Durocher, A.; Escourbiac, F.; Merola, M.; Raffray, R.; Worth, L.; Boscary, J.; Chantant, M.; Chuilon, B.; Guilhem, D.; Hatchressian, J.-C.; Hong, S. H.; Kim, K. M.; Masuzaki, S.; Mogaki, K.; Nicolai, D.; Wilson, D.; Yao, D.

    2017-12-01

    Any leak in one of the ITER actively cooled components would cause significant consequences for machine operations; therefore, the risk of leak must be minimized as much as possible. In this paper, the strategy of examination to ensure leak tightness of the ITER internal components (i.e. examination of base materials, vacuum boundary joints and final components) and the hydraulic parameters for ITER internal components are summarized. The experiences of component tests, especially hot helium leak tests in recent fusion devices, were reviewed and the parameters were discussed. Through these experiences, it was confirmed that the hot He leak test was effective to detect small leak paths which were not always possible to detect by volumetric examination due to limited spatial resolution.

  12. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
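    The advantage of integral over derivative-based formulations can be seen in a one-parameter toy problem: integrating dy/dt = -a·y gives y(t) - y(0) = -a ∫ y dτ, which is linear in a and requires no differentiation of noisy data. This first-order example is only a sketch of the integral idea, not the paper's second-order respiratory mechanics model; all values are illustrative.

    ```python
    import numpy as np

    # Identify 'a' in dy/dt = -a*y from sampled data via the integral form
    # y(t) - y(0) = -a * int_0^t y dtau (illustrative first-order example).
    t = np.linspace(0.0, 2.0, 201)
    a_true = 1.7
    y = np.exp(-a_true * t)

    # Cumulative trapezoidal integral of y over t
    Y = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2.0 * np.diff(t))))

    # Least-squares fit of y(t) - y(0) = -a * Y
    a_hat = -np.linalg.lstsq(Y.reshape(-1, 1), y - y[0], rcond=None)[0][0]
    ```

    Because the regression acts on integrals rather than derivatives, measurement noise is averaged rather than amplified, which is one reason integral identification methods are robust enough for bedside use.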

  13. Use of direct and iterative solvers for estimation of SNP effects in genome-wide selection

    PubMed Central

    2010-01-01

    The aim of this study was to compare iterative and direct solvers for estimation of marker effects in genomic selection. One iterative and two direct methods were used: Gauss-Seidel with Residual Update, Cholesky Decomposition and Gentleman-Givens rotations. For resembling different scenarios with respect to number of markers and of genotyped animals, a simulated data set divided into 25 subsets was used. Number of markers ranged from 1,200 to 5,925 and number of animals ranged from 1,200 to 5,865. Methods were also applied to real data comprising 3081 individuals genotyped for 45181 SNPs. Results from simulated data showed that the iterative solver was substantially faster than direct methods for larger numbers of markers. Use of a direct solver may allow for computing (co)variances of SNP effects. When applied to real data, performance of the iterative method varied substantially, depending on the level of ill-conditioning of the coefficient matrix. From results with real data, Gentleman-Givens rotations would be the method of choice in this particular application as it provided an exact solution within a fairly reasonable time frame (less than two hours). It would indeed be the preferred method whenever computer resources allow its use. PMID:21637627
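    As a point of reference for the comparison above, a plain Gauss-Seidel sweep for A x = b is sketched below. The study's Gauss-Seidel with Residual Update additionally maintains the residual vector so that each sweep on the marker-effect equations is cheaper, a refinement this sketch omits.

    ```python
    import numpy as np

    def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
        """Plain Gauss-Seidel iteration for A x = b.

        Each sweep updates x[i] using the most recent values of the other
        components; convergence holds for symmetric positive definite or
        diagonally dominant A.
        """
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x
    ```

    Unlike the direct Cholesky or Gentleman-Givens factorizations, the iterative sweep never forms a factor of the coefficient matrix, which is why it scales better as the number of markers grows but does not yield (co)variances of the SNP effects.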

  14. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite system of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner, and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  15. Three-dimensional analysis of tokamaks and stellarators

    PubMed Central

    Garabedian, Paul R.

    2008-01-01

    The NSTAB equilibrium and stability code and the TRAN Monte Carlo transport code furnish a simple but effective numerical simulation of essential features of present tokamak and stellarator experiments. When the mesh size is comparable to the island width, an accurate radial difference scheme in conservation form captures magnetic islands successfully despite a nested surface hypothesis imposed by the mathematics. Three-dimensional asymmetries in bifurcated numerical solutions of the axially symmetric tokamak problem are relevant to the observation of unstable neoclassical tearing modes and edge localized modes in experiments. Islands in compact stellarators with quasiaxial symmetry are easier to control, so these configurations will become good candidates for magnetic fusion if difficulties with safety and stability are encountered in the International Thermonuclear Experimental Reactor (ITER) project. PMID:18768807

  16. Facilitating Tough Conversations: Using an Innovative Simulation-Primed Qualitative Inquiry in Pediatric Research.

    PubMed

    Wong, Ambrose H; Tiyyagura, Gunjan K; Dodington, James M; Hawkins, Bonnie; Hersey, Denise; Auerbach, Marc A

    Deep exploration of a complex health care issue might be hindered by the sensitive or infrequent nature of a particular topic in pediatrics. Health care simulation builds on constructivist theories to guide individuals through an experiential cycle of action, self-reflection, and open discussion, but has traditionally been applied to the educational domain in the health sciences. Leveraging the emotional activation of a simulated experience, investigators can prime participants to engage in open dialogue for the purposes of qualitative research. The framework of simulation-primed qualitative inquiry consists of 3 main iterative steps. First, researchers determine applicability by considering the need for an exploratory approach and the potential to enrich data through simulation priming of participants. Next, careful attention is needed to design the simulation, with consideration of medium, technology, theoretical frameworks, and quality, to create a simulated reality relevant to the research question. Finally, data collection planning consists of qualitative approach and method selection, with particular attention paid to the psychological safety of subjects participating in the simulation. A literature review revealed 37 articles that used this newly described method across a variety of clinical and educational research topics, using a spectrum of simulation modalities and qualitative methods. Although some potential limitations and pitfalls might exist with regard to resources, fidelity, and psychological safety under the auspices of educational research, simulation-primed qualitative inquiry can be a powerful technique to explore difficult topics when subjects might experience vulnerability or hesitation. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  17. Developing DIII-D To Prepare For ITER And The Path To Fusion Energy

    NASA Astrophysics Data System (ADS)

    Buttery, Richard; Hill, David; Solomon, Wayne; Guo, Houyang; DIII-D Team

    2017-10-01

    DIII-D pursues the advancement of fusion energy through scientific understanding and discovery of solutions. Research targets two key goals. First, to prepare for ITER we must resolve how to use its flexible control tools to rapidly reach Q = 10, and develop the scientific basis to interpret results from ITER for fusion projection. Second, we must determine how to sustain a high performance fusion core in steady state conditions, with minimal actuators and a plasma exhaust solution. DIII-D will target these missions with: (i) increased electron heating and balanced-torque neutral beams to simulate burning plasma conditions, (ii) new 3D coil arrays to resolve control of transients, (iii) off-axis current drive to study physics in steady state regimes, (iv) divertor configurations to promote detachment with low upstream density, and (v) a reactor-relevant wall to qualify materials and resolve physics in reactor-like conditions. With new diagnostics and leading-edge simulation, this will position the US for success in ITER and provide unique knowledge to accelerate the approach to fusion energy. Supported by the US DOE under DE-FC02-04ER54698.

  18. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
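    The soup/reaction-base loop described above can be sketched in a few lines of Python. Everything below is illustrative only: the species names, reaction rules, and probabilities are toy placeholders, not actual Maillard chemistry, and the real system tracks far more structure per molecule.

    ```python
    import random

    # Toy "soup" of molecules plus a reaction base (placeholder chemistry).
    # Each rule: (reactants, products, application probability per pass),
    # with the probability standing in for the reaction's rate kinetics.
    reaction_base = [
        (("glucose", "glycine"), ("amadori",), 0.9),
        (("amadori",), ("deoxyosone", "water"), 0.5),
        (("deoxyosone",), ("furfural", "water"), 0.4),
    ]

    def iterate_soup(soup, rng, n_iters=50):
        """Loop over the reaction base, feeding products back to the soup and
        recording each applied reaction as an arc of the reaction graph."""
        arcs = []  # molecules are nodes; each (reactants, products) pair is an arc
        for _ in range(n_iters):
            for reactants, products, prob in reaction_base:
                have = all(soup.count(m) >= reactants.count(m) for m in set(reactants))
                if have and rng.random() < prob:
                    for m in reactants:
                        soup.remove(m)
                    soup.extend(products)
                    arcs.append((reactants, products))
        return soup, arcs

    soup, arcs = iterate_soup(["glucose"] * 5 + ["glycine"] * 5, random.Random(1))
    ```

    Treating rate kinetics as per-pass application probabilities, as the abstract describes, keeps the loop stochastic while biasing it toward the faster reactions.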

  19. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. The resulting successive-relaxation iterative method (SG-SR) realizes additional improvement in convergence speed over the standard Savitzky-Golay procedure. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
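    The clip-and-smooth iteration at the heart of such baseline-removal schemes can be sketched as follows. This is a generic illustration, not the RIA-SG-SR algorithm itself: a moving-average filter stands in for the Savitzky-Golay filter, and `omega` plays the role of the relaxation factor (values above 1 would accelerate convergence in the spirit of successive relaxation).

    ```python
    import numpy as np

    def smooth(y, w=15):
        # Moving-average stand-in for the Savitzky-Golay filter; edges are
        # padded by replication to limit boundary artefacts.
        yp = np.pad(y, w // 2, mode="edge")
        return np.convolve(yp, np.ones(w) / w, mode="valid")

    def iterative_baseline(y, n_iter=50, omega=1.0):
        """Iteratively estimate a slowly varying fluorescence baseline.

        Each pass smooths the current estimate and clips it to lie below the
        signal, so narrow Raman peaks are progressively excluded; omega is a
        relaxation factor applied to the update (an illustrative stand-in for
        the successive-relaxation idea)."""
        b = y.astype(float).copy()
        for _ in range(n_iter):
            target = np.minimum(b, smooth(b))
            b = b + omega * (target - b)  # omega = 1 recovers the plain clip step
        return b

    # Synthetic spectrum: linear fluorescence background + two narrow peaks.
    x = np.linspace(0.0, 1.0, 500)
    background = 2.0 + 1.5 * x
    peaks = np.exp(-((x - 0.3) / 0.005) ** 2) + np.exp(-((x - 0.7) / 0.005) ** 2)
    spectrum = background + peaks
    baseline = iterative_baseline(spectrum)
    raman = spectrum - baseline  # recovered peaks with the background removed
    ```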

  20. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  1. Finite element-integral simulation of static and flight fan noise radiation from the JT15D turbofan engine

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Horowitz, S. J.

    1982-01-01

    An iterative finite element integral technique is used to predict the sound field radiated from the JT15D turbofan inlet. The sound field is divided into two regions: the sound field within and near the inlet which is computed using the finite element method and the radiation field beyond the inlet which is calculated using an integral solution technique. The velocity potential formulation of the acoustic wave equation was employed in the program. For some single mode JT15D data, the theory and experiment are in good agreement for the far field radiation pattern as well as suppressor attenuation. Also, the computer program is used to simulate flight effects that cannot be performed on a ground static test stand.

  2. ITER's woes

    NASA Astrophysics Data System (ADS)

    jjeherrera; Duffield, John; ZoloftNotWorking; esromac; protogonus; mleconte; cmfluteguy; adivita

    2014-07-01

    In reply to the physicsworld.com news story “US sanctions on Russia hit ITER council” (20 May, http://ow.ly/xF7oc and also June p8), about how a meeting of the fusion experiment's council had to be moved from St Petersburg and the US Congress's call for ITER boss Osamu Motojima to step down.

  3. Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner

    2001-01-01

    Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…

  4. A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity.

    PubMed

    Wang, Jin; Zhang, Chen; Wang, Yuanyuan

    2017-05-30

    In photoacoustic tomography (PAT), total variation (TV) based iteration algorithms are reported to have good performance in PAT image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that effectively overcomes this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weightedly sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter, which evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. Results obtained show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection method (FBP) and TV algorithms in the quality of reconstructed images, with the peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In vitro experiments are performed for both the sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than the TV, with sharper image edges and clearer texture details. Both numerical simulation and in vitro experiments confirm that the DDTV provides a significant quality improvement of PAT reconstructed images for various directivity patterns.
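    The core idea of weighting gradients by directivity can be illustrated with a deliberately crude sketch that uses one dominant direction for the whole image; the DDTV model instead estimates a spatially varying orientation field plus a per-pixel reliability weight.

    ```python
    import numpy as np

    def gradients(img):
        # Forward differences, replicating the last row/column.
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def isotropic_tv(img):
        gx, gy = gradients(img)
        return float(np.sum(np.abs(gx) + np.abs(gy)))

    def directional_tv(img, alpha=0.2):
        """Down-weight gradients along the image's dominant directivity.

        Crude single-direction estimate for illustration; DDTV applies the
        same idea per pixel with an adaptively estimated orientation field."""
        gx, gy = gradients(img)
        if np.abs(gx).sum() >= np.abs(gy).sum():
            return float(np.sum(alpha * np.abs(gx) + np.abs(gy)))
        return float(np.sum(np.abs(gx) + alpha * np.abs(gy)))

    # Vertical-stripe texture: all variation is horizontal (gx), none vertical.
    stripes = np.tile(np.arange(16) % 2, (16, 1)).astype(float)
    ```

    On this regular texture, the directional penalty is much smaller than the isotropic one, which is why oriented texture survives DDTV regularisation better than plain TV.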

  5. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will still continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
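    A minimal numerical sketch of this matrix-based adjustment follows. The coupling matrix, the mild nonlinearity, and the target Isc values are all invented for illustration; in practice A comes from measured Isc responses to small source perturbations.

    ```python
    import numpy as np

    def measure_isc(settings, coupling):
        # Stand-in for measuring the three sub-cell Isc values; mildly nonlinear
        # so that iteration is actually needed (coupling values are invented).
        return coupling @ settings + 0.01 * settings**2

    coupling = np.array([[1.0, 0.2, 0.1],
                         [0.3, 1.0, 0.2],
                         [0.1, 0.3, 1.0]])
    target = np.array([1.00, 0.95, 0.90])   # "AM0-calibrated" Isc values (made up)

    s = np.array([0.8, 0.8, 0.8])           # initial source settings
    # Build the sensitivity matrix A once, from small perturbations of each source.
    eps = 1e-3
    base = measure_isc(s, coupling)
    A = np.column_stack([
        (measure_isc(s + eps * np.eye(3)[i], coupling) - base) / eps
        for i in range(3)
    ])
    for _ in range(6):                      # "four to six steps", per the abstract
        delta_isc = target - measure_isc(s, coupling)
        s = s + np.linalg.solve(A, delta_isc)   # delta_s = A^-1 · delta_Isc
    ```

    Because A is only approximate and never re-measured, the loop behaves like a quasi-Newton iteration: lamp ageing shifts the true response, but convergence survives as long as A stays roughly right.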

  6. Modelling of caesium dynamics in the negative ion sources at BATMAN and ELISE

    NASA Astrophysics Data System (ADS)

    Mimo, A.; Wimmer, C.; Wünderlich, D.; Fantz, U.

    2017-08-01

    The knowledge of Cs dynamics in negative hydrogen ion sources is a primary issue for achieving the ITER requirements for the Neutral Beam Injection (NBI) systems, i.e. one-hour operation with an accelerated ion current of 40 A of D- and a ratio between negative ions and co-extracted electrons below one. Production of negative ions is mostly achieved by conversion of hydrogen/deuterium atoms on a converter surface, which is caesiated in order to reduce the work function and increase the conversion efficiency. The understanding of the Cs transport and redistribution mechanism inside the source is necessary for the achievement of high performance. Cs dynamics was therefore investigated by means of numerical simulations performed with the Monte Carlo transport code CsFlow3D. Simulations of the prototype source (1/8 of the ITER NBI source size) have shown that the plasma distribution inside the source has the major effect on Cs dynamics during the pulse: asymmetry of the plasma parameters leads to asymmetry in the Cs distribution in front of the plasma grid. The simulated time traces and the general simulation results are in agreement with the experimental measurements. Simulations performed for the ELISE testbed (half of the ITER NBI source size) have shown an effect of the vacuum phase time on the amount and stability of Cs during the pulse. The sputtering of Cs due to back-streaming ions was reproduced by the simulations and is in agreement with the experimental observations: this can become a critical issue during long pulses, especially in the case of continuous extraction as foreseen for ITER. These results and the acquired knowledge of Cs dynamics will be useful for better management of Cs and thus for reducing its consumption, in the direction of the demonstration fusion power plant DEMO.

  7. The experiences of undergraduate nursing students with bots in Second Life®

    NASA Astrophysics Data System (ADS)

    Rose, Lesele H.

    As technology continues to transform education from the status quo of traditional lecture-style instruction to an interactive, engaging learning experience, students' experiences within the learning environment continue to change as well. This dissertation addressed the need for continuing research in advancing the implementation of technology in higher education. The purpose of this phenomenological study was to discover more about the experiences of undergraduate nursing students using standardized geriatric evaluation tools when interacting with scripted geriatric patient bots in a simulated instructional intake setting. Data were collected through a Demographics questionnaire, an Experiential questionnaire, and a Reflection questionnaire. Triangulation of data collection occurred through an automatically created log of the interactions with the two bots, and through an automatically recorded log of the participants' movements while in the simulated geriatric intake interview. The data analysis consisted of an iterative review of the questionnaires and the participants' logs in an effort to identify common themes, recurring comments, and issues which would benefit from further exploration. Findings revealed that the interactions with the bots were perceived as a valuable experience by the participants from the perspective of interacting with the Geriatric Evaluation Tools in the role of an intake nurse. Further research is indicated to explore instructional interactions with bots in effectively mastering the use of established Geriatric Evaluation Tools.

  8. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving a two-orders-of-magnitude BER improvement and a capacity improvement of 0.1 nats over the conventional chip-level OCDMA systems at a coding rate of 1/10.
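    The parallel-interference-cancellation principle can be sketched for a toy synchronous CDMA system. Note this simplified sketch uses bipolar codes and a noiseless linear channel rather than the paper's Poisson photon-counting model; it only illustrates how all users' estimates are refined simultaneously in each iteration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, N = 4, 64                       # users and chips per bit (toy sizes)
    codes = rng.choice([-1.0, 1.0], size=(K, N)) / np.sqrt(N)  # unit-energy codes
    bits = np.array([1.0, -1.0, 1.0, 1.0])

    r = codes.T @ bits                 # noiseless received chip vector
    R = codes @ codes.T                # code cross-correlation matrix (diag = 1)
    y = codes @ r                      # matched-filter outputs

    # Parallel interference cancellation: every user's decision is refined
    # simultaneously each iteration using the other users' previous decisions.
    b_hat = np.sign(y)
    for _ in range(5):
        interference = (R - np.diag(np.diag(R))) @ b_hat
        b_hat = np.sign(y - interference)
    ```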

  9. Iterative Overlap FDE for Multicode DS-CDMA

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no guard interval (GI) insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual interchip interference (ICI) after FDE distorts orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.

  10. A non-iterative extension of the multivariate random effects meta-analysis.

    PubMed

    Makambi, Kepher H; Seung, Hyunuk

    2015-01-01

    Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
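    For context, the univariate DerSimonian-Laird estimator shows why such methods can be non-iterative: the between-study variance has a closed form. The effect sizes and variances below are made up for illustration.

    ```python
    import numpy as np

    def dersimonian_laird(y, v):
        """Univariate DerSimonian-Laird random-effects meta-analysis.

        y: study effect estimates, v: their within-study variances. The
        between-study variance tau^2 has a closed form, so no iteration is
        needed; the paper extends this style of estimator to the
        multivariate case."""
        w = 1.0 / v
        mu_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q statistic
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # closed-form heterogeneity
        w_star = 1.0 / (v + tau2)                    # random-effects weights
        mu = np.sum(w_star * y) / np.sum(w_star)     # pooled random-effects mean
        return mu, tau2

    mu, tau2 = dersimonian_laird(np.array([0.3, 0.1, 0.5, 0.2]),
                                 np.array([0.01, 0.02, 0.015, 0.01]))
    ```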

  11. Finite element analysis of heat load of tungsten relevant to ITER conditions

    NASA Astrophysics Data System (ADS)

    Zinovev, A.; Terentyev, D.; Delannay, L.

    2017-12-01

    A computational procedure is proposed in order to predict the initiation of intergranular cracks in tungsten with ITER-specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of cracking initiation. The simulated heat load cycle is representative of edge-localized modes, which are anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the grain orientation with respect to the heat flux direction and to attain higher values if the flux is perpendicular to the elongated grains, where it apparently promotes crack initiation.
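    The first, macroscopic step is a standard transient heat-conduction problem. A 1-D explicit finite-difference sketch with an ELM-like surface flux gives the flavour; the material constants are rough room-temperature tungsten values and the pulse parameters are illustrative only.

    ```python
    import numpy as np

    # 1-D explicit finite-difference sketch of the macroscopic thermal step:
    # an ELM-like surface heat pulse on a tungsten slab.
    k_th = 170.0                    # thermal conductivity, W/(m K)
    rho_cp = 19.3e3 * 134.0         # density * specific heat, J/(m^3 K)
    alpha = k_th / rho_cp           # thermal diffusivity, ~6.6e-5 m^2/s

    nx, depth = 200, 2e-3           # 2 mm slab, 200 nodes
    dx = depth / nx
    dt = 0.4 * dx * dx / alpha      # explicit stability needs dt <= 0.5 dx^2/alpha
    q = 1.0e9                       # surface heat flux, W/m^2 (illustrative)
    t_pulse = 0.5e-3                # 0.5 ms pulse

    T = np.full(nx, 300.0)          # initial temperature, K
    t = 0.0
    while t < t_pulse:
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # Flux boundary at the surface (ghost-node form); far side held at 300 K.
        Tn[0] = T[0] + 2 * alpha * dt / dx**2 * (T[1] - T[0]) + 2 * dt * q / (rho_cp * dx)
        Tn[-1] = 300.0
        T = Tn
        t += dt
    surface_rise = T[0] - 300.0
    ```

    The resulting temperature field (and the thermal strain derived from it) is what would be recorded along the strain path and handed to the polycrystal model in the second step.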

  12. SIMPSON: a general simulation program for solid-state NMR spectroscopy.

    PubMed

    Bak, M; Rasmussen, J T; Nielsen, N C

    2000-12-01

    A computer program for fast and accurate numerical simulation of solid-state NMR experiments is described. The program is designed to emulate an NMR spectrometer by letting the user specify high-level NMR concepts such as spin systems, nuclear spin interactions, RF irradiation, free precession, phase cycling, coherence-order filtering, and implicit/explicit acquisition. These elements are implemented using the Tcl scripting language to ensure a minimum of programming overhead and direct interpretation without the need for compilation, while maintaining the flexibility of a full-featured programming language. Basically, there are no intrinsic limitations to the number of spins, types of interactions, sample conditions (static or spinning, powders, uniaxially oriented molecules, single crystals, or solutions), and the complexity or number of spectral dimensions for the pulse sequence. The applicability ranges from simple 1D experiments to advanced multiple-pulse and multiple-dimensional experiments, series of simulations, parameter scans, complex data manipulation/visualization, and iterative fitting of simulated to experimental spectra. A major effort has been devoted to optimizing the computation speed using state-of-the-art algorithms for the time-consuming parts of the calculations implemented in the core of the program using the C programming language. Modification and maintenance of the program are facilitated by releasing the program as open source software (General Public License) currently at http://nmr.imsb.au.dk. The general features of the program are demonstrated by numerical simulations of various aspects of REDOR, rotational resonance, DRAMA, DRAWS, HORROR, C7, TEDOR, POST-C7, CW decoupling, TPPM, F-SLG, SLF, SEMA-CP, PISEMA, RFDR, QCPMG-MAS, and MQ-MAS experiments. Copyright 2000 Academic Press.
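    At its core, such a simulator propagates a density matrix under the spin Hamiltonian and records an observable. A minimal single-spin free-precession sketch (nothing like SIMPSON's generality, which covers multi-spin systems, MAS, and full pulse sequences) looks like this:

    ```python
    import numpy as np

    # Single spin-1/2 free precession tracked with a density matrix.
    Ix = np.array([[0.0, 0.5], [0.5, 0.0]], dtype=complex)
    Ip = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # I+ = Ix + i*Iy

    w0 = 2 * np.pi * 100.0          # 100 Hz resonance offset
    dt = 1e-3                       # dwell time, s
    rho0 = Ix                       # state after an ideal 90-degree pulse

    fid = []
    for n in range(256):
        t = n * dt
        # Propagator for H = w0 * Iz is diagonal in the Zeeman basis.
        U = np.diag([np.exp(-1j * w0 * t / 2), np.exp(+1j * w0 * t / 2)])
        rho_t = U @ rho0 @ U.conj().T
        fid.append(np.trace(rho_t @ Ip))  # detected transverse magnetisation
    fid = np.array(fid)
    ```

    Fourier transforming `fid` would give a single line at the 100 Hz offset; everything a full simulator adds (couplings, spinning, pulses, phase cycling) enters through the Hamiltonian and the propagator.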

  13. UCSD Performance in the Edge Plasma Simulation (EPSI) Project. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tynan, George Robert

    This report contains a final report on the activities of UC San Diego PI G.R. Tynan and his collaborators as part of the EPSI Project, that was led by Dr. C.S. Chang, from PPPL. As a part of our work, we carried out several experiments on the ALCATOR C-MOD tokamak device, aimed at unraveling the “trigger” or cause of the spontaneous transition from low-mode confinement (L-mode) to high confinement (H-mode) that is universally observed in tokamak devices, and is planned for use in ITER.

  14. High-order dynamic modeling and parameter identification of structural discontinuities in Timoshenko beams by using reflection coefficients

    NASA Astrophysics Data System (ADS)

    Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue

    2013-02-01

    Properties of discontinuities, such as bolt joints and cracks in the waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and then these parameters are identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through the least-square fitting iteration method, of which the undetermined model parameters are updated in iteration to fit the dynamic reflection coefficient curve with the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite- and finite-beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with the maximum errors of 6.8% and 8.7%, respectively; (2) and the high-order discontinuities can be identified with the maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict the complex discontinuity much more accurately than the one-order discontinuity model.
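    The least-square fitting iteration can be sketched as a Gauss-Newton loop with a numerically evaluated Jacobian. The one-pole reflection-coefficient model below is a toy stand-in for the paper's sub-models of inertia, stiffness and damping components:

    ```python
    import numpy as np

    # Toy discontinuity model: a complex reflection curve with parameters
    # theta = (c, m). The functional form is invented for illustration.
    def model(theta, w):
        c, m = theta
        return 1.0 / (1.0 + 1j * w * c - w**2 * m)

    def residual(theta, w, data):
        r = model(theta, w) - data
        return np.concatenate([r.real, r.imag])  # stack real/imag parts

    def gauss_newton(theta, w, data, n_iter=20, eps=1e-7):
        """Least-square fitting iteration: the undetermined model parameters
        are updated at every step to fit the reflection-coefficient curve."""
        for _ in range(n_iter):
            r0 = residual(theta, w, data)
            J = np.column_stack([
                (residual(theta + eps * np.eye(len(theta))[i], w, data) - r0) / eps
                for i in range(len(theta))
            ])
            step, *_ = np.linalg.lstsq(J, -r0, rcond=None)
            theta = theta + step
        return theta

    w = np.linspace(1.0, 100.0, 50)
    true_theta = np.array([0.02, 1.0e-4])
    data = model(true_theta, w)                       # synthetic "measured" curve
    theta_fit = gauss_newton(np.array([0.01, 5.0e-5]), w, data)
    ```

    With noisy measured data, the same loop would stop at a residual floor instead of (near) zero, which is where the reported 7-16% identification errors come from.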

  15. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies and practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  16. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. Results of these studies and practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  17. Parallel Implementation of 3-D Iterative Reconstruction With Intra-Thread Update for the jPET-D4

    NASA Astrophysics Data System (ADS)

    Lam, Chih Fung; Yamaya, Taiga; Obi, Takashi; Yoshida, Eiji; Inadama, Naoko; Shibuya, Kengo; Nishikido, Fumihiko; Murayama, Hideo

    2009-02-01

    One way to speed up iterative image reconstruction is parallel computing with a computer cluster. However, as the number of computing threads increases, parallel efficiency decreases due to network transfer delay. In this paper, we propose a method to reduce data transfer between computing threads by introducing an intra-thread update. The update factor is collected from each slave thread and a global image is updated as usual in the first K sub-iterations. In the rest of the sub-iterations, the global image is only updated at an interval which is controlled by a parameter L. In between those global updates, the intra-thread update is carried out, whereby an image update is performed in each slave thread locally. We investigated combinations of the K and L parameters based on a parallel implementation of RAMLA for the jPET-D4 scanner. Our evaluation used four workstations with a total of 16 slave threads. Each slave thread calculated a different set of LORs, which are divided according to ring difference numbers. We assessed image quality of the proposed method with a hotspot simulation phantom. The figures of merit were the full-width-half-maximum of the hotspots and the background normalized standard deviation. At an optimum K and L setting, we did not find significant change in the output images. We also applied the proposed method to a Hoffman phantom experiment and found that the difference due to the intra-thread update was negligible. With the intra-thread update, computation time could be reduced by about 23%.
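    The update schedule itself is simple to state in code. The interpretation below (a global update every L-th sub-iteration after the first K, local intra-thread updates otherwise) is one plausible reading of the parameters, assumed for illustration:

    ```python
    # Hypothetical K/L schedule: the first K sub-iterations always synchronise
    # the global image across threads; afterwards a global update happens only
    # every L-th sub-iteration, with cheap local updates in between.
    def update_kind(sub_iter, K, L):
        """Return 'global' or 'local' for a 0-based sub-iteration index."""
        if sub_iter < K or (sub_iter - K) % L == 0:
            return "global"
        return "local"

    schedule = [update_kind(i, K=3, L=4) for i in range(12)]
    ```

    Larger L trades network synchronisation (the dominant cost) for more locally evolved images, which is why an optimum K and L setting exists.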

  18. EU Development of High Heat Flux Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linke, J.; Lorenzetto, P.; Majerus, P.

    2005-04-15

    The development of plasma facing components for next-step fusion devices in Europe is strongly focused on ITER. Here a wide spectrum of different design options for the divertor target and the first wall has been investigated with tungsten, CFC, and beryllium armour. Electron beam simulation experiments have been used to determine the performance of high heat flux components under ITER-specific thermal loads. Beside thermal fatigue loads with power density levels up to 20 MW m⁻², off-normal events are a serious concern for the lifetime of plasma facing components. These phenomena are expected to occur on a time scale of a few milliseconds (plasma disruptions) or several hundred milliseconds (vertical displacement events) and have been identified as a major source for the production of neutron-activated metallic or tritium-enriched carbon dust, which is of serious importance from a safety point of view. The irradiation-induced material degradation is another critical concern for future D-T-burning fusion devices. In ITER the integrated neutron fluence to the first wall and the divertor armour will remain in the order of 1 dpa and 0.7 dpa, respectively. This value is low compared to future commercial fusion reactors; nevertheless, a non-negligible degradation of the materials has been detected, both for mechanical and thermal properties, in particular for the thermal conductivity of carbon-based materials. Beside the degradation of individual material properties, the high heat flux performance of actively cooled plasma facing components has been investigated under ITER-specific thermal and neutron loads.

  19. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
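    Following shortest digraph paths to a group center image is the classic single-source shortest-path problem; a Dijkstra sketch over a made-up digraph of images shows the path an image would be registered along:

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest-path distances and predecessors from `source` in a digraph."""
        dist, prev = {source: 0.0}, {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        return dist, prev

    # Arcs and weights are invented; in the paper the weights come from a
    # sparse-coding-based groupwise similarity measure, and the digraph is
    # rebuilt at every iteration as the registration proceeds.
    graph = {
        "imgA": [("imgB", 0.4), ("center", 1.5)],
        "imgB": [("center", 0.3)],
        "imgC": [("imgB", 0.6)],
    }
    dist, prev = dijkstra(graph, "imgA")

    # Register imgA to the group center via the most similar intermediate image.
    path = ["center"]
    while path[-1] != "imgA":
        path.append(prev[path[-1]])
    path.reverse()
    ```

    Decomposing the large deformation imgA-to-center into the two smaller steps along `path` is exactly the motivation given in the abstract.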

  20. Modelling of transitions between L- and H-mode in JET high plasma current plasmas and application to ITER scenarios including tungsten behaviour

    NASA Astrophysics Data System (ADS)

    Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET

    2017-08-01

    The dynamics for the transition from L-mode to a stationary high-QDT H-mode regime in ITER is expected to be qualitatively different to present experiments. Differences may be caused by a low fuelling efficiency of recycling neutrals, which influences the post-transition plasma density evolution on the one hand. On the other hand, the effect of the plasma density evolution itself both on the alpha heating power and on the edge power flow required to sustain the H-mode confinement needs to be considered. This paper presents results of modelling studies of the transition to the stationary high-QDT H-mode regime in ITER with the JINTRAC suite of codes, which include optimisation of the plasma density evolution to ensure a robust achievement of high-QDT regimes in ITER on the one hand and the avoidance of tungsten accumulation in this transient phase on the other hand. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport, which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER-relevant conditions (highest achievable current and power, H98,y ~ 1.0, low collisionality, comparable evolution in Pnet/PL-H, but different ρ*, Ti/Te, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode, GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency. Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular, the increase in edge plasma density is dominated by this edge transport reduction, as the calculated neutral influx across the separatrix remains unchanged (or even slightly decreases) following the H-mode transition. JINTRAC modelling of H-mode transitions for the ITER 15 MA / 5.3 T high-QDT scenarios with the same modelling assumptions as those derived from JET experiments has been carried out. The modelling finds that it is possible to access high-QDT conditions robustly for additional heating power levels of PAUX ⩾ 53 MW by optimising core and edge plasma fuelling in the transition from L-mode to high-QDT H-mode. An initial period of low plasma density, in which the plasma accesses the H-mode regime and the alpha heating power increases, needs to be considered after the start of the additional heating, which is then followed by a slow density ramp. Both the duration of the low density phase and the density ramp-rate depend on boundary and operational conditions and can be optimised to minimise the resistive flux consumption in this transition phase. The modelling also shows that fuelling schemes optimised for a robust access to high-QDT H-mode in ITER are also optimum for the prevention of the contamination of the core plasma by tungsten during this phase.

  1. Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?

    NASA Astrophysics Data System (ADS)

    Swartjes, Ivo; Theune, Mariët

    We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.

  2. Virtual reality cataract surgery training: learning curves and concurrent validity.

    PubMed

    Selvander, Madeleine; Åsman, Peter

    2012-08-01

    To investigate initial learning curves on a virtual reality (VR) eye surgery simulator and whether achieved skills are transferable between tasks. Thirty-five medical students were randomized to complete ten iterations on either the VR Capsulorhexis module (group A) or the Cataract navigation training module (group B), and then two iterations on the other module. Learning curves were compared between groups. The second Capsulorhexis video was saved and evaluated with the performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS). The students' stereoacuity was examined. Both groups demonstrated significant improvements in performance over the ten iterations: group A for all parameters analysed, including score (p < 0.0001), time (p < 0.0001) and corneal damage (p = 0.0003); group B for time (p < 0.0001) and corneal damage (p < 0.0001), but not for score (p = 0.752). Training on one module did not improve performance on the other. Capsulorhexis score correlated significantly with evaluation of the videos using the OSACSS performance rating tool. For stereoacuity <120 and ≥120 seconds of arc, the sum of both modules' second-iteration scores was 73.5 and 41.0, respectively (p = 0.062). An initial rapid improvement in performance on the simulator with repeated practice was shown. For capsulorhexis, ten iterations with only simulator feedback are not enough to reach a plateau in overall score. Skills transfer between modules was not found, suggesting benefits from training on both modules. Stereoacuity may be of importance in the recruitment and training of new cataract surgeons; additional studies are needed to investigate this further. Concurrent validity was found for the Capsulorhexis module. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.

  3. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  4. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    The performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes, XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD for the study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. A collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study pedestal-ELM cycles.

  5. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation of telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce the numerical cost. Recently, a conjugate gradient method based on a wavelet parametrization of the turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low; a way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi-conjugate adaptive optics (MCAO) system simulated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory, we demonstrate the robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
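    The role the preconditioner plays above, cutting the iteration count of a conjugate gradient solver, can be demonstrated on a deliberately ill-conditioned SPD system. This is a sketch under stated assumptions: a plain Jacobi (diagonal) preconditioner and a synthetic matrix stand in for the paper's frequency-dependent wavelet preconditioner and tomography operator.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Badly conditioned SPD test matrix standing in for the tomography operator.
n = 200
d = np.logspace(0, 4, n)                       # diagonal spread 1 .. 1e4
A = diags([d, 0.1 * np.ones(n - 1), 0.1 * np.ones(n - 1)],
          [0, -1, 1]).tocsr()
b = np.ones(n)

def solve(M=None):
    its = 0
    def cb(xk):                                # one call per CG iteration
        nonlocal its
        its += 1
    x, info = cg(A, b, M=M, callback=cb, maxiter=10 * n)
    return x, info, its

# Jacobi preconditioner: M approximates A^-1 by inverting the diagonal.
M = LinearOperator((n, n), matvec=lambda v: v / d)
x0, info0, its_plain = solve()
x1, info1, its_pc = solve(M)
```

    On this matrix the Jacobi-preconditioned solve needs far fewer iterations than the plain one; the paper's frequency-dependent preconditioner plays the same role for the wavelet system.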

  6. Erosion simulation of first wall beryllium armour under ITER transient heat loads

    NASA Astrophysics Data System (ADS)

    Bazylev, B.; Janeschitz, G.; Landman, I.; Pestchanyi, S.; Loarte, A.

    2009-04-01

    Beryllium is foreseen as the plasma-facing armour for the first wall in ITER, in the form of Be-clad blanket modules in a macrobrush design with a brush size of about 8-10 cm. In ITER, significant heat loads are expected at the main chamber wall during transient events (TEs), which may lead to substantial damage of the Be armour. The main mechanisms of metallic target damage are surface melting and melt-motion erosion, which determine the lifetime of the plasma-facing components. Melting thresholds and melt-layer depths of the Be armour under transient loads are estimated for different bulk Be temperatures and different transient load shapes. The melt-motion damage of the Be macrobrush armour caused by the tangential friction force and the Lorentz force is analysed for bulk Be and different Be-brush sizes. The damage of the first wall under the radiative loads arising during mitigated disruptions is also numerically simulated.

  7. Parallel Dynamics Simulation Using a Krylov-Schwarz Linear Solution Scheme

    DOE PAGES

    Abhyankar, Shrirang; Constantinescu, Emil M.; Smith, Barry F.; ...

    2016-11-07

    Fast dynamics simulation of large-scale power systems is a computational challenge because of the need to solve a large set of stiff, nonlinear differential-algebraic equations at every time step. The main bottleneck in dynamic simulations is the solution of a linear system during each nonlinear iteration of Newton's method. In this paper, we present a parallel Krylov-Schwarz linear solution scheme that uses the Krylov subspace-based iterative linear solver GMRES with an overlapping restricted additive Schwarz preconditioner. Performance tests of the proposed Krylov-Schwarz scheme for several large test cases ranging from 2,000 to 20,000 buses, including a real utility network, show good scalability on different computing architectures.
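    The solver combination named above, GMRES wrapped with an overlapping restricted additive Schwarz (RAS) preconditioner, can be sketched on a small 1D model problem. This is a stand-in under stated assumptions: real power-system Jacobians are far larger and stiffer, and the block sizes and overlap below are illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator, splu

# Shifted 1D Laplacian as a small stand-in for the power-system Jacobian.
n = 400
A = diags([2.1 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
          [0, -1, 1]).tocsc()
b = np.ones(n)

# One-level restricted additive Schwarz (RAS): factor A on overlapping
# blocks, but add each local correction back only on the block's owned,
# non-overlapping part.
nblocks, overlap = 8, 10
size = n // nblocks
owned = [range(i * size, (i + 1) * size) for i in range(nblocks)]
ext = [range(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
       for i in range(nblocks)]
lus = [splu(A[list(e), :][:, list(e)].tocsc()) for e in ext]

def ras(v):
    z = np.zeros_like(v)
    for own, e, lu in zip(owned, ext, lus):
        ze = lu.solve(v[list(e)])      # local subdomain solve
        for j in own:                  # restricted prolongation
            z[j] = ze[j - e.start]
    return z

M = LinearOperator((n, n), matvec=ras)

def solve(M=None):
    its = 0
    def cb(rk):                        # called once per inner iteration
        nonlocal its
        its += 1
    x, info = gmres(A, b, M=M, restart=50, maxiter=100,
                    callback=cb, callback_type='pr_norm')
    return x, info, its

x0, info0, it_plain = solve()
x1, info1, it_ras = solve(M)
```

    In a parallel implementation each block factorization and local solve lives on its own processor, which is where the scalability reported in the abstract comes from.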

  9. Discrete-Time Deterministic $Q$ -Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed algorithm, the iterative Q function is updated over the entire state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, which simplifies the convergence criterion on the learning rates required by traditional Q-learning algorithms. In the convergence analysis, upper and lower bounds of the iterative Q function are analysed to obtain the convergence criterion, instead of analysing the iterative Q function itself. For convenience of analysis, the convergence properties of the deterministic Q-learning algorithm are first developed for the undiscounted case; then, taking the discount factor into account, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
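    The full state-and-control sweep that distinguishes this algorithm from online Q-learning can be illustrated on a toy deterministic MDP. The chain environment and discount factor below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, actions move left/right,
# reward 1 for reaching the absorbing goal state 4, discount 0.9.
# Each iteration sweeps ALL state-action pairs, as in the paper's
# algorithm, rather than updating one visited pair at a time.
nS, nA, gamma = 5, 2, 0.9

def step(s, a):
    if s == nS - 1:                       # absorbing goal
        return s, 0.0
    s2 = max(0, s - 1) if a == 0 else s + 1
    return s2, (1.0 if s2 == nS - 1 else 0.0)

Q = np.zeros((nS, nA))
history = []
for it in range(100):
    Qn = np.empty_like(Q)
    for s in range(nS):                   # full sweep over states ...
        for a in range(nA):               # ... and controls
            s2, r = step(s, a)
            Qn[s, a] = r + gamma * Q[s2].max()
    history.append(np.abs(Qn - Q).max())
    Q = Qn
    if history[-1] < 1e-10:               # iterative Q function converged
        break

policy = Q.argmax(axis=1)                 # greedy policy from converged Q
```

    On this chain the greedy policy moves right in every non-goal state, and Q[s, right] converges to gamma raised to the distance-to-goal power.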

  10. Robust design of feedback feed-forward iterative learning control based on 2D system theory for linear uncertain systems

    NASA Astrophysics Data System (ADS)

    Li, Zhifu; Hu, Yueming; Li, Di

    2016-08-01

    For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, comprising an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller improves performance along the iteration direction, and the feedback controllers improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction; both guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI), which can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
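    The two control directions described above, learning along iterations plus feedback along time, can be illustrated with a simple P-type ILC sketch on a scalar plant. The hand-picked gains below stand in for the paper's LMI-designed gains, and the plant is an illustrative assumption.

```python
import numpy as np

# Scalar plant x(t+1) = a*x(t) + b*u(t), y(t) = c*x(t+1), tracked over a
# finite horizon on every trial.  A P-type learning update with a
# current-iteration feedback term stands in for the paper's LMI-designed
# feedback feed-forward scheme (gains are hand-picked, not from an LMI).
a, b, c = 0.8, 1.0, 1.0
T = 20
yd = np.sin(0.3 * np.arange(T))            # reference trajectory

L, Kfb = 0.8, 0.3                           # learning and feedback gains

def run_trial(u):
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        ut = u[t] + Kfb * (yd[t] - c * x)   # time-direction feedback
        x = a * x + b * ut
        y[t] = c * x
    return y

u = np.zeros(T)
errs = []
for k in range(50):                         # iteration-direction learning
    y = run_trial(u)
    e = yd - y
    errs.append(np.abs(e).max())
    u = u + L * e                           # feed-forward ILC update
```

    With |1 - L*c*b| < 1 the tracking error contracts from trial to trial and becomes negligible over the finite horizon, which is the monotone iteration-direction convergence the paper formalises for uncertain systems.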

  11. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large-area (A_source,ITER = 0.9 × 2 m²) hydrogen negative ion (NI) source constitutes a crucial step in the construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is used. A direct cross-checked analysis of simulation and experimental results from the ITER-relevant BATMAN source testbed, which has a smaller area (A_source,BATMAN ≈ 0.32 × 0.59 m²), has been conducted for a low-perveance beam for which a full set of plasma parameters is available. ONIX has been partially benchmarked by comparison with results obtained using KOBRA3D, a commercial particle-tracing code for positive ion extraction. Very good agreement has been found in terms of the meniscus position and shape for simulations at different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only the extraction potential but also the initial plasma density and its electronegativity play a crucial role in the meniscus formation. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, allowing direct NI extraction. Most of the surface-produced NIs do not reach the plasma bulk, but move directly towards the extraction grid, guided by the extraction field. Even for an artificially increased electronegativity of the bulk plasma, the extracted NI current from this region is low. This observation indicates the high relevance of direct NI extraction: the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction of surface-produced ions must be present in order to obtain a sufficiently high extracted NI current density. The calculated extracted currents, of both ions and electrons, agree rather well with the experiment.
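    The Child-Langmuir law invoked above bounds the space-charge-limited current density by the extraction voltage and gap. A quick evaluation, where the voltage and gap values are illustrative assumptions rather than figures from the paper:

```python
from math import sqrt

# Child-Langmuir space-charge-limited current density:
#   J = (4/9) * eps0 * sqrt(2*q/m) * V**1.5 / d**2
# Illustrative numbers only: H- ions across a 4 mm gap at 10 kV.
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
QE   = 1.602176634e-19       # elementary charge, C
M_H  = 1.67262192e-27        # ~proton mass for H-, kg

def child_langmuir(V, d, q=QE, m=M_H):
    """Space-charge-limited current density in A/m^2."""
    return (4.0 / 9.0) * EPS0 * sqrt(2.0 * q / m) * V**1.5 / d**2

J = child_langmuir(V=10e3, d=4e-3)        # A/m^2
J_mA_cm2 = J * 0.1                        # 1 A/m^2 = 0.1 mA/cm^2
```

    The V^(3/2)/d^2 scaling is why the abstract ties the meniscus shape to both the extraction potential and the plasma density feeding the sheath.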

  12. Validation of Kinetic-Turbulent-Neoclassical Theory for Edge Intrinsic Rotation in DIII-D Plasmas

    NASA Astrophysics Data System (ADS)

    Ashourvan, Arash

    2017-10-01

    Recent experiments on DIII-D with low-torque neutral beam injection (NBI) have provided a validation of a new model of momentum generation in a wide range of conditions spanning L- and H-mode with direct ion and electron heating. A challenge in predicting the bulk rotation profile for ITER has been to capture the physics of momentum transport near the separatrix and steep-gradient region. A recent theory presents a model for edge momentum transport which predicts the value and direction of the main-ion intrinsic velocity at the pedestal top, generated by passing orbits in the inhomogeneous turbulent field. In this study, this model-predicted velocity is tested on DIII-D for a database of 44 low-torque NBI discharges comprising both L- and H-mode plasmas. For moderate NBI powers (P_NBI < 4 MW), the model prediction agrees well with the experiments for both L- and H-mode. At higher NBI power the experimental rotation is observed to saturate and even degrade compared to theory. TRANSP-NUBEAM simulations performed for the database show that for discharges with nominally balanced, but high-powered, NBI, the net injected torque through the edge can exceed 1 N·m in the counter-current direction. The theory model has been extended to compute the rotation degradation from this counter-current NBI torque by solving a reduced momentum evolution equation for the edge, and the revised velocity prediction is found to be in agreement with experiment. Projecting to the ITER baseline scenario, this model predicts a pedestal-top rotation (ρ ~ 0.9) of about 4 krad/s. Using the theory-modelled, and now tested, velocity to predict the bulk plasma rotation opens up a path to more confidently projecting confinement and stability in ITER. Supported by the US DOE under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  13. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from a computational point of view compared with direct methods. In particular, an iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations with eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.

  14. Integrated simulations of saturated neoclassical tearing modes in DIII-D, Joint European Torus, and ITER plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpern, Federico D.; Bateman, Glenn; Kritz, Arnold H.

    2006-06-15

    A revised version of the ISLAND module [C. N. Nguyen et al., Phys. Plasmas 11, 3604 (2004)] is used in the BALDUR code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)] to carry out integrated modeling simulations of DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)], Joint European Torus (JET) [P. H. Rebut et al., Nucl. Fusion 25, 1011 (1985)], and ITER [R. Aymar et al., Plasma Phys. Control. Fusion 44, 519 (2002)] tokamak discharges in order to investigate the adverse effects of multiple saturated magnetic islands driven by neoclassical tearing modes (NTMs). Simulations are carried out with a predictive model for the temperature and density pedestal at the edge of the high-confinement-mode (H-mode) plasma and with core transport described using the Multi-Mode model. The ISLAND module, which is used to compute magnetic island widths, includes the effects of arbitrary aspect ratio and plasma cross-sectional shape, the effect of the neoclassical bootstrap current, and the effect of the distortion in the shape of each magnetic island caused by the radial variation of the perturbed magnetic field. Radial transport is enhanced across the width of each magnetic island within the BALDUR integrated modeling simulations in order to produce a self-consistent local flattening of the plasma profiles. It is found that the main consequence of the NTM magnetic islands is a decrease in the central plasma temperature and total energy. For the DIII-D and JET discharges, inclusion of the NTMs typically results in a decrease in total energy of the order of 15%. In simulations of ITER, the saturated magnetic island widths normalized by the plasma minor radius, for the lowest-order individual tearing modes, are approximately 24% for the 2/1 mode and 12% for the 3/2 mode. As a result, the ratio of ITER fusion power to heating power (fusion Q) is reduced from Q = 10.6 in simulations with no NTM islands to Q = 2.6 in simulations with fully saturated NTM islands.

  15. Iterative image-domain ring artifact removal in cone-beam CT

    NASA Astrophysics Data System (ADS)

    Liang, Xiaokun; Zhang, Zhicheng; Niu, Tianye; Yu, Shaode; Wu, Shibin; Li, Zhicheng; Zhang, Huailing; Xie, Yaoqin

    2017-07-01

    Ring artifacts in cone-beam computed tomography (CBCT) images are caused by pixel gain variations in flat-panel detectors, and may lead to structured non-uniformities and deterioration of image quality. The purpose of this study is to propose a general ring artifact removal method for CBCT images. The method operates in the polar coordinate system, where ring artifacts manifest as stripe artifacts. Using relative total variation, the CBCT images are first smoothed to generate template images with few image details and ring artifacts. By subtracting the template images from the CBCT images, residual images containing the image details and ring artifacts are generated. As a ring artifact manifests as a stripe artifact in the polar coordinate system, the artifact image can be extracted from the residual image by taking the mean value along the stripe direction; the image details are recovered by subtracting the artifact image from the residual image. Finally, the image details are added back to the template image to generate the corrected images. The proposed framework is iterated until the change in the extracted ring artifacts is minimized. We use a 3D Shepp-Logan phantom, a Catphan©504 phantom, a uniform acrylic cylinder, and images of a head patient to evaluate the proposed method. In the experiments using simulated data, the spatial uniformity is increased by a factor of 1.68 and the structural similarity index is increased from 87.12% to 95.50% using the proposed method. In the experiment using clinical data, our method shows high efficiency in ring artifact removal while preserving the image structure and detail. The proposed iterative approach for ring artifact removal in cone-beam CT is practical and attractive for CBCT-guided radiation therapy.
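    The core polar-domain idea above, that a ring becomes a stripe constant along the angle axis and can be isolated by averaging the residual over angle, can be sketched on synthetic data. Assumptions: a plain box blur along the radial axis stands in for the paper's relative-total-variation smoothing, and the phantom below is synthetic.

```python
import numpy as np

# Work directly in the polar domain (rows = radius, cols = angle), where a
# ring artifact is a horizontal stripe: constant along the angle axis.
rng = np.random.default_rng(1)
nr, na = 64, 180
clean = np.outer(np.linspace(1.0, 0.2, nr), np.ones(na))   # smooth "anatomy"
ring = np.zeros(nr); ring[20] = 0.5; ring[40] = -0.3        # per-radius gain
polar = clean + ring[:, None] + 0.01 * rng.standard_normal((nr, na))

def box_blur_rows(img, w=5):
    """Box blur along the radial axis (stand-in for RTV smoothing)."""
    pad = np.pad(img, ((w // 2, w // 2), (0, 0)), mode='edge')
    return np.stack([pad[i:i + img.shape[0]] for i in range(w)]).mean(axis=0)

corrected = polar.copy()
for _ in range(3):                       # iterate, as in the paper
    template = box_blur_rows(corrected)  # few details, weakened stripes
    residual = corrected - template      # details + stripe artifact
    stripe = residual.mean(axis=1, keepdims=True)  # constant over angle
    corrected = corrected - stripe       # remove current stripe estimate
```

    Averaging over the angle axis cancels the (zero-mean-per-row) anatomy details and noise while preserving the stripe, so each pass shrinks the per-radius artifact amplitude.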

  16. Design of a -1 MV dc UHV power supply for ITER NBI

    NASA Astrophysics Data System (ADS)

    Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.

    2009-05-01

    Procurement of the dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency, as the Japan Domestic Agency (JADA) for ITER, contributes the dc -1 MV ultra-high-voltage (UHV) components, such as the dc -1 MV generator, the transmission line and the -1 MV insulating transformer, for the ITER NBI power supply. An inverter frequency of 150 Hz for the -1 MV power supply and the major circuit parameters have been proposed and adopted for the ITER NBI. The dc UHV insulation has been carefully designed, since dc long-pulse insulation is quite different from conventional ac insulation or dc short-pulse systems. A multi-layer insulation structure of the transformer for long pulses of up to 3600 s has been designed with electric field simulation, and based on the simulation the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. JADA provides an effective surge suppression system composed of core snubbers and resistors. The input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of breakdown at -1 MV.

  17. A Burning Plasma Experiment: the role of international collaboration

    NASA Astrophysics Data System (ADS)

    Prager, Stewart

    2003-04-01

    The world effort to develop fusion energy is at the threshold of a new stage in its research: the investigation of burning plasmas. A burning plasma is self-heated. The 100 million degree temperature of the plasma is maintained by the heat generated by the fusion reactions themselves, as occurs in burning stars. The fusion-generated alpha particles produce new physical phenomena that are strongly coupled together as a nonlinear complex system, posing a major plasma physics challenge. Two attractive options are being considered by the US fusion community as burning plasma facilities: the international ITER experiment and the US-based FIRE experiment. ITER (the International Thermonuclear Experimental Reactor) is a large, power-plant scale facility. It was conceived and designed by a partnership of the European Union, Japan, the Soviet Union, and the United States. At the completion of the first engineering design in 1998, the US discontinued its participation. FIRE (the Fusion Ignition Research Experiment) is a smaller, domestic facility that is at an advanced pre-conceptual design stage. Each facility has different scientific, programmatic and political implications. Selecting the optimal path for burning plasma science is itself a challenge. Recently, the Fusion Energy Sciences Advisory Committee recommended a dual path strategy in which the US seek to rejoin ITER, but be prepared to move forward with FIRE if the ITER negotiations do not reach fruition by July, 2004. Either the ITER or FIRE experiment would reveal the behavior of burning plasmas, generate large amounts of fusion power, and be a huge step in establishing the potential of fusion energy to contribute to the world's energy security.

  18. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers

    NASA Astrophysics Data System (ADS)

    Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun

    2015-12-01

    Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulation of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the functions involved. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and the coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to enable real-time identification, the model is then simplified by linearizing the normal and shear stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified from the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on the simplified models require less calculation time, making them more suitable for real-time applications. The models have less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
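    As a minimal illustration of identifying soil parameters from measured data, the classic Bekker pressure-sinkage relation (one small piece of the kind of terramechanics model discussed here) can be fitted in the log domain. The parameter values, noise level, and the log-linear fit itself are illustrative assumptions, not the paper's iterative method.

```python
import numpy as np

# Bekker pressure-sinkage relation p = k * z**n, where k lumps k_c/b + k_phi.
# Synthetic "measured" data with multiplicative noise, then a log-domain
# ordinary-least-squares fit:  log p = log k + n * log z.
rng = np.random.default_rng(7)
k_true, n_true = 50.0, 1.1                # illustrative soil parameters
z = np.linspace(0.005, 0.05, 30)          # sinkage samples [m]
p = k_true * z**n_true * (1 + 0.02 * rng.standard_normal(z.size))

n_est, logk_est = np.polyfit(np.log(z), np.log(p), 1)
k_est = np.exp(logk_est)
```

    The full rover problem couples many such parameters through the wheel-soil contact integrals, which is why the paper needs sensitivity grouping and iteration rather than a single linear fit.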

  19. Modelling of thermal shock experiments of carbon based materials in JUDITH

    NASA Astrophysics Data System (ADS)

    Ogorodnikova, O. V.; Pestchanyi, S.; Koza, Y.; Linke, J.

    2005-03-01

    The interaction of hot plasma with material in fusion devices can result in material erosion and irreversible damage. Carbon based materials are proposed for ITER divertor armour. To simulate carbon erosion under high heat fluxes, electron beam heating in the JUDITH facility has been used. In this paper, carbon erosion under energetic electron impact is modeled by the 3D thermomechanics code 'PEGASUS-3D'. The code is based on a crack generation induced by thermal stress. The particle emission observed in thermal shock experiments is a result of breaking bonds between grains caused by thermal stress. The comparison of calculations with experimental data from JUDITH shows good agreement for various incident power densities and pulse durations. A realistic mean failure stress has been found. Pre-heating of test specimens results in earlier onset of brittle destruction and enhanced particle loss in agreement with experiments.

  20. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of spatio-temporal spreading of tsunami waves both recorded from past events and hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in a sense that the quality of these datasets is assured. This is a prerequisite as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, being a derived value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates the changes in many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami-results within model iterations in little time. This is a significant improvement to linear processing on dedicated desktop machines or servers. This allows for accelerated/improved visual quality checking iterations, which in turn can provide a positive feedback into the overall model improvement iteratively. 
An approach to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The challenges addressed include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services for generating derived and customized simulation products are foreseen to be provided via an EDA service, supporting on-demand processing for specific threat parameters and accommodating model improvements.
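The batch pattern described above (hundreds to thousands of model runs, one per parameter set, farmed out to workers) can be sketched generically. This stand-in uses a local thread pool rather than a real compute cluster, and `run_tsunami_case` is a hypothetical placeholder for one numerical model run:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def run_tsunami_case(params):
    """Hypothetical placeholder for one tsunami model run."""
    magnitude, depth = params
    # stand-in arithmetic for the real numerical model
    return magnitude, depth, math.exp(magnitude - 7.0) / math.sqrt(depth)

# one case per distinct initial parameter setting
cases = [(m, d) for m in (7.0, 7.5, 8.0) for d in (10.0, 20.0, 40.0)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_tsunami_case, cases))
print(len(results))
```

On a real CCE the same fan-out would go through the cluster scheduler instead of an in-process pool, but the map-over-parameter-sets structure is identical.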

  1. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inverses in O(log2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
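A serial sketch of the diagonal-preconditioner variant is shown below (an illustration only; the paper's contribution is the O(log2 n) parallel formulation for the RMP, which is not reproduced here). The 7x7 system size mirrors the seven-degree-of-freedom example:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=None):
    """Preconditioned CG; M_inv_diag holds the inverse of the diagonal preconditioner."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv_diag * r            # apply preconditioner: z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# small SPD test system (7x7, standing in for a 7-DOF mass matrix)
rng = np.random.default_rng(0)
B = rng.standard_normal((7, 7))
A = B @ B.T + 7 * np.eye(7)       # SPD by construction
b = rng.standard_normal(7)
x, iters = pcg(A, b, 1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```

In exact arithmetic CG terminates in at most n iterations; the preconditioner's payoff is fewer iterations on ill-conditioned systems.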

  2. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging given the complex geometry and the strong neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
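The spirit of a code-to-code comparison can be shown on a deliberately tiny example: two independent analog Monte-Carlo estimates of uncollided transmission through a purely absorbing slab, both checked against the analytic answer exp(-sigma * thickness). This is an illustration only, unrelated to the actual TRIPOLI-4/MCNP models:

```python
import math, random

def transmission(sigma, thickness, n_particles, seed):
    """Analog MC: fraction of particles whose first-collision distance exceeds the slab."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_particles):
        # distance to first collision ~ Exponential(sigma); 1-u keeps the log argument in (0, 1]
        d = -math.log(1.0 - rng.random()) / sigma
        if d > thickness:
            passed += 1
    return passed / n_particles

sigma, thickness = 1.0, 5.0           # macroscopic cross section, slab width
exact = math.exp(-sigma * thickness)  # analytic benchmark, about 6.7e-3
t1 = transmission(sigma, thickness, 200_000, seed=1)
t2 = transmission(sigma, thickness, 200_000, seed=2)
print(exact, t1, t2)
```

Two independent estimators agreeing with each other and with the reference is exactly the kind of evidence a benchmark of this type accumulates, albeit here in one dimension instead of a full tokamak model.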

  3. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low-frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Because the ECT inverse problem is ill-posed, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using Tikhonov regularization. More importantly, in each iteration it enforces the known permittivities of the two phases on pixels that fall outside the physically reasonable permittivity range. This strategy not only stabilizes convergence but also produces sharper images. Simulations show that INTAC can improve resolution by more than a factor of 2 over conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase temporal resolution.
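A toy version of the INTAC idea can be sketched on a linear, deliberately over-determined and well-conditioned problem so the loop demonstrably converges (the real ECT problem is ill-posed and uses an FEM forward model; `A` here is just a random stand-in for the sensitivity matrix, and the permittivity values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
eps_gas, eps_liq = 1.0, 80.0                 # known permittivities of the two phases
n_meas, n_pix = 48, 36
x_true = np.where(rng.random(n_pix) < 0.5, eps_gas, eps_liq)   # two-phase "image"
A = rng.standard_normal((n_meas, n_pix))     # toy stand-in for the sensitivity matrix
y = A @ x_true                               # "measured" capacitances

lam = 1e-2                                   # Tikhonov regularization weight
x = np.full(n_pix, (eps_gas + eps_liq) / 2)  # start from a uniform mixture
for _ in range(200):
    # Tikhonov-regularized update driven by the data mismatch
    dx = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ (y - A @ x))
    x = x + dx
    # constraint step: snap pixels outside the physical range back to the known phases
    x = np.clip(x, eps_gas, eps_liq)

err = np.abs(x - x_true).max()
print(err)
```

The clipping step is the "constraints" part: because the true image lies inside the admissible box, the projection never pushes the iterate away from the solution, while it suppresses unphysical overshoots between iterations.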

  4. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  5. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency ofmore » JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.« less

  6. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven, E-mail: hamiltonsp@ornl.gov; Berrill, Mark, E-mail: berrillma@ornl.gov; Clarno, Kevin, E-mail: clarnokt@ornl.gov

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods are evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNKmore » and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.« less

  7. Hitchhiker mission operations: Past, present, and future

    NASA Technical Reports Server (NTRS)

    Anderson, Kathryn

    1995-01-01

    What is mission operations? Mission operations is an iterative process aimed at achieving the greatest possible mission success with the resources available. The process involves understanding of the science objectives, investigation of which system capabilities can best meet these objectives, integration of the objectives and resources into a cohesive mission operations plan, evaluation of the plan through simulations, and implementation of the plan in real-time. In this paper, the authors present a comprehensive description of what the Hitchhiker mission operations approach is and why it is crucial to mission success. The authors describe the significance of operational considerations from the beginning and throughout the experiment ground and flight systems development. The authors also address the necessity of training and simulations. Finally, the authors cite several examples illustrating the benefits of understanding and utilizing the mission operations process.

  8. Multi-dimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2015-11-01

    We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry. The algorithm retains its exact conservation properties in curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows efficient use of large timesteps, O(√(mi/me) c/vTe) times larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).
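The stability claim can be illustrated on a toy problem. The sketch below (a harmonic oscillator, not the Vlasov-Darwin system) uses the implicit midpoint rule, which conserves the discrete energy exactly for this linear system, while explicit Euler at the same large step size blows up:

```python
import math

def explicit_euler(x, v, dt, steps):
    # dx/dt = v, dv/dt = -x
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return x, v

def implicit_midpoint(x, v, dt, steps):
    # the midpoint rule for the linear oscillator admits a closed-form solve,
    # so no nonlinear iteration is needed in this toy case
    h2 = (dt / 2) ** 2
    denom = 1 + h2
    for _ in range(steps):
        x, v = (((1 - h2) * x + dt * v) / denom,
                ((1 - h2) * v - dt * x) / denom)
    return x, v

energy = lambda x, v: 0.5 * (x * x + v * v)
e0 = energy(1.0, 0.0)
xe, ve = explicit_euler(1.0, 0.0, 0.5, 200)     # dt well above what Euler tolerates
xi, vi = implicit_midpoint(1.0, 0.0, 0.5, 200)
print(energy(xe, ve) / e0, energy(xi, vi) / e0)
```

The implicit update matrix has unit determinant and preserves x^2 + v^2 exactly, the one-dimensional analogue of the discrete energy conservation property claimed for the full PIC scheme.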

  9. A Global Carbon Assimilation System using a modified EnKF assimilation method

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.

    2014-10-01

    A Global Carbon Assimilation System based on Ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate the surface carbon fluxes and atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including using analysis states to iteratively estimate ensemble forecast errors, and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results showed that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.
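For readers unfamiliar with the EnKF, a minimal stochastic (perturbed-observation) analysis step looks roughly as follows. This is the generic textbook form, not the GCAS-EK variant with iterative forecast-error estimation and inflation described above, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens = 50
truth = np.array([1.0, 2.0, 3.0])
H = np.array([[1.0, 0.0, 0.0]])      # observe the first state component only
obs_err = 0.1

# forecast ensemble: biased prior plus spread
ens = truth + np.array([0.5, -0.3, 0.2]) + rng.normal(0.0, 0.4, (n_ens, 3))
y_obs = truth[0] + rng.normal(0.0, obs_err)

X = ens.T                                      # state x ensemble members
Xm = X.mean(axis=1, keepdims=True)
Xp = X - Xm                                    # ensemble perturbations
P = Xp @ Xp.T / (n_ens - 1)                    # sample forecast covariance
R = np.array([[obs_err ** 2]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

# perturbed-observation update, one perturbed observation per member
y_pert = y_obs + rng.normal(0.0, obs_err, n_ens)
Xa = X + K @ (y_pert[None, :] - H @ X)

prior_err = abs(Xm[0, 0] - truth[0])
post_err = abs(Xa.mean(axis=1)[0] - truth[0])
print(prior_err, post_err)
```

The analysis pulls the biased prior toward the observation in proportion to the sample covariance, which is the basic mechanism GCAS-EK refines with its iterative error estimates.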

  10. Coherent Microwave Scattering Model of Marsh Grass

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang; Jones, Cathleen E.

    2017-12-01

    In this work, we developed an electromagnetic scattering model to analyze radar scattering from tall-grass-covered lands such as wetlands and marshes. The model adopts the generalized iterative extended boundary condition method (GIEBCM) algorithm, previously developed for buried cylindrical media such as vegetation roots, to simulate the scattering from the grass layer. The major challenge in applying GIEBCM to tall grass is the extremely time-consuming iteration among the large number of short subcylinders that build up the grass. To overcome this issue, we extended the GIEBCM to multilevel GIEBCM, or M-GIEBCM, in which we first use GIEBCM to calculate a T matrix (transition matrix) database of "straws" with various lengths, thicknesses, orientations, curvatures, and dielectric properties; we then construct the grass with a group of straws from the database and apply GIEBCM again to calculate the T matrix of the overall grass scene. The grass T matrix is converted to an S matrix (scattering matrix) and combined with the ground S matrix, which is computed using the stabilized extended boundary condition method, to obtain the total scattering. In this article, we demonstrate the capability of the model by simulating scattering from scenes with different grass densities, grass structures, grass water contents, and ground moisture contents. This model will help with radar experiment design and image interpretation for marshland and wetland observations.

  11. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. 
This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  12. Improvements in metabolic flux analysis using carbon bond labeling experiments: bondomer balancing and Boolean function mapping.

    PubMed

    Sriram, Ganesh; Shanks, Jacqueline V

    2004-04-01

    The biosynthetically directed fractional (13)C labeling method for metabolic flux evaluation relies on performing a 2-D [(13)C, (1)H] NMR experiment on extracts from organisms cultured on a uniformly labeled carbon substrate. This article focuses on improvements in the interpretation of data obtained from such an experiment by employing the concept of bondomers. Bondomers take into account the natural abundance of (13)C; therefore many bondomers in a real network are zero and can be precluded a priori--thus resulting in fewer balances. Using this method, we obtained a set of linear equations which can be solved to obtain analytical formulas for NMR-measurable quantities in terms of fluxes in glycolysis and the pentose phosphate pathways. For a specific case of this network with four degrees of freedom, a priori identifiability of the fluxes was shown to be possible for any set of fluxes. For a more general case with five degrees of freedom, the fluxes were shown to be identifiable for a representative set of fluxes. Minimal sets of measurements which best identify the fluxes are listed. Furthermore, we have delineated Boolean function mapping, a new method to iteratively simulate bondomer abundances and to efficiently convert carbon skeleton rearrangement information to mapping matrices. The efficiency of this method is expected to be valuable when analyzing metabolic networks which are not completely known (such as in plant metabolism) or when implementing iterative bondomer balancing methods.

  13. Finite-difference fluid dynamics computer mathematical models for the design and interpretation of experiments for space flight. [atmospheric general circulation experiment, convection in a float zone, and the Bridgman-Stockbarger crystal growing system

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.; Fowlis, W. W.; Miller, T. L.

    1984-01-01

    Numerical methods are used to design a spherical baroclinic flow model experiment of the large-scale atmospheric flow for Spacelab. The dielectric simulation of radial gravity is dominant only in a low-gravity environment. Computer codes are developed to study the processes at work in crystal growing systems which are also candidates for space flight. Crystalline materials rarely achieve their potential properties because of imperfections and component concentration variations. Thermosolutal convection in the liquid melt can be the cause of these imperfections. Such convection is suppressed in a low-gravity environment. Two- and three-dimensional finite difference codes are being used for this work. Nonuniform meshes and implicit iterative methods are used. The iterative method for steady solutions is based on time stepping, but has the options of different time steps for velocity and temperature and of a time step varying smoothly with position according to specified powers of the mesh spacings. This allows for more rapid convergence. The code being developed for the crystal growth studies allows for growth of the crystal at the solid-liquid interface. The moving interface is followed using finite differences; shape variations are permitted. For convenience in applying finite differences in the solid and liquid, a time-dependent coordinate transformation is used to make this interface a coordinate surface.
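The "time stepping to a steady solution with a position-dependent time step" idea can be sketched on a 1-D toy problem, u'' = 0 with u(0) = 0, u(1) = 1 on a nonuniform mesh (an illustration only, not the report's 2-D/3-D codes):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 21) ** 1.5      # nonuniform mesh, clustered near x = 0
u = np.zeros_like(x)
u[-1] = 1.0                               # boundary conditions u(0)=0, u(1)=1

hl = x[1:-1] - x[:-2]                     # left and right spacings at interior points
hr = x[2:] - x[1:-1]
dt = 0.25 * np.minimum(hl, hr) ** 2       # local pseudo-time step, scaled by mesh spacing

for _ in range(20000):
    # second difference of u on the nonuniform mesh
    lap = 2 * (hr * u[:-2] + hl * u[2:] - (hl + hr) * u[1:-1]) / (hl * hr * (hl + hr))
    u_next = u.copy()
    u_next[1:-1] = u[1:-1] + dt * lap     # march in pseudo-time toward steady state
    if np.abs(u_next - u).max() < 1e-12:  # stop when the solution stops changing
        u = u_next
        break
    u = u_next

print(np.abs(u - x).max())                # exact steady solution is u(x) = x
```

Letting the time step follow the local mesh spacing, as the report describes, keeps the explicit update stable on the fine cells without throttling the coarse ones.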

  14. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  15. Material migration studies with an ITER first wall panel proxy on EAST

    NASA Astrophysics Data System (ADS)

    Ding, R.; Pitts, R. A.; Borodin, D.; Carpentier, S.; Ding, F.; Gong, X. Z.; Guo, H. Y.; Kirschner, A.; Kocan, M.; Li, J. G.; Luo, G.-N.; Mao, H. M.; Qian, J. P.; Stangeby, P. C.; Wampler, W. R.; Wang, H. Q.; Wang, W. Z.

    2015-02-01

    The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. Excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.

  16. Refractive and relativistic effects on ITER low field side reflectometer design.

    PubMed

    Wang, G; Rhodes, T L; Peebles, W A; Harvey, R W; Budny, R V

    2010-10-01

    The ITER low field side reflectometer faces some unique design challenges, among which are included the effect of relativistic electron temperatures and refraction of probing waves. This paper utilizes GENRAY, a 3D ray tracing code, to investigate these effects. Using a simulated ITER operating scenario, characteristics of the reflected millimeter waves after return to the launch plane are quantified as a function of a range of design parameters, including antenna height, antenna diameter, and antenna radial position. Results for edge/SOL measurement with both O- and X-mode polarizations using proposed antennas are reported.

  17. Global Adjoint Tomography: Next-Generation Models

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Orsvuran, Ridvan; Peter, Daniel; Ruan, Youyi; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2017-04-01

    The first-generation global adjoint tomography model GLAD-M15 (Bozdag et al. 2016) is the result of 15 conjugate-gradient iterations based on GPU-accelerated spectral-element simulations of 3D wave propagation and Fréchet kernels. For simplicity, GLAD-M15 was constructed as an elastic model with transverse isotropy confined to the upper mantle. However, Earth's mantle and crust show significant evidence of anisotropy as a result of their composition and deformation. There may be different sources of seismic anisotropy affecting both body and surface waves. As a first step, we tackle surface-wave anisotropy and proceed with iterations using the same 253-earthquake data set used in GLAD-M15, with an emphasis on the upper mantle. Furthermore, we explore new misfits, such as double-difference measurements (Yuan et al. 2016), to better deal with possible artifacts of the uneven global distribution of seismic stations and to minimize source uncertainties in structural inversions. We will present our observations with the initial results of azimuthally anisotropic inversions and also discuss the next-generation global models with various parametrizations. Meanwhile, our goal is to use all available seismic data in imaging. This, however, requires a solid framework to perform iterative adjoint tomography workflows with big data on supercomputers. We will discuss developments in the adjoint tomography workflow, from the need to define new seismic and computational data formats (e.g., ASDF by Krischer et al. 2016, ADIOS by Liu et al. 2011) to the development of new pre- and post-processing tools, together with experiments with workflow management tools such as Pegasus (Deelman et al. 2015). All our simulations are performed on Oak Ridge National Laboratory's Cray XK7 "Titan" system. 
Our ultimate aim is to be ready to harness ORNL's next-generation supercomputer "Summit", an IBM system with POWER9 CPUs and NVIDIA Volta GPU accelerators expected in 2018, which will enable us to reduce the shortest period in our global simulations from 17 s to 9 s; exascale systems will reduce this further to just a few seconds.

  18. Integrated modeling of temperature and rotation profiles in JET ITER-like wall discharges

    NASA Astrophysics Data System (ADS)

    Rafiq, T.; Kritz, A. H.; Kim, Hyun-Tae; Schuster, E.; Weiland, J.

    2017-10-01

    Simulations of 78 JET ITER-like wall D-D discharges and 2 D-T reference discharges are carried out using the TRANSP predictive integrated modeling code. The time-evolved temperature and rotation profiles are computed utilizing the Multi-Mode anomalous transport model. The discharges involve a broad range of conditions, including scans over gyroradius, collisionality, and values of q95. The D-T reference discharges are selected in anticipation of the D-T experimental campaign planned at JET in 2019. The simulated temperature and rotation profiles are compared with the corresponding experimental profiles in the radial range from the magnetic axis to the ρ = 0.9 flux surface. The comparison is quantified by calculating the RMS deviations and offsets. Overall, good agreement is found between the profiles produced in the simulations and the experimental data. It is planned that the simulations obtained using the Multi-Mode model will be compared with simulations using the TGLF model. Research supported in part by the US DoE Office of Science.
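The RMS-deviation and offset metrics mentioned above can be sketched as follows; the abstract does not give the exact normalisation used, so this assumes one common convention (normalising by the peak experimental value), and the profiles are synthetic:

```python
import numpy as np

def rms_and_offset(sim, exp):
    """Percent RMS deviation and signed percent offset, normalised by max |exp|."""
    norm = np.abs(exp).max()
    rms = np.sqrt(np.mean((sim - exp) ** 2)) / norm * 100
    offset = np.mean(sim - exp) / norm * 100
    return rms, offset

rho = np.linspace(0.0, 0.9, 10)          # magnetic axis out to the rho = 0.9 surface
exp_profile = 5.0 * (1 - rho ** 2)       # toy experimental temperature profile (keV)
sim_profile = exp_profile * 1.05 + 0.1   # toy simulation, slightly above experiment
rms, offset = rms_and_offset(sim_profile, exp_profile)
print(round(rms, 2), round(offset, 2))
```

A positive offset flags a systematic over-prediction, while the RMS deviation also captures shape mismatch; comparing the two separates bias from profile-shape error.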

  19. Validation results of satellite mock-up capturing experiment using nets

    NASA Astrophysics Data System (ADS)

    Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil

    2017-05-01

    The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of experiments under microgravity conditions in which a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas, each offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, has been launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to determine accurately the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. 
The simulator has been properly configured according to the parabolic flight scenario, and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
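The iterative closest point step used for knot tracking can be sketched as a minimal 2-D ICP: brute-force nearest-neighbour matching alternated with a Kabsch (SVD) rigid-transform fit, here recovering a known rotation and translation between synthetic point sets (an illustration only, not the PATENDER pipeline):

```python
import numpy as np

def icp(src, dst, n_iter=30):
    """Align src to dst by alternating nearest-neighbour matching and a Kabsch fit."""
    cur = src.copy()
    for _ in range(n_iter):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(axis=1)]
        # Kabsch: best rotation + translation taking cur onto match
        mu_c, mu_m = cur.mean(0), match.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (match - mu_m))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(3)
src = rng.random((40, 2)) - 0.5                 # synthetic "knot" positions
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([0.05, -0.02])  # rotated + translated copy
aligned = icp(src, dst)
err = np.linalg.norm(aligned - dst, axis=1).max()
print(err)
```

In the real application the correspondences come from colour-segmented knots in stereo images, but the alternating match-then-fit loop is the same.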

  20. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
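The convolution-based diffusion mentioned above can be sketched in a few lines. This NumPy stand-in (the real ABM runs this on GPUs over 3D chemical fields) applies one explicit 2D diffusion step as a 5-point Laplacian stencil, which conserves the total amount of chemical on a periodic grid:

```python
import numpy as np

def diffuse(field, D=0.1, dt=1.0):
    """One explicit diffusion step: convolution with the 5-point Laplacian stencil."""
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    return field + D * dt * lap

grid = np.zeros((64, 64))
grid[32, 32] = 100.0            # point source of signalling chemical
for _ in range(50):
    grid = diffuse(grid)
print(grid.sum(), grid.max())   # mass is conserved; the peak spreads out
```

With D * dt <= 0.25 the update has no negative stencil weights, so concentrations stay non-negative, a property worth preserving when the diffused field feeds back into agent behaviour.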

  1. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair

    PubMed Central

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894
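
    The convolution-based diffusion used above to speed up the ABM chemical kinetics can be sketched in miniature. The 2-D grid, kernel, and diffusion constant below are illustrative assumptions for a single chemical field, not the paper's actual 3D parameters; the Laplacian convolution stencil is implemented with periodic array rolls.

```python
import numpy as np

def diffuse(field, D=0.1):
    """One explicit diffusion step, computed as a convolution of the
    field with the discrete 5-point Laplacian stencil (periodic
    boundaries via np.roll). Stable for D <= 0.25."""
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4.0 * field)
    return field + D * lap

rng = np.random.default_rng(0)
chem = rng.random((64, 64))   # toy 2-D chemical concentration grid
total = chem.sum()
for _ in range(100):
    chem = diffuse(chem)
```

    Because the periodic convolution only redistributes mass, the total amount of chemical is conserved while local fluctuations are smoothed out.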

  2. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
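
    The self-consistency loop between a conformation-averaged hydrodynamic matrix and the simulation it drives can be sketched schematically. Everything below is a hedged toy: `short_bd_run` and `mobility_from_traj` are hypothetical stand-ins for the real BD propagator and hydrodynamic-tensor averaging, and the dynamics and coupling form are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def short_bd_run(M, n_steps=200, n=10):
    """Toy stand-in for a short BD run: evolve bead coordinates with a
    fixed 'mobility' matrix M plus noise, and return the trajectory."""
    x = rng.standard_normal(n)
    traj = []
    for _ in range(n_steps):
        x = x - 0.01 * (M @ x) + 0.05 * rng.standard_normal(n)
        traj.append(x.copy())
    return np.array(traj)

def mobility_from_traj(traj):
    """Toy conformation-averaged matrix: a distance-decaying coupling
    built from mean pair separations along the trajectory."""
    d = np.abs(traj[:, :, None] - traj[:, None, :]).mean(0)
    return np.eye(traj.shape[1]) + 0.1 / (1.0 + d)

M = np.eye(10)            # start from a free-draining (identity) matrix
for it in range(5):       # a few iterations suffice, per the abstract
    traj = short_bd_run(M)
    M_new = mobility_from_traj(traj)
    delta = np.abs(M_new - M).max()   # convergence measure
    M = M_new
```

    The point of the structure is that the expensive matrix update happens once per outer iteration rather than every time step.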

  3. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering

    PubMed Central

    Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690

  4. DSMC simulation of rarefied gas flows under cooling conditions using a new iterative wall heat flux specifying technique

    NASA Astrophysics Data System (ADS)

    Akhlaghi, H.; Roohi, E.; Myong, R. S.

    2012-11-01

    Micro/nano geometries with specified wall heat flux are widely encountered in electronic cooling and micro-/nano-fluidic sensors. We introduce a new technique to impose the desired (positive/negative) wall heat flux boundary condition in the DSMC simulations. This technique is based on an iterative progress on the wall temperature magnitude. It is found that the proposed iterative technique has a good numerical performance and could implement both positive and negative values of wall heat flux rates accurately. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated and unique behaviors are observed in the case of channels with cooling walls. For example, contrary to the heating process, it is observed that cooling of micro/nanochannel walls would result in small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the Knudsen number increase along the channel. Similarly, cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and vice versa.
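
    The idea of iterating on the wall temperature until the surface delivers a prescribed (positive or negative) heat flux can be illustrated with a toy convective law. The heat-transfer coefficient, gas temperature, and relaxation gain below are illustrative assumptions, not the DSMC boundary treatment itself.

```python
def wall_temperature_for_flux(q_target, t_gas=400.0, h=25.0,
                              alpha=0.02, tol=1e-6, max_iter=10000):
    """Iterate on the wall temperature until a toy convective law
    q = h * (T_gas - T_w) delivers the specified wall heat flux.
    Negative q_target (a cooling wall) uses the same update."""
    t_w = t_gas                        # start from a zero-flux wall
    for _ in range(max_iter):
        q = h * (t_gas - t_w)
        if abs(q - q_target) < tol:
            break
        t_w += alpha * (q - q_target)  # too much flux -> raise T_w
    return t_w
```

    The update is a simple under-relaxation: each pass it nudges the wall temperature in the direction that shrinks the flux error, so both heating and cooling targets converge geometrically.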

  5. Advanced simulation of mixed-material erosion/evolution and application to low and high-Z containing plasma facing components

    NASA Astrophysics Data System (ADS)

    Brooks, J. N.; Hassanein, A.; Sizyuk, T.

    2013-07-01

    Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the HEIGHTS package ITMC-DYN mixed-material formation/response code, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (~4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (~50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.

  6. Low Mass-Damping Vortex-Induced Vibrations of a Single Cylinder at Moderate Reynolds Number.

    PubMed

    Jus, Y; Longatte, E; Chassaing, J-C; Sagaut, P

    2014-10-01

    The feasibility and accuracy of large eddy simulation is investigated for the case of three-dimensional unsteady flows past an elastically mounted cylinder at moderate Reynolds number. Although these flow problems are unconfined, complex wake flow patterns may be observed depending on the elastic properties of the structure. An iterative procedure is used to solve the structural dynamic equation to be coupled with the Navier-Stokes system formulated in a pseudo-Eulerian way. A moving mesh method is involved to deform the computational domain according to the motion of the fluid structure interface. Numerical simulations of vortex-induced vibrations are performed for a freely vibrating cylinder at Reynolds number 3900 in the subcritical regime under two low mass-damping conditions. A detailed physical analysis is provided for a wide range of reduced velocities, and the typical three-branch response of the amplitude behavior usually reported in the experiments is exhibited and reproduced by numerical simulation.

  7. Telescience - Concepts And Contributions To The Extreme Ultraviolet Explorer Mission

    NASA Astrophysics Data System (ADS)

    Marchant, Will; Dobson, Carl; Chakrabarti, Supriya; Malina, Roger F.

    1987-10-01

    A goal of the telescience concept is to allow scientists to use remotely located instruments as they would in their laboratory. Another goal is to increase reliability and scientific return of these instruments. In this paper we discuss the role of transparent software tools in development, integration, and postlaunch environments to achieve hands-on access to the instrument. The use of transparent tools helps to reduce the parallel development of capability and to assure that valuable pre-launch experience is not lost in the operations phase. We also discuss the use of simulation as a rapid prototyping technique. Rapid prototyping provides a cost-effective means of using an iterative approach to instrument design. By allowing inexpensive production of testbeds, scientists can quickly tune the instrument to produce the desired scientific data. Using portions of the Extreme Ultraviolet Explorer (EUVE) system, we examine some of the results of preliminary tests in the use of simulation and transparent tools. Additionally, we discuss our efforts to upgrade our software "EUVE electronics" simulator to emulate a full instrument, and give the pros and cons of the simulation facilities we have developed.

  8. Arbitrary temporal shape pulsed fiber laser based on SPGD algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Min; Su, Rongtao; Zhang, Pengfei; Zhou, Pu

    2018-06-01

    A novel adaptive pulse shaping method for a pulsed master oscillator power amplifier fiber laser to deliver an arbitrary pulse shape is demonstrated. Numerical simulation has been performed to validate the feasibility of the scheme and provide meaningful guidance for the design of the algorithm control parameters. In the proof-of-concept experiment, information on the temporal property of the laser is exchanged and evaluated through a local area network, and the laser automatically adjusts the parameters of the seed laser according to the monitored output of the system. Various pulse shapes, including a rectangular shape, ‘M’ shape, and elliptical shape are achieved through experimental iterations.
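
    The SPGD (stochastic parallel gradient descent) update driving such a pulse shaper is generic and can be sketched independently of the laser hardware. The gains are illustrative, and the quadratic objective in the usage note is a hypothetical stand-in for the measured pulse-shape error, not the paper's metric.

```python
import numpy as np

def spgd(objective, x0, gain=1.0, perturb=0.1, n_iter=3000, seed=0):
    """Stochastic parallel gradient descent (maximization): dither all
    control parameters at once with random +/- perturbations and step
    along the dither, scaled by the measured change in the objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        delta = perturb * rng.choice([-1.0, 1.0], size=x.shape)
        dj = objective(x + delta) - objective(x - delta)
        x += gain * dj * delta
    return x
```

    For example, with samples of a hypothetical ‘M’-shaped target pulse and the negative mean squared error as the objective, `spgd(lambda v: -np.mean((v - target)**2), np.zeros(8))` drives the controls to the target after a few thousand iterations without ever evaluating a gradient.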

  9. Simulation of cesium injection and distribution in rf-driven ion sources for negative hydrogen ion generation.

    PubMed

    Gutser, R; Fantz, U; Wünderlich, D

    2010-02-01

    Cesium seeded sources for surface generated negative hydrogen ions are major components of neutral beam injection systems in future large-scale fusion experiments such as ITER. Stability and delivered current density depend highly on the cesium conditions during plasma-on and plasma-off phases of the ion source. The Monte Carlo code CSFLOW3D was used to study the transport of neutral and ionic cesium in both phases. Homogeneous and intense flows were obtained from two cesium sources in the expansion region of the ion source and from a dispenser array, which is located 10 cm in front of the converter surface.

  10. A hybrid Gerchberg-Saxton-like algorithm for DOE and CGH calculation

    NASA Astrophysics Data System (ADS)

    Wang, Haichao; Yue, Weirui; Song, Qiang; Liu, Jingdan; Situ, Guohai

    2017-02-01

    The Gerchberg-Saxton (GS) algorithm is widely used in various disciplines of modern sciences and technologies where phase retrieval is required. However, this legendary algorithm most likely stagnates after a few iterations. Many efforts have been taken to improve this situation. Here we propose to introduce the strategy of gradient descent and weighting technique to the GS algorithm, and demonstrate it using two examples: design of a diffractive optical element (DOE) to achieve off-axis illumination in lithographic tools, and design of a computer generated hologram (CGH) for holographic display. Both numerical simulation and optical experiments are carried out for demonstration.
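
    For reference, the plain Gerchberg-Saxton loop that the proposed gradient-descent and weighting strategy builds on can be sketched as below. This is the baseline algorithm, not the authors' improved variant; the unit-amplitude constraint in the DOE plane is an assumption of the sketch.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    """Plain GS phase retrieval: alternate between a unit-amplitude
    (phase-only) constraint in the DOE plane and the target amplitude
    constraint in the far-field (FFT) plane, keeping only phases."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))  # enforce target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                         # phase-only element
    return phase
```

    The stagnation the abstract mentions shows up as the far-field error plateauing after the first few tens of iterations, which is what the gradient-descent and weighting modifications aim to break.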

  11. Simulation of the hybrid and steady state advanced operating modes in ITER

    NASA Astrophysics Data System (ADS)

    Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.

    2007-09-01

    Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics models. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.

  12. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  13. Natural selection of memory-one strategies for the iterated prisoner's dilemma.

    PubMed

    Kraines, D P; Kraines, V Y

    2000-04-21

    In the iterated Prisoner's Dilemma, mutually cooperative behavior can become established through Darwinian natural selection. In simulated interactions of stochastic memory-one strategies for the Iterated Prisoner's Dilemma, Nowak and Sigmund discovered that cooperative agents using a Pavlov (Win-Stay Lose-Switch) type strategy eventually dominate a random population. This emergence follows more directly from a deterministic dynamical system based on differential reproductive success or natural selection. When restricted to an environment of memory-one agents interacting in iterated Prisoner's Dilemma games with a 1% noise level, the Pavlov agent is the only cooperative strategy, and one of very few strategies overall, that cannot be invaded by a similar strategy. Pavlov agents are trusting but no suckers. They will exploit weakness but repent if punished for cheating. Copyright 2000 Academic Press.
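
    The Pavlov (Win-Stay Lose-Switch) rule and the 1% noise environment above can be reproduced in a few lines; the sketch below assumes the standard Prisoner's Dilemma payoffs (T, R, P, S) = (5, 3, 1, 0) and pits two Pavlov agents against each other.

```python
import random

C, D = 0, 1
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def pavlov(my_last, opp_last):
    """Win-Stay Lose-Switch: keep cooperating iff the last moves
    matched (payoff 3 or 1 after mutual play counts as a 'win' only
    when both cooperated); equivalently, cooperate iff moves agreed."""
    return C if my_last == opp_last else D

def play(n_rounds=10000, noise=0.01, seed=42):
    """Average per-round payoffs of Pavlov vs Pavlov with noisy moves."""
    rng = random.Random(seed)
    a = b = C
    score_a = score_b = 0
    for _ in range(n_rounds):
        na, nb = pavlov(a, b), pavlov(b, a)
        # 1% noise: each intended move is flipped independently
        if rng.random() < noise:
            na = 1 - na
        if rng.random() < noise:
            nb = 1 - nb
        a, b = na, nb
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
    return score_a / n_rounds, score_b / n_rounds
```

    After a noise-induced defection, the pair passes through one round of mutual defection and then re-establishes cooperation, so the average payoff stays close to the mutual-cooperation value of 3.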

  14. Physics of Tokamak Plasma Start-up

    NASA Astrophysics Data System (ADS)

    Mueller, Dennis

    2012-10-01

    This tutorial describes and reviews the state of the art in tokamak plasma start-up and its importance to next step devices such as ITER, a Fusion Nuclear Science Facility and a Tokamak/ST demo. Tokamak plasma start-up includes breakdown of the initial gas, ramp-up of the plasma current to its final value and the control of plasma parameters during those phases. Tokamaks rely on an inductive component, typically a central solenoid, which has enabled the attainment of high performance levels and the construction of the ITER device. Optimizing the inductive start-up phase continues to be an area of active research, especially with regard to achieving ITER scenarios. A new generation of superconducting tokamaks, EAST and KSTAR, experiments on DIII-D and operation with JET's ITER-like wall are contributing towards this effort. Inductive start-up relies on transformer action to generate a toroidal loop voltage and successful start-up is determined by gas breakdown, avalanche physics and plasma-wall interaction. The goal of achieving steady-state tokamak operation has motivated interest in other methods for start-up that do not rely on the central solenoid. These include Coaxial Helicity Injection, outer poloidal field coil start-up, and point source helicity injection, which have achieved 200, 150 and 100 kA of toroidal current on closed flux surfaces, respectively. Other methods including merging reconnection startup and Electron Bernstein Wave (EBW) plasma start-up are being studied on various devices. EBW start-up generates a directed electron channel due to wave particle interaction physics while the other methods mentioned rely on magnetic helicity injection and magnetic reconnection which are being modeled and understood using NIMROD code simulations.

  15. Stopping Criteria for Log-Domain Diffeomorphic Demons Registration: An Experimental Survey for Radiotherapy Application.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2016-02-01

    A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ∊, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim to assign a value to ∊, but to give insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by mean squared error when all the original pixel information is used. This suggests the need for developing mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
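
    The exit strategy the experiments favor — comparing the metric minimum over the three most recent iterations with the minimum over the fourth to sixth most recent — can be sketched directly; the function name and the metric-history interface are assumptions of the sketch.

```python
def should_stop(history, eps):
    """Stop when the best (minimum) metric value over the three most
    recent iterations is within eps of the best over the fourth to
    sixth most recent, i.e. when no meaningful recent improvement
    has occurred. `history` lists the metric value per iteration."""
    if len(history) < 6:
        return False
    recent = min(history[-3:])
    earlier = min(history[-6:-3])
    return earlier - recent < eps
```

    Using windowed minima rather than single-iteration differences makes the test robust to the non-monotone, oscillating metric traces typical of Demons-style optimizations.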

  16. Diffraction pattern simulation of cellulose fibrils using distributed and quantized pair distances

    DOE PAGES

    Zhang, Yan; Inouye, Hideyo; Crowley, Michael; ...

    2016-10-14

    Intensity simulation of X-ray scattering from large twisted cellulose molecular fibrils is important in understanding the impact of chemical or physical treatments on structural properties such as twisting or coiling. This paper describes a highly efficient method for the simulation of X-ray diffraction patterns from complex fibrils using atom-type-specific pair-distance quantization. Pair distances are sorted into arrays which are labelled by atom type. Histograms of pair distances in each array are computed and binned and the resulting population distributions are used to represent the whole pair-distance data set. These quantized pair-distance arrays are used with a modified and vectorized Debye formula to simulate diffraction patterns. This approach utilizes fewer pair distances in each iteration, and atomic scattering factors are moved outside the iteration since the arrays are labelled by atom type. As a result, this algorithm significantly reduces the computation time while maintaining the accuracy of diffraction pattern simulation, making possible the simulation of diffraction patterns from large twisted fibrils in a relatively short period of time, as is required for model testing and refinement.
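
    The quantized pair-distance idea can be sketched with the standard Debye formula I(q) = Σᵢⱼ fᵢfⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ): pair distances are histogrammed per atom-type pair so each scattering-factor product multiplies a whole bin population at once. The bin count and scattering factors below are illustrative, not the paper's values.

```python
import numpy as np

def debye_quantized(coords, types, fs, q, n_bins=200):
    """Debye intensity from pair-distance histograms grouped by
    atom-type pair. `fs` maps each type label to its (q-independent,
    for simplicity) scattering factor; `q` is an array of q values."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    dists, ti, tj = d[iu], types[iu[0]], types[iu[1]]
    r_max = float(dists.max())
    # self terms (i == j): sin(qr)/(qr) -> 1
    intensity = np.full_like(q, sum(fs[int(t)] ** 2 for t in types), dtype=float)
    type_set = sorted(set(types.tolist()))
    for a in type_set:
        for b in type_set:
            if b < a:
                continue
            mask = ((ti == a) & (tj == b)) | ((ti == b) & (tj == a))
            hist, edges = np.histogram(dists[mask], bins=n_bins,
                                       range=(0.0, r_max))
            r = 0.5 * (edges[:-1] + edges[1:])       # bin centres
            sinc = np.sinc(np.outer(q, r) / np.pi)   # sin(qr)/(qr)
            intensity += 2.0 * fs[a] * fs[b] * (sinc @ hist)
    return intensity
```

    The scattering factors are applied once per (type pair, bin) instead of once per atom pair, which is where the speedup over the naive double sum comes from.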

  19. A new method for recognizing quadric surfaces from range data and its application to telerobotics and automation

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1992-01-01

    Pose and orientation of an object is one of the central issues in 3-D recognition problems. Most of today's available techniques require considerable pre-processing such as detecting edges or joints, fitting curves or surfaces to segment images, and trying to extract higher order features from the input images. We present a method based on analytical geometry, whereby all the rotation parameters of any quadric surface are determined and subsequently eliminated. This procedure is iterative in nature and was found to converge to the desired results in as few as three iterations. The approach enables us to position the quadric surface in a desired coordinate system, and then to utilize the presented shape information to explicitly represent and recognize the 3-D surface. Experiments were conducted with simulated data for objects such as hyperboloids of one and two sheets, elliptic and hyperbolic paraboloids, elliptic and hyperbolic cylinders, ellipsoids, and quadric cones. Real data of quadric cones and cylinders were also utilized. Both of these sets yielded excellent results.
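
    The rotation-elimination step can be illustrated in closed form: for a quadric xᵀAx + b·x + c = 0, diagonalizing the symmetric matrix A rotates the surface to its principal axes, where the cross terms vanish and the surface type follows from the eigenvalue signs. Note this eigendecomposition sketch is a closed-form alternative for illustration; the paper's own procedure is iterative.

```python
import numpy as np

def canonicalize_quadric(A, b, c):
    """Remove the rotation of the quadric x^T A x + b.x + c = 0 by
    diagonalizing the symmetric coefficient matrix A. In the rotated
    frame the quadratic part is diagonal, so the surface type can be
    read off from the signs of the eigenvalues."""
    evals, R = np.linalg.eigh(A)   # columns of R rotate to principal axes
    b_rot = R.T @ b                # linear part expressed in that frame
    return evals, b_rot, c, R
```

    For an ellipsoid, all three eigenvalues share one sign; a hyperboloid of one sheet has exactly one eigenvalue of opposite sign, and so on.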

  20. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potentials for three-dimensional (3-D) modeling, but existing high noise restricts these sensors from obtaining accurate results. Thus, we proposed a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially for depth sensors, such as Kinect.
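
    The two-case rule — delete isolated points, move the rest to weighted-average targets — can be sketched with a brute-force neighbor search. The radius, neighbor threshold, and inverse-distance weighting below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def denoise_step(points, radius=0.1, min_neighbors=3):
    """One iteration of the two-case rule: points with too few
    neighbors inside `radius` are deleted as interference/outliers,
    and the rest are moved to the inverse-distance-weighted average
    of their neighborhood (brute-force search, for clarity only)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    out = []
    for i in range(len(points)):
        nb = np.where((d[i] < radius) & (d[i] > 0))[0]
        if len(nb) < min_neighbors:
            continue                    # case 1: delete isolated noise
        w = 1.0 / d[i, nb]              # case 2: weighted-average target
        out.append((points[nb] * w[:, None]).sum(0) / w.sum())
    return np.array(out)
```

    Applied to registered clouds from several views, the averaging exploits the redundancy between views while the neighbor-count test removes the cross-view interference points.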

  1. Towards a better understanding of critical gradients and near-marginal turbulence in burning plasma conditions

    NASA Astrophysics Data System (ADS)

    Holland, C.; Candy, J.; Howard, N. T.

    2017-10-01

    Developing accurate predictive transport models of burning plasma conditions is essential for confident prediction and optimization of next step experiments such as ITER and DEMO. Core transport in these plasmas is expected to be very small in gyroBohm-normalized units, such that the plasma should lie close to the critical gradients for onset of microturbulence instabilities. We present recent results investigating the scaling of linear critical gradients of ITG, TEM, and ETG modes as a function of parameters such as safety factor, magnetic shear, and collisionality for nominal conditions and geometry expected in ITER H-mode plasmas. A subset of these results is then compared against predictions from nonlinear gyrokinetic simulations, to quantify differences between linear and nonlinear thresholds. As part of this study, linear and nonlinear results from both GYRO and CGYRO codes will be compared against each other, as well as to predictions from the quasilinear TGLF model. Challenges arising from near-marginal turbulence dynamics are addressed. This work was supported by the US Department of Energy under US DE-SC0006957.

  2. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log likelihood function of multi-frame adaptive optics (AO) images based on the image Gaussian noise models. To begin with, combining the observed conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas of the AO image based on our proposed algorithm, addressing the implementation process of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration effects, including a higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) value. The research results have practical application value for actual AO image restoration.

  3. Improved motion correction in PROPELLER by using grouped blades as reference.

    PubMed

    Liu, Zhe; Zhang, Zhe; Ying, Kui; Yuan, Chun; Guo, Hua

    2014-03-01

    To develop a robust reference generation method for improving PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) reconstruction. A new reference generation method, grouped-blade reference (GBR), is proposed for calculating rotation angle and translation shift in PROPELLER. Instead of using a single-blade reference (SBR) or combined-blade reference (CBR), our method classifies blades by their relative correlations and groups similar blades together as the reference, to prevent inconsistent data from interfering with the correction process. Numerical simulations and in vivo experiments were used to evaluate the performance of GBR for PROPELLER, which was further compared with SBR and CBR in terms of error level and computation cost. Both simulation and in vivo experiments demonstrate that GBR-based PROPELLER provides better correction for random or bipolar motion compared with SBR or CBR. It not only produces images with a lower error level but also needs fewer iterations to converge. A grouped-blade reference selection method was investigated for PROPELLER MRI. It helps to improve the accuracy and robustness of motion correction for various motion patterns. Copyright © 2013 Wiley Periodicals, Inc.

  4. Particle-in-cell simulations of the plasma interaction with poloidal gaps in the ITER divertor outer vertical target

    NASA Astrophysics Data System (ADS)

    Komm, M.; Gunn, J. P.; Dejarnac, R.; Pánek, R.; Pitts, R. A.; Podolník, A.

    2017-12-01

    Predictive modelling of the heat flux distribution on ITER tungsten divertor monoblocks is a critical input to the design choice for component front surface shaping and for the understanding of power loading in the case of small-scale exposed edges. This paper presents results of particle-in-cell (PIC) simulations of plasma interaction in the vicinity of poloidal gaps between monoblocks in the high heat flux areas of the ITER outer vertical target. The main objective of the simulations is to assess the role of local electric fields which are accounted for in a related study using the ion orbit approach including only the Lorentz force (Gunn et al 2017 Nucl. Fusion 57 046025). Results of the PIC simulations demonstrate that even if in some cases the electric field plays a distinct role in determining the precise heat flux distribution, when heat diffusion into the bulk material is taken into account, the thermal responses calculated using the PIC or ion orbit approaches are very similar. This is a consequence of the small spatial scales over which the ion orbits distribute the power. The key result of this study is that the computationally much less intensive ion orbit approximation can be used with confidence in monoblock shaping design studies, thus validating the approach used in Gunn et al (2017 Nucl. Fusion 57 046025).

  5. System Optimization and Iterative Image Reconstruction in Photoacoustic Computed Tomography for Breast Imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yang

    Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of high optical contrast and high acoustic resolution makes this hybrid technique a promising candidate for human breast imaging, where conventional modalities including X-ray mammography, B-mode ultrasound, and MRI suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it remains challenging to build a PACT breast imaging system and bring it into wide clinical use, for several practical reasons. First, computer simulation studies are often conducted to guide imaging system design, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast; the effectiveness of such simulation-guided PACT systems in clinical experiments is therefore compromised. Second, it is challenging to design a system that simultaneously illuminates the entire breast with limited laser power. Some heuristic designs have been proposed in which the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessment framework in which the detectability of breast tumors can be directly quantified, so that the images produced by an optimized imaging system are not only visually appealing but also maximally informative for the tumor detection task. 
Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to alleviate the need to collect densely sampled measurement data and hence a long scanning time. However, the heavy computational burden associated with iterative algorithms largely hinders their application in PACT breast imaging. This dissertation is dedicated to addressing these problems. A method that generates anatomically realistic numerical breast phantoms is first proposed to facilitate computer simulation studies in PACT. Non-stationary illumination designs for PACT breast imaging are then systematically investigated in terms of their impact on reconstructed images. We then apply signal detection theory to assess different system designs, demonstrating how an objective, task-based measure can be established for PACT breast imaging. To address the slow computation of iterative algorithms, we propose an acceleration method that employs an approximate but much faster adjoint operator during iterations, reducing the computation time by a factor of six without significantly compromising image quality. Finally, clinical results are presented demonstrating that PACT breast imaging can resolve most major and fine vascular structures within the breast, along with some pathological biomarkers that may indicate tumor development.

  6. Modelling controlled VDEs and ramp-down scenarios in ITER

    NASA Astrophysics Data System (ADS)

    Lodestro, L. L.; Kolesnikov, R. A.; Meyer, W. H.; Pearlstein, L. D.; Humphreys, D. A.; Walker, M. L.

    2011-10-01

    Following the design reviews of recent years, the ITER poloidal-field coil-set design, including in-vessel coils (VS3), and the divertor configuration have settled down. The divertor and its material composition (the latter has not been finalized), together with the coils, affect the development of fiducial equilibria and scenarios through constraints on strike-point locations and limits on the PF and control systems. Previously, we reported on studies simulating controlled vertical events in ITER with the JCT 2001 controller, to which we added a PID VS3 circuit. In this paper we report and compare controlled VDE results using an optimized integrated VS and shape controller in the updated configuration. We also present recent simulations of alternate ramp-down scenarios, examining the effects of ramp-down time and shaping strategies using these controllers. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344.

  7. Exploring the Ability of a Coarse-grained Potential to Describe the Stress-strain Response of Glassy Polystyrene

    DTIC Science & Technology

    2012-10-01

    using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS 4 simulation code. In our molecular dynamics simulations at the atomistic resolution, we...IBI iterative Boltzmann inversion LAMMPS Large-scale Atomic/Molecular Massively Parallel Simulator MAPS Materials Processes and Simulations MS

  8. Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.

    2013-04-01

    In this paper, we compare the performance of three iterative solvers for the large sparse linear systems arising in numerical computations of the incompressible Navier-Stokes (NS) equations, employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques, such as the Generalized Minimal Residual (GMRES) method, to solve the pressure Poisson equation that results from discretisation of the NS equations. The performance of GMRES is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulating the dynamics of water housed inside a vertical cylindrical vessel subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence in terms of computational time and number of iterations.
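    The Krylov-versus-stationary-solver comparison can be sketched on a toy problem (assumptions: SciPy is available, and a 1-D Poisson system stands in for the paper's pressure Poisson equation):

```python
# GMRES vs plain Gauss-Seidel on a small sparse Poisson-type system.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# GMRES with an inner-iteration counter
count = {"gmres": 0}
def cb(_):
    count["gmres"] += 1

x_gmres, info = gmres(A, b, restart=n, callback=cb, callback_type="pr_norm")

# Plain Gauss-Seidel sweeps for comparison
def gauss_seidel(A, b, tol=1e-6, max_iter=20000):
    A = A.toarray()
    x = np.zeros_like(b)
    for k in range(max_iter):
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

x_gs, gs_iters = gauss_seidel(A, b)
print(count["gmres"], gs_iters)  # Krylov solve needs far fewer iterations
```

The gap widens as the mesh is refined, which is the abstract's central observation: the Gauss-Seidel iteration count grows roughly with the square of the grid size, while Krylov methods degrade much more slowly.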

  9. Simulating Multivariate Nonnormal Data Using an Iterative Algorithm

    ERIC Educational Resources Information Center

    Ruscio, John; Kaczetow, Walter

    2008-01-01

    Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
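    The general iterative idea, matching a target correlation between nonnormal variables by adjusting the correlation of the underlying normals, can be sketched as follows (assumptions: invented lognormal marginals and a simple damped update, not the authors' exact algorithm or Vale and Maurelli's coefficients):

```python
# Iteratively tune the intermediate normal correlation until the
# transformed (nonnormal) variables reach the target correlation.
import numpy as np

rng = np.random.default_rng(0)
target_r, n = 0.6, 100_000

def transform(z):
    return np.exp(z)          # lognormal marginals (skewed, nonnormal)

r_mid = target_r              # start from the target itself
for _ in range(25):
    cov = [[1.0, r_mid], [r_mid, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    x = transform(z)
    achieved = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    r_mid += 0.5 * (target_r - achieved)      # damped correction
    r_mid = float(np.clip(r_mid, -0.99, 0.99))

print(round(achieved, 2))  # close to the 0.6 target after iteration
```

The transform distorts the correlation (here it shrinks it), so the intermediate normal correlation must be set higher than the target; the damped fixed-point iteration finds that intermediate value without an analytic inverse.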

  10. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of spectral CT (SCT) using photon-counting detectors (PCDs) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum-likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering, and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results of the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the penalized maximum-likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation values in comparison to the other algorithms. 
Conclusion It is demonstrated that the combination of the additional information from spectral CT and statistical reconstruction can significantly improve image quality, especially by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  11. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4

    Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study the stresses and strains produced in implants and bone. However, these static analyses only capture a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen. 
High predicted bone density was greater than in the actual specimen; low predicted bone density was lower. The differences were probably due to the applied muscle and joint reaction loads, the boundary conditions, and the values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
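    The stimulus-driven update loop described above can be caricatured in a few lines (assumptions: scalar 1-D elements, an invented power-law stiffness and invented constants; not the scapula FE model):

```python
# Toy strain-energy-density remodeling loop: each element's density is
# nudged until its stimulus matches the reference stimulus.
import numpy as np

n_elem = 10
rho = np.full(n_elem, 1.0)               # start from uniform density
target = np.linspace(0.5, 1.8, n_elem)   # stand-in for the applied load pattern
S_ref = 1.0                              # reference stimulus
B = 0.2                                  # remodeling rate constant

for _ in range(200):
    E = rho ** 2                               # stiffness from density
    strain_energy = target ** 2 / E            # response to the fixed load
    stimulus = strain_energy / rho             # strain-energy density per mass
    rho = np.clip(rho + B * (stimulus - S_ref) * rho, 0.05, 2.0)

print(np.round(rho, 2))  # density has adapted toward the loading pattern
```

In this toy the fixed point is reached when every element's stimulus equals the reference, mirroring the convergence criterion in the abstract; in the real model the "response to load" step is a full FE solve rather than a closed-form expression.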

  12. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    NASA Astrophysics Data System (ADS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-07-01

    Shoulder arthroplasty success has been attributed to many factors including bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study the stresses and strains produced in implants and bone. However, these static analyses only capture a moment in time, not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three-dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally by simulating bone remodeling using an intact human scapula: the scapular bone material properties were initially reset to be uniform, sequential loading was numerically simulated, and the bone remodeling simulation results were compared to the actual scapula's material properties. A three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint loads and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties were modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent densities were plotted and compared. The locations of high and low predicted bone density were comparable to the actual specimen. 
High predicted bone density was greater than in the actual specimen; low predicted bone density was lower. The differences were probably due to the applied muscle and joint reaction loads, the boundary conditions, and the values of the constants used; work is underway to study this. Nonetheless, the results demonstrate the validity and potential of three-dimensional bone remodeling simulation. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses and adaptive bone remodeling simulations to become effective tools for regenerative medicine research.

  13. Iterative demodulation and decoding of coded non-square QAM

    NASA Technical Reports Server (NTRS)

    Li, L.; Divsalar, D.; Dolinar, S.

    2003-01-01

    Simulation results show that, with iterative demodulation and decoding, coded NS-8QAM performs 0.5 dB better than standard 8QAM and 0.7 dB better than 8PSK at a BER of 10⁻⁵, when the FEC code is the (15, 11) Hamming code concatenated with a rate-1 accumulator code, while coded NS-32QAM performs 0.25 dB better than standard 32QAM.

  14. Interaction of adhered metallic dust with transient plasma heat loads

    NASA Astrophysics Data System (ADS)

    Ratynskaia, S.; Tolias, P.; Bykov, I.; Rudakov, D.; De Angeli, M.; Vignitchouk, L.; Ripamonti, D.; Riva, G.; Bardin, S.; van der Meiden, H.; Vernimmen, J.; Bystrov, K.; De Temmerman, G.

    2016-06-01

    The first study of the interaction of metallic dust (tungsten, aluminum) adhered to tungsten substrates with transient plasma heat loads is presented. Experiments were carried out in the Pilot-PSI linear device, with transient heat fluxes up to 550 MW m⁻², and in the divertor of the DIII-D tokamak. The central role of the dust-substrate contact area in heat conduction is highlighted and confirmed by heat transfer simulations. The experiments provide evidence of wetting-induced coagulation, a novel growth mechanism in which cluster melting accompanied by droplet wetting leads to the formation of larger grains. The physical processes behind this mechanism are elucidated. The remobilization activity of the newly formed dust and the survivability of tungsten dust on hot surfaces are documented and discussed in the light of implications for ITER.

  15. Model Development for VDE Computations in NIMROD

    NASA Astrophysics Data System (ADS)

    Bunkers, K. J.; Sovinec, C. R.

    2017-10-01

    Vertical displacement events (VDEs) and the disruptions associated with them have the potential to cause considerable physical damage to ITER and other tokamak experiments. We report on simulations of generic axisymmetric VDEs and a vertically unstable case from Alcator C-Mod using the NIMROD code. Previous calculations used closures for heat flux and viscous stress. Initial calculations show that the halo current width depends on the temperature boundary conditions, so transport together with plasma-surface interaction may play a role in determining halo currents in experiments. The behavior of VDEs with Braginskii thermal conductivity and viscosity closures and Spitzer-like resistivity is investigated for both the generic axisymmetric VDE case and the C-Mod case. This effort is supported by the U.S. Dept. of Energy, Award Numbers DE-FG02-06ER54850 and DE-FC02-08ER54975.

  16. Design optimization of RF lines in vacuum environment for the MITICA experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Muri, Michela, E-mail: michela.demuri@igi.cnr.it; Consorzio RFX, Corso Stati Uniti, 4, I-35127 Padova; Pavei, Mauro

    This contribution concerns the radio frequency (RF) transmission line of the Megavolt ITER Injector and Concept Advancement (MITICA) experiment. The original design considered 1 5/8″ copper coaxial lines, but thermal simulations under operating conditions showed steady-state line temperatures not compatible with the prescriptions of the component manufacturer. Hence, an optimization of the design was necessary. Enhancing thermal radiation and increasing the conductor size were considered for design optimization: thermal analyses were carried out to calculate the temperature of the MITICA RF lines during operation as a function of the emissivity value and of other geometrical parameters. Five coating products to increase the conductor surface emissivity were tested, measuring the outgassing behavior of the selected products and the obtained emissivity values.

  17. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists of truncating the conjugate gradient algorithm at a fixed, predetermined order, leading to a fixed computational cost, and can thus be considered "non-iterative." This makes it possible to derive analytical forces, avoiding the energy-conservation (i.e., drift) issues that occur with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for an efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time-step scheme and compared to timings obtained with usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making them a method of choice for future long molecular dynamics simulations using polarizable force fields, where energy conservation matters. We detail the various steps required for an implementation of the complete method by software developers.
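    The fixed-truncation idea can be sketched on a generic well-conditioned symmetric positive-definite system (an assumption: this is not the actual polarization matrices or the authors' full formulation, just the core of the trick):

```python
# Conjugate gradient stopped at a predetermined order: every evaluation
# then costs exactly `order` matrix-vector products, independent of data.
import numpy as np

def truncated_cg(A, b, order):
    x = np.zeros_like(b)
    r = b.copy()          # residual for the zero initial guess
    p = r.copy()
    for _ in range(order):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)    # SPD with a modest condition number
b = rng.standard_normal(30)

errs = {}
for k in (1, 2, 3):
    errs[k] = np.linalg.norm(A @ truncated_cg(A, b, k) - b)
    print(k, errs[k])
```

Because the iteration count is fixed rather than residual-driven, the truncated solution is a smooth, differentiable function of the inputs, which is what makes the analytical forces described in the abstract tractable.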

  18. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists of truncating the conjugate gradient algorithm at a fixed, predetermined order, leading to a fixed computational cost, and can thus be considered "non-iterative." This makes it possible to derive analytical forces, avoiding the energy-conservation (i.e., drift) issues that occur with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for an efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time-step scheme and compared to timings obtained with usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making them a method of choice for future long molecular dynamics simulations using polarizable force fields, where energy conservation matters. We detail the various steps required for an implementation of the complete method by software developers.

  19. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady-state and unsteady-state heat distribution models for a projectile. The best location for installing MEMS on the projectile, based on surface temperature, is investigated. The iterative methods of Jacobi and Gauss-Seidel are used to solve the steady-state heat distribution model. The two methods produce identical results, but their iteration costs differ: Jacobi's method requires 350 iterations, while Gauss-Seidel requires only 188, converging faster. The comparison between the steady-state simulation and an unsteady-state model from a reference is satisfactory. Moreover, the best candidate location for installing MEMS on the projectile is the point T(10, 0), which has the lowest temperature of the points considered. The temperatures at T(10, 0) for scenarios 1 and 2 are 307 and 309 K, respectively.
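    The Jacobi-versus-Gauss-Seidel iteration-count gap reported above reproduces on any small Laplace problem (assumptions: a square grid with made-up boundary temperatures, not the projectile geometry):

```python
# Count sweeps to convergence for Jacobi vs Gauss-Seidel on a toy
# steady-state heat problem with fixed-temperature boundaries.
import numpy as np

def solve(method, n=20, tol=1e-4, max_iter=10000):
    T = np.zeros((n, n))
    T[0, :] = 400.0    # hot boundary (illustrative values, kelvin)
    T[-1, :] = 300.0   # cooler boundary
    for k in range(max_iter):
        old = T.copy()
        if method == "jacobi":
            # all updates use the previous sweep's values
            T[1:-1, 1:-1] = 0.25 * (old[:-2, 1:-1] + old[2:, 1:-1]
                                    + old[1:-1, :-2] + old[1:-1, 2:])
        else:  # gauss-seidel: reuse freshly updated neighbours in place
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    T[i, j] = 0.25 * (T[i - 1, j] + T[i + 1, j]
                                      + T[i, j - 1] + T[i, j + 1])
        if np.max(np.abs(T - old)) < tol:
            return T, k + 1
    return T, max_iter

T_j, it_j = solve("jacobi")
T_gs, it_gs = solve("gauss-seidel")
print(it_j, it_gs)  # Gauss-Seidel needs roughly half as many sweeps
```

The roughly 2:1 ratio matches classical theory (the Gauss-Seidel spectral radius is the square of Jacobi's for this problem) and is consistent with the 350-versus-188 counts quoted in the abstract.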

  20. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called the "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games for nonlinear systems. The iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis guarantees that the upper and lower iterative value functions converge to the upper and lower optima, respectively. When a saddle-point equilibrium exists, both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, without requiring existence criteria for the saddle-point equilibrium. If a saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, and they are proved to be non-equivalent. Finally, simulation results and comparisons illustrate the performance of the method.

  1. DEM Calibration Approach: design of experiment

    NASA Astrophysics Data System (ADS)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in this article. It is proposed to divide the model input parameters into those that require iterative calibration and those that are better measured directly. A new method for model calibration, based on design of experiments for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.

  2. Underwater terrain-aided navigation system based on combination matching algorithm.

    PubMed

    Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao

    2018-07-01

    The terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error of the indicated track from the strapdown inertial navigation system (SINS) is large. A Kalman filter is therefore added to the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the filter measurement, the cumulative SINS error is corrected in time through filter feedback, and the indicated track used in ICCP is thereby improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A suitable number of matching points is selected by comparing simulation results for matching time and matching precision. Simulation experiments are carried out with the ICCP algorithm and the mathematical model. The experiments show that navigation accuracy and stability are improved with the proposed combined algorithm when a suitable number of matching points is used. The integrated navigation system is effective in preventing divergence of the indicated track and can meet the underwater, long-duration, high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.

  3. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the selection of a suitable on-board sensor for the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range, on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point (ICP) algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data; it also provides the ground-truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating on point clouds directly and under large pose variations. A field experiment was also conducted, and its results show that the proposed method is effective. PMID:27271633
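    The ICP tracking step named above can be sketched in a few lines (assumptions: a slightly perturbed copy of a random synthetic cloud stands in for the flash-LIDAR data; this is not the paper's pipeline):

```python
# Minimal ICP: nearest-neighbour matching plus the SVD (Kabsch) rigid
# transform, iterated until the source cloud aligns with the model.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Rigid R, t minimising ||R p + t - q|| over paired rows of P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, model, iters=30):
    tree = cKDTree(model)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # nearest-neighbour matches
        R, t = best_rigid_transform(cur, model[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
model = rng.uniform(-1.0, 1.0, size=(150, 3))
theta = 0.05                                    # small initial misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = model @ Rz.T + np.array([0.03, -0.02, 0.01])
aligned = icp(src, model)
print(np.abs(aligned - model).max())
```

ICP of this kind only converges from a reasonable initial guess, which is why the paper pairs it with a separate global-search step for initial pose acquisition.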

  4. Application of Biologically-Based Lumping To Investigate the ...

    EPA Pesticide Factsheets

    People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic

  5. A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization, and iterated sampling on two variants of a realistically sized model of the EOS scheduling problem. These are implemented as permutation-based methods: methods that search the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best, and random mutation operators outperform our more intelligent squeaky-wheel operator. Furthermore, taking smaller steps towards the end of the search improves performance.
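    The permutation-based approach above can be sketched in miniature: simulated annealing searches the space of request priority orderings, and a greedy scheduler turns each ordering into a schedule and a score. The problem data, move operator, and cooling schedule below are toy assumptions, not the paper's model.

```python
# Permutation search for a toy over-subscription problem: each request has
# a weight and a few feasible slots; each slot holds one observation.
import math, random

rng = random.Random(42)
N_SLOTS = 8
requests = [(rng.randint(1, 5), rng.sample(range(N_SLOTS), rng.randint(1, 3)))
            for _ in range(20)]                # (weight, feasible slots)

def greedy_value(order):
    """Schedule requests in priority order; return total scheduled weight."""
    free = set(range(N_SLOTS))
    value = 0
    for i in order:
        weight, slots = requests[i]
        for s in slots:
            if s in free:
                free.remove(s)
                value += weight
                break
    return value

def anneal(steps=2000, t0=3.0):
    order = list(range(len(requests)))
    best = cur = greedy_value(order)
    best_order = order[:]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9        # linear cooling
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        val = greedy_value(order)
        if val >= cur or rng.random() < math.exp((val - cur) / t):
            cur = val
            if val > best:
                best, best_order = val, order[:]
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return best, best_order

best, _ = anneal()
```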

  6. A novel framework for virtual prototyping of rehabilitation exoskeletons.

    PubMed

    Agarwal, Priyanshu; Kuo, Pei-Hsin; Neptune, Richard R; Deshpande, Ashish D

    2013-06-01

    Human-worn rehabilitation exoskeletons have the potential to make therapeutic exercises increasingly accessible to disabled individuals while reducing the cost and labor involved in rehabilitation therapy. In this work, we propose a novel human-model-in-the-loop framework for virtual prototyping (design, control, and experimentation) of rehabilitation exoskeletons by merging computational musculoskeletal analysis with simulation-based design techniques. The framework allows the design and control algorithm of an exoskeleton to be iteratively optimized in simulation. We introduce biomechanical, morphological, and controller measures to quantify the performance of the device for the optimization study. Furthermore, the framework allows one to carry out virtual experiments that test specific "what-if" scenarios to quantify device performance and recovery progress. To illustrate the application of the framework, we present a case study in which the design and analysis of an index-finger exoskeleton are carried out using the proposed framework.

  7. A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.

    PubMed

    Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang

    2016-12-01

    This paper aims to study clustering methods for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings, such as dependence of the results on the selection of initial values and trapping in local optima, when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm was proposed. This algorithm dynamically determined the iterations of the firefly algorithm and the sampling of the simulated annealing algorithm according to fitness changes, and increased swarm diversity by expanding the range of the sudden jump, thereby effectively avoiding premature convergence. Results from confirmatory experiments on CM medical cases suggest that, compared with the traditional K-means clustering algorithm, this method greatly improves individual diversity and the obtained clustering results; the computational results have reference value for cluster analysis of CM prescriptions.
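    The role of simulated annealing in such hybrids is to let a clustering search occasionally accept a worse move and so escape local optima. A stripped-down illustration (not the paper's firefly hybrid) is a Metropolis-accepted centroid perturbation on toy one-dimensional data; all data and parameters are invented for the sketch.

```python
# Simulated-annealing acceptance applied to a K-means-style objective:
# perturb the centroids, always accept improvements, sometimes accept worse.
import math, random

rng = random.Random(7)
# two well-separated 1-D clusters of toy feature values
data = [rng.gauss(0.0, 0.3) for _ in range(30)] + \
       [rng.gauss(5.0, 0.3) for _ in range(30)]

def cost(centroids):
    """Sum of squared distances to the nearest centroid."""
    return sum(min((x - c) ** 2 for c in centroids) for x in data)

def anneal_cluster(k=2, steps=1500, t0=2.0):
    cents = [rng.choice(data) for _ in range(k)]
    cur = cost(cents)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9
        cand = [c + rng.gauss(0.0, 0.5) for c in cents]
        new_cost = cost(cand)
        # Metropolis rule: accept downhill always, uphill with prob e^(-d/t)
        if new_cost < cur or rng.random() < math.exp(-(new_cost - cur) / t):
            cents, cur = cand, new_cost
    return sorted(cents), cur

centroids, final_cost = anneal_cluster()
```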

  8. DSD/WBL-consistent JWL equations of state for EDC35

    NASA Astrophysics Data System (ADS)

    Hodgson, Alexander N.; Handley, Caroline Angela

    2012-03-01

    The Detonation Shock Dynamics (DSD) model allows the calculation of curvature-dependent detonation propagation. It is of particular use when applied to insensitive high explosives, such as EDC35, since they exhibit more strongly non-ideal behaviour. The DSD model is used in conjunction with experimental cylinder test data to obtain the JWL Equation of State (EOS) for EDC35. Adjustment of parameters in the JWL equation changes the expansion profile of the cylinder wall in hydrocode simulations. The parameters are iterated until the best match is obtained between simulation and experiment. Previous DSD models used at AWE have no mechanism to adjust the chemical energy release to match the detonation conditions. Two JWL calibrations are performed using the DSD model, with and without Hetherington's energy release model (these proceedings). A newly calibrated detonation speed-curvature relation is also used.
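    The quantity being iterated in such a calibration is the standard JWL pressure form; each trial parameter set is fed to the hydrocode and compared against the measured cylinder-wall expansion. The function below is the textbook JWL expression; the parameter values are generic illustrative numbers, not the EDC35 calibration.

```python
# Standard JWL equation of state: p(V, E) with V = v/v0 the relative
# volume and E the internal energy per unit initial volume.
import math

def jwl_pressure(v, a, b, r1, r2, omega, e):
    return (a * (1 - omega / (r1 * v)) * math.exp(-r1 * v)
            + b * (1 - omega / (r2 * v)) * math.exp(-r2 * v)
            + omega * e / v)

# illustrative (hypothetical) parameter set, GPa-like magnitudes
A, B, R1, R2, W, E0 = 600.0, 8.0, 4.5, 1.2, 0.3, 8.0
p_early = jwl_pressure(1.0, A, B, R1, R2, W, E0)   # near the CJ volume
p_late = jwl_pressure(3.0, A, B, R1, R2, W, E0)    # expanded products
```

    In a calibration loop, the A, B, R1, R2, and omega values would be adjusted until the simulated wall trajectory matches the cylinder test.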

  9. Effect of electron cyclotron beam width to neoclassical tearing mode stabilization by minimum seeking control in ITER

    NASA Astrophysics Data System (ADS)

    Park, Minho; Na, Yong-Su; Seo, Jaemin; Kim, M.; Kim, Kyungjin

    2018-01-01

    We report the effect of the electron cyclotron (EC) beam width on the full suppression time of the neoclassical tearing mode (NTM) using a finite-difference-method (FDM) based minimum seeking controller in ITER. An integrated numerical system is set up for time-dependent simulations of the NTM evolution in ITER by solving the modified Rutherford equation together with the plasma equilibrium, transport, and EC heating and current drive. The calculated magnetic island width and growth rate are converted to a Mirnov diagnostic signal as input to the controller to mimic a real experiment. In addition, 10% noise is imposed on this diagnostic signal to evaluate the robustness of the controller. To test the dependence of the NTM stabilization time on the EC beam width, a beam-width scan is performed first for a perfectly aligned case, then for cases with feedback control using the minimum seeking controller. When the EC beam is perfectly aligned, the narrower the EC beam, the shorter the observed NTM stabilization time; as the beam width increases, the required EC power increases exponentially. On the other hand, when the minimum seeking controller is applied, NTM stabilization sometimes fails as the EC beam width decreases. This is consistently observed in simulations with various realizations of the noise as well as without noise in the Mirnov signal. The higher relative misalignment (misalignment divided by the beam width) is found to be the reason for the failure with narrower beams. The EC stabilization effect can be weaker for narrower beams than for broader ones even at the same misalignment, owing to the smaller ECCD at the island O-point. Conversely, if the EC beam is too wide, NTM stabilization takes too long. Accordingly, an optimal EC beam width range is shown to exist for feedback stabilization of the NTM.
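    The controller concept is a finite-difference minimum-seeking loop: perturb the EC deposition location, estimate the local slope of an island-width proxy, and step downhill. A noise-free toy version with a quadratic proxy (illustrative, not an ITER model) makes the mechanism concrete.

```python
# Toy finite-difference minimum-seeking controller: drive the EC
# misalignment toward the value that minimizes an island-width proxy.

def island_width(misalignment):
    """Proxy: stabilization worsens quadratically with EC misalignment."""
    return 2.0 + 10.0 * misalignment ** 2

def minimum_seek(x0=0.4, probe=0.02, gain=0.02, steps=60):
    x = x0
    history = [x]
    for _ in range(steps):
        # central-difference estimate of d(width)/d(misalignment)
        grad = (island_width(x + probe) - island_width(x - probe)) / (2 * probe)
        x -= gain * grad                 # step against the estimated slope
        history.append(x)
    return history

traj = minimum_seek()
```

    With measurement noise added to the proxy, the probe amplitude and gain trade off convergence speed against sensitivity, which is the regime where narrow beams (high relative misalignment) cause the failures described above.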

  10. Simulation-Based Performance Assessment: An Innovative Approach to Exploring Understanding of Physical Science Concepts

    ERIC Educational Resources Information Center

    Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion

    2016-01-01

    This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…

  11. Factors Contributing to Cognitive Absorption and Grounded Learning Effectiveness in a Competitive Business Marketing Simulation

    ERIC Educational Resources Information Center

    Baker, David Scott; Underwood, James, III; Thakur, Ramendra

    2017-01-01

    This study aimed to establish a pedagogical positioning of a business marketing simulation as a grounded learning teaching tool and empirically assess the dimensions of cognitive absorption related to grounded learning effectiveness in an iterative business simulation environment. The method/design and sample consisted of a field study survey…

  12. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman-iteration-based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experimental results show that the proposed algorithms outperform some other regularization methods. PMID:23776560

  13. Investigation of heat transfer in liquid-metal flows under fusion-reactor conditions

    NASA Astrophysics Data System (ADS)

    Poddubnyi, I. I.; Pyatnitskaya, N. Yu.; Razuvanov, N. G.; Sviridov, V. G.; Sviridov, E. V.; Leshukov, A. Yu.; Aleskovskiy, K. V.; Obukhov, D. M.

    2016-12-01

    The effect discovered in studying a downward liquid-metal flow in a vertical pipe and in a channel of rectangular cross section in, respectively, a transverse and a coplanar magnetic field is analyzed. In test blanket modules (TBMs), which are prototypes of a blanket for a demonstration fusion reactor (DEMO) and which are intended for experimental investigations at the International Thermonuclear Experimental Reactor (ITER), liquid metals are assumed to fulfil simultaneously the functions of (i) a tritium breeder, (ii) a coolant, and (iii) a neutron moderator and multiplier. This approach to testing design solutions experimentally is motivated by plans to employ, in the majority of the currently developed DEMO blanket projects, liquid metals pumped through pipes and/or rectangular channels in a transverse magnetic field. At the present time, experiments that would directly simulate liquid-metal flows under conditions of ITER TBM and/or DEMO blanket operation (irradiation with thermonuclear neutrons, a cyclic temperature regime, and a magnetic-field strength of about 4 to 10 T) are not implementable for want of equipment that could reproduce all of the aforementioned effects of thermonuclear plasmas simultaneously. This is the reason why an iterative approach is used to estimate experimentally the performance of design solutions for liquid-metal channels by simulating one or two of the aforementioned factors at a time. The investigations reported in the present article are therefore of considerable topical interest. The experiments were performed on the mercury magnetohydrodynamic (MHD) loop of the MPEI-JIHT MHD experimental facility. Temperature fields were measured under conditions of two- and one-sided heating, and data on averaged temperature fields, distributions of the wall temperature, and statistical fluctuation features were obtained.
A substantial effect of counter thermogravitational convection (TGC) on averaged and fluctuating quantities was found. The development of TGC in the presence of a magnetic field leads to the appearance of low-frequency fluctuations whose anomalously high intensity exceeds the level of turbulence fluctuations severalfold. This effect manifests itself over a broad region of regime parameters. It was confirmed that low-frequency fluctuations penetrate readily through the wall; it is therefore necessary to study this effect further, in particular from the point of view of the fatigue strength of the walls of liquid-metal channels.

  14. Investigation of heat transfer in liquid-metal flows under fusion-reactor conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poddubnyi, I. I., E-mail: poddubnyyii@nikiet.ru; Pyatnitskaya, N. Yu.; Razuvanov, N. G.

    2016-12-15

    The effect discovered in studying a downward liquid-metal flow in a vertical pipe and in a channel of rectangular cross section in, respectively, a transverse and a coplanar magnetic field is analyzed. In test blanket modules (TBMs), which are prototypes of a blanket for a demonstration fusion reactor (DEMO) and which are intended for experimental investigations at the International Thermonuclear Experimental Reactor (ITER), liquid metals are assumed to fulfil simultaneously the functions of (i) a tritium breeder, (ii) a coolant, and (iii) a neutron moderator and multiplier. This approach to testing design solutions experimentally is motivated by plans to employ, in the majority of the currently developed DEMO blanket projects, liquid metals pumped through pipes and/or rectangular channels in a transverse magnetic field. At the present time, experiments that would directly simulate liquid-metal flows under conditions of ITER TBM and/or DEMO blanket operation (irradiation with thermonuclear neutrons, a cyclic temperature regime, and a magnetic-field strength of about 4 to 10 T) are not implementable for want of equipment that could reproduce all of the aforementioned effects of thermonuclear plasmas simultaneously. This is the reason why an iterative approach is used to estimate experimentally the performance of design solutions for liquid-metal channels by simulating one or two of the aforementioned factors at a time. The investigations reported in the present article are therefore of considerable topical interest. The experiments were performed on the mercury magnetohydrodynamic (MHD) loop of the MPEI-JIHT MHD experimental facility. Temperature fields were measured under conditions of two- and one-sided heating, and data on averaged temperature fields, distributions of the wall temperature, and statistical fluctuation features were obtained.
A substantial effect of counter thermogravitational convection (TGC) on averaged and fluctuating quantities was found. The development of TGC in the presence of a magnetic field leads to the appearance of low-frequency fluctuations whose anomalously high intensity exceeds the level of turbulence fluctuations severalfold. This effect manifests itself over a broad region of regime parameters. It was confirmed that low-frequency fluctuations penetrate readily through the wall; it is therefore necessary to study this effect further, in particular from the point of view of the fatigue strength of the walls of liquid-metal channels.

  15. Reduction of asymmetric wall force in ITER disruptions with fast current quench

    NASA Astrophysics Data System (ADS)

    Strauss, H.

    2018-02-01

    One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.

  16. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid, and an aqueous protein solution.
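    Direct inversion in the iterative subspace (DIIS) can be illustrated generically: a fixed-point iteration x = g(x) is accelerated by extrapolating over a short history of iterates, choosing coefficients that minimize the combined residual subject to summing to one. The toy linear map below stands in for the WHAM equations; it is a sketch of the DIIS machinery, not of WHAM itself.

```python
# Generic DIIS acceleration of a fixed-point iteration x = g(x).
import numpy as np

def diis_solve(g, x0, max_hist=5, iters=50, tol=1e-10):
    xs, rs = [], []
    x = x0
    for _ in range(iters):
        gx = g(x)
        xs.append(gx)
        rs.append(gx - x)                 # fixed-point residual
        xs, rs = xs[-max_hist:], rs[-max_hist:]
        m = len(rs)
        # bordered system: minimize |sum(c_i r_i)| subject to sum(c_i) = 1,
        # enforced with a Lagrange multiplier in the last row/column
        bmat = np.zeros((m + 1, m + 1))
        bmat[:m, :m] = [[ri @ rj for rj in rs] for ri in rs]
        bmat[m, :m] = bmat[:m, m] = 1.0
        rhs = np.zeros(m + 1)
        rhs[m] = 1.0
        c = np.linalg.lstsq(bmat, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))  # extrapolated iterate
        if np.linalg.norm(g(x) - x) < tol:
            break
    return x

# toy contraction standing in for the WHAM self-consistency equations
A = np.array([[0.5, 0.2], [0.1, 0.6]])
b = np.array([1.0, -1.0])
sol = diis_solve(lambda v: A @ v + b, np.zeros(2))
```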

  17. Enabling co-simulation of tokamak plant models and plasma control systems

    DOE PAGES

    Walker, M. L.

    2017-12-22

    A system for connecting the Plasma Control System and a model of the tokamak Plant in closed loop co-simulation for plasma control development has been in routine use at DIII-D for more than 20 years and at other fusion labs that use variants of the DIII-D PCS for approximately the last decade. Here, co-simulation refers to the simultaneous execution of two independent codes with the exchange of data - Plant actuator commands and tokamak diagnostic data - between them during execution. Interest in this type of PCS-Plant simulation technology has also been growing recently at other fusion facilities. In fact, use of such closed loop control simulations is assumed to play an even larger role in the development of both the ITER Plasma Control System (PCS) and the experimental operation of the ITER device, where they will be used to support verification/validation of the PCS and also for ITER pulse schedule development and validation. We describe the key use cases that motivate the co-simulation capability and the features that must be provided by the Plasma Control System to support it. These features could be provided by the PCS itself or by a model of the PCS. If the PCS itself is chosen to provide them, there are requirements imposed on its architecture. If a PCS model is chosen, there are requirements imposed on the initial implementation of this simulation as well as long-term consequences for its continued development and maintenance. We describe these issues for each use case and discuss the relative merits of the two choices. Several examples are given illustrating uses of the co-simulation method to address problems of plasma control during the operation of DIII-D and of other devices that use the DIII-D PCS.
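    The data exchange described above - actuator commands one way, diagnostic data the other, once per control cycle - can be sketched with two stub objects run in lockstep. Both models below are toy placeholders invented for illustration, not DIII-D or ITER code.

```python
# Conceptual co-simulation loop: a PCS stub and a plant-model stub
# exchange actuator commands and synthetic diagnostics each cycle.

class PlantModel:
    """Toy tokamak plant: plasma current responds lazily to the command."""
    def __init__(self):
        self.current = 0.0
    def step(self, actuator_command):
        self.current += 0.2 * (actuator_command - self.current)
        return {"ip": self.current}          # diagnostic data back to the PCS

class PCS:
    """Toy control system: proportional controller tracking a current target."""
    def __init__(self, target):
        self.target = target
    def step(self, diagnostics):
        error = self.target - diagnostics["ip"]
        return diagnostics["ip"] + 1.5 * error   # actuator command to the plant

plant, pcs = PlantModel(), PCS(target=1.0)
diagnostics = {"ip": 0.0}
trace = []
for cycle in range(100):
    command = pcs.step(diagnostics)      # PCS consumes diagnostics
    diagnostics = plant.step(command)    # plant consumes actuator command
    trace.append(diagnostics["ip"])
```

    In a real facility each `step` is a separate code (or process) and the exchange crosses a defined interface, which is exactly where the architectural requirements discussed in the record arise.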

  18. Enabling co-simulation of tokamak plant models and plasma control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, M. L.

    A system for connecting the Plasma Control System and a model of the tokamak Plant in closed loop co-simulation for plasma control development has been in routine use at DIII-D for more than 20 years and at other fusion labs that use variants of the DIII-D PCS for approximately the last decade. Here, co-simulation refers to the simultaneous execution of two independent codes with the exchange of data - Plant actuator commands and tokamak diagnostic data - between them during execution. Interest in this type of PCS-Plant simulation technology has also been growing recently at other fusion facilities. In fact, use of such closed loop control simulations is assumed to play an even larger role in the development of both the ITER Plasma Control System (PCS) and the experimental operation of the ITER device, where they will be used to support verification/validation of the PCS and also for ITER pulse schedule development and validation. We describe the key use cases that motivate the co-simulation capability and the features that must be provided by the Plasma Control System to support it. These features could be provided by the PCS itself or by a model of the PCS. If the PCS itself is chosen to provide them, there are requirements imposed on its architecture. If a PCS model is chosen, there are requirements imposed on the initial implementation of this simulation as well as long-term consequences for its continued development and maintenance. We describe these issues for each use case and discuss the relative merits of the two choices. Several examples are given illustrating uses of the co-simulation method to address problems of plasma control during the operation of DIII-D and of other devices that use the DIII-D PCS.

  19. In-class Simulations of the Iterated Prisoner's Dilemma Game.

    ERIC Educational Resources Information Center

    Bodo, Peter

    2002-01-01

    Developed a simple computer program for the in-class simulation of the repeated prisoner's dilemma game with student-designed strategies. Describes the basic features of the software. Presents two examples using the program to teach the problems of cooperation among profit-maximizing agents. (JEH)
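    A minimal version of such an in-class simulation pits two strategies against each other over repeated rounds with the standard payoff matrix (T=5, R=3, P=1, S=0). The strategy names and round count are illustrative choices.

```python
# Iterated prisoner's dilemma: play two strategies head-to-head and
# tally payoffs with the standard T=5, R=3, P=1, S=0 matrix.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"   # cooperate first, then mirror

def always_defect(my_hist, opp_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

coop = play(tit_for_tat, tit_for_tat)        # mutual cooperation: (300, 300)
exploit = play(always_defect, tit_for_tat)   # one free defection, then (1, 1)
```

    Students can supply their own strategy functions with the same two-argument signature and round-robin them, which is exactly the exercise the record describes.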

  20. Designing Needs Statements in a Systematic Iterative Way

    ERIC Educational Resources Information Center

    Verstegen, D. M. L.; Barnard, Y. F.; Pilot, A.

    2009-01-01

    Designing specifications for technically advanced instructional products, such as e-learning, simulations or simulators requires different kinds of expertise. The SLIM method proposes to involve all stakeholders from the beginning in a series of workshops under the guidance of experienced instructional designers. These instructional designers…

  1. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20-billion-dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  2. Simulation of tokamak armour erosion and plasma contamination at intense transient heat fluxes in ITER

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Bazylev, B. N.; Garkusha, I. E.; Loarte, A.; Pestchanyi, S. E.; Safronov, V. M.

    2005-03-01

    For ITER, the potential material damage of plasma-facing tungsten, CFC, or beryllium components during transient processes such as ELMs or mitigated disruptions is simulated numerically using the MHD code FOREV-2D and the melt motion code MEMOS-1.5D for heat depositions in the range of 0.5-3 MJ/m2 on the time scale of 0.1-1 ms. Such loads can cause significant evaporation at the target surface and contamination of the SOL by ions of the evaporated material. Results are presented on carbon plasma dynamics in toroidal geometry and on radiation fluxes from the SOL carbon ions obtained with FOREV-2D. The validation of MEMOS-1.5D against the plasma gun tokamak simulators MK-200UG and QSPA-Kh50, based on the tungsten melting threshold, is described. Simulations with MEMOS-1.5D for a beryllium first wall that provide important details about the melt motion dynamics and typical features of the damage are reported.

  3. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions.
Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
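    The fixed-point structure being controlled here is easy to state in one dimension: the inverse v of a forward displacement u satisfies v(x) = -u(x + v(x)), and a relaxed update feeds the inverse-consistency (IC) residual back with a gain. The smooth test field and the constant gain below are illustrative choices standing in for the paper's adaptive controls.

```python
# 1-D sketch of DVF inversion with residual feedback: iterate
# v <- v - mu * r(v), where r(v) = v(x) + u(x + v(x)) is the IC residual.
import numpy as np

x = np.linspace(0.0, 1.0, 401)
u = 0.05 * np.sin(2 * np.pi * x)            # forward displacement field

def ic_residual(v):
    """Inverse-consistency residual; u is sampled by linear interpolation."""
    return v + np.interp(x + v, x, u)

def invert(mu=0.8, iters=50):
    v = -u.copy()                            # common initial guess
    for _ in range(iters):
        v = v - mu * ic_residual(v)          # feedback-controlled update
    return v

v = invert()
residual = np.abs(ic_residual(v)).max()
```

    With mu = 1 this reduces to the plain fixed-point iteration; the gain (constant here, spatially variant in the paper) is what enlarges the convergence region when the displacement Jacobian is far from identity.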

  4. A Brownian dynamics study on ferrofluid colloidal dispersions using an iterative constraint method to satisfy Maxwell’s equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu

    2016-07-15

    Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell’s equations. An iterative constraint method was developed to satisfy Maxwell’s equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell’s equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial as in homogeneous simulations that assume the material’s magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present, but less so than typically observed.

  5. Simulations of beam-matter interaction experiments at the CERN HiRadMat facility and prospects of high-energy-density physics research.

    PubMed

    Tahir, N A; Burkart, F; Shutov, A; Schmidt, R; Wollmann, D; Piriz, A R

    2014-12-01

    In a recent publication [Schmidt et al., Phys. Plasmas 21, 080701 (2014)], we reported results on beam-target interaction experiments that have been carried out at the CERN HiRadMat (High Radiation to Materials) facility using extended solid copper cylindrical targets that were irradiated with a 440-GeV proton beam delivered by the Super Proton Synchrotron (SPS). On the one hand, these experiments confirmed the existence of hydrodynamic tunneling of the protons that leads to substantial increase in the range of the protons and the corresponding hadron shower in the target, a phenomenon predicted by our previous theoretical investigations [Tahir et al., Phys. Rev. ST Accel. Beams 25, 051003 (2012)]. On the other hand, these experiments demonstrated that the beam heated part of the target is severely damaged and is converted into different phases of high energy density (HED) matter, as suggested by our previous theoretical studies [Tahir et al., Phys. Rev. E 79, 046410 (2009)]. The latter confirms that the HiRadMat facility can be used to study HED physics. In the present paper, we give details of the numerical simulations carried out to understand the experimental measurements. These include the evolution of the physical parameters, for example, density, temperature, pressure, and the internal energy in the target, during and after the irradiation. This information is important in order to determine the region of the HED phase diagram that can be accessed in such experiments. These simulations have been done using the energy deposition code FLUKA and the two-dimensional hydrodynamic code BIG2, run iteratively.

  6. First results of the ITER-relevant negative ion beam test facility ELISE (invited).

    PubMed

    Fantz, U; Franzen, P; Heinemann, B; Wünderlich, D

    2014-02-01

    An important step in the European R&D roadmap towards the neutral beam heating systems of ITER is the new test facility ELISE (Extraction from a Large Ion Source Experiment) for large-scale extraction from a half-size ITER RF source. The test facility was constructed in recent years at Max-Planck-Institut für Plasmaphysik Garching and is now operational. ELISE is gaining early experience of the performance and operation of large RF-driven negative hydrogen ion sources with plasma illumination of a source area of 1 × 0.9 m² and an extraction area of 0.1 m² using 640 apertures. First results in volume operation, i.e., without caesium seeding, are presented.

  7. Numerical simulation of double‐diffusive finger convection

    USGS Publications Warehouse

    Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard

    2005-01-01

    A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, are used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
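
    The sequential Picard scheme can be sketched on a toy problem: a 1D steady advection-diffusion solve whose velocity depends on the resulting concentration field, a crude stand-in for density coupling. All parameter values and the feedback closure below are illustrative assumptions, not SUTRA's formulation:

```python
import numpy as np

def solve_transport(v, n=101, L=1.0, D=0.05):
    """Steady 1D advection-diffusion v c' = D c'' with c(0)=1, c(L)=0,
    discretized with central differences and solved directly."""
    h = L / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0
    b[0] = 1.0          # inflow boundary, c = 1
    A[-1, -1] = 1.0     # outflow boundary, c = 0
    for i in range(1, n - 1):
        A[i, i - 1] = -D / h**2 - v / (2 * h)
        A[i, i] = 2 * D / h**2
        A[i, i + 1] = -D / h**2 + v / (2 * h)
    return np.linalg.solve(A, b)

def picard_coupled(v0=1.0, beta=0.2, tol=1e-10, max_iter=100):
    """Sequential Picard iteration: solve transport with the current
    velocity, update the velocity from the new concentration field
    (a toy density feedback), and repeat until neither changes."""
    v = v0
    for _ in range(max_iter):
        c = solve_transport(v)
        v_new = v0 / (1.0 + beta * c.mean())  # illustrative closure
        if abs(v_new - v) < tol:
            return v_new, c
        v = v_new
    raise RuntimeError("Picard iteration did not converge")

v, c = picard_coupled()
```

    Each pass solves one field with the other held fixed, exactly the lagged-coefficient structure of a Picard loop; when the feedback is mild, as here, the pair converges to a joint fixed point in a handful of sweeps.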

  8. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald.

    PubMed

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2014-02-28

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation to make large-scale, massively parallel PBC simulations with point dipole polarizable models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME, and a noticeable improvement in memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data across cores. Coupled with advanced MD techniques, gains of 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, opening new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
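
    The preconditioned conjugate gradient strategy for the polarization equations A μ = E, with A = α⁻¹ − T symmetric positive definite, can be sketched in standalone form. The small random symmetric coupling matrix below is an illustrative toy, not the AMOEBA dipole interaction tensor:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for SPD A, with a diagonal
    (inverse-polarizability) preconditioner applied as M_inv_diag * r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            return x
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    raise RuntimeError("PCG did not converge")

rng = np.random.default_rng(0)
n = 200
T = rng.normal(scale=0.01, size=(n, n))
T = 0.5 * (T + T.T)              # symmetric toy dipole coupling
np.fill_diagonal(T, 0.0)
inv_alpha = 1.0                  # uniform inverse polarizability
A = inv_alpha * np.eye(n) - T    # spectral radius of T << 1 here, so A is SPD
b = rng.normal(size=n)           # external field at each site
mu = pcg(A, b, M_inv_diag=np.full(n, 1.0 / inv_alpha))
```

    In a production code the matrix is never formed: the product A @ p is replaced by an SPME evaluation of the field generated by the trial dipoles, which is what lets the solver scale to hundreds of thousands of atoms.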

  9. Scalable Evaluation of Polarization Energy and Associated Forces in Polarizable Molecular Dynamics: II. Towards Massively Parallel Computations using Smooth Particle Mesh Ewald

    PubMed Central

    Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip

    2015-01-01

    In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in the energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, the first implementation to make large-scale, massively parallel PBC simulations with point dipole polarizable models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME, and a noticeable improvement in memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data across cores. Coupled with advanced MD techniques, gains of 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, opening new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations. PMID:26512230

  10. The effect of electron cyclotron heating on density fluctuations at ion and electron scales in ITER baseline scenario discharges on the DIII-D tokamak

    NASA Astrophysics Data System (ADS)

    Marinoni, A.; Pinsker, R. I.; Porkolab, M.; Rost, J. C.; Davis, E. M.; Burrell, K. H.; Candy, J.; Staebler, G. M.; Grierson, B. A.; McKee, G. R.; Rhodes, T. L.; The DIII-D Team

    2017-12-01

    Experiments simulating the ITER baseline scenario on the DIII-D tokamak show that torque-free pure electron heating, when coupled to plasmas subject to a net co-current beam torque, affects density fluctuations at electron scales on a sub-confinement time scale, whereas fluctuations at ion scales change only after profiles have evolved to a new stationary state. Modifications to the density fluctuations measured by the phase contrast imaging diagnostic (PCI) are assessed by analyzing the time evolution following the switch-off of electron cyclotron heating (ECH), thus going from mixed beam/ECH to pure neutral beam heating at fixed βN. Within 20 ms after turning off ECH, the intensity of fluctuations is observed to increase at frequencies higher than 200 kHz; in contrast, fluctuations at lower frequencies are seen to decrease in intensity on a longer time scale, after other equilibrium quantities have evolved. Non-linear gyro-kinetic modeling at ion and electron scales suggests that, while the low frequency response of the diagnostic is consistent with the dominant ITG modes being weakened by the slow increase in flow shear, the high frequency response is due to prompt changes to the electron temperature profile that enhance electron modes and generate a larger heat flux and an inward particle pinch. These results suggest that electron heated regimes in ITER will feature multi-scale fluctuations that might affect fusion performance via modifications to profiles.

  11. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  12. Material migration studies with an ITER first wall panel proxy on EAST

    DOE PAGES

    Ding, R.; Pitts, R. A.; Borodin, D.; ...

    2015-01-23

    The ITER beryllium (Be) first wall (FW) panels are shaped to protect leading edges between neighbouring panels arising from assembly tolerances. This departure from a perfectly cylindrical surface automatically leads to magnetically shadowed regions where eroded Be can be re-deposited, together with co-deposition of tritium fuel. To provide a benchmark for a series of erosion/re-deposition simulation studies performed for the ITER FW panels, dedicated experiments have been performed on the EAST tokamak using a specially designed, instrumented test limiter acting as a proxy for the FW panel geometry. Carbon coated molybdenum plates forming the limiter front surface were exposed to the outer midplane boundary plasma of helium discharges using the new Material and Plasma Evaluation System (MAPES). Net erosion and deposition patterns are estimated using ion beam analysis to measure the carbon layer thickness variation across the surface after exposure. The highest erosion of about 0.8 µm is found near the midplane, where the surface is closest to the plasma separatrix. No net deposition above the measurement detection limit was found on the proxy wall element, even in shadowed regions. The measured 2D surface erosion distribution has been modelled with the 3D Monte Carlo code ERO, using the local plasma parameter measurements together with a diffusive transport assumption. In conclusion, excellent agreement between the experimentally observed net erosion and the modelled erosion profile has been obtained.

  13. Adaptive and iterative methods for simulations of nanopores with the PNP-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mitscha-Baude, Gregor; Buttinger-Kreuzhuber, Andreas; Tulzer, Gerhard; Heitzinger, Clemens

    2017-06-01

    We present a 3D finite element solver for the nonlinear Poisson-Nernst-Planck (PNP) equations for electrodiffusion, coupled to the Stokes system of fluid dynamics. The model serves as a building block for the simulation of macromolecule dynamics inside nanopore sensors. The source code is released online at http://github.com/mitschabaude/nanopores. We add to existing numerical approaches by deploying goal-oriented adaptive mesh refinement. To reduce the computational overhead of mesh adaptivity, our error estimator uses the much cheaper Poisson-Boltzmann equation as a simplified model, which is justified on heuristic grounds but shown to work well in practice. To address the nonlinearity in the full PNP-Stokes system, three different linearization schemes are proposed and investigated, with two segregated iterative approaches both outperforming a naive application of Newton's method. Numerical experiments are reported on a real-world nanopore sensor geometry. We also investigate two different models for the interaction of target molecules with the nanopore sensor through the PNP-Stokes equations. In one model, the molecule is of finite size and is explicitly built into the geometry; while in the other, the molecule is located at a single point and only modeled implicitly, after solution of the system, which is computationally favorable. We compare the resulting force profiles of the electric and velocity fields acting on the molecule, and conclude that the point-size model fails to capture important physical effects such as the dependence of charge selectivity of the sensor on the molecule radius.

  14. Comparative assessment of pressure field reconstructions from particle image velocimetry measurements and Lagrangian particle tracking

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.

    2017-04-01

    A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.

  15. Update 0.2 to "pysimm: A python package for simulation of molecular systems"

    NASA Astrophysics Data System (ADS)

    Demidov, Alexander G.; Fortunato, Michael E.; Colina, Coray M.

    2018-01-01

    An update to the pysimm Python molecular simulation API is presented. A major part of the update is the implementation of a new interface with CASSANDRA, a modern, versatile Monte Carlo molecular simulation program. Several significant improvements to the LAMMPS communication module that allow better and more versatile simulation setup are also reported. An example application implementing iterative CASSANDRA-LAMMPS interaction is illustrated.

  16. Simulation of RF-fields in a fusion device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Witte, Dieter; Bogaert, Ignace; De Zutter, Daniel

    2009-11-26

    In this paper the problem of scattering off a fusion plasma is approached from the point of view of integral equations. Using the volume equivalence principle, an integral equation is derived which describes the electromagnetic fields in a plasma. The equation is discretized with the method of moments (MoM) using conforming basis functions. This reduces the problem to solving a dense matrix equation, which can be done iteratively, with each iteration sped up using FFTs.
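
    The FFT-accelerated iterative solve can be sketched for the special case of a circulant system, where the dense matrix-vector product collapses to O(n log n). Real MoM kernels are translation-invariant (Toeplitz-blocked) rather than exactly circulant and are embedded in larger circulants; the simple SPD stencil below is an illustrative simplification:

```python
import numpy as np

def circulant_matvec(kernel_fft, x):
    """y = A x for a circulant matrix via FFT, in O(n log n) instead of
    the O(n^2) cost of forming and applying the dense MoM matrix."""
    return np.fft.ifft(kernel_fft * np.fft.fft(x)).real

def cg_fft(kernel_fft, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient; the only access to A is the FFT matvec."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = circulant_matvec(kernel_fft, p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            return x
        p = r + (rr_new / rr) * p
        rr = rr_new
    raise RuntimeError("CG did not converge")

n = 1024
kernel = np.zeros(n)
kernel[0], kernel[1], kernel[-1] = 2.0, -0.5, -0.5   # SPD circulant stencil
kernel_fft = np.fft.fft(kernel)
b = np.sin(2 * np.pi * np.arange(n) / n)             # toy excitation
x = cg_fft(kernel_fft, b)
```

    The actual MoM system is complex-valued and non-Hermitian, so a Krylov method such as GMRES or BiCGStab would replace CG, but the FFT matvec plays exactly the same role.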

  17. Iteration and Anxiety in Mathematical Literature

    ERIC Educational Resources Information Center

    Capezzi, Rita; Kinsey, L. Christine

    2016-01-01

    We describe our experiences in team-teaching an honors seminar on mathematics and literature. We focus particularly on two of the texts we read: Georges Perec's "How to Ask Your Boss for a Raise" and Alain Robbe-Grillet's "Jealousy," both of which make use of iterative structures.

  18. Electron Cyclotron power management for control of Neoclassical Tearing Modes in the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.

    Time-dependent simulations are used to evolve plasma discharges in combination with a Modified Rutherford equation (MRE) for calculation of Neoclassical Tearing Mode (NTM) stability in response to Electron Cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. These simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the Upper Launcher during the entire flattop phase. By assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q=10.

  19. Electron cyclotron power management for control of neoclassical tearing modes in the ITER baseline scenario

    NASA Astrophysics Data System (ADS)

    Poli, F. M.; Fredrickson, E. D.; Henderson, M. A.; Kim, S.-H.; Bertelli, N.; Poli, E.; Farina, D.; Figini, L.

    2018-01-01

    Time-dependent simulations are used to evolve plasma discharges in combination with a modified Rutherford equation for calculation of neoclassical tearing mode (NTM) stability in response to electron cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. Simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the upper launcher during the entire flattop phase. Assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q=10.

  20. Electron Cyclotron power management for control of Neoclassical Tearing Modes in the ITER baseline scenario

    DOE PAGES

    Poli, Francesca M.; Fredrickson, Eric; Henderson, Mark A.; ...

    2017-09-21

    Time-dependent simulations are used to evolve plasma discharges in combination with a Modified Rutherford equation (MRE) for calculation of Neoclassical Tearing Mode (NTM) stability in response to Electron Cyclotron (EC) feedback control in ITER. The main application of this integrated approach is to support the development of control algorithms by analyzing the plasma response with physics-based models and to assess how uncertainties in the detection of the magnetic island and in the EC alignment affect the ability of the ITER EC system to fulfill its purpose. These simulations indicate that it is critical to detect the island as soon as possible, before its size exceeds the EC deposition width, and that maintaining alignment with the rational surface within half of the EC deposition width is needed for stabilization and suppression of the modes, especially in the case of modes with helicity (2,1). A broadening of the deposition profile, for example due to wave scattering by turbulence fluctuations or not well aligned beams, could even be favorable in the case of the (2,1)-NTM, by relaxing an over-focussing of the EC beam and improving the stabilization at the mode onset. Pre-emptive control reduces the power needed for suppression and stabilization in the ITER baseline discharge to a maximum of 5 MW, which should be reserved and available to the Upper Launcher during the entire flattop phase. By assuming continuous triggering of NTMs, with pre-emptive control ITER would still be able to demonstrate a fusion gain of Q=10.

  1. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr⁻¹, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
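
    The sequential iterative approach can be sketched in toy form: within each time step, upwind-advect the aqueous phase, re-equilibrate against a chemistry constraint, and sweep the pair until the split is self-consistent. The single-component solubility cap below is an illustrative stand-in for the pyrite redox chemistry, not the paper's equilibrium system:

```python
import numpy as np

def equilibrate(c_tot, c_sat=1.0):
    """Equilibrium speciation: any total concentration above the
    solubility limit is held as immobile mineral."""
    c_aq = np.minimum(c_tot, c_sat)
    return c_aq, c_tot - c_aq

def sia_step(c_aq0, c_solid0, v=1.0, dx=0.1, dt=0.05, c_in=2.0,
             tol=1e-12, max_sweeps=50):
    """One time step of the sequential iterative approach: transport the
    aqueous phase (explicit upwind advection) from the beginning-of-step
    state, repartition with the chemistry step, and repeat until
    successive sweeps agree."""
    c_aq = c_aq0
    for _ in range(max_sweeps):
        adv = np.empty_like(c_aq0)
        adv[0] = c_aq0[0] - v * dt / dx * (c_aq0[0] - c_in)
        adv[1:] = c_aq0[1:] - v * dt / dx * (c_aq0[1:] - c_aq0[:-1])
        c_aq_new, c_solid_new = equilibrate(adv + c_solid0)
        if np.max(np.abs(c_aq_new - c_aq)) < tol:
            return c_aq_new, c_solid_new
        c_aq = c_aq_new
    return c_aq_new, c_solid_new

c_aq = np.zeros(20)
c_solid = np.zeros(20)
for _ in range(20):
    c_aq, c_solid = sia_step(c_aq, c_solid)
# The aqueous front is capped at c_sat while mineral accumulates upstream.
```

    In this toy the chemistry does not feed back into the transport operator, so the sweep converges on the second pass; with retardation or kinetic feedback the loop genuinely iterates, which is where the slow-convergence caveat in the abstract arises.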

  2. Three-dimensional simulation of H-mode plasmas with localized divertor impurity injection on Alcator C-Mod using the edge transport code EMC3-EIRENE

    DOE PAGES

    Lore, Jeremy D.; Reinke, M. L.; Brunner, D.; ...

    2015-04-28

    Experiments in Alcator C-Mod assessing the level of toroidal asymmetry in divertor conditions resulting from poloidally and toroidally localized extrinsic impurity gas seeding show a weak toroidal peaking (~1.1) in divertor electron temperatures for high-power enhanced D-alpha H-mode plasmas. This is in contrast to similar experiments in Ohmically heated L-mode plasmas, which showed a clear toroidal modulation in the divertor electron temperature. Modeling of these experiments using the 3D edge transport code EMC3-EIRENE [Y. Feng et al., J. Nucl. Mater. 241, 930 (1997)] qualitatively reproduces these trends, and indicates that the different response in the simulations is due to the ionization location of the injected nitrogen. Low electron temperatures in the private flux region (PFR) in L-mode result in a PFR plasma that is nearly transparent to neutral nitrogen, while in H-mode the impurities are ionized in close proximity to the injection location, with this latter case yielding a largely axisymmetric radiation pattern in the scrape-off-layer. In conclusion, the consequences for the ITER gas injection system are discussed. Quantitative agreement with the experiment is lacking in some areas, suggesting potential areas for improving the physics model in EMC3-EIRENE.

  3. Iterative direct inversion: An exact complementary solution for inverting fault-slip data to obtain palaeostresses

    NASA Astrophysics Data System (ADS)

    Mostafa, Mostafa E.

    2005-10-01

    The present study shows that reconstructing the reduced stress tensor (RST) from the measurable fault-slip data (FSD) and the immeasurable shear stress magnitudes (SSM) is a typical iteration problem. The result of direct inversion of FSD presented by Angelier [1990. Geophysical Journal International 103, 363-376] is considered as a starting point (zero-step iteration) where all SSM are assigned a constant value (λ = √3/2). By iteration, the SSM and RST update each other until they converge to fixed values. Angelier [1990. Geophysical Journal International 103, 363-376] designed the function upsilon (υ) and two estimators, relative upsilon (RUP) and ANG, to express the divergence between the measured and calculated shear stresses. Plotting individual faults' RUP at successive iteration steps shows that they tend to zero (simulated data) or to fixed values (real data) at a rate depending on the orientation and homogeneity of the data. FSD of related origin tend to aggregate in clusters. Plots of the estimator ANG versus RUP show that, by iteration, labeled data points are disposed in clusters about a straight line. These two new plots form the basis of a technique for separating FSD into homogeneous clusters.

  4. Ethical reasoning through simulation: a phenomenological analysis of student experience.

    PubMed

    Lewis, Gareth; McCullough, Melissa; Maxwell, Alexander P; Gormley, Gerard J

    2016-01-01

    Medical students transitioning into professional practice feel underprepared to deal with the emotional complexities of real-life ethical situations. Simulation-based learning (SBL) may provide a safe environment for students to probe the boundaries of ethical encounters. Published studies of ethics simulation have not generated sufficiently deep accounts of student experience to inform pedagogy. The aim of this study was to understand students' lived experiences as they engaged with the emotional challenges of managing clinical ethical dilemmas within a SBL environment. This qualitative study was underpinned by an interpretivist epistemology. Eight senior medical students participated in an interprofessional ward-based SBL activity incorporating a series of ethically challenging encounters. Each student wore digital video glasses to capture point-of-view (PoV) film footage. Students were interviewed immediately after the simulation and the PoV footage played back to them. Interviews were transcribed verbatim. An interpretative phenomenological approach, using established template analysis, was used to iteratively analyse the data. Four main themes emerged from the analysis: (1) 'Authentic on all levels?', (2) 'Letting the emotions flow', (3) 'Ethical alarm bells' and (4) 'Voices of children and ghosts'. Students recognised many explicit ethical dilemmas during the SBL activity but had difficulty navigating more subtle ethical and professional boundaries. In emotionally complex situations, instances of moral compromise were observed (such as telling an untruth). Some participants felt unable to raise concerns or challenge unethical behaviour within the scenarios due to prior negative undergraduate experiences. This study provided deep insights into medical students' immersive and embodied experiences of ethical reasoning during an authentic SBL activity.
By layering on the human dimensions of ethical decision-making, students can understand their personal responses to emotion, complexity and interprofessional working. This could assist them in framing and observing appropriate ethical and professional boundaries and help smooth the transition into clinical practice.

  5. Design and Evaluation of a Prompting Instrument to Support Learning within the Diffusion Simulation Game

    ERIC Educational Resources Information Center

    Kwon, Seolim; Lara, Miguel; Enfield, Jake; Frick, Theodore

    2013-01-01

    Through iterative usability testing, a set of prompts used as a form of instructional support was developed in order to facilitate the comprehension of the diffusion of innovations theory (Rogers, 2003) in a simulation game called the Diffusion Simulation Game (DSG) (Molenda & Rice, 1979). The six subjects who participated in the study…

  6. W transport and accumulation control in the termination phase of JET H-mode discharges and implications for ITER

    NASA Astrophysics Data System (ADS)

    Köchl, F.; Loarte, A.; de la Luna, E.; Parail, V.; Corrigan, G.; Harting, D.; Nunes, I.; Reux, C.; Rimini, F. G.; Polevoi, A.; Romanelli, M.; Contributors, JET

    2018-07-01

    Tokamak operation with W PFCs is associated with specific challenges for impurity control, which may be particularly demanding in the transition from stationary H-mode to L-mode. To address W control issues in this phase, dedicated experiments have been performed at JET, including variation of the rate of power and current decrease, gas fuelling and central ion cyclotron heating (ICRH), and active ELM control by vertical kicks. The experimental results obtained demonstrate the key role of maintaining ELM control to control the W concentration in the exit phase of H-modes with slow (ITER-like) ramp-down of the neutral beam injection power in JET. For these experiments, integrated fully predictive core+edge+SOL transport modelling studies applying discrete models for the description of transients such as sawteeth and ELMs have been performed for the first time with the JINTRAC suite of codes for the entire transition from stationary H-mode until the time when the plasma would return to L-mode, focusing on the W transport behaviour. Simulations have shown that the existing models can appropriately reproduce the plasma profile evolution in the core, edge and SOL as well as W accumulation trends in the termination phase of JET H-mode discharges as a function of the applied ICRH and ELM control schemes, substantiating the ambivalent effect of ELMs on W sputtering on the one hand and on edge transport affecting core W accumulation on the other.
In this paper the results of the JET experiments, the comparison with JINTRAC modelling and the adequacy of the models to reproduce the experimental results are described and conclusions are drawn regarding the applicability of these models for the extrapolation of the applied W accumulation control techniques to ITER.

  7. Formation and sustainment of internal transport barriers in the International Thermonuclear Experimental Reactor with the baseline heating mix

    NASA Astrophysics Data System (ADS)

    Poli, Francesca M.; Kessel, Charles E.

    2013-05-01

    Plasmas with internal transport barriers (ITBs) are a potential and attractive route to steady-state operation in ITER. These plasmas exhibit radially localized regions of improved confinement with steep pressure gradients in the plasma core, which drive large bootstrap current and generate hollow current profiles and negative magnetic shear. This work examines the formation and sustainment of ITBs in ITER with electron cyclotron heating and current drive. The time-dependent transport simulations indicate that, with a trade-off of the power delivered to the equatorial and to the upper launcher, the sustainment of steady-state ITBs can be demonstrated in ITER with the baseline heating configuration.

  8. Observer-based distributed adaptive iterative learning control for linear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Liu, Sanyang; Li, Junmin

    2017-10-01

    This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve consensus perfectly for every undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
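The learning-over-iterations idea behind such protocols can be illustrated with a minimal single-agent sketch: P-type iterative learning control on a scalar integrator plant, where each trial's input is corrected by the previous trial's tracking error. The plant, gain `gamma`, and reference are illustrative choices, not the paper's observer-based multi-agent protocol.

```python
def ilc_track(ref, iters=50, gamma=0.5):
    """P-type iterative learning control for the scalar integrator
    x[t+1] = x[t] + u[t]: after each trial, correct the input with
    the trial's tracking error and repeat from the same start."""
    T = len(ref)
    u = [0.0] * T
    for _ in range(iters):
        x, e = 0.0, []
        for t in range(T):            # run one trial
            x = x + u[t]
            e.append(ref[t] - x)      # tracking error at step t
        u = [u[t] + gamma * e[t] for t in range(T)]  # learning update
    # worst-case tracking error of the final learned input
    x, worst = 0.0, 0.0
    for t in range(T):
        x = x + u[t]
        worst = max(worst, abs(ref[t] - x))
    return worst

err = ilc_track([1.0, 2.0, 3.0, 2.0, 1.0])
# err shrinks toward zero as the number of learning iterations grows
```

For this plant the error contracts geometrically across iterations whenever 0 < gamma < 2, mirroring the "consensus as iterations tend to infinity" statement above.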

  9. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly fine levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experimental or patient data. In this study, we propose basic techniques that aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines the bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
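The iterative conductivity-tuning step can be sketched using the common approximation that conduction velocity scales with the square root of bulk conductivity, so each iteration rescales sigma by the squared velocity ratio. The `cv` lambda below is a hypothetical stand-in for a real tissue simulation; the constant 0.6 and the target are illustrative, not values from the paper.

```python
import math

def tune_conductivity(simulate_cv, v_target, sigma0=1.0, tol=1e-6, max_iter=50):
    """Iteratively rescale the bulk conductivity until the simulated
    conduction velocity matches the prescribed one, exploiting the
    approximate scaling CV ~ sqrt(sigma)."""
    sigma = sigma0
    for _ in range(max_iter):
        v = simulate_cv(sigma)
        if abs(v - v_target) <= tol * v_target:
            break
        sigma *= (v_target / v) ** 2   # sqrt-scaling fixed-point update
    return sigma

# hypothetical stand-in for a monodomain tissue simulation (m/s)
cv = lambda sigma: 0.6 * math.sqrt(sigma)
sigma_fit = tune_conductivity(cv, v_target=0.7)
# cv(sigma_fit) ≈ 0.7
```

When the sqrt scaling holds exactly, as in this toy, the update converges in one step; with a real simulator it still converges quickly because the scaling is a good local model.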

  10. No-reference image quality assessment for horizontal-path imaging scenarios

    NASA Astrophysics Data System (ADS)

    Rios, Carlos; Gladysz, Szymon

    2013-05-01

    There exist several image-enhancement algorithms and tasks associated with imaging through turbulence that depend on defining the quality of an image. Examples include: "lucky imaging", choosing the width of the inverse filter for image reconstruction, or stopping iterative deconvolution. We collected a number of image quality metrics found in the literature. Particularly interesting are the blind, "no-reference" metrics. We discuss ways of evaluating the usefulness of these metrics, even when a fully objective comparison is impossible because of the lack of a reference image. Metrics are tested on simulated and real data. Field data comes from experiments performed by the NATO SET 165 research group over a 7 km distance in Dayton, Ohio.

  11. Neural basis of quasi-rational decision making.

    PubMed

    Lee, Daeyeol

    2006-04-01

    Standard economic theories conceive homo economicus as a rational decision maker capable of maximizing utility. In reality, however, people tend to approximate optimal decision-making strategies through a collection of heuristic routines. Some of these routines are driven by emotional processes, and others are adjusted iteratively through experience. In addition, routines specialized for social decision making, such as inference about the mental states of other decision makers, might share their origins and neural mechanisms with the ability to simulate or imagine outcomes expected from alternative actions that an individual can take. A recent surge of collaborations across economics, psychology and neuroscience has provided new insights into how such multiple elements of decision making interact in the brain.

  12. Final case for a stainless steel diagnostic first wall on ITER

    NASA Astrophysics Data System (ADS)

    Pitts, R. A.; Bazylev, B.; Linke, J.; Landman, I.; Lehnen, M.; Loesser, D.; Loewenhoff, Th.; Merola, M.; Roccella, R.; Saibene, G.; Smith, M.; Udintsev, V. S.

    2015-08-01

    In 2010 the ITER Organization (IO) proposed to eliminate the beryllium armour on the plasma-facing surface of the diagnostic port plugs and instead to use bare stainless steel (SS), simplifying the design and providing a significant cost reduction. Transport simulations at the IO confirmed that charge-exchange sputtering of the SS surfaces would not affect burning plasma operation through core impurity contamination, but a second key issue is the potential melt damage and material loss inflicted by the intense photon radiation flashes expected at the thermal quench of disruptions mitigated by massive gas injection. This paper addresses this second issue through a combination of ITER-relevant experimental heat load tests and qualitative theoretical arguments on melt layer stability. It demonstrates that SS can be employed as the material for the port plug plasma-facing surface, and this has now been adopted into the ITER baseline.

  13. Iterative methods for plasma sheath calculations: Application to spherical probe

    NASA Technical Reports Server (NTRS)

    Parker, L. W.; Sullivan, E. C.

    1973-01-01

    The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.

  14. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    PubMed Central

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-01-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting, incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968

  15. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    NASA Astrophysics Data System (ADS)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting, incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  16. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both the value function and the control policy. We utilize a new assumption in place of the contraction assumption used in discounted optimal control problems. We establish error bounds for approximate value iteration based on a new error condition, and extend them to approximate policy iteration and approximate optimistic policy iteration. It is shown that the iterative approximate value function converges to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
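The exact (error-free) value iteration underlying such analyses can be sketched on an undiscounted deterministic shortest-path problem, where the Bellman backup is a minimum over outgoing edge costs. The graph and costs below are illustrative numbers; the paper's approximate, neural-network setting is not reproduced.

```python
def value_iteration(costs, goal, n_states, iters=100):
    """Undiscounted value iteration for a deterministic shortest-path
    problem: V(s) <- min over edges (s, s') of c(s, s') + V(s'),
    with V(goal) fixed at 0."""
    INF = float("inf")
    V = [0.0 if s == goal else INF for s in range(n_states)]
    for _ in range(iters):
        newV = list(V)
        for (s, s2), c in costs.items():   # Bellman backup per edge
            if s != goal:
                newV[s] = min(newV[s], c + V[s2])
        V = newV
    return V

# tiny 4-state chain with a costly direct edge (illustrative numbers)
costs = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 5.0}
V = value_iteration(costs, goal=3, n_states=4)
# V == [3.0, 2.0, 1.0, 0.0]: the three-step path beats the cost-5 edge
```

In the approximate setting studied above, each backup is perturbed by an error term, and the question is how far the resulting fixed point can drift from these exact values.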

  17. Novel aspects of plasma control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, D.; Jackson, G.; Walker, M.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high-confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  18. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual iteration basis; however, it converges more slowly than the Newton method. The Newton method converges faster, but it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by differencing the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM, although it involves the additional cost of an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods, the Picard, Newton, and Newton-Krylov methods, for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
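The Picard versus Newton trade-off described above can be seen on a scalar fixed-point problem. This sketch solves x = cos(x) rather than Richards' equation, so it only illustrates the convergence-rate contrast (linear for Picard, quadratic for Newton), not the FEM/FDM implementations compared in the paper.

```python
import math

def picard(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration x <- g(x); linear convergence
    when |g'| < 1 near the root."""
    x, n = x0, 0
    while abs(g(x) - x) > tol and n < max_iter:
        x, n = g(x), n + 1
    return x, n

def newton(f, df, x0, tol=1e-10, max_iter=200):
    """Newton iteration x <- x - f(x)/f'(x); quadratic convergence
    near a simple root, at the cost of evaluating the derivative."""
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x, n = x - f(x) / df(x), n + 1
    return x, n

# solve x = cos(x), i.e. f(x) = x - cos(x) = 0
xp, n_pic = picard(math.cos, 0.5)
xn, n_newt = newton(lambda x: x - math.cos(x),
                    lambda x: 1.0 + math.sin(x), 0.5)
# both reach the same root; Newton needs far fewer iterations
```

The per-iteration cost asymmetry noted in the abstract is visible even here: Picard evaluates only g, while Newton additionally evaluates the derivative (the analogue of assembling the 19-point stencil Jacobian).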

  19. Novel aspects of plasma control in ITER

    DOE PAGES

    Humphreys, David; Ambrosino, G.; de Vries, Peter; ...

    2015-02-12

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g. current profile regulation, tearing mode (TM) suppression), control mathematics (e.g. algorithmic and simulation approaches to high-confidence robust performance), and integration solutions (e.g. methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  20. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.

  1. Tailored ramp wave generation in gas gun experiments

    NASA Astrophysics Data System (ADS)

    Cotton, Matthew; Chapman, David; Winter, Ron; Harris, Ernie; Eakins, Daniel

    2015-09-01

    Gas guns are traditionally used as platforms to introduce a planar shock wave to a material using plate impact methods, generating states on the Hugoniot. The ability to deliver a ramp wave to a target during a gas gun experiment enables access to different regions of the equation-of-state surface, making it a valuable technique for characterising material behaviour. Previous techniques have relied on the use of multi-material impactors to generate a density gradient, which can be complex to manufacture. In this paper we describe the use of an additively manufactured steel component consisting of an array of tapered spikes which can deliver a ramp wave over ~2 μs. The ability to tailor the input wave by varying the component design is discussed, an approach which makes use of the design freedom offered by additive manufacturing techniques to rapidly iterate the spike profile. Results from gas gun experiments are presented to evaluate the technique, and compared with 3D hydrodynamic simulations.

  2. A New Method for Determining the Equation of State of Aluminized Explosive

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng-Qing; Nie, Jian-Xin; Guo, Xue-Yong; Wang, Qiu-Shi; Ou, Zhuo-Cheng; Jiao, Qing-Jie

    2015-01-01

    The time-dependent Jones-Wilkins-Lee equation of state (JWL-EOS) is applied to describe the detonation products of aluminized explosives. To obtain the time-dependent JWL-EOS parameters, cylinder tests and underwater explosion experiments are performed. Based on the wall radial velocity in the cylinder tests and the shock wave pressures in the underwater explosion experiments, the time-dependent JWL-EOS parameters are determined by iterating these variables in AUTODYN hydrocode simulations until the experimental values are reproduced. In addition, to verify the reliability of the derived JWL-EOS parameters, an experiment with the aluminized explosive is conducted in concrete. The shock wave pressures in the affected concrete bodies are measured using manganin pressure sensors, and the rod velocity is obtained using a high-speed camera. Simultaneously, the shock wave pressure and the rod velocity are calculated using the derived time-dependent JWL equation of state. The calculated results are in good agreement with the experimental data.

  3. Modelling of radiation impact on ITER Beryllium wall

    NASA Astrophysics Data System (ADS)

    Landman, I. S.; Janeschitz, G.

    2009-04-01

    In the ITER H-mode confinement regime, edge localized instabilities (ELMs) will perturb the discharge. Plasma lost after each ELM moves along magnetic field lines and impacts the divertor armour, causing plasma contamination by back-propagating eroded carbon or tungsten. These impurities produce an enhanced radiation flux distributed mainly over the beryllium main-chamber wall. The simulation of the complicated processes involved is the subject of the integrated tokamak code TOKES, which is currently under development. This work describes the new TOKES model for radiation transport through the confined plasma. Equations for the level populations of the multi-fluid plasma species and the propagation of different kinds of radiation (resonance, recombination and bremsstrahlung photons) are implemented. First simulation results, without accounting for resonance lines, are presented.

  4. Large Deviations and Quasipotential for Finite State Mean Field Interacting Particle Systems

    DTIC Science & Technology

    2014-05-01

    The conclusion then follows by applying Lemma 4.4.2. ... 4.4.1 Iterative solver: the widest neighborhood structure. We employ the Gauss-Seidel method ... nearest neighborhood structure described in Section 4.4.2. We use the Gauss-Seidel iterative method for our numerical experiments. The Gauss-Seidel ... x ∈ Bh, M x ∈ Sh\Bh, where M ∈ (V,∞) is a very large number, so that the iteration (4.5.1) converges quickly. For simplicity, we restrict our
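A minimal Gauss-Seidel sketch for a small diagonally dominant linear system shows the iteration structure used in such numerical experiments: sweep through the rows, using each freshly computed unknown immediately within the same sweep. The 3x3 system below is illustrative; the report's neighborhood structures are not reproduced.

```python
def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel iteration for A x = b: in each sweep, solve row i
    for x[i] using the most recent values of the other unknowns."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # fresh value used immediately
    return x

# diagonally dominant test system; exact solution is [1, 1, 1]
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = gauss_seidel(A, b)
# x ≈ [1, 1, 1] since A·[1, 1, 1] = [5, 6, 5]
```

Diagonal dominance guarantees convergence here; the in-place update is what distinguishes Gauss-Seidel from the Jacobi sweep, which uses only the previous iterate.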

  5. Laser simulation applying Fox-Li iteration: investigation of reason for non-convergence

    NASA Astrophysics Data System (ADS)

    Paxton, Alan H.; Yang, Chi

    2017-02-01

    Fox-Li iteration is often used to numerically simulate lasers. If a solution is found, the complex field amplitude is a good indication of the laser mode. The case of a semiconductor laser, for which the medium possesses a self-focusing nonlinearity, was investigated. For a case of interest, the iterations did not yield a converged solution, so another approach was needed to explore the properties of the laser mode. The laser was treated (unphysically) as a regenerative amplifier. As the input to the amplifier, we required a smooth complex field distribution that matched the laser resonator. To obtain such a field, we found what the solution for the laser field would be if the strength of the self-focusing nonlinearity were α = 0. This was used as the input to the laser, treated as an amplifier. Because the beam deteriorated as it propagated over multiple passes through the resonator and the gain medium (for α = 2.7), we concluded that a mode with good beam quality could not exist in the laser.
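In the linear case, Fox-Li iteration amounts to repeatedly applying the resonator round-trip operator and renormalizing, i.e. power iteration converging to the dominant transverse mode; non-convergence, as in the nonlinear case above, means no such fixed shape emerges. Below is a purely illustrative real-valued toy in which a soft-aperture mask stands in for a full diffraction round trip.

```python
def fox_li(roundtrip, field, iters=500, tol=1e-9):
    """Fox-Li iteration: apply the round-trip operator, renormalize,
    and stop when the field shape no longer changes (a mode)."""
    for n in range(iters):
        new = roundtrip(field)
        norm = max(abs(v) for v in new)
        new = [v / norm for v in new]
        change = max(abs(a - b) for a, b in zip(new, field))
        field = new
        if change < tol:
            return field, n        # converged to a transverse mode
    return field, iters            # no convergence (cf. nonlinear case)

# toy linear round trip: soft aperture favouring the central sample
mask = [0.5, 1.0, 0.5]
rt = lambda f: [m * v for m, v in zip(mask, f)]
mode, n_iter = fox_li(rt, [1.0, 1.0, 1.0])
# the normalized field collapses toward the centre: mode ≈ [0, 1, 0]
```

With a nonlinear round-trip operator (the self-focusing case in the abstract), the `change` criterion may never be met, which is exactly the non-convergence the authors investigate.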

  6. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not previously possible because the time-to-solution fell short by a factor of over 10 for completing one physics case in less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, dynamic repartitioning for balancing computational work in pushing particles and in grid-related work, scalable and accurate discretization algorithms for nonlinear Coulomb collisions, and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.

  7. Experimental study of stochastic noise propagation in SPECT images reconstructed using the conjugate gradient algorithm.

    PubMed

    Mariano-Goulart, D; Fourcade, M; Bernon, J L; Rossi, M; Zanca, M

    2003-01-01

    In an experimental study based on simulated and physical phantoms, the propagation of stochastic noise in slices reconstructed using the conjugate gradient algorithm was analysed as a function of iteration number. After a first increase corresponding to the reconstruction of the signal, the noise stabilises before increasing linearly with iterations. The level of the plateau, as well as the slope of the subsequent linear increase, depends on the noise in the projection data.
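A plain conjugate-gradient sketch for a symmetric positive-definite system shows the iteration structure; in tomographic reconstruction, stopping after a fixed number of such iterations is what trades signal recovery against the noise growth described above. The 2x2 system is illustrative, not a SPECT system matrix.

```python
def conjugate_gradient(A, b, iters):
    """Plain conjugate gradient for an SPD system A x = b, stopped
    after a fixed number of iterations (early stopping acts as
    regularization in iterative reconstruction)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual b - A x  (x = 0)
    p = list(r)                       # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < 1e-20:            # residual vanished
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b, iters=2)
# exact for a 2x2 SPD system after 2 iterations: x ≈ [1/11, 7/11]
```

In exact arithmetic CG converges in at most n iterations; with noisy projection data, later iterations increasingly fit the noise, which is why the iteration count is treated as a tuning parameter.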

  8. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction.

    PubMed

    Kaasalainen, Touko; Palmu, Kirsi; Lampinen, Anniina; Reijonen, Vappu; Leikola, Junnu; Kivisaari, Riku; Kortesniemi, Mika

    2015-09-01

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. Our aim was to evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased by up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year-old phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality.

  9. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    PubMed

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing, through active shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap in a human observer's visual perception. The average errors of the measured nodule parameters are less than 1 mm in the simulated experiment with 8 viewers. Hence, the method could measure real nodules accurately in experiments with physically measured projections.

  10. Detection of thermal gradients through fiber-optic Chirped Fiber Bragg Grating (CFBG): Medical thermal ablation scenario

    NASA Astrophysics Data System (ADS)

    Korganbayev, Sanzhar; Orazayev, Yerzhan; Sovetov, Sultan; Bazyl, Ali; Schena, Emiliano; Massaroni, Carlo; Gassino, Riccardo; Vallan, Alberto; Perrone, Guido; Saccomandi, Paola; Arturo Caponero, Michele; Palumbo, Giovanna; Campopiano, Stefania; Iadicicco, Agostino; Tosi, Daniele

    2018-03-01

    In this paper, we describe a novel method for spatially distributed temperature measurement with chirped fiber Bragg grating (CFBG) fiber-optic sensors. The proposed method determines the thermal profile in the CFBG region from demodulation of the CFBG optical spectrum. The method is based on an iterative optimization that minimizes the mismatch between the measured CFBG spectrum and a CFBG model based on coupled-mode theory (CMT), perturbed by a temperature gradient. In the demodulation part, we simulate different temperature distribution patterns with a Monte-Carlo approach on simulated CFBG spectra. We then define a cost function that penalizes the difference between measured and simulated spectra; its minimization yields the final temperature profile. Experiments and simulations were first carried out with a linear gradient, demonstrating correct operation (error 2.9 °C); then a setup was arranged to measure the temperature pattern on a 5-cm-long section exposed to medical laser thermal ablation. Overall, the proposed method can operate as a real-time detection technique for thermal gradients over 1.5-5 cm regions and is a key asset for the estimation of thermal gradients at the micro-scale in biomedical applications.

  11. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  12. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
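The OS-SART-preconditioned variants described above are not reproduced here, but the standard FISTA recursion they build on (gradient step, soft-thresholding proximal step, momentum extrapolation) can be sketched on a generic L1-regularized least-squares problem. The problem sizes and test matrix below are illustrative, not the paper's CBCT system model:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the "shrinkage" step).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """Standard FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient step on the smooth term
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Small synthetic demo: recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
```

The momentum sequence t_k is what yields the O(1/k^2) objective decrease cited above, versus O(1/k) for plain ISTA.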

  13. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582

  14. LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging

    NASA Astrophysics Data System (ADS)

    De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace

    2006-03-01

    In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This results in global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what advantages a non-linear, edge-preserving postfilter could offer for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catté filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated from a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found from using edge-preserving filtering techniques over standard linear Gaussian filtering.
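Of the postfilters compared, the Perona-Malik scheme is the easiest to sketch: an explicit diffusion whose conductance shrinks where gradients are large, so edges are preserved while flat regions are smoothed. The image size, noise level, and parameter values below are illustrative, not the study's implementation:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Perona-Malik anisotropic diffusion with the exponential conductance
    g(s) = exp(-(s/kappa)^2); periodic boundaries via np.roll for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four nearest neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)
        # Explicit update; stable for dt <= 0.25 with four neighbours.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy vertical step edge: smooth the flats, keep the edge.
rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = perona_malik(noisy)
```

Small gradients (noise) see conductance near 1 and diffuse; the unit step sees conductance near exp(-100) and is left essentially untouched.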

  15. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H⁻ and, at a later stage, D⁻ ions) from an ITER size ion source. The main requirements of this experiment are an H⁻/D⁻ extracted current density larger than 355/285 A m⁻², an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. 
In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.

  16. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated, and finally we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
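The multirate GPI algorithms themselves are not reproduced here, but the model-based value-iteration backbone they generalize, the Riccati recursion for discrete-time LQR, can be sketched. The double-integrator system matrices below are illustrative:

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, n_iter=500):
    """Value iteration for discrete-time LQR: iterate the Riccati recursion
    P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA until P converges."""
    P = np.zeros_like(Q)
    for _ in range(n_iter):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # greedy gain for current P
        P = Q + A.T @ P @ (A - B @ K)               # one-sweep evaluation update
    return P, K

# Illustrative 2-state, 1-input discretized double integrator.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = lqr_value_iteration(A, B, Q, R)
```

GPI interpolates between this VI extreme (one evaluation sweep per policy update) and full policy iteration (evaluation to convergence); per the abstract, the multirate variants additionally use M-step Bellman estimation in the evaluation step.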

  17. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication have always been key goals for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effects of multiple-access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple-access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  18. A Tutorial on RxODE: Simulating Differential Equation Pharmacometric Models in R.

    PubMed

    Wang, W; Hallow, K M; James, D A

    2016-01-01

    This tutorial presents the application of an R package, RxODE, that facilitates quick, efficient simulations of ordinary differential equation models completely within R. Its application is illustrated through simulation of design decision effects on an adaptive dosing regimen. The package provides an efficient, versatile way to specify dosing scenarios and to perform simulation with variability with minimal custom coding. Models can be directly translated to R Shiny applications to facilitate interactive, real-time evaluation of and iteration on simulation scenarios.
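RxODE is an R package, so no Python equivalent of its interface is implied here; but the kind of model it integrates can be illustrated language-neutrally. Below, a hypothetical one-compartment oral-dosing model is stepped with forward Euler and checked against its closed-form (Bateman) solution; all parameter values are made up for illustration:

```python
import math

# Hypothetical one-compartment oral-absorption model:
#   dA_gut/dt  = -ka * A_gut
#   dA_cent/dt =  ka * A_gut - ke * A_cent
ka, ke, dose = 1.0, 0.1, 100.0

def simulate(t_end, dt=1e-3):
    """Forward-Euler integration of the two-state dosing model."""
    a_gut, a_cent = dose, 0.0
    for _ in range(int(t_end / dt)):
        d_gut = -ka * a_gut
        d_cent = ka * a_gut - ke * a_cent
        a_gut += dt * d_gut
        a_cent += dt * d_cent
    return a_cent

def bateman(t):
    # Closed-form amount in the central compartment after a single oral dose.
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

a_num = simulate(10.0)
a_exact = bateman(10.0)
```

A production workflow would use an adaptive ODE solver rather than fixed-step Euler; the point is only the shape of the model RxODE-style tools describe declaratively.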

  19. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support the search for the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. Very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  20. SNDR Limits of Oscillator-Based Sensor Readout Circuits.

    PubMed

    Cardes, Fernando; Quintero, Andres; Gutierrez, Eric; Buffa, Cesare; Wiesbauer, Andreas; Hernandez, Luis

    2018-02-03

    This paper analyzes the influence of phase noise and distortion on the performance of oscillator-based sensor data acquisition systems. Circuit noise inherent to the oscillator circuit manifests as phase noise and limits the SNR. Moreover, oscillator nonlinearity generates distortion for large input signals. Phase noise analysis of oscillators is well known in the literature, but the relationship between phase noise and the SNR of an oscillator-based sensor is not straightforward. This paper proposes a model to estimate the influence of phase noise on the performance of an oscillator-based system by reflecting the phase noise to the oscillator input. The proposed model is based on periodic steady-state analysis tools to predict the SNR of the oscillator. The accuracy of this model has been validated by both simulation and experiment in a 130 nm CMOS prototype. We also propose a method to estimate the SNDR and the dynamic range of an oscillator-based readout circuit that reduces simulation time by more than an order of magnitude compared to standard time-domain simulations. This speed-up enables the optimization and verification of this kind of system with iterative algorithms.

  1. Automated protein structure modeling in CASP9 by I-TASSER pipeline combined with QUARK-based ab initio folding and FG-MD-based structure refinement

    PubMed Central

    Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang

    2011-01-01

    I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements in protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036

  2. Unfolding of Proteins: Thermal and Mechanical Unfolding

    NASA Technical Reports Server (NTRS)

    Hur, Joe S.; Darve, Eric

    2004-01-01

    We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external - both mechanical and thermal - force fields. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make the model mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating the values of a series of Incomplete Gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulation and experimental data for the mechanical unfolding of the giant muscle protein Titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.

  3. Research on radiation characteristic of plasma antenna through FDTD method.

    PubMed

    Zhou, Jianming; Fang, Jingjing; Lu, Qiuyuan; Liu, Fan

    2014-01-01

    The radiation characteristics of a plasma antenna are investigated using the finite-difference time-domain (FDTD) approach in this paper. Using the FDTD method, we study the propagation of electromagnetic waves in free space in stretched coordinates, and the iterative update equations of the Maxwell equations are derived. In order to validate the correctness of this method, we simulate the process of an electromagnetic wave propagating in free space. Results show that the electromagnetic wave spreads out from the signal source and can be absorbed by the perfectly matched layer (PML). We then study the propagation of electromagnetic waves in plasma by using the Boltzmann-Maxwell theory. In order to verify this theory, the whole process of an electromagnetic wave propagating in plasma in the one-dimensional case is simulated. Results show that the Boltzmann-Maxwell theory can be used to explain the phenomenon of electromagnetic wave propagation in plasma. Finally, the two-dimensional simulation model of the plasma antenna is established in cylindrical coordinates, and the near-field and far-field radiation patterns of the plasma antenna are obtained. The experiments show that varying the electron density changes the radiation characteristics.
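The free-space validation step described above can be sketched in one dimension with the standard Yee leapfrog update in normalized units (Courant number S = 1, the "magic" time step); the grid size, source position, and pulse width below are illustrative:

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=150, src=200):
    """1D free-space FDTD (Ez/Hy pair), normalized units, Courant number 1."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    for t in range(n_steps):
        # Magnetic-field update from the spatial difference of E.
        hy[:-1] += ez[1:] - ez[:-1]
        # Electric-field update from the spatial difference of H.
        ez[1:] += hy[1:] - hy[:-1]
        # Soft Gaussian source: the injected pulse splits into two
        # half-amplitude waves travelling left and right.
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)
    return ez

ez = fdtd_1d()
```

At the magic time step the pulse advances exactly one cell per step, so after 150 steps the two wavefronts sit about 120 cells on either side of the source, mimicking the "spreads out from the signal source" behaviour before any PML is needed.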

  4. A fluid modeling perspective on the tokamak power scrape-off width using SOLPS-ITER

    NASA Astrophysics Data System (ADS)

    Meier, Eric

    2016-10-01

    SOLPS-ITER, a 2D fluid code, is used to conduct the first fluid modeling study of the physics behind the power scrape-off width (λq). When drift physics are activated in the code, λq is insensitive to changes in toroidal magnetic field (Bt), as predicted by the 0D heuristic drift (HD) model developed by Goldston. Using the HD model, which quantitatively agrees with regression analysis of a multi-tokamak database, λq in ITER is projected to be 1 mm instead of the previously assumed 4 mm, magnifying the challenge of maintaining the peak divertor target heat flux below the technological limit. These simulations, which use DIII-D H-mode experimental conditions as input, and reproduce the observed high-recycling, attached outer target plasma, allow insights into the scrape-off layer (SOL) physics that set λq. Independence of λq with respect to Bt suggests that SOLPS-ITER captures basic HD physics: the effect of Bt on the particle dwell time (∝ Bt) cancels with the effect on drift speed (∝ 1/Bt), fixing the SOL plasma density width, and dictating λq. Scaling with plasma current (Ip), however, is much weaker than the roughly 1/Ip dependence predicted by the HD model. Simulated net cross-separatrix particle flux due to magnetic drifts exceeds the anomalous particle transport, and a Pfirsch-Schlüter-like SOL flow pattern is established. Up-down ion pressure asymmetry enables the net magnetic drift flux. Drifts establish in-out temperature asymmetry, and an associated thermoelectric current carries significant heat flux to the outer target. The density fall-off length in the SOL is similar to the electron temperature fall-off length, as observed experimentally. Finally, opportunities and challenges foreseen in ongoing work to extrapolate SOLPS-ITER and the HD model to ITER and future machines will be discussed. Supported by U.S. Department of Energy Contract DESC0010434.

  5. The motional Stark effect diagnostic for ITER using a line-shift approach.

    PubMed

    Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E

    2008-10-01

    The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.

  6. Simulation Learning PC Screen-Based vs. High Fidelity

    DTIC Science & Technology

    2011-08-01

    D., Burgess, L., Berg, B. and Connolly, K. (2009). Teaching mass casualty triage skills using iterative multimanikin simulations. Prehospital... Simulation learning PC screen-based vs. high fidelity – progress chart. Attachment B. Approved Protocol - Simulation Learning: PC-Screen Based (PCSB) versus High

  7. Spectrum auto-correlation analysis and its application to fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.

    2013-12-01

    Bearing failure is one of the most common causes of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings in this paper. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and is named spectral auto-correlation analysis (SACA). Meanwhile, the SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis and the squared envelope analysis, it is found that the result of SACA is more legible due to the more prominent harmonic amplitudes of the fault characteristic frequency, and that SACA with proper iteration will further enhance the fault features.
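The squared envelope analysis that SACA generalizes can be sketched with an FFT-based analytic signal: for a bearing-like amplitude-modulated signal, the spectrum of the squared envelope peaks at the modulation (fault) frequency. The sampling rate, carrier, and fault frequency below are illustrative:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (Hilbert-transform pair); assumes an
    even-length input for the simple one-sided spectrum weighting."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(X * h)

# 3 kHz carrier amplitude-modulated at a 100 Hz "fault" frequency,
# sampled at 20 kHz for 1 s (so each rfft bin is 1 Hz wide).
fs, f_fault, f_carrier = 20000, 100, 3000
t = np.arange(fs) / fs
x = (1.0 + 0.5 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_carrier * t)

env_sq = np.abs(analytic_signal(x)) ** 2       # squared envelope
spec = np.abs(np.fft.rfft(env_sq))
spec[0] = 0.0                                  # discard the DC term
detected_hz = int(np.argmax(spec))
```

The squared envelope demodulates the carrier, so its spectrum exposes the 100 Hz modulation (plus a weaker harmonic at 200 Hz) that direct spectral analysis of x would bury among carrier sidebands.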

  8. A model reduction approach to numerical inversion for a parabolic partial differential equation

    NASA Astrophysics Data System (ADS)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.
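The Gauss-Newton outer iteration used for the inversion (shown here without the model-reduction preconditioner) can be sketched on a toy nonlinear least-squares problem; the decaying-exponential model and starting point are illustrative:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=30):
    """Plain Gauss-Newton: solve the normal equations of the linearized
    residual at each step, x <- x - (J^T J)^{-1} J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Toy zero-residual problem: fit y = a * exp(-b t) to noise-free data.
t = np.linspace(0.0, 4.0, 30)
a_true, b_true = 2.0, 0.7
y = a_true * np.exp(-b_true * t)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                      -p[0] * t * np.exp(-p[1] * t)])
p_hat = gauss_newton(residual, jacobian, x0=[1.5, 0.5])
```

On zero-residual problems Gauss-Newton converges quadratically near the solution; per the abstract, the nonlinear preconditioner addresses the harder issues of local minima and the cost of repeated forward simulations.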

  9. Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Ho; Kim, Minsung

    2017-12-01

    This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence it is well suited for photovoltaic power applications. However, it exhibits non-minimum phase behaviour, because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from the time-varying grid voltage disturbance. Thus, conventional control schemes result in inaccurate output tracking. To overcome these problems, the ILC is first developed and applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take into account the nonlinear averaged model and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noises, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype are carried out to demonstrate its practical feasibility.
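The ILC principle can be sketched on a generic first-order discrete plant (a stand-in, not the flyback CCM inverter model): a P-type update u_{j+1}(t) = u_j(t) + gamma * e_j(t+1) contracts the tracking error across trials whenever |1 - gamma*b| < 1. The plant and gains below are illustrative:

```python
import numpy as np

# Illustrative plant y[t+1] = a*y[t] + b*u[t], tracking one period of a sine.
a, b, gamma = 0.5, 1.0, 1.0          # gamma = 1/b gives |1 - gamma*b| = 0
T = 50
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T + 1))

def run_trial(u):
    y = np.zeros(T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)
errors = []
for trial in range(40):
    y = run_trial(u)
    e = ref - y
    errors.append(np.abs(e[1:]).max())
    # P-type ILC: correct u_j(t) with the next-step error from the last trial.
    u = u + gamma * e[1:]
```

Because the task repeats exactly each trial, the learning law drives the error toward zero over iterations, which is the same iteration-domain convergence property the paper proves for its predictive/current-learning ILC.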

  10. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    The chi-squared transform (CST) is a statistical method that describes the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to considerable noise in the change-detection result. An improved unsupervised change detection method is proposed based on spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter confidence level in the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. The experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
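The CST core (before the spatial constraint and MRF refinement) is a Mahalanobis distance on the difference image, thresholded at a chi-squared quantile; for two bands (2 degrees of freedom) that quantile has the closed form -2*ln(1 - p). A sketch on synthetic data, with sizes and the change region chosen for illustration:

```python
import math
import numpy as np

def cst_change_map(diff, p=0.99):
    """Chi-squared transform: Mahalanobis distance of each pixel's difference
    vector, thresholded at the chi-squared p-quantile (2 bands -> 2 dof,
    quantile = -2*ln(1-p) in closed form)."""
    h, w, bands = diff.shape
    X = diff.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    # Per-pixel quadratic form (X - mu)' Sigma^{-1} (X - mu).
    d = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
    threshold = -2.0 * math.log(1.0 - p)      # valid for 2 degrees of freedom
    return (d > threshold).reshape(h, w)

rng = np.random.default_rng(2)
diff = rng.standard_normal((64, 64, 2))       # unchanged background
diff[20:30, 20:30] += 5.0                     # injected change region
change = cst_change_map(diff)
```

This plain thresholding exhibits exactly the scattered false alarms (roughly 1 - p of the background) that the paper's spatial constraint and MRF stages are designed to suppress.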

  11. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  12. Helium-3 MR q-space imaging with radial acquisition and iterative highly constrained back-projection.

    PubMed

    O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B

    2010-01-01

    An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.

  13. Automatic programming via iterated local search for dynamic job shop scheduling.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen

    2015-01-01

    Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Owing to the specific characteristics of each manufacturing system, there is no universal dispatching rule that dominates in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance of each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through the automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches, each starting from a program modified from the best programs obtained so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that its good performance comes from the balance between exploration and exploitation in its search mechanism.
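    The iterated-local-search scheme underlying APRILS can be sketched generically: repeatedly perturb the best solution found so far, re-run a local search from the perturbed point, and keep the result if it improves. The sketch below applies the scheme to a toy bit-string objective rather than dispatching-rule programs; all names are illustrative:

```python
import random

def iterated_local_search(init, local_search, perturb, cost, n_iters=30, seed=0):
    """Generic iterated-local-search skeleton: perturb the incumbent, run a
    local search from the perturbed point, keep the result if it improves."""
    rng = random.Random(seed)
    best = local_search(init, cost)
    for _ in range(n_iters):
        candidate = local_search(perturb(best, rng), cost)
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy objective: minimize the number of 1-bits in a bit string.
def cost(bits):
    return sum(bits)

def local_search(bits, cost):
    # First-improvement hill climbing over single-bit flips.
    bits, improved = list(bits), True
    while improved:
        improved = False
        for i in range(len(bits)):
            flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
            if cost(flipped) < cost(bits):
                bits, improved = flipped, True
    return bits

def perturb(bits, rng):
    # Random single-bit flip as the perturbation step.
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

print(iterated_local_search([1] * 8, local_search, perturb, cost))
# → [0, 0, 0, 0, 0, 0, 0, 0]
```

    In APRILS the solutions are programs (dispatching rules) and the local moves edit program fragments, but the control loop has this same perturb-then-descend structure.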

  14. Preprocessing of region of interest localization based on local surface curvature analysis for three-dimensional reconstruction with multiresolution

    NASA Astrophysics Data System (ADS)

    Li, Wanjing; Schütze, Rainer; Böhler, Martin; Boochs, Frank; Marzani, Franck S.; Voisin, Yvon

    2009-06-01

    We present an approach that integrates a preprocessing step for region of interest (ROI) localization into 3-D scanners (laser or stereoscopic). The ultimate objective is to make the 3-D scanner intelligent enough to rapidly localize, during the preprocessing phase, the regions of the scene with high surface curvature, so that precise scanning is done only in these regions instead of over the whole scene. In this way, the scanning time can be greatly reduced, and the results contain only pertinent data. To test its feasibility and efficiency, we simulated the preprocessing process on an active stereoscopic system composed of two cameras and a video projector. The ROI localization is done iteratively. First, the video projector projects a regular point pattern onto the scene; the pattern is then modified iteratively according to the local surface curvature at each reconstructed 3-D point. Finally, the last pattern is used to determine the ROI. Our experiments showed that with this approach the system is capable of localizing all types of objects, including small objects with little depth.

  15. Development of a helicon ion source: Simulations and preliminary experiments.

    PubMed

    Afsharmanesh, M; Habibi, M

    2018-03-01

    In the present work, the extraction system of a helicon ion source has been simulated and constructed. Results of the ion source commissioning at up to 20 kV are presented, as well as simulations of the ion beam extraction system. An argon current of more than 200 μA at up to 20 kV is extracted and is characterized with a Faraday cup and a beam profile monitoring grid. By varying ion source parameters such as the RF power, extraction voltage, and working pressure, an ion beam with a current distribution exhibiting a central core has been detected. A jump in the ion beam current emerges at an RF power near 700 W, indicating that helicon mode excitation is reached at this power. Furthermore, the emission line intensity of Ar II at 434.8 nm was measured as an independent demonstration of the mode transition from inductively coupled plasma to helicon. A half-helix helicon antenna is used for the ion source development because of its asymmetric longitudinal power absorption. The modeling of the plasma part of the ion source has been carried out using the HELIC code. The simulations assume a Gaussian radial plasma density profile and consider plasma densities in the range of 10¹⁸-10¹⁹ m⁻³. The power absorption spectrum and the excited helicon mode number are obtained, and the longitudinal RF power absorption for two different antenna positions is compared. Our results indicate that positioning the antenna near the plasma electrode is desirable for ion beam extraction. The simulation of the extraction system was performed with the ion optical code IBSimu, making this the first helicon ion source extraction system designed with that code. The ion beam emittance and the Twiss parameters of the emittance ellipse are calculated at different iteration counts and mesh sizes, and the best values of the mesh size and iteration number have been determined for the calculations. The simulated extraction system has been evaluated over optimization parameters such as the gap distance between electrodes, the electrode apertures, and the extraction voltage; the gap distance, ground electrode aperture, and extraction voltage were varied between 3 and 9 mm, 2 and 6.5 mm, and 10 and 35 kV, respectively.

  16. Development of a helicon ion source: Simulations and preliminary experiments

    NASA Astrophysics Data System (ADS)

    Afsharmanesh, M.; Habibi, M.

    2018-03-01

    In the present work, the extraction system of a helicon ion source has been simulated and constructed. Results of the ion source commissioning at up to 20 kV are presented, as well as simulations of an ion beam extraction system. An argon current of more than 200 μA at up to 20 kV is extracted and is characterized with a Faraday cup and a beam profile monitoring grid. By varying ion source parameters such as the RF power, extraction voltage, and working pressure, an ion beam with a current distribution exhibiting a central core has been detected. A jump in the ion beam current emerges at an RF power near 700 W, indicating that helicon mode excitation is reached at this power. Furthermore, the emission line intensity of Ar II at 434.8 nm was measured as an independent demonstration of the mode transition from inductively coupled plasma to helicon. A half-helix helicon antenna is used for the ion source development because of its asymmetric longitudinal power absorption. The modeling of the plasma part of the ion source has been carried out using the HELIC code. The simulations assume a Gaussian radial plasma density profile and consider plasma densities in the range of 10¹⁸-10¹⁹ m⁻³. The power absorption spectrum and the excited helicon mode number are obtained, and the longitudinal RF power absorption for two different antenna positions is compared. Our results indicate that positioning the antenna near the plasma electrode is desirable for ion beam extraction. The simulation of the extraction system was performed with the ion optical code IBSimu, making this the first helicon ion source extraction system designed with that code. The ion beam emittance and the Twiss parameters of the emittance ellipse are calculated at different iteration counts and mesh sizes, and the best values of the mesh size and iteration number have been determined for the calculations. The simulated extraction system has been evaluated over optimization parameters such as the gap distance between electrodes, the electrode apertures, and the extraction voltage; the gap distance, ground electrode aperture, and extraction voltage were varied between 3 and 9 mm, 2 and 6.5 mm, and 10 and 35 kV, respectively.

  17. Best response game of traffic on road network of non-signalized intersections

    NASA Astrophysics Data System (ADS)

    Yao, Wang; Jia, Ning; Zhong, Shiquan; Li, Liying

    2018-01-01

    This paper studies the traffic flow in a grid road network with non-signalized intersections. The drivers in the network are modeled as playing an iterated snowdrift game with one another. A cellular automata model is applied to study the characteristics of the traffic flow and the evolution of the drivers' behaviour during the game. The drivers use best response as their strategy-update rule. Three major findings are revealed. First, the cooperation rate in the simulation drops in a staircase shape as the cost-to-benefit ratio r increases, and the cooperation rate can be derived analytically as a function of r. Second, we find that a higher cooperation rate corresponds to higher average speed, lower density, and higher flow, which reveals that defectors degrade the efficiency of traffic at non-signalized intersections. Third, the system exhibits more randomness at low density, because drivers then have little opportunity to update their strategies. These findings help to show how the strategies of drivers in a traffic network evolve and how their interactions influence the overall performance of the traffic system.
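    A minimal best-response rule for the snowdrift game can be written down directly from its payoff structure. The payoff parametrization below (benefit b, cost c shared when both cooperate) is a standard one assumed for illustration and is not necessarily the exact one used in the paper:

```python
def snowdrift_payoff(my_move, other_move, b=1.0, c=0.6):
    """Snowdrift payoffs (1 = cooperate, 0 = defect): mutual cooperation
    shares the cost c of clearing the drift, a lone cooperator bears it all,
    and mutual defection yields nothing."""
    if my_move and other_move:
        return b - c / 2
    if my_move:
        return b - c
    if other_move:
        return b
    return 0.0

def best_response(other_move, b=1.0, c=0.6):
    """Best response to the opponent's previous move."""
    return max((0, 1), key=lambda a: snowdrift_payoff(a, other_move, b, c))

# For 0 < c < b the best response is anti-coordination: defect against a
# cooperator, cooperate against a defector.
print(best_response(1), best_response(0))  # → 0 1
```

    This anti-coordination structure is what sustains a nonzero cooperation rate in the network, and varying the cost-to-benefit ratio r shifts which responses are optimal.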

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Z. X.; Wang, W. X.; Diamond, P. H.

    We report that intrinsic torque, which can be generated by turbulent stresses, can induce toroidal rotation in a tokamak plasma at rest without direct momentum injection. Reversals in intrinsic torque have been inferred from the observation of toroidal velocity changes in recent lower hybrid current drive (LHCD) experiments. Here we focus on understanding the cause of LHCD-induced intrinsic torque reversal using gyrokinetic simulations and theoretical analyses. A new mechanism for the intrinsic torque reversal, linked to magnetic shear (ŝ) effects on the turbulence spectrum, is identified. This reversal is a consequence of the ballooning structure at weak ŝ. Based on realistic profiles from the Alcator C-Mod LHCD experiments, simulations demonstrate that the intrinsic torque reverses for weak-ŝ discharges and that the value of the critical shear ŝ_crit is consistent with the experimentally determined ŝ_crit^exp [Rice et al., Phys. Rev. Lett. 111, 125003 (2013)]. The consideration of this intrinsic torque feature is important for understanding rotation profile generation at weak ŝ and its consequent impact on macro-instability stabilization and micro-turbulence reduction, which is crucial for ITER. It is also relevant to internal transport barrier formation at negative or weakly positive ŝ.

  19. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to identify the dominant error source from these degraded reconstructions without any prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources that cannot be separated because of their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
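    The simulated-annealing loop at the heart of such a calibration can be sketched as follows; this is a generic parameter-fitting example with an assumed toy cost function, not the SC-FPM procedure itself:

```python
import math
import random

def simulated_anneal(cost, x0, step=0.5, t0=1.0, cooling=0.95, n_steps=400, seed=1):
    """Minimal simulated-annealing minimizer: uphill moves are accepted with
    probability exp(-dE/T), letting the search escape local minima while the
    temperature T is gradually reduced."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(n_steps):
        cand = x + rng.uniform(-step, step)  # random move in parameter space
        ce = cost(cand)
        de = ce - e
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling  # geometric cooling schedule
    return best_x

# Toy calibration: recover an unknown offset by minimizing a squared misfit,
# standing in for the error metric evaluated at each iteration step.
true_offset = 2.0
misfit = lambda x: (x - true_offset) ** 2
print(simulated_anneal(misfit, x0=-5.0))  # close to 2.0
```

    SC-FPM wraps a loop of this kind around a multi-parameter error metric and couples it with LED intensity correction and an adaptive step size.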

  20. Comparison with CLPX II airborne data using DMRT model

    USGS Publications Warehouse

    Xu, X.; Liang, D.; Andreadis, K.M.; Tsang, L.; Josberger, E.G.

    2009-01-01

    In this paper, we consider a physics-based model that uses numerical solutions of Maxwell's equations in three-dimensional simulations within Dense Media Radiative Transfer (DMRT) theory. The model is validated against two datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were all obtained from Ku-band (13.95 GHz) observations with the airborne imaging polarimetric scatterometer (POLSCAT). Snow is a densely packed medium. To take into account both collective and incoherent scattering, the analytical Quasi-Crystalline Approximation (QCA) and the Numerical Maxwell Equation Method in 3-D simulations (NMM3D) are used to calculate the extinction coefficient and phase matrix. The DMRT equations were solved by an iterative solution up to second order for the case of small optical thickness; when the optical thickness exceeds unity, a full multiple-scattering solution obtained by decomposing the diffuse intensities into a Fourier series was used. The model predictions agree with the field experiments in both co-polarization and cross-polarization. For the Alaska region, the input snow structure data were obtained from in situ ground observations, while for the Colorado region we combined the VIC model to obtain the snow profile. © 2009 IEEE.

  1. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
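    The deterministic baseline these hybrid schemes start from, the preconditioned stationary Richardson iteration, can be sketched as follows; the Monte Carlo acceleration of the correction terms discussed in the paper is not shown:

```python
import numpy as np

def richardson(A, b, M_inv, n_iter=100):
    """Preconditioned (stationary) Richardson iteration x <- x + M^{-1}(b - A x).
    This is the deterministic baseline; the hybrid schemes in the paper
    accelerate it by estimating correction terms with Monte Carlo sampling.
    It converges when the spectral radius of (I - M^{-1} A) is below one."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + M_inv @ (b - A @ x)
    return x

# Diagonally dominant test system with a Jacobi preconditioner M = diag(A),
# a convergent splitting of the coefficient matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))
x = richardson(A, b, M_inv)
print(np.allclose(A @ x, b))  # → True
```

    The sparse approximate inverses investigated in the paper play the role of `M_inv` here; resiliency comes from the fact that stochastic estimates of the residual correction can tolerate lost or faulty contributions.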

  2. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    DOE PAGES

    Safi, E.; Valles, G.; Lasa, A.; ...

    2017-03-27

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next-generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor-relevant conditions is vital for predicting the lifetime and viability of the ITER plasma-facing components. To date, a reliable prediction of Be erosion focusing on the effect of two such parameters, surface temperature and D surface content, has not been achieved. In this paper, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as to investigate the parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. Finally, we use MD simulations to investigate the effect of temperature (300–800 K) and impact energy (10–200 eV) on the erosion of Be under D plasma irradiation. The results reveal a strong dependence of the D surface content on temperature: increasing the surface temperature leads to a lower D concentration at the surface, because D atoms tend to avoid being accommodated in vacancies and, after de-trapping from impurity sites, diffuse quickly toward the bulk. Next, the total and molecular Be erosion yields under D irradiation are analyzed using MD simulations. The results show a strong dependence of the erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules, owing to the lower D surface concentrations at higher temperatures. These findings correlate well with experiments performed at the JET and PISCES-B devices.

  3. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safi, E.; Valles, G.; Lasa, A.

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next-generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor-relevant conditions is vital for predicting the lifetime and viability of the ITER plasma-facing components. To date, a reliable prediction of Be erosion focusing on the effect of two such parameters, surface temperature and D surface content, has not been achieved. In this paper, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as to investigate the parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. Finally, we use MD simulations to investigate the effect of temperature (300–800 K) and impact energy (10–200 eV) on the erosion of Be under D plasma irradiation. The results reveal a strong dependence of the D surface content on temperature: increasing the surface temperature leads to a lower D concentration at the surface, because D atoms tend to avoid being accommodated in vacancies and, after de-trapping from impurity sites, diffuse quickly toward the bulk. Next, the total and molecular Be erosion yields under D irradiation are analyzed using MD simulations. The results show a strong dependence of the erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules, owing to the lower D surface concentrations at higher temperatures. These findings correlate well with experiments performed at the JET and PISCES-B devices.

  4. Multi-scale modelling to relate beryllium surface temperature, deuterium concentration and erosion in fusion reactor environment

    NASA Astrophysics Data System (ADS)

    Safi, E.; Valles, G.; Lasa, A.; Nordlund, K.

    2017-05-01

    Beryllium (Be) has been chosen as the plasma-facing material for the main wall of ITER, the next-generation fusion reactor. Identifying the key parameters that determine Be erosion under reactor-relevant conditions is vital for predicting the lifetime and viability of the ITER plasma-facing components. To date, a reliable prediction of Be erosion focusing on the effect of two such parameters, surface temperature and D surface content, has not been achieved. In this work, we develop the first multi-scale KMC-MD modeling approach for Be to provide a more accurate database for its erosion, as well as to investigate the parameters that affect erosion. First, we calculate the complex relationship between surface temperature and D concentration by simulating the time evolution of the system using an object kinetic Monte Carlo (OKMC) technique. These simulations provide a D surface concentration profile for any surface temperature and incoming D energy. We then describe how this profile can be implemented as a starting configuration in molecular dynamics (MD) simulations. Finally, we use MD simulations to investigate the effect of temperature (300-800 K) and impact energy (10-200 eV) on the erosion of Be under D plasma irradiation. The results reveal a strong dependence of the D surface content on temperature: increasing the surface temperature leads to a lower D concentration at the surface, because D atoms tend to avoid being accommodated in vacancies and, after de-trapping from impurity sites, diffuse quickly toward the bulk. Next, the total and molecular Be erosion yields under D irradiation are analyzed using MD simulations. The results show a strong dependence of the erosion yields on surface temperature and incoming ion energy. The total Be erosion yield increases with temperature for impact energies up to 100 eV. However, increasing temperature and impact energy results in a lower fraction of Be atoms being sputtered as BeD molecules, owing to the lower D surface concentrations at higher temperatures. These findings correlate well with experiments performed at the JET and PISCES-B devices.

  5. Experience on divertor fuel retention after two ITER-Like Wall campaigns

    NASA Astrophysics Data System (ADS)

    Heinola, K.; Widdowson, A.; Likonen, J.; Ahlgren, T.; Alves, E.; Ayres, C. F.; Baron-Wiechec, A.; Barradas, N.; Brezinsek, S.; Catarino, N.; Coad, P.; Guillemaut, C.; Jepu, I.; Krat, S.; Lahtinen, A.; Matthews, G. F.; Mayer, M.; Contributors, JET

    2017-12-01

    The JET ITER-Like Wall experiment, with its all-metal plasma-facing components, provides a unique environment for plasma and plasma-wall interaction studies. These studies are of great importance in understanding the underlying phenomena taking place during the operation of a future fusion reactor. The present work summarizes and reports the plasma fuel retention in the divertor resulting from the first two experimental campaigns with the ITER-Like Wall. The deposition pattern in the divertor after the second campaign shows the same trend as was observed after the first campaign: the highest deposition, 10-15 μm, was found on the top part of the inner divertor. Owing to the change in plasma magnetic configurations from the first to the second campaign, and the resulting strike-point locations, an increase in deposition was observed at the base of the divertor. The deuterium retention was found to be affected by the hydrogen plasma experiments done at the end of the second experimental campaign.

  6. Flux-driven turbulence GDB simulations of the IWL Alcator C-Mod L-mode edge compared with experiment

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Prior to predicting confinement regime transitions in tokamaks, one may need an accurate description of L-mode profiles and turbulence properties. These features determine the heat-flux width on which wall integrity depends, a topic of major interest for research in support of ITER. To this end our work uses the GDB model to simulate the Alcator C-Mod edge and contributes support for its use in studying critical edge phenomena in current and future tokamaks. We carried out 3D electromagnetic flux-driven two-fluid turbulence simulations of inner-wall-limited (IWL) C-Mod shots spanning closed and open flux surfaces. These simulations are compared with gas puff imaging (GPI) and mirror Langmuir probe (MLP) data, examining global features and statistical properties of the turbulent dynamics. GDB reproduces important qualitative aspects of the C-Mod edge, including global density and temperature profiles, within reasonable margins, and the statistics of the simulated turbulence follow similar quantitative trends, though questions remain about the code's difficulty in exactly predicting quantities such as the autocorrelation time. A proposed breakpoint in the near-SOL pressure, and the separation between drift and ballooning dynamics it is posited to represent, are also examined. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).

  7. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations in SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  8. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials.

    PubMed

    Prabhu, Rajkumar; Whittington, Wilburn R; Patnaik, Sourav S; Mao, Yuxiong; Begonia, Mark T; Williams, Lakiesha N; Liao, Jun; Horstemeyer, M F

    2015-05-18

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat, etc.) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec(-1). The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experiment and FEA strain gage values and first invariant of stresses were in good agreement.
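    The wave-propagation relations referred to above are the classical one-dimensional (Kolsky) SHPB equations. The sketch below is a textbook form with assumed parameter names, not the authors' analysis code; sign conventions for the reflected wave vary between setups:

```python
import numpy as np

def shpb_sample_response(eps_r, eps_t, dt, c0, Ls, Eb, Ab, As):
    """Classical one-dimensional (Kolsky) SHPB analysis: recover the sample
    strain rate, strain, and stress from the reflected (eps_r) and
    transmitted (eps_t) bar strain-gage signals.

    c0: bar wave speed, Ls: sample length, Eb: bar elastic modulus,
    Ab, As: bar and sample cross-sectional areas, dt: sampling interval.
    """
    strain_rate = -2.0 * c0 * eps_r / Ls   # from the reflected wave
    strain = np.cumsum(strain_rate) * dt   # time-integrated sample strain
    stress = Eb * (Ab / As) * eps_t        # from the transmitted wave
    return strain_rate, strain, stress
```

    With a constant reflected strain of -10⁻³, a 5000 m/s bar wave speed, and a 10 mm sample, the first relation gives a strain rate of 1000 s⁻¹, within the 100-1,500 sec(-1) range quoted in the abstract.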

  9. A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials

    PubMed Central

    Prabhu, Rajkumar; Whittington, Wilburn R.; Patnaik, Sourav S.; Mao, Yuxiong; Begonia, Mark T.; Williams, Lakiesha N.; Liao, Jun; Horstemeyer, M. F.

    2015-01-01

    This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g. brain, liver, tendon, fat) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 s-1. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e. incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e. transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e. reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e. optimized) to the experimental data such that the experiment and FEA strain gage values and first invariants of stress were in good agreement. PMID:26067742
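
    The iterative calibration loop described in this record can be sketched schematically. The saturating stress model, the quadratic misfit, and all parameter values below are illustrative assumptions, with a generic optimizer standing in for the coupled FE/ISV analysis:

```python
import numpy as np
from scipy.optimize import minimize

strain = np.linspace(0.0, 0.3, 50)

def forward(params, strain):
    """Toy stand-in for the FE/ISV forward model: stress as a function of strain."""
    a, b = params
    return a * (1.0 - np.exp(-b * strain))   # saturating stress response

experimental = forward((2.0, 5.0), strain)   # surrogate for measured data

def misfit(params):
    # quadratic distance between simulated and "experimental" responses
    return np.sum((forward(params, strain) - experimental) ** 2)

result = minimize(misfit, x0=[1.0, 1.0], method="Nelder-Mead")
print(np.round(result.x, 2))                 # approaches the generating parameters (2, 5)
```

    Each optimizer step plays the role of one calibration iteration: run the forward model, compare with the experimental response, and update the material parameters until the two agree.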

  10. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
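
    The core of the iterative scheme can be illustrated in one dimension. Everything below is an assumed toy setup (a sinusoidal displacement field, a Zel'dovich-type displacement estimate, and hand-picked smoothing scales), not the authors' implementation:

```python
import numpy as np

npart, ngrid = 16384, 256
k = 2 * np.pi * np.fft.fftfreq(ngrid, d=1.0 / ngrid)   # angular modes, unit box
q = np.arange(npart) / npart                 # uniform initial positions
psi_true = 0.03 * np.sin(2 * np.pi * 3 * q)  # toy large-scale displacement
x = (q + psi_true) % 1.0                     # "evolved" particle positions

def delta(pos):
    """Density contrast of the catalog on the grid."""
    hist, _ = np.histogram(pos, bins=ngrid, range=(0.0, 1.0))
    return hist / hist.mean() - 1.0

cum = np.zeros(npart)                        # cumulative displacement per particle
for smooth in [0.1, 0.05, 0.02, 0.01]:       # progressively reduced smoothing
    dk = np.fft.fft(delta(x)) * np.exp(-0.5 * (k * smooth) ** 2)
    psi_k = np.where(k != 0, 1j * dk / k, 0.0)   # 1-D Zel'dovich-type estimate
    psi = np.fft.ifft(psi_k).real
    step = psi[(x * ngrid).astype(int) % ngrid]  # displacement at particle positions
    x = (x - step) % 1.0                     # move back along the estimated flow
    cum += step

delta_lin = -np.gradient(cum, 1.0 / npart)   # estimated linear initial density
corr = np.corrcoef(cum, psi_true)[0, 1]
print(corr > 0.9)                            # recovered displacement tracks the true one
```

    Each pass estimates the displacement from the smoothed density, moves the catalog back toward uniformity, and accumulates the displacement whose (negative) divergence approximates the linear initial density.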

  11. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods: we observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface-interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
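
    A brute-force sketch can convey the single-step idea: locate a minimal set of interface points once, then rebuild the level-set field directly as a signed distance to that set, with no iteration. The circle test case and the zero-crossing interface extraction below are assumptions for illustration; the paper's forward-tracing algorithm is far more efficient:

```python
import numpy as np

n = 101
g = np.linspace(-1.0, 1.0, n)
x, y = np.meshgrid(g, g)
phi = 3.0 * (x**2 + y**2 - 0.23)          # distorted level set: |grad phi| != 1

# 1) minimal interface set: zero crossings of phi along grid lines
pts = []
s = np.sign(phi)
for i, j in zip(*np.where(s[:, :-1] * s[:, 1:] < 0)):   # crossings in x
    t = phi[i, j] / (phi[i, j] - phi[i, j + 1])
    pts.append((x[i, j] + t * (x[i, j + 1] - x[i, j]), y[i, j]))
for i, j in zip(*np.where(s[:-1, :] * s[1:, :] < 0)):   # crossings in y
    t = phi[i, j] / (phi[i, j] - phi[i + 1, j])
    pts.append((x[i, j], y[i, j] + t * (y[i + 1, j] - y[i, j])))
pts = np.array(pts)

# 2) single pass: signed distance from every node to the interface set
d = np.sqrt((x[..., None] - pts[:, 0]) ** 2
            + (y[..., None] - pts[:, 1]) ** 2).min(axis=-1)
phi_sd = np.sign(phi) * d

# the zero level set is preserved, but phi is now a signed-distance field
print(abs(phi_sd[n // 2, n // 2] + np.sqrt(0.23)) < 0.02)
```

    Because the distance is computed directly from the interface cells, no repeated sweeps over the domain (and hence no repeated block-boundary exchanges) are needed.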

  12. Numerical analysis of modified Central Solenoid insert design

    DOE PAGES

    Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...

    2015-06-21

    The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The US IPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagneticmore » simulations supported structural calculations providing necessary loads and strains. According to current analysis design of the modified coil satisfies ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4K, no current, (3) temperature 4K, current 60 kA direct charge, and (4) temperature 4K, current 60 kA reverse charge. Fatigue life assessment analysis is performed for the alternating conditions of: temperature 4K, no current, and temperature 4K, current 45 kA direct charge. Results of fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using parameterization of the critical surface in the form similar to that proposed for ITER. Lastly, special ADPL scripts were developed for ANSYS allowing one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in superconductor material. Published by Elsevier B.V.« less

  13. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
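
    The ill-posed core of the second inverse step can be sketched with Tikhonov-regularized inverse filtering using the standard k-space dipole kernel; the spherical test source and the regularization value are assumptions, and the paper's split Bregman TV solver is more elaborate than this:

```python
import numpy as np

n = 32
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
with np.errstate(invalid="ignore", divide="ignore"):
    D = 1.0 / 3.0 - np.where(k2 > 0, kz**2 / k2, 0.0)   # dipole kernel in k-space

x = np.arange(n) - n // 2                   # toy susceptibility source: a sphere
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
chi = (X**2 + Y**2 + Z**2 < 5**2).astype(float)

field = np.fft.ifftn(D * np.fft.fftn(chi)).real   # forward step: chi -> field map

lam = 1e-3                                  # Tikhonov regularization weight
chi_rec = np.fft.ifftn(D / (D**2 + lam) * np.fft.fftn(field)).real

corr = np.corrcoef(chi.ravel(), chi_rec.ravel())[0, 1]
print(corr > 0.9)                           # faithful away from the kernel's zero cone
```

    The kernel vanishes on a cone in k-space, which is exactly what makes plain inverse filtering unstable and some form of regularization (Tikhonov, truncated filtering, or TV iteration) necessary.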

  14. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  15. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  16. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

    The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to the achievement of thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the huge progress made in just a few years, from the signing of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase and the procurement of several MITICA components is at a well advanced stage.

  17. Plasma-surface interaction in the context of ITER.

    PubMed

    Kleyn, A W; Lopes Cardozo, N J; Samm, U

    2006-04-21

    The decreasing availability of energy and concern about climate change necessitate the development of novel sustainable energy sources. Fusion energy is such a source. Although it will take several decades to develop fusion into routinely operated power plants, its ultimate potential is very high and urgently needed. A major step forward in the development of fusion energy is the decision to construct the experimental test reactor ITER. ITER will stimulate research in many areas of science. This article serves as an introduction to some of those areas. In particular, we discuss research opportunities in the context of plasma-surface interactions. The fusion plasma, with a typical temperature of 10 keV, has to be brought into contact with a physical wall in order to remove the helium produced and drain the excess energy in the fusion plasma. The fusion plasma is far too hot to be brought into direct contact with a physical wall: it would degrade the wall, and the debris from the wall would extinguish the plasma. Therefore, schemes are being developed to cool down the plasma locally before it impacts on a physical surface. The resulting plasma-surface interaction in ITER faces several challenges, including surface erosion, material redeposition and tritium retention. In this article we introduce how the plasma-surface interaction relevant to ITER can be studied in small-scale experiments. The various requirements for such experiments are introduced and examples of present and future experiments are given. The emphasis in this article is on experimental studies of plasma-surface interactions.

  18. Round Robin Study: Molecular Simulation of Thermodynamic Properties from Models with Internal Degrees of Freedom.

    PubMed

    Schappals, Michael; Mecklenfeld, Andreas; Kröger, Leif; Botan, Vitalie; Köster, Andreas; Stephan, Simon; García, Edder J; Rutkai, Gabor; Raabe, Gabriele; Klein, Peter; Leonhard, Kai; Glass, Colin W; Lenhard, Johannes; Vrabec, Jadran; Hasse, Hans

    2017-09-12

    Thermodynamic properties are often modeled by classical force fields which describe the interactions on the atomistic scale. Molecular simulations are used for retrieving thermodynamic data from such models, and many simulation techniques and computer codes are available for that purpose. In the present round robin study, the following fundamental question is addressed: will different user groups working with different simulation codes obtain coinciding results within the statistical uncertainty of their data? A set of 24 simple simulation tasks is defined and solved by five user groups working with eight molecular simulation codes: DL_POLY, GROMACS, IMC, LAMMPS, ms2, NAMD, Tinker, and TOWHEE. Each task consists of the definition of (1) a pure fluid that is described by a force field and (2) the conditions under which the properties of interest are to be determined. The fluids are four simple alkanes: ethane, propane, n-butane, and iso-butane. All force fields consider internal degrees of freedom: OPLS, TraPPE, and a modified OPLS version with bond stretching vibrations. Density and potential energy are determined as a function of temperature and pressure on a grid which is specified such that all states are liquid. The user groups worked independently and reported their results to a central instance. The full set of results was disclosed to all user groups only at the end of the study. During the study, the central instance gave only qualitative feedback. The results reveal the challenges of carrying out molecular simulations. Several iterations were needed to eliminate gross errors. For most simulation tasks, the remaining deviations between the results of the different groups are acceptable from a practical standpoint, but they are often outside of the statistical errors of the individual simulation data. However, there are also cases where the deviations are unacceptable. This study highlights similarities between computer experiments and laboratory experiments, which are both subject not only to statistical error but also to systematic error.

  19. ELM-induced transient tungsten melting in the JET divertor

    NASA Astrophysics Data System (ADS)

    Coenen, J. W.; Arnoux, G.; Bazylev, B.; Matthews, G. F.; Autricque, A.; Balboa, I.; Clever, M.; Dejarnac, R.; Coffey, I.; Corre, Y.; Devaux, S.; Frassinetti, L.; Gauthier, E.; Horacek, J.; Jachmich, S.; Komm, M.; Knaup, M.; Krieger, K.; Marsen, S.; Meigs, A.; Mertens, Ph.; Pitts, R. A.; Puetterich, T.; Rack, M.; Stamp, M.; Sergienko, G.; Tamain, P.; Thompson, V.; Contributors, JET-EFDA

    2015-02-01

    The original goals of the JET ITER-like wall included the study of the impact of an all-W divertor on plasma operation (Coenen et al 2013 Nucl. Fusion 53 073043) and fuel retention (Brezinsek et al 2013 Nucl. Fusion 53 083023). ITER has recently decided to install a full-tungsten (W) divertor from the start of operations. One of the key inputs required in support of this decision was the study of the possibility of W melting and melt splashing during transients. Damage of this type can lead to modifications of surface topology which could lead to higher disruption frequency or compromise subsequent plasma operation. Although every effort will be made to avoid leading edges, ITER plasma stored energies are sufficient that transients can drive shallow melting on the top surfaces of components. JET is able to produce ELMs large enough to allow access to transient melting in a regime of relevance to ITER. Transient W melt experiments were performed in JET using a dedicated divertor module and a sequence of IP = 3.0 MA/BT = 2.9 T H-mode pulses with an input power of PIN = 23 MW, a stored energy of ˜6 MJ and regular type I ELMs at ΔWELM = 0.3 MJ and fELM ˜ 30 Hz. By moving the outer strike point onto a dedicated leading edge in the W divertor, the base temperature was raised within ˜1 s to a level allowing transient, ELM-driven melting during the subsequent 0.5 s. Such ELMs (δW ˜ 300 kJ per ELM) are comparable to mitigated ELMs expected in ITER (Pitts et al 2011 J. Nucl. Mater. 415 (Suppl.) S957-64). Although significant material losses in terms of ejections into the plasma were not observed, there is indirect evidence that some small droplets (˜80 µm) were released. Almost 1 mm (˜6 mm3) of W was moved by ˜150 ELMs within 7 subsequent discharges. The impact on the main plasma parameters was minor and no disruptions occurred. The W melt gradually moved along the leading edge towards the high-field side, driven by j × B forces. The evaporation rate determined from spectroscopy is 100 times less than expected from steady-state melting and is thus consistent only with transient melting during the individual ELMs. Analysis of IR data and spectroscopy, together with modelling using the MEMOS code (Bazylev et al 2009 J. Nucl. Mater. 390-391 810-13), points to transient melting as the main process. 3D MEMOS simulations of the consequences of multiple ELMs for damage of tungsten castellated armour have been performed. These experiments provide the first experimental evidence for the absence of significant melt splashing at transient events resembling mitigated ELMs on ITER and establish a key experimental benchmark for the MEMOS code.

  20. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is fundamentally a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes an MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In all three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed comparable performance of the three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have a computational advantage for high-resolution dynamic low-dose CT imaging. PMID:17024831
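
    A minimal, author-independent PWLS sketch in one dimension shows the ingredients: a weighted least-squares data term with signal-dependent variance, a quadratic (MRF-style) smoothness penalty, and iterative Gauss-Seidel updates that are exact coordinate-wise minimizers of the quadratic cost:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
noise_var = 0.05 + 0.1 * np.abs(truth)       # signal-dependent noise variance
y = truth + rng.normal(0.0, np.sqrt(noise_var))
w = 1.0 / noise_var                          # WLS weights = inverse variance
beta = 5.0                                   # penalty strength

def pwls_cost(x):
    fidelity = np.sum(w * (y - x) ** 2)          # weighted data-fidelity term
    penalty = beta * np.sum(np.diff(x) ** 2)     # quadratic smoothness penalty
    return fidelity + penalty

x = y.copy()
for _ in range(50):                          # Gauss-Seidel sweeps
    for i in range(len(x)):
        nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < len(x)]
        # exact coordinate-wise minimizer of the quadratic PWLS cost
        x[i] = (w[i] * y[i] + beta * sum(nbrs)) / (w[i] + beta * len(nbrs))

print(pwls_cost(x) < pwls_cost(y))           # True: the iterations lower the cost
```

    Replacing the coordinate-wise sweep with an over-relaxed update, or diagonalizing part of the problem first, gives the other two implementation flavors discussed in the abstract.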

  1. Hydrogen isotope retention in beryllium for tokamak plasma-facing applications

    NASA Astrophysics Data System (ADS)

    Anderl, R. A.; Causey, R. A.; Davis, J. W.; Doerner, R. P.; Federici, G.; Haasz, A. A.; Longhurst, G. R.; Wampler, W. R.; Wilson, K. L.

    Beryllium has been used as a plasma-facing material to effect substantial improvements in plasma performance in the Joint European Torus (JET), and it is planned as a plasma-facing material for the first wall (FW) and other components of the International Thermonuclear Experimental Reactor (ITER). The interaction of hydrogenic ions, and charge-exchange neutral atoms from plasmas, with beryllium has been studied in recent years with widely varying interpretations of results. In this paper we review experimental data regarding hydrogenic atom inventories in experiments pertinent to tokamak applications and show that with some very plausible assumptions, the experimental data appear to exhibit rather predictable trends. A phenomenon observed in high ion-flux experiments is the saturation of the beryllium surface such that inventories of implanted particles become insensitive to increased flux and to continued implantation fluence. Methods for modeling retention and release of implanted hydrogen in beryllium are reviewed and an adaptation is suggested for modeling the saturation effects. The TMAP4 code used with these modifications has succeeded in simulating experimental data taken under saturation conditions where codes without this feature have not. That implementation also works well under more routine conditions where the conventional recombination-limited release model is applicable. Calculations of tritium inventory and permeation in the ITER FW during the basic performance phase (BPP) using both the conventional recombination model and the saturation effects assumptions show a difference of several orders of magnitude in both inventory and permeation rate to the coolant.

  2. Vector potential methods

    NASA Technical Reports Server (NTRS)

    Hafez, M.

    1989-01-01

    Vector potential and related methods, for the simulation of both inviscid and viscous flows over aerodynamic configurations, are briefly reviewed. The advantages and disadvantages of several formulations are discussed and alternate strategies are recommended. Scalar potential, modified potential, alternate formulations of Euler equations, least-squares formulation, variational principles, iterative techniques and related methods, and viscous flow simulation are discussed.

  3. Xyce

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornquist, Heidi K.; Fixel, Deborah A.; Fett, David Brian

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient modes using standard analog (DAE) and/or device (PDE) models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. It also uses modern solution algorithms, including dynamic parallel load-balancing and iterative solvers.

  4. Thermal modeling of W rod armor.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nygren, Richard Einar

    2004-09-01

    Sandia has developed and tested mockups armored with W rods over the last decade and pioneered the initial development of W rod armor for the International Thermonuclear Experimental Reactor (ITER) in the 1990s. We have also developed 2D and 3D thermal and stress models of W rod-armored plasma facing components (PFCs) and test mockups and are applying the models to both short pulses, i.e. edge localized modes (ELMs), and thermal performance in steady state for applications in C-MOD, DiMES testing and ITER. This paper briefly describes the 2D and 3D models and their applications, with emphasis on modeling for an ongoing test program that simulates repeated heat loads from ITER ELMs.

  5. Subpixel edge estimation with lens aberrations compensation based on the iterative image approximation for high-precision thermal expansion measurements of solids

    NASA Astrophysics Data System (ADS)

    Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.

    2017-06-01

    A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated and acquired images match. A numerical image model is presented consisting of three parts: an edge model; an object and background brightness distribution model; and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
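
    The iterative image-approximation idea can be reduced to a one-dimensional sketch: a parametric edge profile (an assumed error-function model, standing in for the paper's 2D model with aberrations) is fitted to sampled data by numerically minimizing the L2 distance, yielding a subpixel edge estimate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

px = np.arange(64, dtype=float)              # pixel coordinates

def model(p):
    """Assumed edge model: smooth error-function transition."""
    edge, width, lo, hi = p
    return lo + (hi - lo) * 0.5 * (1.0 + erf((px - edge) / width))

acquired = model((31.37, 2.0, 0.1, 0.9))     # synthetic profile, edge at 31.37 px

res = minimize(lambda p: np.sum((model(p) - acquired) ** 2),
               x0=[30.0, 3.0, 0.0, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-12, "maxiter": 2000})
print(round(res.x[0], 2))                    # edge position recovered to subpixel accuracy
```

    The fitted edge parameter lands between pixel centers, which is the sense in which the approach is subpixel; the paper does the same in 2D with an aberrated point-spread function in the model.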

  6. Gyrokinetic Simulations of JET Carbon and ITER-Like Wall Pedestals

    NASA Astrophysics Data System (ADS)

    Hatch, David; Kotschenreuther, Mike; Mahajan, Swadesh; Liu, Xing; Blackmon, Austin; Giroud, Carine; Hillesheim, Jon; Maggi, Costanza; Saarelma, Samuli; JET Contributors Team

    2017-10-01

    Gyrokinetic simulations using the GENE code are presented, which target a fundamental understanding of JET pedestal transport and, in particular, its modification after installation of an ITER-like wall (ILW). A representative pre-ILW (carbon wall) discharge is analyzed as a base case. In this discharge, magnetic diagnostics observe washboard modes, which preferentially affect the temperature pedestal and have frequencies (accounting for Doppler shift) consistent with microtearing modes and inconsistent with kinetic ballooning modes. A similar ILW discharge is examined, which recovers a similar value of H98, albeit at reduced pedestal temperature. This discharge is distinguished by a much higher value of eta, which produces strong ITG- and ETG-driven instabilities in gyrokinetic simulations. Experimental observations provide several targets for comparisons with simulation data, including the toroidal mode number and frequency of magnetic fluctuations, heat fluxes, and inter-ELM profile evolution. Strategies for optimizing pedestal performance are also discussed. This work was supported by U.S. DOE Contract No. DE-FG02-04ER54742 and by EUROfusion under Grant No. 633053.

  7. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. ?? International Association for Mathematical Geology 2008.
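
    A schematic Gauss-Newton analogue of one self-calibration step is shown below; the two-zone travel-time forward model and all values are toy assumptions replacing the streamline transport simulator, but the structure (finite-difference sensitivities driving iterative conductivity updates until simulated and observed arrival times agree) is the same:

```python
import numpy as np

def forward(logK):
    """Toy forward model: tracer arrival time in each zone ~ 1 / K."""
    return 1.0 / np.exp(logK)

obs = forward(np.log(np.array([2.0, 0.5])))   # synthetic "observed" arrival times
logK = np.log(np.array([1.0, 1.0]))           # homogeneous initial guess

for _ in range(20):
    sim = forward(logK)
    J = np.zeros((2, 2))                      # sensitivity matrix d(sim)/d(logK)
    eps = 1e-6
    for j in range(2):
        pert = logK.copy()
        pert[j] += eps
        J[:, j] = (forward(pert) - sim) / eps
    # Gauss-Newton update from the sensitivities and the data misfit
    step = np.linalg.solve(J.T @ J + 1e-12 * np.eye(2), J.T @ (obs - sim))
    logK = logK + step

print(np.round(np.exp(logK), 3))              # approaches the generating [2, 0.5]
```

    In the actual method the sensitivities are obtained semi-analytically from the streamline simulator rather than by finite differences, and the parameter field is a spatially correlated conductivity map rather than two scalars.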

  8. Micromagnetic Simulation of Thermal Effects in Magnetic Nanostructures

    DTIC Science & Technology

    2003-01-01

NiFe magnetic nano-elements are calculated. INTRODUCTION: With decreasing size of magnetic nanostructures, thermal effects become increasingly important... thermal field. The thermal field is assumed to be a Gaussian random process with the statistical properties <H_th(t)> = 0 and <H_th,i(t) H_th,j(t')> ... following property D^(k) = ∇E(M^(k)) − [∇E(M^(k)) · t] t = 0, for k = 1, ..., m (12). The optimal path can be found using an iterative scheme. In each iteration step the

  9. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
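The coarse-to-fine idea can be illustrated with a toy fixed-point solve. As an assumption, a weighted-Jacobi iteration on a 1-D Poisson problem replaces the optical-fluence forward model; the coarse-grid result is interpolated up to initialise the fine grid, and the iteration counts are compared:

```python
import numpy as np

# Multigrid-flavoured sketch (assumption: a 1-D Poisson fixed-point solve
# stands in for the fluence/absorption inversion of the paper).
def jacobi_solve(f, x, tol=1e-8, max_it=100000):
    """Fixed-point (Jacobi) iteration for -x'' = f, grid spacing absorbed in f."""
    it = 0
    while it < max_it:
        x_new = x.copy()
        x_new[1:-1] = 0.5 * (x[:-2] + x[2:] + f[1:-1])
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, it
        x = x_new
        it += 1
    return x, it

n = 65
f = np.ones(n)
# Fine-grid solve from a zero initial guess:
_, it_fine_only = jacobi_solve(f, np.zeros(n))
# Coarse solve on every other node (spacing doubles, so f scales by 4),
# then linear interpolation up to the fine grid:
x_c, _ = jacobi_solve(f[::2] * 4.0, np.zeros((n + 1) // 2))
x0 = np.interp(np.arange(n), np.arange(0, n, 2), x_c)
x_fine, it_nested = jacobi_solve(f, x0)
```

The nested start needs markedly fewer fine-grid iterations, which is the "reduce the required number of iterations without loss of reliability" claim in the abstract in its simplest form.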

  10. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

Accurate knowledge of the translation positions is essential in ptychography to achieve good image quality and diffraction-limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method both improves the quality of the retrieved object image and relaxes the position accuracy requirement when acquiring the diffraction patterns.

  11. Augmented design and analysis of computer experiments: a novel tolerance embedded global optimization approach applied to SWIR hyperspectral illumination design.

    PubMed

    Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter

    2016-12-26

A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is further applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models applied within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with 1 % improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.

  12. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
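The inner image-space EM step is essentially a Richardson-Lucy deconvolution. A minimal 1-D sketch, assuming a Gaussian kernel as the resolution model and omitting the tomographic EM step so the deconvolution behaviour stands alone:

```python
import numpy as np

# Image-space EM (Richardson-Lucy) sketch of the nested resolution-model step.
# Assumptions: 1-D signal, Gaussian blur standing in for the scanner
# resolution model, noiseless data.
def gaussian_kernel(sigma, radius=6):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def blur(x, k):
    return np.convolve(x, k, mode="same")

truth = np.zeros(64)
truth[20:24] = 10.0                            # extended structure
truth[40] = 25.0                               # point-like structure
k = gaussian_kernel(2.0)
measured = blur(truth, k)                      # resolution-degraded image

x = np.ones_like(measured)                     # EM needs a positive start
for _ in range(200):                           # multiple nested EM iterations
    ratio = measured / np.maximum(blur(x, k), 1e-12)
    x = x * blur(ratio, k[::-1])               # multiplicative RL update
err_blurred = np.linalg.norm(measured - truth)
err_deconv = np.linalg.norm(x - truth)
```

Running many cheap inner iterations like these per (expensive) tomographic update is the acceleration idea described in the abstract; the updates stay non-negative by construction.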

  13. Status of the 1 MeV Accelerator Design for ITER NBI

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.

    2011-09-01

The beam source of the neutral beam heating/current drive system for ITER must accelerate a 40 A D- negative ion beam to 1 MeV for 3600 s. In order to realize this beam source, design and R&D work is being carried out in many institutions under the coordination of the ITER Organization. The key issues of the ion source, including source plasma uniformity and suppression of co-extracted electrons in D beam operation, also after long beam durations of over a few hundred seconds, are being addressed mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and then SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. The development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of the beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.

  14. Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Wang, Pei; Lü, Jinhu

    2017-01-01

Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks, with speed determined by the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
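A minimal sketch of the Ing iteration, assuming the transformation matrix is simply the adjacency matrix and the a priori information is uniform; the limit-case connection to eigenvector centrality noted in the abstract can then be checked numerically:

```python
import numpy as np

# Ing sketch: iteratively gather neighbours' scores through a transformation
# matrix. Assumptions: transformation matrix = adjacency matrix, uniform
# a priori scores, small undirected toy network.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

s = np.ones(A.shape[0])                        # a priori information
for _ in range(100):                           # iteration time
    s = A @ s                                  # gather neighbour information
    s = s / np.linalg.norm(s)                  # keep the scale bounded

# Limit case: the dominant eigenvector (eigenvector centrality).
w, v = np.linalg.eigh(A)
ev = np.abs(v[:, np.argmax(w)])
```

With a large iteration time this is power iteration, so the ranking converges to eigenvector centrality at a rate set by the ratio of the two largest eigenvalues, exactly as the abstract states; small iteration times interpolate towards degree-like, local measures.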

  15. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R. J., E-mail: lahaye@fusion.gat.com

    2015-12-10

ITER is an international project to design and build an experimental fusion reactor based on the “tokamak” concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of “H-mode” and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the “missing” current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM “seeding” instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included.
However, a “wild card” may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  16. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.

    2015-12-01

    ITER is an international project to design and build an experimental fusion reactor based on the "tokamak" concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of "H-mode" and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the "missing" current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM "seeding" instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included. 
However, a "wild card" may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  17. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to reduce the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.

  18. Modeling of the ITER-like wide-angle infrared thermography view of JET.

    PubMed

    Aumeunier, M-H; Firdaouss, M; Travère, J-M; Loarer, T; Gauthier, E; Martin, V; Chabaud, D; Humbert, E

    2012-10-01

Infrared (IR) thermography systems are mandatory to ensure safe plasma operation in fusion devices. However, IR measurements are made much more complicated in a metallic environment because of the spurious contributions of reflected fluxes. This paper presents a full predictive photonic simulation able to assess accurately the surface temperature measurement with classical IR thermography from a given plasma scenario, taking into account the optical properties of the PFC materials. This simulation has been carried out for the ITER-like wide-angle infrared camera view of JET and compared with experimental data. The consequences and effects of the low emissivity and of the bidirectional reflectance distribution function used in the model for the metallic PFCs on the contribution of the reflected flux in the analysis are discussed.

  19. A three-dimensional wide-angle BPM for optical waveguide structures.

    PubMed

    Ma, Changbao; Van Keuren, Edward

    2007-01-22

Algorithms for effective modeling of optical propagation in three-dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra's scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
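The "sparse system solved iteratively with a convergence threshold" step can be sketched generically. As an assumption, a diagonally dominant tridiagonal system stands in for one BPM propagation step, and plain Jacobi iteration stands in for the solver:

```python
import numpy as np

# Iterative sparse-solve sketch. Assumptions: a diagonally dominant
# tridiagonal system replaces the actual Pade/BPM operator; Jacobi iteration
# with a residual threshold is the simplest stationary iterative method.
n = 200
main = 4.0 * np.ones(n)                        # dominant diagonal
off = -1.0 * np.ones(n - 1)                    # sub/super diagonals
b = np.ones(n)

def matvec(x):                                 # sparse matrix-vector product
    y = main * x
    y[:-1] += off * x[1:]
    y[1:] += off * x[:-1]
    return y

x = np.zeros(n)
for it in range(200):
    r = b - matvec(x)
    if np.max(np.abs(r)) < 1e-10:              # convergence threshold
        break
    x = x + r / main                           # Jacobi update
residual = np.max(np.abs(b - matvec(x)))
```

Diagonal dominance guarantees Jacobi convergence; in practice BPM systems are solved with stronger Krylov methods, but the threshold logic is the same.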

  20. A three-dimensional wide-angle BPM for optical waveguide structures

    NASA Astrophysics Data System (ADS)

    Ma, Changbao; van Keuren, Edward

    2007-01-01

Algorithms for effective modeling of optical propagation in three-dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.

  1. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
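Constraint-based iterative repair can be sketched on a toy one-machine problem. The setup is hypothetical: unit-length tasks, "one task per slot" as the only constraint, and a repair rule that also penalises perturbation from the original schedule:

```python
import random

# Constraint-based iterative repair sketch. Assumptions: toy single-machine
# problem with unit tasks in time slots; the only constraint is "no two tasks
# in the same slot"; repairs prefer slots close to the original assignment to
# minimise perturbation.
random.seed(0)
n_tasks, n_slots = 12, 15
original = [random.randrange(n_slots) for _ in range(n_tasks)]
schedule = list(original)                      # initial, flawed schedule

def conflicts(sched, i):
    return sum(1 for j, s in enumerate(sched) if j != i and s == sched[i])

for _ in range(500):                           # iterative repair loop
    flawed = [i for i in range(n_tasks) if conflicts(schedule, i) > 0]
    if not flawed:
        break                                  # conflict-free schedule found
    i = random.choice(flawed)
    # Repair: pick the slot minimising (conflicts, perturbation).
    schedule[i] = min(range(n_slots),
                      key=lambda s: (sum(1 for j, t in enumerate(schedule)
                                         if j != i and t == s),
                                     abs(s - original[i])))

total_conflicts = sum(conflicts(schedule, i) for i in range(n_tasks))
```

Each repair strictly reduces the number of conflicting pairs, so the loop terminates quickly; the perturbation term in the key is the "minimize perturbation to the original schedule" concern from the abstract.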

  2. Enhancement of First Wall Damage in Iter Type Tokamak due to Lenr Effects

    NASA Astrophysics Data System (ADS)

    Lipson, Andrei G.; Miley, George H.; Momota, Hiromu

In recent experiments with a pulsed periodic high-current (J ~ 300-500 mA/cm2) D2 glow discharge at deuteron energies as low as 0.8-2.45 keV, a large DD-reaction yield has been obtained. Thick-target yield measurements show an unusually high DD-reaction enhancement (at Ed = 1 keV the yield is about nine orders of magnitude larger than that deduced from the standard Bosch and Hale extrapolation of the DD-reaction cross-section to lower energies). The results obtained in these LENR experiments with glow discharge suggest non-negligible edge plasma effects in the ITER tokamak that were previously ignored. In the case of the ITER DT plasma core, we here estimate the DT-reaction yield at the metal edge due to plasma ion bombardment of the first wall and/or divertor materials.

  3. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
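The iterative two-stage idea can be sketched with a linear model and AR(1)-correlated total errors standing in for the calibrated transport models (an assumption; the study infers Cek with time-series techniques during maximum-likelihood calibration):

```python
import numpy as np

# Iterative two-stage sketch. Stage 1: fit parameters. Stage 2: infer the
# total-error covariance C_ek from the residuals (here an AR(1) model), then
# refit by generalized least squares; the two stages alternate.
rng = np.random.default_rng(3)
n = 400
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
beta_true, rho_true = np.array([1.0, 2.0]), 0.8

e = np.empty(n)                                # AR(1) "total" errors
e[0] = rng.normal()
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + rng.normal(scale=np.sqrt(1 - rho_true**2))
y = X @ beta_true + 0.3 * e

beta = np.linalg.lstsq(X, y, rcond=None)[0]    # stage 1: ordinary least squares
rho = 0.0
idx = np.arange(n)
for _ in range(10):                            # alternate the two stages
    r = y - X @ beta
    rho = np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1])   # infer correlation
    C = rho ** np.abs(np.subtract.outer(idx, idx))         # AR(1) C_ek (scale-free)
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)     # GLS refit
```

Ignoring the correlation (plain OLS with C = I) is the analogue of using CE alone; accounting for it reweights the fit, which is what rebalances the averaging weights in the study.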

  4. A Hardware-Accelerated Quantum Monte Carlo framework (HAQMC) for N-body systems

    NASA Astrophysics Data System (ADS)

    Gothandaraman, Akila; Peterson, Gregory D.; Warren, G. Lee; Hinde, Robert J.; Harrison, Robert J.

    2009-12-01

Interest in the study of structural and energetic properties of highly quantum clusters, such as inert gas clusters, has motivated the development of a hardware-accelerated framework for Quantum Monte Carlo simulations. In the Quantum Monte Carlo method, the properties of a system of atoms, such as the ground-state energies, are averaged over a number of iterations. Our framework is aimed at accelerating the computations in each iteration of the QMC application by offloading the calculation of properties, namely energy and trial wave function, onto reconfigurable hardware. This gives a user the capability to run simulations for a large number of iterations, thereby reducing the statistical uncertainty in the properties, and for larger clusters. This framework is designed to run on the Cray XD1 high performance reconfigurable computing platform, which exploits the coarse-grained parallelism of the processor along with the fine-grained parallelism of the reconfigurable computing devices available in the form of field-programmable gate arrays. In this paper, we illustrate the functioning of the framework, which can be used to calculate the energies for a model cluster of helium atoms. In addition, we present the capabilities of the framework that allow the user to vary the chemical identities of the simulated atoms. Program summary: Program title: Hardware Accelerated Quantum Monte Carlo (HAQMC). Catalogue identifier: AEEP_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEP_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 691 537. No. of bytes in distributed program, including test data, etc.: 5 031 226. Distribution format: tar.gz. Programming language: C/C++ for the QMC application; VHDL and Xilinx 8.1 ISE/EDK tools for FPGA design and development. Computer: Cray XD1 consisting of a dual-core, dual-processor AMD Opteron 2.2 GHz with a Xilinx Virtex-4 (V4LX160) or Xilinx Virtex-II Pro (XC2VP50) FPGA per node; we use the compute node with the Xilinx Virtex-4 FPGA. Operating system: Red Hat Enterprise Linux. Has the code been vectorised or parallelized?: Yes. Classification: 6.1. Nature of problem: Quantum Monte Carlo is a practical method to solve the Schrödinger equation for large many-body systems and obtain the ground-state properties of such systems. This method involves the sampling of a number of configurations of atoms and averaging the properties of the configurations over a number of iterations. We are interested in applying the QMC method to obtain the energy and other properties of highly quantum clusters, such as inert gas clusters. Solution method: The proposed framework provides a combined hardware-software approach, in which the QMC simulation is performed on the host processor, with computationally intensive functions such as the energy and trial wave function computations mapped onto the field-programmable gate array (FPGA) logic device attached as a co-processor to the host processor. We perform the QMC simulation for a number of iterations, as in our original software QMC approach, to reduce the statistical uncertainty of the results. However, the proposed HAQMC framework accelerates each iteration of the simulation by significantly reducing the time taken to calculate the ground-state properties of the configurations of atoms, thereby accelerating the overall QMC simulation. We provide a generic interpolation framework that can be extended to study a variety of pure and doped atomic clusters, irrespective of the chemical identities of the atoms. For the FPGA implementation of the properties, we use a two-region approach for accurately computing the properties over the entire domain, and employ deep pipelines and fixed-point arithmetic for all our calculations, guaranteeing the accuracy required for our simulation.

  5. The effect of Electron Cyclotron Heating on density fluctuations at ion and electron scales in ITER Baseline Scenario discharges on the DIII-D tokamak

    DOE PAGES

    Marinoni, Alessandro; Pinsker, Robert I.; Porkolab, Miklos; ...

    2017-08-01

Experiments simulating the ITER Baseline Scenario on the DIII-D tokamak show that torque-free pure electron heating, when coupled to plasmas subject to a net co-current beam torque, affects density fluctuations at electron scales on a sub-confinement time scale, whereas fluctuations at ion scales change only after profiles have evolved to a new stationary state. Modifications to the density fluctuations measured by the Phase Contrast Imaging diagnostic (PCI) are assessed by analyzing the time evolution following the switch-off of Electron Cyclotron Heating (ECH), thus going from mixed beam/ECH to pure neutral beam heating at fixed βN. Within 20 ms after turning off ECH, the intensity of fluctuations is observed to increase at frequencies higher than 200 kHz; in contrast, fluctuations at lower frequency are seen to decrease in intensity on a longer time scale, after other equilibrium quantities have evolved. Non-linear gyro-kinetic modeling at ion and electron scales suggests that, while the low frequency response of the diagnostic is consistent with the dominant ITG modes being weakened by the slow-time increase in flow shear, the high frequency response is due to prompt changes to the electron temperature profile that enhance electron modes and generate a larger heat flux and an inward particle pinch. Furthermore, these results suggest that electron heated regimes in ITER will feature multi-scale fluctuations that might affect fusion performance via modifications to profiles.

  6. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  7. Simulation and experimental study of rheological properties of CeO2-water nanofluid

    NASA Astrophysics Data System (ADS)

    Loya, Adil; Stair, Jacqueline L.; Ren, Guogang

    2015-10-01

    Metal oxide nanoparticles offer great merits for controlling the rheological, thermal, chemical and physical properties of solutions. The effectiveness of a nanoparticle in modifying the properties of a fluid depends on its diffusive properties with respect to that fluid. In this study, the rheological properties of an aqueous fluid (i.e. water) were enhanced by the addition of CeO2 nanoparticles, and the study was characterized by comparing simulation outcomes with experimental results for the nanofluids. The movement of nanoparticles in the fluidic medium was simulated with a large-scale molecular dynamics program (LAMMPS). The COMPASS force field was employed together with a smoothed particle hydrodynamics (SPH) potential and a dissipative particle dynamics (DPD) potential. This study develops an understanding of how rheological properties are affected by the addition of nanoparticles to a fluid and of the way DPD and SPH can be used to accurately estimate rheological properties including the Brownian effect. The rheological results of the simulation were confirmed by the convergence of the stress autocorrelation function, whereas the experimental properties were measured using a rheometer. The simulated rheological values agreed within 5% of the experimental values over a number of iterations and experimental tests. The results of the experiment and simulation show that a 10% dispersion of CeO2 nanoparticles in water has a viscosity of 2.0-3.3 mPa·s.
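
    The "convergence of the stress autocorrelation function" alludes to the Green-Kubo route, in which the shear viscosity is proportional to the time integral of the shear-stress autocorrelation. The following toy sketch (synthetic AR(1) stress signal, no physical prefactors; not the paper's actual LAMMPS workflow) shows the running integral converging:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic shear-stress time series with correlation time ~ tau.
n, tau = 20000, 5.0
noise = rng.standard_normal(n)
stress = np.empty(n)
stress[0] = noise[0]
alpha = np.exp(-1.0 / tau)
for i in range(1, n):                       # AR(1) process
    stress[i] = alpha * stress[i - 1] + noise[i]

def autocorr(x, maxlag):
    x = x - x.mean()
    return np.array([np.mean(x[:n - k] * x[k:]) for k in range(maxlag)])

acf = autocorr(stress, 200)                 # stress autocorrelation function
running = np.cumsum(acf)                    # running Green-Kubo integral
print(acf[0] > acf[1] > 0, running[-1] > 0)
```

    In a real simulation the plateau of `running` (times the volume over kT factor) gives the viscosity, and its convergence is the check the abstract refers to.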

  8. Estimation of the tritium retention in ITER tungsten divertor target using macroscopic rate equations simulations

    NASA Astrophysics Data System (ADS)

    Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.

    2017-12-01

    Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimate has been made of the tritium retention in the ITER W divertor target under a non-uniform exponential distribution of particle fluxes. Two grades of material are considered to be exposed to tritium ions: undamaged W and W damaged by fast fusion neutrons. Because of the strong temperature gradient in the PFC, the impact of the Soret effect on tritium retention is also evaluated for both cases. From the simulations, the evolution of the tritium retention and of the tritium migration depth is obtained as a function of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.
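
    The macroscopic rate equation approach behind codes like MHIMS balances a mobile solute population against trapped populations. A zero-dimensional, single-trap sketch with entirely hypothetical parameter values (a real solver like MHIMS is 1-D in depth, multi-trap, and includes the temperature field) looks like:

```python
import math

# Mobile solute m is implanted at rate S, fills traps of density N at rate
# k*m*(N - t), detraps thermally with an Arrhenius frequency, and is lost
# to the surface at rate 0.1*m. All values are illustrative.
S, k, N = 1.0e-3, 5.0, 1.0
nu, E, kB, T = 1.0e3, 0.8, 8.617e-5, 500.0
detrap = nu * math.exp(-E / (kB * T))    # thermal detrapping frequency

m, t, dt = 0.0, 0.0, 1.0e-2
for _ in range(200000):                  # explicit Euler to steady state
    trap_flux = k * m * (N - t) - detrap * t
    m += dt * (S - trap_flux - 0.1 * m)  # 0.1*m: loss at the surface
    t += dt * trap_flux

print(round(m, 4), round(t, 4))          # traps nearly saturated
```

    At steady state the net trap flux vanishes, so the mobile level is set by source and surface loss (m = S/0.1 = 0.01 here) while the traps sit near saturation; in the full 1-D problem the same balance, swept through a depth-dependent temperature, produces the migration fronts from which the extrapolation laws are built.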

  9. Coupling the snow thermodynamic model SNOWPACK with the microwave emission model of layered snowpacks for subarctic and arctic snow water equivalent retrievals

    NASA Astrophysics Data System (ADS)

    Langlois, A.; Royer, A.; Derksen, C.; Montpetit, B.; Dupont, F.; GoïTa, K.

    2012-12-01

    Satellite passive microwave remote sensing has been used extensively to estimate snow water equivalent (SWE) in northern regions. Although passive microwave sensors operate independently of solar illumination and the lower frequencies are insensitive to atmospheric conditions, the coarse spatial resolution introduces uncertainties into SWE retrievals due to surface heterogeneity within individual pixels. In this article, we investigate the coupling of a thermodynamic multilayered snow model with a passive microwave emission model. Results show that the snow model by itself provides poor SWE simulations when compared to field measurements from two major field campaigns. Coupling the snow and microwave emission models with successive iterations to correct the influence of snow grain size and density significantly improves the SWE simulations. This method was further validated using an additional independent data set, which also showed significant improvement with the two-step iteration method compared to standalone simulations with the snow model.
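
    The iterative correction idea can be sketched as a simple fixed-point loop. Everything below is illustrative (a made-up linear emission model and gain), not the actual SNOWPACK or emission-model equations: the grain size is nudged until the simulated brightness temperature matches the observation.

```python
def simulate_tb(grain_size):
    # Hypothetical emission model: larger grains scatter more, lowering Tb.
    return 260.0 - 40.0 * grain_size

def correct_grain_size(tb_obs, grain=1.0, gain=0.01, iters=100):
    for _ in range(iters):
        grain += gain * (simulate_tb(grain) - tb_obs)  # nudge toward match
    return grain

g = correct_grain_size(tb_obs=230.0)
print(round(simulate_tb(g), 3))   # → 230.0, simulated Tb matches observed
```

    In the paper's two-step scheme an analogous correction is applied to grain size and density before the corrected snowpack is used for the SWE retrieval.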

  10. Molecular dynamics simulations of interactions between hydrogen and fusion-relevant materials

    NASA Astrophysics Data System (ADS)

    de Rooij, E. D.

    2010-02-01

    In a thermonuclear reactor, fusion between hydrogen isotopes takes place, producing helium and energy. The so-called divertor is the part of the fusion reactor vessel where the plasma is neutralized in order to exhaust the helium. The surface plates of the divertor are subjected to high heat loads and high fluxes of energetic hydrogen and helium. In the next-generation fusion device, the tokamak ITER, the expected conditions at the plates are particle fluxes exceeding 10^24 per square metre per second, particle energies ranging from 1 to 100 eV and an average heat load of 10 MW per square metre. Two materials have been identified as candidates for the ITER divertor plates: carbon and tungsten. Since there are currently no fusion devices that can create these harsh conditions, it is unknown how the materials will behave in terms of erosion and hydrogen retention. To gain more insight into the physical processes under these conditions, molecular dynamics simulations have been conducted. Since diamond has been proposed as a possible plasma facing material, we have studied erosion and hydrogen retention in diamond and amorphous hydrogenated carbon (a-C:H). As in experiments, diamond shows a lower erosion yield than a-C:H; however, the hydrogen retention in diamond is much larger than in a-C:H and hardly depends on the substrate temperature. This implies that simply heating the surface is not sufficient to retrieve the hydrogen from diamond, whereas a-C:H readily releases the retained hydrogen. So, in spite of the higher erosion yield, carbon material other than diamond seems more suitable. Experiments suggest that the erosion yield of carbon material decreases with increasing flux. This was studied in our simulations. The results show no flux dependency, suggesting that the observed reduction is not a material property but is caused by external factors such as redeposition of the erosion products. Our study of the redeposition showed that the sticking probability of small hydrocarbons is highest on material previously subjected to the highest hydrogen flux. This result suggests that redeposition is more effective under high than under low hydrogen fluxes, partly explaining the experimentally observed reduction in the carbon erosion yield. Lastly, we studied amorphous tungsten carbide. Amorphous material with three different carbon percentages (15, 50 and 95%) was subjected to deuterium bombardment and the resulting erosion and deuterium retention were analysed. The 95% carbon sample behaves like doped carbon: the carbon erosion yield is reduced and no tungsten is eroded. Segregation of the materials was observed, resulting in an accumulation of tungsten at the surface. The hydrogen retention was similar to that of a-C:H. The 15% carbon sample showed no significant erosion or retention. The most interesting was the 50% sample: here deuterium bubbles formed that burst open after sufficiently long bombardment, thereby removing both carbon and tungsten from the surface. In the context of ITER, our MD simulations suggest that tungsten is the better suited material, since both the erosion and the hydrogen retention are significantly lower than for carbon.

  11. Nonlocal variational model and filter algorithm to remove multiplicative noise

    NASA Astrophysics Data System (ADS)

    Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi

    2010-07-01

    The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundancy of information in images, has been shown to be very efficient for denoising images corrupted by Gaussian noise. On the basis of the NL method, and striving to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise; combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm is better than the TV method: it is superior at preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.
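
    The NL means averaging at the heart of both the original filter and this paper's variant weights each pixel by the similarity of its surrounding patch. A minimal 1-D sketch for the additive Gaussian case (the paper's multiplicative-noise weights differ; parameters here are arbitrary):

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.5):
    n = len(signal)
    pad = np.pad(signal, patch, mode="edge")
    # One patch of width 2*patch+1 centred on each pixel.
    patches = np.array([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)        # similar patches get high weight
        out[i] = np.sum(w * signal) / np.sum(w)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0], 50)          # step signal
noisy = clean + 0.2 * rng.standard_normal(100)
den = nl_means_1d(noisy)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

    Because every pixel on the same side of the step has many similar patches to average over, the noise is strongly suppressed while the edge is preserved, which is exactly the redundancy argument the abstract invokes.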

  12. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed Central

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2014-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly fine levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between the model and experimental or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines the bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986
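
    The iterative conductivity-tuning idea can be sketched with a toy model (not the paper's bidomain solver; the velocity law and constants below are illustrative). Conduction velocity scales roughly with the square root of bulk conductivity, so each iteration rescales the conductivity by the squared velocity ratio:

```python
def measure_velocity(sigma):
    # Hypothetical stand-in for running a simulation and measuring the
    # conduction velocity for a given bulk conductivity sigma.
    return 0.6 * sigma ** 0.5

def tune_conductivity(v_target, sigma=1.0, tol=1e-6):
    for _ in range(50):
        v = measure_velocity(sigma)
        if abs(v - v_target) < tol:
            break
        sigma *= (v_target / v) ** 2    # sqrt scaling => quadratic update
    return sigma

sigma = tune_conductivity(v_target=0.5)
print(round(measure_velocity(sigma), 6))   # → 0.5, prescribed velocity hit
```

    For an exact square-root law the loop converges in one step; in a real bidomain simulation the law is only approximate, so a few iterations are needed, which is the behavior the paper's algorithm exploits.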

  13. George E. Duvall Shock Compression Science Award Talk: Mesomechanical Modeling of Fracture

    NASA Astrophysics Data System (ADS)

    Curran, Don

    2009-06-01

    This paper reviews the efforts of the author and his colleagues over the past four decades to develop mesomechanical models of material failure. In the early 1970s a procedure known as the NAG/FRAG (Nucleation and Growth to Fragmentation) methodology was introduced by a group at SRI International. Experiments are performed in which the evolution of microstructural damage is measured pre- and post-test as a function of stress, time-at-stress, temperature, and other environmental parameters. Damage nucleation and growth functions are then deduced via iterative computational simulations. We review the history over the past half-century for applications of growing complexity, and conclude with a discussion of a current challenging problem: designing improved glass and ceramic armors.

  14. Dynamics of internal models in game players

    NASA Astrophysics Data System (ADS)

    Taiji, Makoto; Ikegami, Takashi

    1999-10-01

    A new approach to the study of social games and communications is proposed. Games are simulated between cognitive players who build an internal model of their opponent and decide their next strategy from predictions based on that model. In this paper, the internal models are constructed by a recurrent neural network (RNN), and the iterated prisoner's dilemma game is played. The RNN allows us to express the internal model as a geometrical shape. Complicated transients of actions are observed before the stable mutually defecting equilibrium is reached. During the transients, the model shape also becomes complicated and often undergoes chaotic changes. These chaotic dynamics of the internal models reflect the dynamical, high-dimensional rugged landscape of the internal model space.
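
    The iterated prisoner's dilemma underlying the study has simple payoff bookkeeping. In this sketch the RNN-based model players are replaced by two fixed strategies (the payoff matrix is the standard one, not taken from the paper):

```python
# (my move, opponent's move) -> (my payoff, opponent's payoff)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return history[-1][1] if history else "C"   # copy opponent's last move

def always_defect(history):
    return "D"

def play(p1, p2, rounds=10):
    hist1, hist2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(hist1), p2(hist2)
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        hist1.append((m1, m2))
        hist2.append((m2, m1))
    return s1, s2

print(play(tit_for_tat, always_defect))   # → (9, 14): TFT is exploited once
```

    The paper's players replace these fixed rules with RNN predictions of the opponent's next move, and it is the co-adaptation of those predictive models that produces the chaotic transients before mutual defection sets in.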

  15. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the numbers of independent and dependent variables must match; the new method circumvents this limitation by simply adding each missing dependent variable to the original data set, using an additional independent variable also as an additional dependent variable. The desired solution of the regression analysis problem can then be obtained, fitting each gage output as a function of both the original and the additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
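
    The core trick can be demonstrated on synthetic data (all numbers invented for the example): with three independent variables but only two gage outputs, the extra variable (here "temperature") is also appended as a dependent variable, so the variable counts match without disturbing the fit of the real outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
loads = rng.standard_normal((40, 2))          # two balance loads
temp = rng.standard_normal((40, 1))           # extra calibration variable
X = np.hstack([loads, temp])                  # 3 independent variables

true_C = np.array([[1.0, 0.2], [0.1, 0.9], [0.05, -0.03]])
gages = X @ true_C                            # two measured gage outputs

Y = np.hstack([gages, temp])                  # temp added as dependent too
C, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The added column maps temp onto itself; the gage-output fits are intact.
print(np.allclose(C[:, 2], [0, 0, 1], atol=1e-8),
      np.allclose(C[:, :2], true_C, atol=1e-8))   # → True True
```

    Because the augmented column regresses the extra variable onto itself exactly, the coefficient matrix becomes square-compatible with the iteration equations while the gage-output coefficients are unchanged, which is the property the paper relies on.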

  16. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens thanks to the availability of the water window region. In particular, projection-type microscopy has the advantages of a wide viewing area, an easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of the X-rays, which ultimately reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that applying the new simulation and noise evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen images.

  17. Simulation and Spacecraft Design: Engineering Mars Landings.

    PubMed

    Conway, Erik M

    2015-10-01

    A key issue in the history of technology that has received little attention is the use of simulation in engineering design. This article explores the use of both mechanical and numerical simulation in the design of the Mars atmospheric entry phases of the Viking and Mars Pathfinder missions to argue that engineers used both kinds of simulation to develop knowledge of their designs' likely behavior in the poorly known environment of Mars. Each kind of simulation could be used as a warrant of the other's fidelity, in an iterative process of knowledge construction.

  18. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of past error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS finds stable SCF solutions that are not found by other methods. The computational cost of solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to the SCF convergence accelerating methods in computational quantum chemistry packages.
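
    The commutator error measure that LCIIS minimizes rests on a basic fact: at SCF convergence the Fock and density matrices commute. A toy sketch with 2x2 matrices in an orthonormal basis (not LCIIS itself, just the error norm it is built on):

```python
import numpy as np

def commutator_norm(F, D):
    # Frobenius norm of [F, D]; zero exactly when F and D commute.
    return np.linalg.norm(F @ D - D @ F, ord="fro")

# A "converged" pair: a density built from an eigenvector of F commutes
# with F, since F acts on it as a scalar.
F = np.array([[1.0, 0.2], [0.2, 2.0]])
w, V = np.linalg.eigh(F)
D = V[:, :1] @ V[:, :1].T                 # projector onto lowest orbital
print(round(commutator_norm(F, D), 12))   # → 0.0

# An unconverged pair gives a finite error.
D_bad = np.array([[1.0, 0.0], [0.0, 0.0]])
print(commutator_norm(F, D_bad) > 0.1)    # → True
```

    DIIS drives a linear combination of such error matrices to zero; LCIIS instead minimizes this norm for the combined density directly, which is what produces the quartic minimization problem mentioned above.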

  19. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment

    PubMed Central

    Zhang, Rubo; Yang, Yu

    2017-01-01

    We study a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the optimal multi-AUV task planning scheme. In an uncertain marine environment, the rolling time domain control technique is used to perform the numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution. PMID:29186166
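
    The baseline ABC algorithm that STDQABC extends is a population search in which each candidate solution is perturbed toward randomly chosen neighbors and kept only if it improves. A heavily simplified sketch on a toy 2-D objective (the quantum encoding, scout phase and rolling-time-domain machinery of the paper are omitted):

```python
import random

random.seed(0)

def f(x):                                  # toy objective: sphere function
    return sum(v * v for v in x)

def neighbor(x, flock):
    # Perturb one coordinate toward/away from a random flock member.
    k = random.randrange(len(flock))
    j = random.randrange(len(x))
    phi = random.uniform(-1, 1)
    y = list(x)
    y[j] = x[j] + phi * (x[j] - flock[k][j])
    return y

pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
for _ in range(200):                       # employed/onlooker phases, merged
    for i in range(len(pop)):
        cand = neighbor(pop[i], pop)
        if f(cand) < f(pop[i]):            # greedy selection
            pop[i] = cand

best = min(pop, key=f)
print(f(best))
```

    In the multi-AUV setting the objective would encode task assignments and path costs rather than a smooth test function, and the rolling time domain restricts each such optimization to a short planning horizon.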

  20. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment.

    PubMed

    Li, Jianjun; Zhang, Rubo; Yang, Yu

    2017-01-01

    We study a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the optimal multi-AUV task planning scheme. In an uncertain marine environment, the rolling time domain control technique is used to perform the numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution.

Top