Sample records for predictive iterated high

  1. Time Dependent Predictive Modeling of DIII-D ITER Baseline Scenario using Predictive TRANSP

    NASA Astrophysics Data System (ADS)

    Grierson, B. A.; Andre, R. G.; Budny, R. V.; Solomon, W. M.; Yuan, X.; Candy, J.; Pinsker, R. I.; Staebler, G. M.; Holland, C.; Rafiq, T.

    2015-11-01

    ITER baseline scenario discharges on DIII-D, transitioning from combined ECH (3.3 MW) + NBI (2.8 MW) heating to NBI-only (3.0 MW) heating while maintaining βN = 2.0, are modeled with TGLF and MMM, predicting temperature, density and rotation for comparison to experimental measurements. These models capture the reduction of confinement associated with direct electron heating (H98y2 = 0.89 vs. 1.0), consistent with stiff electron transport. Reasonable agreement between experimental and modeled temperature profiles is achieved for both heating methods, whereas density and momentum predictions differ significantly. Transport fluxes from TGLF indicate that on DIII-D the electron energy flux has reached a transition from low-k to high-k turbulence, with stiffer high-k transport that inhibits an increase in core electron stored energy with additional electron heating. Projections to ITER also indicate high electron stiffness. Supported by US DOE DE-AC02-09CH11466, DE-FC02-04ER54698, DE-FG02-07ER54917, DE-FG02-92-ER54141.

  2. Integrated modeling of plasma ramp-up in DIII-D ITER-like and high bootstrap current scenario discharges

    NASA Astrophysics Data System (ADS)

    Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team

    2018-04-01

    Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation; it has been demonstrated to accurately model low-Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from equilibrium reconstruction with motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF SAT0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the electron and ion temperature profiles are better predicted by the VX model. It is found that the SAT0 model works well at high Ip (>0.76 MA), while the VX model covers a wider range of plasma current (Ip > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.

  3. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  4. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.

  5. Multimachine data–based prediction of high-frequency sensor signal noise for resistive wall mode control in ITER

    DOE PAGES

    Liu, Yueqiang; Sabbagh, S. A.; Chapman, I. T.; ...

    2017-03-27

    The high-frequency noise measured by magnetic sensors, at levels above the typical frequency of resistive wall modes, is analyzed across a range of present tokamak devices including DIII-D, JET, MAST, ASDEX Upgrade, JT-60U, and NSTX. A high-pass filter enables identification of the noise component with Gaussian-like statistics that shares certain common characteristics in all devices considered. A conservative prediction is made for ITER plasma operation of the high-frequency noise component of the sensor signals, to be used for resistive wall mode feedback stabilization, based on the multimachine database. The predicted root-mean-square n = 1 (n is the toroidal mode number) noise level is 10^4 to 10^5 G/s for the voltage signal, and 0.1 to 1 G for the perturbed magnetic field signal. The lower cutoff frequency of the Gaussian pickup noise scales linearly with the sampling frequency, with a scaling coefficient of about 0.1. As a result, these basic noise characteristics should be useful for the modeling-based design of the feedback control system for the resistive wall mode in ITER.

  6. Transport modeling of the DIII-D high βp scenario and extrapolations to ITER steady-state operation

    DOE PAGES

    McClenaghan, Joseph; Garofalo, Andrea M.; Meneghini, Orso; ...

    2017-08-03

    In this study, transport modeling of a proposed ITER steady-state scenario based on DIII-D high poloidal-beta (βp) discharges finds that ITB formation can occur with either sufficient rotation or a negative central shear q-profile. The high βp scenario is characterized by a large bootstrap current fraction (80%), which reduces the demands on the external current drive, and a large-radius internal transport barrier which is associated with excellent normalized confinement. Modeling predictions of the electron transport in the high βp scenario improve as q95 approaches levels similar to typical existing models of ITER steady-state, and the ion transport is turbulence dominated. Typical temperature and density profiles from the non-inductive high βp scenario on DIII-D are scaled according to 0D modeling predictions of the requirements for achieving a Q = 5 steady-state fusion gain in ITER with 'day one' heating and current drive capabilities. Then, TGLF turbulence modeling is carried out under systematic variations of the toroidal rotation and the core q-profile. A high bootstrap fraction, high βp scenario is found to be near an ITB formation threshold, and either strong negative central magnetic shear or rotation is found to successfully provide the turbulence suppression required to achieve Q = 5.

  7. Do in-training evaluation reports deserve their bad reputations? A study of the reliability and predictive ability of ITER scores and narrative comments.

    PubMed

    Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn

    2013-10-01

    Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.

  8. Predictive Power Estimation Algorithm (PPEA) - A New Algorithm to Reduce Overfitting for Genomic Biomarker Discovery

    PubMed Central

    Liu, Jiangang; Jolly, Robert A.; Smith, Aaron T.; Searfoss, George H.; Goldstein, Keith M.; Uversky, Vladimir N.; Dunker, Keith; Li, Shuyu; Thomas, Craig E.; Wei, Tao

    2011-01-01

    Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional and high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of overfitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations, (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top-ranked transcripts greatly facilitates development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, we were able to demonstrate that a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from nonadverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses. PMID:21935387
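    The iterative two-way bootstrapping idea (subsample transcripts so that samples outnumber features on each bootstrap replicate, score each transcript, and accumulate the scores into a rank order) can be sketched roughly as follows. The crude class-mean-gap statistic and all names here are illustrative simplifications, not the published PPEA.

```python
import random

def ppea_rank(X, y, n_iter=100, seed=0):
    """Toy sketch of two-way bootstrapping: X is samples x transcripts,
    y holds 0/1 class labels. Returns transcript indices, most predictive first."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    k = max(1, n - 1)                 # keep transcripts-per-iteration below sample count
    score = [0.0] * p
    count = [0] * p
    for _ in range(n_iter):
        feats = rng.sample(range(p), min(k, p))       # bootstrap over transcripts
        boot = [rng.randrange(n) for _ in range(n)]   # bootstrap over samples
        for j in feats:
            a = [X[i][j] for i in boot if y[i] == 1]
            b = [X[i][j] for i in boot if y[i] == 0]
            if not a or not b:
                continue              # replicate drew only one class; skip
            # crude separation statistic: gap between the class means
            score[j] += abs(sum(a) / len(a) - sum(b) / len(b))
            count[j] += 1
    power = [score[j] / count[j] if count[j] else 0.0 for j in range(p)]
    return sorted(range(p), key=lambda j: -power[j])
```

    In this sketch the per-transcript score is averaged over replicates, so a transcript that separates the classes consistently rises to the top of the ranking.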

  9. Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Mani, Ramani

    2007-01-01

    A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.

  10. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    PubMed Central

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine-learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosted classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements in sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data-mining literature. PMID:22254181
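    The reweight-and-revote loop described in the abstract can be sketched as minimal discrete AdaBoost over one-feature decision stumps. This is a simplification of the boosted trees used in the study; the stump learner, labels, and data are illustrative.

```python
import math

def train_stump(xs, ys, w):
    """Pick the threshold/polarity minimizing weighted error on one feature."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if (pol if xi >= thr else -pol) != yi)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds=10):
    """Discrete AdaBoost: ys are +/-1 labels; returns weighted stump ensemble."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, w)
        err = max(err, 1e-10)             # avoid log(0) on a perfect stump
        if err >= 0.5:
            break                          # no better than chance; stop
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # reweight: misclassified points gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * yi * (pol if xi >= thr else -pol))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote over the stumps."""
    s = sum(alpha * (pol if x >= thr else -pol) for alpha, thr, pol in ensemble)
    return 1 if s >= 0 else -1
```

    Each round fits a weak learner to the reweighted data, and the final classification is the sign of the alpha-weighted vote, exactly the scheme the abstract describes.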

  11. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
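    The flavor of such a tare-load iteration can be shown with a single-gage toy sketch: fit a calibration curve assuming the current tare estimate, re-solve the tare from the wind-off (zero applied load) reading, and repeat until the estimate stops changing. The quadratic zero-intercept calibration and all values below are illustrative assumptions, not Galway's algorithm or a real six-component analysis.

```python
import math

def fit_quadratic_through_zero(x, y):
    """Least-squares fit y ~ c1*x + c2*x**2 (gage output is zero at zero load)."""
    s2 = sum(t * t for t in x); s3 = sum(t ** 3 for t in x); s4 = sum(t ** 4 for t in x)
    sy1 = sum(t * v for t, v in zip(x, y)); sy2 = sum(t * t * v for t, v in zip(x, y))
    det = s2 * s4 - s3 * s3
    return (sy1 * s4 - sy2 * s3) / det, (s2 * sy2 - s3 * sy1) / det

def estimate_tare(applied, outputs, wind_off, tol=1e-9, max_iter=100):
    """Fixed-point tare iteration: fit the calibration at loads corrected by the
    current tare guess, then invert the curve at the wind-off reading."""
    tare = 0.0
    for _ in range(max_iter):
        c1, c2 = fit_quadratic_through_zero([L + tare for L in applied], outputs)
        if abs(c2) < 1e-15:
            new_tare = wind_off / c1                  # curve is effectively linear
        else:
            # positive root of c1*t + c2*t**2 = wind_off
            new_tare = (-c1 + math.sqrt(c1 * c1 + 4.0 * c2 * wind_off)) / (2.0 * c2)
        if abs(new_tare - tare) < tol:
            return new_tare
        tare = new_tare
    return tare
```

    For consistent data the iteration has the true tare as a fixed point: once the guessed tare matches, the fitted curve reproduces the wind-off reading exactly.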

  12. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    PubMed Central

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. 
Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954

  13. CALCULATIONS OF SHUTDOWN DOSE RATE FOR THE TPR SPECTROMETER OF THE HIGH-RESOLUTION NEUTRON SPECTROMETER FOR ITER.

    PubMed

    Wójcik-Gargula, A; Tracz, G; Scholz, M

    2017-12-13

    This work presents the results of calculations performed to predict the neutron-induced activity in structural materials that are being considered for use in the TPR spectrometer, one of the detection systems of the High-Resolution Neutron Spectrometer for ITER. An attempt has been made to estimate the shutdown dose rates in Cuboid #1 and to check whether they satisfy ICRP regulatory requirements for occupational exposure to radiation and ITER nuclear safety regulations for areas with personnel access. The results were obtained with MCNP and FISPACT-II calculations.

  14. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  15. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
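    The forecast-analyze-update cycle the authors describe can be illustrated with the simplest possible data-assimilation loop, a scalar Kalman filter: each cycle issues a forecast with quantified uncertainty, then updates the state when the new observation arrives. The random-walk model and all parameters here are illustrative, not from the paper.

```python
def forecast_cycle(observations, x0, p0, q, r):
    """Iterative near-term forecasting as a scalar Kalman filter.
    x0/p0: initial state and variance; q: process variance; r: observation variance.
    Returns the (mean, variance) forecast issued before each observation,
    plus the final analysis state and variance."""
    x, p = x0, p0
    forecasts = []
    for y in observations:
        # forecast step: project forward under a random-walk model; uncertainty grows
        xf, pf = x, p + q
        forecasts.append((xf, pf))
        # analysis step: weigh forecast against the new observation
        k_gain = pf / (pf + r)
        x = xf + k_gain * (y - xf)
        p = (1.0 - k_gain) * pf
    return forecasts, x, p
```

    The point of the sketch is the loop structure itself: every new observation both scores the previous forecast and improves the next one, which is the iterative learning the abstract argues for.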

  16. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104

  17. In-vessel tritium retention and removal in ITER

    NASA Astrophysics Data System (ADS)

    Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.

    Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed-materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. 
We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.

  18. The Physics Basis of ITER Confinement

    NASA Astrophysics Data System (ADS)

    Wagner, F.

    2009-02-01

    ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will determine the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of the predictions, and briefly describes the major characteristics and physics behind the H-mode, the preferred confinement regime of ITER.

  19. Development and benchmarking of TASSER(iter) for the iterative improvement of protein structure predictions.

    PubMed

    Lee, Seung Yup; Skolnick, Jeffrey

    2007-07-01

    To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm, which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single-domain proteins that are ≤200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to the 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å for the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have an RMSD to the native structure <6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets, where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets, from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions.

  20. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating mediums by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary method. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique caused the surface temperature to be bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
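The iterative surface boundary treatment described above can be sketched in a few lines: the interior temperatures are advanced implicitly, while the surface node satisfies a nonlinear balance between incident heating, re-radiation, and conduction into the material. The following is a minimal illustration, not the paper's formulation; the material parameters (k, dx, eps), the incident flux q_in, and the use of a Newton solver are all assumptions of the sketch.

```python
# Hedged sketch: Newton iteration for a radiating surface node coupled to an
# implicit interior, loosely in the spirit of the "iterative boundary
# condition". All parameter values below are illustrative only.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4


def surface_temperature(T_interior, q_in, k=0.05, dx=0.01, eps=0.8,
                        T0=300.0, tol=1e-8, max_iter=50):
    """Solve q_in = eps*SIGMA*Ts**4 + k*(Ts - T_interior)/dx for Ts by Newton.

    The left side is the incident heating; the right side is re-radiation
    plus conduction into the first interior node at temperature T_interior.
    """
    Ts = T0
    for _ in range(max_iter):
        f = q_in - eps * SIGMA * Ts**4 - k * (Ts - T_interior) / dx
        df = -4.0 * eps * SIGMA * Ts**3 - k / dx
        step = f / df
        Ts -= step
        if abs(step) < tol:
            break
    return Ts
```

Bounding the surface temperature at each step, as the abstract notes, is what lets this kind of iteration take large time steps without step-size control.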

  1. Validation of the thermal transport model used for ITER startup scenario predictions with DIII-D experimental data

    DOE PAGES

    Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...

    2010-12-08

We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution, and comparisons with data from our similarity experiments.

  2. How good are the Garvey-Kelson predictions of nuclear masses?

    NASA Astrophysics Data System (ADS)

    Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.

    2009-09-01

The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and of the distance to the region of known masses shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
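As an illustration of how a single Garvey-Kelson relation can be turned into a mass predictor, the sketch below uses one transverse-type relation, which cancels exactly for any mass surface of the form f(N) + g(Z) + h(N+Z), to estimate an unknown mass from five measured neighbors. This is a minimal sketch of the idea only; the paper's iterative procedure averages over several relations and propagates the predictions shell by shell away from the measured region.

```python
def gk_estimate(M, N, Z):
    """Estimate the unknown mass M(N+2, Z-2) from five measured neighbours,
    using one transverse-type Garvey-Kelson relation. The relation is exact
    for any smooth mass surface of the form f(N) + g(Z) + h(N+Z), so the
    error it makes measures the residual (non-smooth) interactions."""
    return (M[(N, Z)] - M[(N, Z - 1)] + M[(N + 1, Z - 2)]
            - M[(N + 1, Z)] + M[(N + 2, Z - 1)])


def gk_fill(M, unknowns):
    """One iteration 'shell': estimate each unknown nucleus (N2, Z2) whose
    five neighbours are already in the mass table M, then add it to M so
    the next shell can build on it."""
    for (N2, Z2) in unknowns:
        N, Z = N2 - 2, Z2 + 2  # relation written relative to (N, Z)
        M[(N2, Z2)] = gk_estimate(M, N, Z)
    return M
```

The cancellation property is what makes the test below exact: on a perfectly smooth surface the estimate reproduces the true mass, and on real data the residual neutron-proton interaction is what remains.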

  3. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of an iteratively optimized training dataset, dose-volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted the two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional new left-breast treatment plans are re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with objective functions derived from the predicted DVH curves. Automatically generated re-optimized treatment plans are compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using dose objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
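The DVH-prediction step can be illustrated with a toy conditional estimate: pool (feature, dose) samples from training plans and, under a product-Gaussian 2D KDE over the pairs, the conditional mean dose at a query feature value reduces to a Nadaraya-Watson weighted average over the feature axis. This is only a schematic of the KDE idea; the feature definition, the bandwidth, and the reduction to a single feature (the paper uses two) are assumptions of the sketch.

```python
import math


def _gauss_weight(u, h):
    """Unnormalised Gaussian kernel weight with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2)


def predict_mean_dose(x_query, samples, hx=0.2):
    """Conditional-mean dose at feature value x_query.

    Under a product-kernel 2D KDE over (feature, dose) pairs, the dose
    kernel integrates out of the conditional mean, leaving the
    Nadaraya-Watson estimator over the feature axis."""
    num = den = 0.0
    for x, dose in samples:
        w = _gauss_weight(x_query - x, hx)
        num += w * dose
        den += w
    return num / den
```

Evaluating this estimator over a grid of feature values (e.g. normalized distance to the target volume) yields the predicted dose curve from which DVH-based objective functions can be derived.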

  4. Simulation of Fusion Plasmas

    ScienceCinema

    Holland, Chris [UC San Diego, San Diego, California, United States

    2017-12-09

    The upcoming ITER experiment (www.iter.org) represents the next major milestone in realizing the promise of using nuclear fusion as a commercial energy source, by moving into the “burning plasma” regime where the dominant heat source is the internal fusion reactions. As part of its support for the ITER mission, the US fusion community is actively developing validated predictive models of the behavior of magnetically confined plasmas. In this talk, I will describe how the plasma community is using the latest high performance computing facilities to develop and refine our models of the nonlinear, multiscale plasma dynamics, and how recent advances in experimental diagnostics are allowing us to directly test and validate these models at an unprecedented level.

  5. Predicting rotation for ITER via studies of intrinsic torque and momentum transport in DIII-D

    DOE PAGES

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.; ...

    2017-03-30

Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of T_e/T_i and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E×B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E×B shear.

  6. Local sharpening and subspace wavefront correction with predictive dynamic digital holography

    NASA Astrophysics Data System (ADS)

    Sulaiman, Sennan; Gibson, Steve

    2017-09-01

Digital holography holds several advantages over conventional imaging and wavefront sensing, chief among these being significantly fewer and simpler optical components and the retrieval of the complex field. Consequently, many imaging and sensing applications, including microscopy and optical tweezing, have turned to digital holography. A significant obstacle for digital holography in real-time applications, such as wavefront sensing for high energy laser systems and high speed imaging for target tracking, is the fact that digital holography is computationally intensive; it requires iterative virtual wavefront propagation and hill-climbing to optimize some sharpness criterion. It has been shown recently that minimum-variance wavefront prediction can be integrated with digital holography and image sharpening to significantly reduce the large number of costly sharpening iterations required to achieve near-optimal wavefront correction. This paper demonstrates further gains in computational efficiency with localized sharpening in conjunction with predictive dynamic digital holography for real-time applications. The method optimizes sharpness of local regions in a detector plane by parallel independent wavefront correction on reduced-dimension subspaces of the complex field in a spectral plane.

  7. Extrapolation of the DIII-D high poloidal beta scenario to ITER steady-state using transport modeling

    NASA Astrophysics Data System (ADS)

    McClenaghan, J.; Garofalo, A. M.; Meneghini, O.; Smith, S. P.

    2016-10-01

Transport modeling of a proposed ITER steady-state scenario based on DIII-D high βP discharges finds that the core confinement may be improved with either sufficient rotation or a negative central shear q-profile. The high poloidal beta scenario is characterized by a large bootstrap current fraction (~80%), which reduces the demands on the external current drive, and a large-radius internal transport barrier, which is associated with improved normalized confinement. Typical temperature and density profiles from the non-inductive high poloidal beta scenario on DIII-D are scaled according to 0D modeling predictions of the requirements for achieving Q=5 steady-state performance in ITER with "day one" H&CD capabilities. Then, TGLF turbulence modeling is carried out under systematic variations of the toroidal rotation and the core q-profile. Either strong negative central magnetic shear or rotation is found to successfully provide the turbulence suppression required to maintain the temperature and density profiles. This work supported by the US Department of Energy under DE-FC02-04ER54698.

  8. Prospects for Advanced Tokamak Operation of ITER

    NASA Astrophysics Data System (ADS)

    Neilson, George H.

    1996-11-01

Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single-null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design) or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allow operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure.
This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles--a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. Modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse-shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these and other operating regimes.

  9. TGLF Recalibration for ITER Standard Case Parameters FY2015: Theory and Simulation Performance Target Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.

    2015-12-01

This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power-balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted.
Although we were able to construct an ad hoc model of the zonal flows that fit the GYRO simulations, the parameters of the model had to be tuned to each case. A physics basis for the zonal flow model was lacking. Electron energy transport at short wavelength: A secondary issue – the high-k electron energy flux – was initially assumed to be independent of the zonal flow effect. However, detailed studies of the fluctuation spectra from recent multiscale (electron- and ion-scale) GYRO simulations provided a critical new insight into the role of zonal flows. The multiscale simulations suggested that advection by the zonal flows strongly suppressed electron-scale turbulence. Radial shear of the zonal E×B fluctuation could not compete with the large electron-scale linear growth rate, but the k_x-mixing rate of the E×B advection could. This insight led to a preliminary new model for the way zonal flows saturate both electron- and ion-scale turbulence. It was also discovered that the strength of the zonal E×B velocity could be computed from the linear growth rate spectrum. The new saturation model (SAT1), which replaces the original model (SAT0), was fit to the multiscale GYRO simulations as well as to the ion-scale GYRO simulations used to calibrate the original SAT0 model. Thus, SAT1 captures the physics of both multiscale electron transport and zonal-flow stabilization. In future work, the SAT1 model will require significant further testing and (expensive) calibration with nonlinear multiscale gyrokinetic simulations over a wider variety of plasma conditions – certainly more than the small set of scans about a single C-Mod L-mode discharge. We believe the SAT1 model holds great promise as a physics-based model of multiscale turbulent transport in fusion devices. Correction to ITER performance predictions: Finally, the impact of the SAT1 model on the ITER hybrid case is mixed.
Without the electron-scale contribution to the fluxes, the Dimits shift makes a significant improvement in the predicted fusion power, as originally posited. Alas, including the high-k electron transport reduces the improvement, yielding a modest net increase in predicted fusion power compared to the TGLF prediction with the original SAT0 model.

  10. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively.
The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificial-intelligence optimization method with a high probability of finding the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shape layout.
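A minimal sketch of the RBF-network idea described above: fit the weights by exact interpolation on a handful of training segments (here a single scalar segment feature mapped to an edge shift), then use the network's output as the OPC starting guess. Everything in this sketch (the 1D feature, the Gaussian width, the training values) is hypothetical; a production OPC model would use multi-dimensional segment features and regularized training rather than exact interpolation.

```python
import math


def rbf(r, eps=1.0):
    """Gaussian radial basis function of distance r."""
    return math.exp(-(eps * r) ** 2)


def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting for the small
    interpolation system (avoids any external dependency)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def fit_rbf(centers, targets, eps=1.0):
    """Exact-interpolation weights: one basis function per training segment."""
    A = [[rbf(abs(ci - cj), eps) for cj in centers] for ci in centers]
    return solve(A, targets)


def predict(x, centers, weights, eps=1.0):
    """Edge-shift initial guess for a segment with feature value x."""
    return sum(w * rbf(abs(x - c), eps) for w, c in zip(weights, centers))
```

The interpolation property guarantees the network reproduces the training shifts exactly, which is the behavior the OPC loop relies on for a good starting point.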

  11. Modelling of transitions between L- and H-mode in JET high plasma current plasmas and application to ITER scenarios including tungsten behaviour

    NASA Astrophysics Data System (ADS)

    Koechl, F.; Loarte, A.; Parail, V.; Belo, P.; Brix, M.; Corrigan, G.; Harting, D.; Koskela, T.; Kukushkin, A. S.; Polevoi, A. R.; Romanelli, M.; Saibene, G.; Sartori, R.; Eich, T.; Contributors, JET

    2017-08-01

The dynamics of the transition from L-mode to a stationary high-Q_DT H-mode regime in ITER is expected to be qualitatively different from present experiments. Differences may be caused, on the one hand, by the low fuelling efficiency of recycling neutrals, which influences the post-transition plasma density evolution. On the other hand, the effect of the plasma density evolution itself on both the alpha heating power and the edge power flow required to sustain H-mode confinement needs to be considered. This paper presents results of modelling studies of the transition to a stationary high-Q_DT H-mode regime in ITER with the JINTRAC suite of codes, which include optimisation of the plasma density evolution to ensure robust achievement of high-Q_DT regimes in ITER on the one hand and the avoidance of tungsten accumulation in this transient phase on the other. As a first step, the JINTRAC integrated models have been validated in fully predictive simulations (excluding core momentum transport, which is prescribed) against core, pedestal and divertor plasma measurements in JET C-wall experiments for the transition from L-mode to stationary H-mode in partially ITER-relevant conditions (highest achievable current and power, H_98,y ~ 1.0, low collisionality, comparable evolution in P_net/P_L-H, but different ρ*, T_i/T_e, Mach number and plasma composition compared to ITER expectations). The selection of transport models (core: NCLASS + Bohm/gyroBohm in L-mode, GLF23 in H-mode) was determined by a trade-off between model complexity and efficiency.
Good agreement between code predictions and measured plasma parameters is obtained if anomalous heat and particle transport in the edge transport barrier are assumed to be reduced at different rates with increasing edge power flow normalised to the H-mode threshold; in particular, the increase in edge plasma density is dominated by this edge transport reduction, as the calculated neutral influx across the separatrix remains unchanged (or even slightly decreases) following the H-mode transition. JINTRAC modelling of H-mode transitions for the ITER 15 MA / 5.3 T high-Q_DT scenarios with the same modelling assumptions as those derived from JET experiments has been carried out. The modelling finds that it is possible to access high-Q_DT conditions robustly for additional heating power levels of P_AUX ⩾ 53 MW by optimising core and edge plasma fuelling in the transition from L-mode to high-Q_DT H-mode. An initial period of low plasma density, in which the plasma accesses the H-mode regime and the alpha heating power increases, needs to be considered after the start of the additional heating; this is then followed by a slow density ramp. Both the duration of the low-density phase and the density ramp rate depend on boundary and operational conditions and can be optimised to minimise the resistive flux consumption in this transition phase. The modelling also shows that fuelling schemes optimised for robust access to high-Q_DT H-mode in ITER are also optimal for preventing contamination of the core plasma by tungsten during this phase.

  12. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determining their complex structures and understanding the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvents the reference state problem in knowledge-based scoring functions by updating the potentials through iteration, and also overcomes the decoy-dependent limitation of previous iterative methods by constructing the decoys iteratively. The derived scoring function, referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. For bound docking, our scoring function DITScoreRR obtained excellent success rates of 90% and 98.3% in binding mode prediction when the top 1 and top 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode prediction when the top 1 and top 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for our ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  13. Development of a spatially resolving x-ray crystal spectrometer for measurement of ion-temperature (T(i)) and rotation-velocity (v) profiles in ITER.

    PubMed

    Hill, K W; Bitter, M; Delgado-Aparicio, L; Johnson, D; Feder, R; Beiersdorfer, P; Dunn, J; Morris, K; Wang, E; Reinke, M; Podpaly, Y; Rice, J E; Barnsley, R; O'Mullane, M; Lee, S G

    2010-10-01

    Imaging x-ray crystal spectrometer (XCS) arrays are being developed as a US-ITER activity for Doppler measurement of T(i) and v profiles of impurities (W, Kr, and Fe) with ∼7 cm (a/30) and 10-100 ms resolution in ITER. The imaging XCS, modeled after a prototype instrument on Alcator C-Mod, uses a spherically bent crystal and 2D x-ray detectors to achieve high spectral resolving power (E/dE>6000) horizontally and spatial imaging vertically. Two arrays will measure T(i) and both poloidal and toroidal rotation velocity profiles. The measurement of many spatial chords permits tomographic inversion for the inference of local parameters. The instrument design, predictions of performance, and results from C-Mod are presented.

  14. Overview of Recent DIII-D Experimental Results

    NASA Astrophysics Data System (ADS)

    Fenstermacher, Max

    2015-11-01

    Recent DIII-D experiments have added to the ITER physics basis and to physics understanding for extrapolation to future devices. ELMs were suppressed by RMPs in He plasmas consistent with ITER non-nuclear phase conditions, and in steady state hybrid plasmas. Characteristics of the EHO during both standard high torque, and low torque enhanced pedestal QH-mode with edge broadband fluctuations were measured, including edge localized density fluctuations with a microwave imaging reflectometer. The path to Super H-mode was verified at high beta with a QH-mode edge, and in plasmas with ELMs triggered by Li granules. ITER acceptable TQ mitigation was obtained with low Ne fraction Shattered Pellet Injection. Divertor ne and Te data from Thomson Scattering confirm predicted drift-driven asymmetries in electron pressure, and X-divertor heat flux reduction and detachment were characterized. The crucial mechanisms for ExB shear control of turbulence were clarified. In collaboration with EAST, high beta-p scenarios were obtained with 80 % bootstrap fraction, high H-factor and stability limits, and large radius ITBs leading to low AE activity. Work supported by the US Department of Energy under DE-FC02-04ER54698 and DE-AC52-07NA27344.

  15. Meta-path based heterogeneous combat network link prediction

    NASA Astrophysics Data System (ADS)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.

  16. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimum output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method to obtain a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion of the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that no prior information is needed to predict the multiples, and better separation results are obtained. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; in these models the primaries and multiples are non-orthogonal. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can not only effectively preserve the primary energy in the seismic records, but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
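The matching-and-subtraction step can be illustrated with the simplest possible adaptive filter: a single least-squares scale factor that matches the predicted multiple to the recorded trace before subtraction. This is a deliberately stripped-down sketch; the expanded pseudo-multichannel matching filter and the fast ICA separation stage described in the abstract are considerably more elaborate.

```python
def match_and_subtract(trace, predicted_multiple):
    """Least-squares match of a predicted multiple to a recorded trace,
    followed by subtraction to estimate the primaries.

    The scale a minimises sum((trace - a * multiple)**2), i.e.
        a = <trace, multiple> / <multiple, multiple>,
    which is exactly the second-order (minimum-energy) criterion whose
    limitations motivate the ICA stage in the paper."""
    num = sum(t * m for t, m in zip(trace, predicted_multiple))
    den = sum(m * m for m in predicted_multiple)
    a = num / den
    return [t - a * m for t, m in zip(trace, predicted_multiple)]
```

When the primaries happen to be orthogonal to the multiples, this already recovers them exactly; the non-orthogonal case is where the higher-order ICA statistics become necessary.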

  17. Multistep-Ahead Air Passengers Traffic Prediction with Hybrid ARIMA-SVMs Models

    PubMed Central

    Ming, Wei; Xiong, Tao

    2014-01-01

Hybrid ARIMA-SVMs prediction models have been established recently, taking advantage of the respective strengths of ARIMA and SVMs in linear and nonlinear modeling. Building upon such hybrid ARIMA-SVMs models, this study extends them to multistep-ahead prediction of air passenger traffic using the two most commonly used multistep-ahead prediction strategies, namely the iterated strategy and the direct strategy. Additionally, the effectiveness of data preprocessing approaches, such as deseasonalization and detrending, is investigated and verified for both strategies. Real data sets comprising four selected airlines' monthly series were collected to assess the effectiveness of the proposed approach. Empirical results demonstrate that the direct strategy performs better for long-term prediction, while the iterated strategy performs better for short-term prediction. Furthermore, both deseasonalization and detrending significantly improve the prediction accuracy under both strategies, indicating the necessity of data preprocessing. As such, this study serves as a full reference for planners in the air transportation industry on how to tackle multistep-ahead prediction tasks under either prediction strategy. PMID:24723814
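
    The two strategies compared above can be sketched as follows. The autoregressive least-squares model and the noise-free sinusoidal series are illustrative assumptions standing in for the paper's ARIMA-SVMs hybrid and the airline data:

```python
import numpy as np

def lag_matrix(y, p, h):
    """Rows: p consecutive lags; target: the value h steps after the last lag."""
    n = len(y) - p - h + 1
    X = np.array([y[i:i + p] for i in range(n)])
    t = np.array([y[i + p + h - 1] for i in range(n)])
    return X, t

def iterated_forecast(y, p, H):
    """One 1-step model, applied recursively, feeding predictions back in."""
    X, t = lag_matrix(y, p, 1)
    w, *_ = np.linalg.lstsq(X, t, rcond=None)
    window = list(y[-p:])
    for _ in range(H):
        window.append(np.dot(window[-p:], w))
    return np.array(window[p:])

def direct_forecast(y, p, H):
    """One dedicated model per horizon h = 1..H, each applied once."""
    preds = []
    for h in range(1, H + 1):
        X, t = lag_matrix(y, p, h)
        w, *_ = np.linalg.lstsq(X, t, rcond=None)
        preds.append(np.dot(y[-p:], w))
    return np.array(preds)

# A noise-free sinusoid obeys an exact AR(2) recurrence, so both strategies
# should recover the continuation almost exactly.
y = np.sin(0.3 * np.arange(300))
truth = np.sin(0.3 * np.arange(300, 312))
it_pred = iterated_forecast(y, 2, 12)
di_pred = direct_forecast(y, 2, 12)
```

    On noisy real series the trade-off the paper reports emerges: the iterated strategy compounds its one-step error over long horizons, while the direct strategy avoids feedback at the cost of fitting H separate models.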

  18. Measured vs. Predicted Pedestal Pressure During RMP ELM Control in DIII-D

    NASA Astrophysics Data System (ADS)

    Zywicki, Bailey; Fenstermacher, Max; Groebner, Richard; Meneghini, Orso

    2017-10-01

From database analysis of DIII-D plasmas with Resonant Magnetic Perturbations (RMPs) for ELM control, we will compare the experimental pedestal pressure (p_ped) to EPED code predictions and present the dependence of any p_ped differences from EPED on RMP parameters not included in the EPED model, e.g., RMP field strength and toroidal and poloidal spectra. The EPED code, based on Peeling-Ballooning and Kinetic Ballooning instability constraints, will also be used by ITER to predict the H-mode p_ped without RMPs. ITER plans to use RMPs as an effective ELM control method. The need to control ELMs in ITER is of the utmost priority, as it directly affects the lifetime of the plasma facing components. An accurate means of determining the impact of RMP ELM control on p_ped is needed, because the device fusion power is strongly dependent on p_ped. With this new collection of data, we aim to provide guidance to predictions of the ITER pedestal during RMP ELM control that can be incorporated in a future predictive code. Work supported in part by US DoE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER54698, and DE-AC52-07NA27344.

  19. Covariate selection with iterative principal component analysis for predicting physical

    USDA-ARS?s Scientific Manuscript database

    Local and regional soil data can be improved by coupling new digital soil mapping techniques with high resolution remote sensing products to quantify both spatial and absolute variation of soil properties. The objective of this research was to advance data-driven digital soil mapping techniques for ...

  20. An Iterative Decambering Approach for Post-Stall Prediction of Wing Characteristics using known Section Data

    NASA Technical Reports Server (NTRS)

    Mukherjee, Rinku; Gopalarathnam, Ashok; Kim, Sung Wan

    2003-01-01

An iterative decambering approach for the post-stall prediction of wings using known section data as inputs is presented. The method can currently be used for incompressible flow and can be extended to compressible subsonic flow using Mach number correction schemes. A detailed discussion of past work on this topic is presented first. Next, an overview of the decambering approach is presented and is illustrated by applying the approach to the prediction of the two-dimensional C(sub l) and C(sub m) curves for an airfoil. The implementation of the approach for iterative decambering of wing sections is then discussed. A novel feature of the current effort is the use of a multidimensional Newton iteration for taking into consideration the coupling between the different sections of the wing. The approach lends itself to implementation in a variety of finite-wing analysis methods such as lifting-line theory, discrete-vortex Weissinger's method, and vortex lattice codes. Results are presented for a rectangular wing for angles of attack from 0 to 25 deg. The results are compared for both increasing and decreasing directions of the angle of attack, and they show that a hysteresis loop can be predicted for post-stall angles of attack.

  1. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm is the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
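
    A minimal sketch of the Newton-Raphson optimization at the heart of the method, under stated assumptions: the one-step tanh plant below stands in for the trained neural network, and the gradient and Hessian come from finite differences rather than the paper's analytic derivation:

```python
import numpy as np

# Hypothetical one-step plant model standing in for the neural network:
# y_{k+1} = a*tanh(y_k) + b*u_k
a, b = 0.9, 0.5

def rollout(y0, u):
    y, ys = y0, []
    for uk in u:
        y = a * np.tanh(y) + b * uk
        ys.append(y)
    return np.array(ys)

def J(u, y0=0.0, r=1.0, lam=1e-3):
    """GPC-style cost: tracking error over the horizon plus control effort."""
    ys = rollout(y0, u)
    return np.sum((ys - r) ** 2) + lam * np.sum(np.asarray(u) ** 2)

def grad_hess(f, u, eps=1e-5):
    """Central-difference gradient and Hessian of a scalar function."""
    n = len(u)
    g, H = np.zeros(n), np.zeros((n, n))
    I = np.eye(n) * eps
    for i in range(n):
        g[i] = (f(u + I[i]) - f(u - I[i])) / (2 * eps)
        for j in range(n):
            H[i, j] = (f(u + I[i] + I[j]) - f(u + I[i] - I[j])
                       - f(u - I[i] + I[j]) + f(u - I[i] - I[j])) / (4 * eps**2)
    return g, H

u = np.zeros(3)                   # control moves over a 3-step horizon
for _ in range(5):                # a few Newton-Raphson steps suffice
    g, H = grad_hess(J, u)
    u = u - np.linalg.solve(H, g)
```

    Because the cost is nearly quadratic in the control moves, the Newton iteration converges in very few steps, which is the property the abstract exploits for real-time control.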

  2. A fluid modeling perspective on the tokamak power scrape-off width using SOLPS-ITER

    NASA Astrophysics Data System (ADS)

    Meier, Eric

    2016-10-01

SOLPS-ITER, a 2D fluid code, is used to conduct the first fluid modeling study of the physics behind the power scrape-off width (λq). When drift physics are activated in the code, λq is insensitive to changes in toroidal magnetic field (Bt), as predicted by the 0D heuristic drift (HD) model developed by Goldston. Using the HD model, which quantitatively agrees with regression analysis of a multi-tokamak database, λq in ITER is projected to be 1 mm instead of the previously assumed 4 mm, magnifying the challenge of maintaining the peak divertor target heat flux below the technological limit. These simulations, which use DIII-D H-mode experimental conditions as input, and reproduce the observed high-recycling, attached outer target plasma, allow insights into the scrape-off layer (SOL) physics that set λq. Independence of λq with respect to Bt suggests that SOLPS-ITER captures basic HD physics: the effect of Bt on the particle dwell time (∝ Bt) cancels with the effect on drift speed (∝ 1/Bt), fixing the SOL plasma density width, and dictating λq. Scaling with plasma current (Ip), however, is much weaker than the roughly 1/Ip dependence predicted by the HD model. Simulated net cross-separatrix particle flux due to magnetic drifts exceeds the anomalous particle transport, and a Pfirsch-Schlüter-like SOL flow pattern is established. Up-down ion pressure asymmetry enables the net magnetic drift flux. Drifts establish in-out temperature asymmetry, and an associated thermoelectric current carries significant heat flux to the outer target. The density fall-off length in the SOL is similar to the electron temperature fall-off length, as observed experimentally. Finally, opportunities and challenges foreseen in ongoing work to extrapolate SOLPS-ITER and the HD model to ITER and future machines will be discussed. Supported by U.S. Department of Energy Contract DE-SC0010434.

  3. Rapid and Iterative Estimation of Predictions of High School Graduation and Other Milestones

    ERIC Educational Resources Information Center

    Porter, Kristin E.; Balu, Rekha; Gunton, Brad; Pestronk, Jefferson; Cohen, Allison

    2016-01-01

    With the advent of data systems that allow for frequent or even real-time student data updates, and recognition that high school students often can move from being on-track to graduation to off-track in a matter of weeks, indicator analysis alone may not provide a complete picture to guide school leaders' actions. The authors of this paper suggest…

  4. Extending the physics basis of quiescent H-mode toward ITER relevant parameters

    DOE PAGES

    Solomon, W. M.; Burrell, K. H.; Fenstermacher, M. E.; ...

    2015-06-26

Recent experiments on DIII-D have addressed several long-standing issues needed to establish quiescent H-mode (QH-mode) as a viable operating scenario for ITER. In the past, QH-mode was associated with low density operation, but it has now been extended to high normalized densities compatible with operation envisioned for ITER. Through the use of strong shaping, QH-mode plasmas have been maintained at high densities, both absolute (n̄e ≈ 7 × 10¹⁹ m⁻³) and normalized Greenwald fraction (n̄e/nG > 0.7). In these plasmas, the pedestal can evolve to very high pressure and edge current as the density is increased. High density QH-mode operation with strong shaping has allowed access to a previously predicted regime of very high pedestal dubbed “Super H-mode”. Calculations of the pedestal height and width from the EPED model are quantitatively consistent with the experimentally observed density evolution. The confirmation of the shape dependence of the maximum density threshold for QH-mode helps validate the underlying theoretical model of peeling-ballooning modes for ELM stability. In general, QH-mode is found to achieve ELM-stable operation while maintaining adequate impurity exhaust, due to the enhanced impurity transport from an edge harmonic oscillation, thought to be a saturated kink-peeling mode driven by rotation shear. In addition, the impurity confinement time is not affected by rotation, even though the energy confinement time and measured E×B shear are observed to increase at low toroidal rotation. Together with demonstrations of high beta, high confinement and low q95 for many energy confinement times, these results suggest QH-mode as a potentially attractive operating scenario for the ITER Q=10 mission.

  5. Validation of Kinetic-Turbulent-Neoclassical Theory for Edge Intrinsic Rotation in DIII-D Plasmas

    NASA Astrophysics Data System (ADS)

    Ashourvan, Arash

    2017-10-01

Recent experiments on DIII-D with low-torque neutral beam injection (NBI) have provided a validation of a new model of momentum generation in a wide range of conditions spanning L- and H-mode with direct ion and electron heating. A challenge in predicting the bulk rotation profile for ITER has been to capture the physics of momentum transport near the separatrix and steep gradient region. A recent theory has presented a model for edge momentum transport which predicts the value and direction of the main-ion intrinsic velocity at the pedestal-top, generated by the passing orbits in the inhomogeneous turbulent field. In this study, the model-predicted velocity is tested on DIII-D for a database of 44 low-torque NBI discharges comprised of both L- and H-mode plasmas. For moderate NBI powers (PNBI < 4 MW), the model prediction agrees well with the experiments for both L- and H-mode. At higher NBI power the experimental rotation is observed to saturate and even degrade compared to theory. TRANSP-NUBEAM simulations performed for the database show that for discharges with nominally balanced, but high powered, NBI, the net injected torque through the edge can exceed 1 N.m in the counter-current direction. The theory model has been extended to compute the rotation degradation from this counter-current NBI torque by solving a reduced momentum evolution equation for the edge, and the revised velocity prediction is found to be in agreement with experiment. Projecting to the ITER baseline scenario, this model predicts a value for the pedestal-top rotation (ρ ≈ 0.9) comparable to 4 krad/s. Using the theory modeled, and now tested, velocity to predict the bulk plasma rotation opens up a path to more confidently projecting the confinement and stability in ITER. Supported by the US DOE under DE-AC02-09CH11466 and DE-FC02-04ER54698.

  6. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements such as the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. As illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
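
    The iterative Kalman update itself can be sketched with a plain Monte Carlo ensemble on a scalar toy problem. The polynomial chaos surrogate and ANOVA truncation that make PCIKF cheap are omitted here, and the quadratic "pressure" forward model is a hypothetical stand-in for the liquid-gas coupling model:

```python
import numpy as np

rng = np.random.default_rng(0)

h = lambda k: k ** 2          # toy forward model: parameter -> "pressure"
k_true = 2.0
d_obs = h(k_true)             # noise-free observation for clarity
sigma_d = 0.05                # assumed observation-error std
n_iter, n_ens = 4, 2000       # iterations and ensemble size

# Prior ensemble for the uncertain parameter (e.g. a permeability).
k_ens = rng.normal(1.0, 0.3, n_ens)

# Iterative (multiple-data-assimilation style) Kalman updates: each pass
# uses inflated observation noise so the combined update is not overconfident.
for _ in range(n_iter):
    p_ens = h(k_ens)
    C_kp = np.cov(k_ens, p_ens)[0, 1]                # param-obs covariance
    C_pp = p_ens.var()                               # obs-obs variance
    K = C_kp / (C_pp + n_iter * sigma_d ** 2)        # Kalman gain
    d_pert = d_obs + np.sqrt(n_iter) * sigma_d * rng.standard_normal(n_ens)
    k_ens = k_ens + K * (d_pert - p_ens)             # analysis step
```

    Iterating the update lets the linearized gain track the nonlinear forward model, which a single EnKF analysis step would handle poorly.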

  7. A Technique for Transient Thermal Testing of Thick Structures

    NASA Technical Reports Server (NTRS)

    Horn, Thomas J.; Richards, W. Lance; Gong, Leslie

    1997-01-01

    A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally-conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated less than 25 F from the predicted surface temperatures.
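
    The iterative refinement loop can be sketched on a lumped thermal model. The first-order plant, its parameters, and the deliberate model mismatch below are all illustrative assumptions, not the actual test-article values:

```python
import numpy as np

# Lumped test-article model: C*dT/dt = q(t) - k*(T - T_env).
C, k_model, T_env = 1000.0, 5.0, 300.0
k_true = 6.0                      # deliberate plant/model mismatch that the
                                  # iteration must absorb (open-loop control)
dt, n = 0.5, 400
time = np.arange(n) * dt
T_tgt = T_env + 200.0 * (1.0 - np.exp(-time / 50.0))   # predicted surface temps

def run_test(q, k):
    """Euler simulation of one 'thermal test' with heater power profile q."""
    T = np.empty(n)
    T[0] = T_env
    for i in range(n - 1):
        T[i + 1] = T[i] + dt / C * (q[i] - k * (T[i] - T_env))
    return T

q = np.zeros(n)                   # initial heater power command profile
gamma = 0.5                       # relaxation factor for the power update
for _ in range(40):               # iterations between (simulated) tests
    e = T_tgt - run_test(q, k_true)
    # model-inverse correction of the power command profile from the
    # measured-vs-predicted temperature difference
    q[:-1] += gamma * (C * (e[1:] - e[:-1]) / dt + k_model * e[:-1])
err = np.max(np.abs(T_tgt - run_test(q, k_true)))
```

    Each pass shrinks the temperature error even though the controller never sees the true plant parameters, mirroring how the open-loop iteration absorbs environmental and system-design effects.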

  8. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential based surface panel method was applied both to the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the method presented can save programming and calculating time. Numerical results for a JD simplified ducted propeller series showed that the method presented is effective for predicting the steady hydrodynamic performance of ducted propellers.

  9. ITER activities and fusion technology

    NASA Astrophysics Data System (ADS)

    Seki, M.

    2007-10-01

At the 21st IAEA Fusion Energy Conference, 68 and 67 papers were presented in the categories of ITER activities and fusion technology, respectively. ITER performance prediction, results of technology R&D and the construction preparation provide good confidence in ITER realization. The superconducting tokamak EAST achieved its first plasma just before the conference. The construction of other new experimental machines has also shown steady progress. Future reactor studies stress the importance of downsizing and of a steady-state approach. Reactor technology in the field of blankets, including the ITER TBM programme, and materials for the demonstration power plant showed sound progress in both R&D and design activities.

  10. High-resolution tungsten spectroscopy relevant to the diagnostic of high-temperature tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Rzadkiewicz, J.; Yang, Y.; Kozioł, K.; O'Mullane, M. G.; Patel, A.; Xiao, J.; Yao, K.; Shen, Y.; Lu, D.; Hutton, R.; Zou, Y.; JET Contributors

    2018-05-01

The x-ray transitions in Cu- and Ni-like tungsten ions in the 5.19-5.26 Å wavelength range that are relevant as a high-temperature tokamak diagnostic, in particular for JET in the ITER-like wall configuration, have been studied. Tungsten spectra were measured at the upgraded Shanghai Electron Beam Ion Trap operated with electron-beam energies from 3.16 to 4.55 keV. High-resolution measurements were performed by means of a flat Si 111 crystal spectrometer equipped with a CCD camera. The experimental wavelengths were determined with an accuracy of 0.3-0.4 mÅ. The wavelength of the ground-state transition in Cu-like tungsten from the 3p^5 3d^10 4s 4d [(3/2,(1/2,5/2)_2]_1/2 level was measured. All measured wavelengths were compared with those measured from JET ITER-like wall plasmas and with other experiments and various theoretical predictions, including COWAN, RELAC, multiconfigurational Dirac-Fock (MCDF), and FAC calculations. To obtain higher accuracy from the theoretical predictions, the MCDF calculations were extended by taking into account correlation effects (configuration-interaction approach). It was found that such an extension brings the calculations closer to the experimental values in comparison with the other calculations.

  11. Exploring the knowledge behind predictions in everyday cognition: an iterated learning study.

    PubMed

    Stephens, Rachel G; Dunn, John C; Rao, Li-Lin; Li, Shu

    2015-10-01

    Making accurate predictions about events is an important but difficult task. Recent work suggests that people are adept at this task, making predictions that reflect surprisingly accurate knowledge of the distributions of real quantities. Across three experiments, we used an iterated learning procedure to explore the basis of this knowledge: to what extent is domain experience critical to accurate predictions and how accurate are people when faced with unfamiliar domains? In Experiment 1, two groups of participants, one resident in Australia, the other in China, predicted the values of quantities familiar to both (movie run-times), unfamiliar to both (the lengths of Pharaoh reigns), and familiar to one but unfamiliar to the other (cake baking durations and the lengths of Beijing bus routes). While predictions from both groups were reasonably accurate overall, predictions were inaccurate in the selectively unfamiliar domains and, surprisingly, predictions by the China-resident group were also inaccurate for a highly familiar domain: local bus route lengths. Focusing on bus routes, two follow-up experiments with Australia-resident groups clarified the knowledge and strategies that people draw upon, plus important determinants of accurate predictions. For unfamiliar domains, people appear to rely on extrapolating from (not simply directly applying) related knowledge. However, we show that people's predictions are subject to two sources of error: in the estimation of quantities in a familiar domain and extension to plausible values in an unfamiliar domain. We propose that the key to successful predictions is not simply domain experience itself, but explicit experience of relevant quantities.

  12. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features, reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.
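
    Since the exact ITS procedure is not spelled out in the abstract, here is a heavily simplified sketch of its two steps (iterative re-sampling of the class priors, then overlap cleaning). The 2-D Gaussian data and the Tomek-link-style nearest-neighbour rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical imbalanced dataset: 200 "non-toxic" (0) vs 20 "toxic" (1).
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(2.5, 1.0, (20, 2))])
y = np.array([0] * 200 + [1] * 20)

def sampling_step(X, y, steps=5):
    """Iteratively shift the class priors: duplicate minority points and
    drop majority points until the two classes are (nearly) balanced."""
    for _ in range(steps):
        n0, n1 = int((y == 0).sum()), int((y == 1).sum())
        if n1 >= n0:
            break
        gap = n0 - n1
        add = min(gap // 2 + 1, n1)            # oversample the minority
        drop = gap // 2                        # undersample the majority
        mi, ma = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
        keep = rng.choice(ma, size=n0 - drop, replace=False)
        dup = rng.choice(mi, size=add, replace=True)
        X = np.vstack([X[keep], X[mi], X[dup]])
        y = np.array([0] * (n0 - drop) + [1] * (n1 + add))
    return X, y

def cleaning_step(X, y):
    """Remove majority points whose nearest neighbour is a minority point
    (a Tomek-link-style cure for overlap created by resampling)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    drop = (y == 0) & (y[D.argmin(axis=1)] == 1)
    return X[~drop], y[~drop]

Xb, yb = sampling_step(X, y)
Xc, yc = cleaning_step(Xb, yb)
```

    The cleaned, balanced set would then feed the Bagging classifier of the third phase.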

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, K W; Delgado-Aprico, L; Johnson, D

Imaging XCS arrays are being developed as a US-ITER activity for Doppler measurement of Ti and v profiles of impurities (W, Kr, Fe) with ~7 cm (a/30) and 10-100 ms resolution in ITER. The imaging XCS, modeled after a PPPL-MIT instrument on Alcator C-Mod, uses a spherically bent crystal and 2D x-ray detectors to achieve high spectral resolving power (E/dE > 6000) horizontally and spatial imaging vertically. Two arrays will measure Ti and both poloidal and toroidal rotation velocity profiles. Measurement of many spatial chords permits tomographic inversion for inference of local parameters. The instrument design, predictions of performance, and results from C-Mod will be presented.

  14. Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model

    NASA Astrophysics Data System (ADS)

    Meneghini, O.

    2015-11-01

    The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal structure, and equilibrium problems from first-principles and its experimental tests are presented. Validation with DIII-D discharges in high confinement regimes shows that the workflow is capable of robustly predicting the kinetic profiles from on axis to the separatrix and matching the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height nor of any measurement of the temperature or pressure. Self-consistent coupling has proven to be essential to match the experimental results, and capture the non-linear physics that governs the core and pedestal solutions. In particular, clear stabilization of the pedestal peeling ballooning instabilities by the global Shafranov shift and destabilization by additional edge bootstrap current, and subsequent effect on the core plasma profiles, have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux), and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, the pedestal, and the core transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.

Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of Te/Ti and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E×B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E×B shear.

  16. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. By our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions.
Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
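
    The controlled fixed-point iteration at the core of this framework can be sketched in one dimension. The sinusoidal forward DVF and the constant feedback value below are illustrative (the paper's adaptive and spatially variant controls modulate μ rather than fixing it):

```python
import numpy as np

# Forward 1-D deformation vector field (hypothetical smooth example).
u = lambda x: 0.4 * np.sin(x)

y = np.linspace(0.0, 2.0 * np.pi, 200)
v = np.zeros_like(y)            # initial guess for the inverse DVF
mu = 0.7                        # constant feedback-control parameter

for _ in range(50):
    ic = v + u(y + v)           # inverse-consistency (IC) residual
    v -= mu * ic                # feedback-controlled fixed-point update

# After convergence, the map x = y + v(y) composed with the forward map
# x + u(x) returns y, i.e. the IC residual vanishes.
```

    Setting μ = 1 recovers the plain fixed-point iteration v ← −u(y + v); values below 1 damp the update, which is what enlarges the convergence region for DVFs with strong nontranslational components.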

  17. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new, very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
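
    A minimal sketch of the defect-correction idea for a 1-D model problem, with a first-order upwind driver and a second-order upwind target (the paper analyzes the 2D convection equation; the grid and right-hand side here are illustrative):

```python
import numpy as np

n = 20
h = 1.0 / n
x = np.arange(1, n + 1) * h               # nodes x_1..x_n, with u(0) = 0

# First-order upwind "driver" and second-order upwind "target" operators
# for du/dx = f on (0, 1].
A1 = (np.eye(n) - np.eye(n, k=-1)) / h
A2 = (1.5 * np.eye(n) - 2.0 * np.eye(n, k=-1) + 0.5 * np.eye(n, k=-2)) / h
A2[0] = A1[0]                             # first node falls back to 1st order

f = np.cos(x)                             # exact solution is sin(x)
u = np.zeros(n)
for _ in range(120):
    # defect-correction step: the cheap driver A1 corrects the defect
    # (residual) of the accurate target operator A2
    u = u + np.linalg.solve(A1, f - A2 @ u)
```

    For this pair of operators the asymptotic error-reduction factor per iteration is 0.5, matching the rate reported in the abstract; the slow initial phase on fine grids is also visible if n is increased.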

  18. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  19. An efficient iterative model reduction method for aeroviscoelastic panel flutter analysis in the supersonic regime

    NASA Astrophysics Data System (ADS)

    Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.

    2018-05-01

    The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy aimed at suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution of the present study is to propose an efficient and accurate iterative enriched Ritz basis for dealing with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of the flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.

  20. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
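
    The iterative scheme compared above can be illustrated with a minimal sketch (not the authors' code) of an implicit Newmark step with a fixed number of Newton corrections for an SDOF system. The average-acceleration parameters, restoring-force interface, and sanity-check problem are assumptions for the demo:

```python
import numpy as np

def newmark_fixed_iter(m, c, fs, kt, p, dt, nsteps, n_iter=3,
                       beta=0.25, gamma=0.5, u0=0.0, v0=0.0):
    """Implicit Newmark (average acceleration) with a fixed number of
    Newton corrections per step. fs(u) is the restoring force, kt(u) its
    tangent stiffness, and p(t) the external load history."""
    u, v = u0, v0
    a = (p(0.0) - c * v - fs(u)) / m                 # consistent initial acceleration
    hist = [u]
    for n in range(nsteps):
        t1 = (n + 1) * dt
        un, vn, an = u, v, a
        u_new = un                                   # displacement predictor
        for _ in range(n_iter):                      # fixed iteration count
            a_new = (u_new - un - dt * vn) / (beta * dt**2) - (0.5 / beta - 1.0) * an
            v_new = vn + dt * ((1.0 - gamma) * an + gamma * a_new)
            R = p(t1) - m * a_new - c * v_new - fs(u_new)   # unbalanced force
            keff = kt(u_new) + gamma / (beta * dt) * c + m / (beta * dt**2)
            u_new += R / keff                        # Newton correction
        u = u_new
        a = (u - un - dt * vn) / (beta * dt**2) - (0.5 / beta - 1.0) * an
        v = vn + dt * ((1.0 - gamma) * an + gamma * a)
        hist.append(u)
    return np.array(hist)

# sanity check: linear SDOF free vibration with period T = 1 s; average
# acceleration is unconditionally stable and adds no numerical damping
w = 2.0 * np.pi
resp = newmark_fixed_iter(m=1.0, c=0.0, fs=lambda u: w**2 * u,
                          kt=lambda u: w**2, p=lambda t: 0.0,
                          dt=0.01, nsteps=100, u0=1.0)
```

    The unbalanced force R computed each iteration is exactly the equilibrium-error index the abstract mentions; tracking its final value per step indicates whether the fixed iteration count was sufficient.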

  1. AORSA full wave calculations of helicon waves in DIII-D and ITER

    NASA Astrophysics Data System (ADS)

    Lau, C.; Jaeger, E. F.; Bertelli, N.; Berry, L. A.; Green, D. L.; Murakami, M.; Park, J. M.; Pinsker, R. I.; Prater, R.

    2018-06-01

    Helicon waves have recently been proposed as an off-axis current drive actuator for the DIII-D, FNSF, and DEMO tokamaks. Previous ray tracing modeling using GENRAY predicts strong single-pass absorption and current drive in the mid-radius region on DIII-D in high-beta tokamak discharges. The full wave code AORSA, which is valid to all orders of the Larmor radius and can resolve arbitrary ion cyclotron harmonics, has been used to validate the ray tracing technique. If the scrape-off-layer (SOL) is ignored in the modeling, AORSA agrees with GENRAY in both the amplitude and location of driven current for DIII-D and ITER cases. These models also show that helicon current drive can possibly be an efficient current drive actuator for ITER. Previous GENRAY analysis did not include the SOL. AORSA has also been used to extend the simulations to include the SOL and to estimate possible power losses of helicon waves in the SOL. AORSA calculations show that another mode can propagate in the SOL and lead to significant (~10%–20%) SOL losses at high SOL densities. Optimizing the SOL density profile can reduce these SOL losses to a few percent.

  4. The high-βN hybrid scenario for ITER and FNSF steady-state missions

    DOE PAGES

    Turco, Francesca; Petty, Clinton C.; Luce, Timothy C.; ...

    2015-05-15

    New experiments on DIII-D have demonstrated the steady-state potential of the hybrid scenario, with 1 MA of plasma current driven fully non-inductively and βN up to 3.7 sustained for ~3 s (~1.5 current diffusion times, τR, in DIII-D), providing the basis for an attractive option for steady-state operation in ITER and FNSF. Excellent confinement is achieved (H98y2 ~ 1.6) without performance-limiting tearing modes. Furthermore, the hybrid regime overcomes the need for off-axis current drive efficiency, taking advantage of poloidal magnetic flux pumping that is believed to be the result of a saturated 3/2 tearing mode. This allows for efficient current drive close to the axis, without deleterious sawtooth instabilities. In these experiments, the edge surface loop voltage is driven down to zero for >1 τR when the poloidal β is increased above 1.9 at a plasma current of 1.0 MA and the ECH power is increased to 3.2 MW. Stationary operation of hybrid plasmas with all on-axis current drive is sustained at pressures slightly above the ideal no-wall limit, while the calculated ideal with-wall MHD limit is βN ~ 4-4.5. Off-axis NBI power has been used to broaden the pressure and current profiles in this scenario, seeking to take advantage of higher predicted kink stability limits and lower values of the tearing stability index Δ', as calculated by the DCON and PEST3 codes. Our results are based on measured profiles that predict ideal limits at βN > 4.5, 10% higher than the cases with on-axis NBI. A 0-D model, based on the present confinement, βN, and shape values of the DIII-D hybrid scenario, shows that these plasmas are consistent with the ITER 9 MA, Q=5 mission and the FNSF 6.7 MA scenario with Q=3.5. With collisionality and edge safety factor values comparable to those envisioned for ITER and FNSF, the high-βN hybrid represents an attractive high performance option for the steady-state missions of these devices.

  5. COBRA: A Computational Brewing Application for Predicting the Molecular Composition of Organic Aerosols

    PubMed Central

    Fooshee, David R.; Nguyen, Tran B.; Nizkorodov, Sergey A.; Laskin, Julia; Laskin, Alexander; Baldi, Pierre

    2012-01-01

    Atmospheric organic aerosols (OA) represent a significant fraction of airborne particulate matter and can impact climate, visibility, and human health. These mixtures are difficult to characterize experimentally due to their complex and dynamic chemical composition. We introduce a novel Computational Brewing Application (COBRA) and apply it to modeling oligomerization chemistry stemming from condensation and addition reactions in OA formed by photooxidation of isoprene. COBRA uses two lists as input: a list of chemical structures comprising the molecular starting pool, and a list of rules defining potential reactions between molecules. Reactions are performed iteratively, with products of all previous iterations serving as reactants for the next. The simulation generated thousands of structures in the mass range of 120–500 Da, and correctly predicted ~70% of the individual OA constituents observed by high-resolution mass spectrometry. Select predicted structures were confirmed with tandem mass spectrometry. Esterification was shown to play the most significant role in oligomer formation, with hemiacetal formation less important, and aldol condensation insignificant. COBRA is not limited to atmospheric aerosol chemistry; it should be applicable to the prediction of reaction products in other complex mixtures for which reasonable reaction mechanisms and seed molecules can be supplied by experimental or theoretical methods. PMID:22568707
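
    The iterative "brewing" loop described above, a seed pool plus reaction rules, with each generation's products feeding the next, can be sketched as follows. The seed masses and the single esterification rule (condensation with loss of one water) are hypothetical stand-ins for the paper's isoprene photooxidation chemistry:

```python
import itertools

WATER = 18.0106  # approximate mass of the H2O lost in a condensation step

def esterify(m1, m2):
    """Condense a carboxylic-acid group of m1 with a hydroxyl of m2,
    losing one water (the dominant oligomerization route per the abstract)."""
    if m1["acid"] > 0 and m2["oh"] > 0:
        return {"mass": round(m1["mass"] + m2["mass"] - WATER, 4),
                "acid": m1["acid"] - 1 + m2["acid"],
                "oh": m1["oh"] + m2["oh"] - 1}
    return None

def brew(seeds, rules, n_iter=3, mass_range=(120.0, 500.0)):
    """Apply reaction rules iteratively: products of all previous
    iterations serve as reactants for the next. Deduplicate by mass."""
    pool = list(seeds)
    seen = {m["mass"] for m in pool}
    for _ in range(n_iter):
        new = []
        for a, b in itertools.product(pool, repeat=2):
            for rule in rules:
                prod = rule(a, b)
                if prod is not None and prod["mass"] not in seen:
                    seen.add(prod["mass"])
                    new.append(prod)
        pool.extend(new)
    lo, hi = mass_range
    return sorted(m["mass"] for m in pool if lo <= m["mass"] <= hi)

# two hypothetical isoprene-oxidation monomers: a hydroxy acid and a diol
seeds = [{"mass": 120.042, "acid": 1, "oh": 1},
         {"mass": 118.063, "acid": 0, "oh": 2}]
masses = brew(seeds, [esterify], n_iter=2)
```

    The resulting mass list is what would be matched against high-resolution mass spectra; a full implementation would track molecular structures rather than just masses and functional-group counts.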

  6. Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest

    Treesearch

    Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie

    2010-01-01

    We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit in an iterative manner for annual growth coefficients from data with multiple periodic remeasurement intervals. The traditional methods for such a sensitivity analysis either involve laborious math or...

  7. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  8. FAST TRACK PAPER: Non-iterative multiple-attenuation methods: linear inverse solutions to non-linear inverse problems - II. BMG approximation

    NASA Astrophysics Data System (ADS)

    Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing

    2004-12-01

    The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.

  9. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with the PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200 ms and 600 ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86 mm and 0.89 mm for 200 ms and 600 ms lookahead lengths, respectively, compared to 0.95 mm and 1.04 mm for the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively.
Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low-dimensional manifold. The fixed-point iterative approach turns out to work well in practice for the pre-image recovery. Our approach is particularly suitable for facilitating the management of respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
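
    A minimal sketch of the kernel-PCA-plus-pre-image machinery described above (on synthetic 2-D points, with kernel centering omitted for brevity; the RBF width and data are illustrative assumptions): project into the kernel feature subspace, then recover an input-space point by the standard fixed-point pre-image iteration for Gaussian kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(P, Q, sigma):
    """Gaussian (RBF) kernel matrix between row sets P and Q."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# kernel PCA (uncentered, for brevity) on a small synthetic training set
X = rng.normal(size=(30, 2))
sigma = 1.0
K = rbf(X, X, sigma)
lam, A = np.linalg.eigh(K)                  # eigenvalues in ascending order
lam, A = lam[::-1], A[:, ::-1]              # reorder to descending
q = int((lam > 1e-8 * lam[0]).sum())        # drop numerically null components
alpha = A[:, :q] / np.sqrt(lam[:q])         # unit-norm feature-space axes

def preimage(x, n_iter=50):
    """Project x onto the kPCA feature subspace, then recover an
    input-space pre-image by the fixed-point iteration for RBF kernels."""
    kx = rbf(x[None, :], X, sigma).ravel()
    beta = alpha.T @ kx                     # low-dimensional feature vector
    gamma = alpha @ beta                    # feature-space reconstruction coeffs
    z = x + 0.1                             # start near the observation
    for _ in range(n_iter):
        w = gamma * rbf(z[None, :], X, sigma).ravel()
        z = (w @ X) / w.sum()               # weighted mean of training points
    return z

z = preimage(X[0])                          # pre-image of a training point
```

    In the abstract's setting the low-dimensional prediction step would act on beta before the pre-image recovery; here the projection is simply inverted to show that the fixed-point iteration recovers the original point.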

  10. Prediction, experimental results and analysis of the ITER TF insert coil quench propagation tests, using the 4C code

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.

    2018-07-01

    The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (formerly JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant to the actual ITER TF coils, inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; out of these, three used just one IH but with increasing delay times, up to 7.5 s, between the quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code's capability to accurately predict the quench propagation and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, and in the early evolution of the inlet and outlet He mass flow rates. Based on the lessons learned in the predictive exercise, the model is then refined to try to improve a posteriori (i.e. in interpretive, as opposed to predictive, mode) the agreement between simulation and experiment.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Fujita, A; Buch, K

    Purpose: To investigate the correlation between a texture analysis-based model observer and human observers in the task of diagnosing ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) of various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts on a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficients were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model-predicted scores and the human observer scores were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residuals were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the predicted image quality scores from the texture analysis-based model observer were highly correlated with those of human observers for various noise levels. A texture-based model observer can thus characterize the image quality of low-contrast, subtle texture changes in addition to human observers.

  12. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
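
    The check-splitting construction can be sketched on a protograph base matrix (rows = check nodes, columns = variable nodes, entries = edge multiplicities). The example base matrix below is illustrative, not one of the paper's codes: splitting the single check of a rate-3/4 protograph into two checks joined by a new degree-2 variable yields a rate-3/5 protograph.

```python
import numpy as np

def split_check(B, row, mask):
    """Split check `row` of protograph base matrix B into two checks whose
    edges partition the original (per boolean `mask` over columns), joined
    by a new degree-2 variable node. This lowers the code rate while the
    original degree-3+ variable nodes, which underpin the linear-minimum-
    distance property, keep their degrees."""
    r = B[row]
    r1 = np.where(mask, r, 0)          # edges kept by the first new check
    r2 = r - r1                        # remaining edges go to the second
    top = np.vstack([B[:row], r1, r2, B[row + 1:]])
    # new degree-2 variable connecting the two split checks
    col = np.zeros((top.shape[0], 1), dtype=B.dtype)
    col[row] = col[row + 1] = 1
    return np.hstack([top, col])

# toy high-rate base matrix: one check, four degree-3 variables (rate 3/4)
B = np.array([[3, 3, 3, 3]])
Bs = split_check(B, 0, np.array([True, True, False, False]))
```

    Repeating the split on the resulting checks generates the family of successively lower-rate protographs the abstract describes.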

  13. MACE prediction of acute coronary syndrome via boosted resampling classification using electronic medical records.

    PubMed

    Huang, Zhengxing; Chan, Tak-Ming; Dong, Wei

    2017-02-01

    Major adverse cardiac events (MACE) of acute coronary syndrome (ACS) often occur suddenly, resulting in high mortality and morbidity. Recently, the rapid development of electronic medical records (EMR) provides the opportunity to utilize the potential of EMR to improve the performance of MACE prediction. In this study, we present a novel data-mining-based approach specialized for MACE prediction from a large volume of EMR data. The proposed approach presents a new classification algorithm by applying both over-sampling and under-sampling on minority-class and majority-class samples, respectively, and integrating the resampling strategy into a boosting framework so that it can effectively handle the imbalance of MACE of ACS patients analogous to domain practice. In each iteration, the method learns a new and stronger MACE prediction model from a more difficult subset of EMR data containing the MACEs of ACS patients wrongly predicted by the previous weak model. We verify the effectiveness of the proposed approach on a clinical dataset containing 2930 ACS patient samples with 268 feature types. While the imbalanced ratio does not seem extreme (25.7%), MACE prediction targets pose a great challenge to traditional methods. As these methods degenerate dramatically with increasing imbalanced ratios, the performance of our approach for predicting MACE remains robust and reaches 0.672 in terms of AUC. On average, the proposed approach improves the performance of MACE prediction by 4.8%, 4.5%, 8.6%, and 4.8% over the standard SVM, Adaboost, SMOTE, and the conventional GRACE risk scoring system, respectively. We consider that the proposed iterative boosting approach has demonstrated great potential to meet the challenge of MACE prediction for ACS patients using a large volume of EMR.
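
    The boosted-resampling idea, rebalancing each round by over-sampling the minority class and under-sampling the majority class according to the current boosting weights before training a weak learner, can be sketched as below. The decision-stump learner, synthetic data, and 50/50 rebalancing ratio are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_fit(X, y, w):
    """Weighted decision stump: best (feature, threshold, sign) split."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def stump_predict(model, X):
    j, t, s, _ = model
    return np.where(X[:, j] > t, s, -s)

def boosted_resampling(X, y, rounds=10):
    """AdaBoost-style ensemble where each round trains on a rebalanced
    resample: the minority class is over-sampled with replacement and the
    majority class subsampled, both drawn by current boosting weights."""
    n = len(y)
    w = np.ones(n) / n
    models = []
    for _ in range(rounds):
        idx = []
        for cls in (-1, 1):
            members = np.flatnonzero(y == cls)
            p = w[members] / w[members].sum()
            idx.append(rng.choice(members, size=n // 2, replace=True, p=p))
        idx = np.concatenate(idx)                   # 50/50 rebalanced sample
        m = stump_fit(X[idx], y[idx], np.ones(len(idx)) / len(idx))
        pred = stump_predict(m, X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        a = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-a * y * pred)                  # up-weight hard samples
        w /= w.sum()
        models.append((a, m))
    return models

def predict(models, X):
    score = sum(a * stump_predict(m, X) for a, m in models)
    return np.where(score >= 0, 1, -1)

# imbalanced synthetic data: 200 majority (-1) vs 40 minority (+1) samples
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(2.5, 0.5, size=(40, 2))])
y = np.array([-1] * 200 + [1] * 40)
ens = boosted_resampling(X, y)
pred = predict(ens, X)
```

    Each round's resample is drawn from the current weight distribution, so later stumps concentrate on the previously misclassified MACE-like minority samples, which is the "more difficult subset" mechanism the abstract describes.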

  14. Progress in preparing scenarios for operation of the International Thermonuclear Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Giruzzi, G.; Ide, S.; Kessel, C.; Luce, T. C.; Snipes, J. A.; Stober, J. K.

    2015-02-01

    The development of operating scenarios is one of the key issues in the research for ITER, which aims to achieve a fusion gain (Q) of ~10 while producing 500 MW of fusion power for ≥300 s. The ITER Research Plan proposes a success-oriented schedule starting in hydrogen and helium, to be followed by a nuclear operation phase with a rapid development towards Q ~ 10 in deuterium/tritium. The Integrated Operation Scenarios Topical Group of the International Tokamak Physics Activity initiates joint activities among worldwide institutions and experiments to prepare ITER operation. Plasma formation studies report robust plasma breakdown in devices with metal walls over a wide range of conditions, while other experiments use an inclined EC launch angle at plasma formation to mimic the conditions in ITER. Simulations of the plasma burn-through predict that at least 4 MW of Electron Cyclotron heating (EC) assist would be required in ITER. For H-modes at q95 ~ 3, many experiments have demonstrated operation with scaled parameters for the ITER baseline scenario at ne/nGW ~ 0.85. Most experiments, however, obtain stable discharges at H98(y,2) ~ 1.0 only for βN = 2.0-2.2. For the rampup in ITER, early X-point formation is recommended, allowing auxiliary heating to reduce the flux consumption. A range of plasma inductance (li(3)) can be obtained from 0.65 to 1.0, with the lowest values obtained in H-mode operation. For the rampdown, the plasma should stay diverted, maintaining H-mode, together with a reduction of the elongation from 1.85 to 1.4. Simulations show that the proposed rampup and rampdown schemes developed since 2007 are compatible with the present ITER design for the poloidal field coils. At 13-15 MA and densities down to ne/nGW ~ 0.5, long pulse operation (>1000 s) in ITER is possible at Q ~ 5, useful to provide neutron fluence for Test Blanket Module assessments. ITER scenario preparation in hydrogen and helium requires high input power (>50 MW). H-mode operation in helium may be possible at input powers above 35 MW at a toroidal field of 2.65 T, for studying H-modes and ELM mitigation. In hydrogen, H-mode operation is expected to be marginal, even at 2.65 T with 60 MW of input power. Simulation code benchmark studies using hybrid and steady-state scenario parameters have proved to be a very challenging and lengthy task of testing suites of codes consisting of tens of sophisticated modules. Nevertheless, the general basis of the modelling appears sound, with substantial consistency among codes developed by different groups. For a hybrid scenario at 12 MA, the code simulations give a range of Q = 6.5-8.3, using 30 MW of neutral beam injection and 20 MW of ICRH. For non-inductive operation at 7-9 MA, the simulation results show more variation. At high edge pedestal pressure (Tped ~ 7 keV), the codes predict Q = 3.3-3.8 using 33 MW NB, 20 MW EC, and 20 MW ion cyclotron heating to demonstrate the feasibility of steady-state operation with the day-1 heating systems in ITER. Simulations using a lower edge pedestal temperature (~3 keV) but improved core confinement obtain Q = 5-6.5, when ECCD is concentrated at mid-radius and ~20 MW of off-axis current drive (ECCD or LHCD) is added. Several issues remain to be studied, including plasmas with dominant electron heating, mitigation of transient heat loads integrated in scenario demonstrations, and (burn) control simulations in ITER scenarios.

  15. Super H-mode: theoretical prediction and initial observations of a new high performance regime for tokamak operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Philip B.; Solomon, Wayne M.; Burrell, Keith H.

    2015-07-21

    A new “Super H-mode” regime is predicted, which enables pedestal height and predicted fusion performance substantially higher than for standard H-mode operation. This new regime is predicted to exist by the EPED pedestal model, which calculates criticality constraints for peeling-ballooning and kinetic ballooning modes and combines them to predict the pedestal height and width. EPED usually predicts a single (“H-mode”) pedestal solution for each set of input parameters; however, in strongly shaped plasmas above a critical density, multiple pedestal solutions are found, including the standard “H-mode” solution and a “Super H-mode” solution at substantially larger pedestal height and width. The Super H-mode regime is predicted to be accessible by controlling the trajectory of the density, and to increase fusion performance for ITER, as well as for DEMO designs with strong shaping. A set of experiments on DIII-D has identified the predicted Super H-mode regime, and finds pedestal height and width, and their variation with density, in good agreement with theoretical predictions from the EPED model. Finally, the very high pedestal enables operation at high global beta and high confinement, including the highest normalized beta achieved on DIII-D with a quiescent edge.

  16. Burning plasma regime for Fusion-Fission Research Facility

    NASA Astrophysics Data System (ADS)

    Zakharov, Leonid E.

    2010-11-01

    The basic aspects of burning plasma regimes of Fusion-Fission Research Facility (FFRF, R/a=4/1 m/m, Ipl=5 MA, Btor=4-6 T, P^DT=50-100 MW, P^fission=80-4000 MW, 1 m thick blanket), which is suggested as the next step device for Chinese fusion program, are presented. The mission of FFRF is to advance magnetic fusion to the level of a stationary neutron source and to create a technical, scientific, and technology basis for the utilization of high-energy fusion neutrons for the needs of nuclear energy and technology. FFRF will rely as much as possible on ITER design. Thus, the magnetic system, especially TFC, will take advantage of ITER experience. TFC will use the same superconductor as ITER. The plasma regimes will represent an extension of the stationary plasma regimes on HT-7 and EAST tokamaks at ASIPP. Both inductive discharges and stationary non-inductive Lower Hybrid Current Drive (LHCD) will be possible. FFRF strongly relies on new, Lithium Wall Fusion (LiWF) plasma regimes, the development of which will be done on NSTX, HT-7, EAST in parallel with the design work. This regime will eliminate a number of uncertainties, still remaining unresolved in the ITER project. Well controlled, hours long inductive current drive operation at P^DT=50-100 MW is predicted.

  17. In-Vessel Tritium Retention and Removal in ITER-FEAT

    NASA Astrophysics Data System (ADS)

    Federici, G.; Brooks, J. N.; Iseli, M.; Wu, C. H.

    Erosion of the divertor and first-wall plasma-facing components, tritium uptake in the re-deposited films, and direct implantation in the armour material surfaces surrounding the plasma represent crucial physical issues that affect the design of future fusion devices. In this paper we present the derivation, and discuss the results, of current predictions of tritium inventory in ITER-FEAT due to co-deposition and implantation, and their attendant uncertainties. The current armour materials proposed for ITER-FEAT are beryllium on the first wall, carbon-fibre composites on the divertor plate near the separatrix strike points, to withstand the high thermal loads expected during off-normal events, e.g., disruptions, and tungsten elsewhere in the divertor. Tritium co-deposition with chemically eroded carbon in the divertor, and possibly with some Be eroded from the first wall, is expected to represent the dominant mechanism of in-vessel tritium retention in ITER-FEAT. This demands efficient in-situ methods of mitigation and retrieval to avoid frequent outages due to reaching the precautionary operating limits set by safety considerations (e.g., ~350 g of in-vessel co-deposited tritium) and for fuel economy reasons. Priority areas where further R&D work is required to narrow the remaining uncertainties are also briefly discussed.

  18. Performance of spectral MSE diagnostic on C-Mod and ITER

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Mumgaard, Robert; Granetz, Robert; Scott, Steve; Marchuk, Oleksandr; Ralchenko, Yuri; Alcator C-Mod Team

    2015-11-01

    Magnetic field was measured on Alcator C-Mod by applying spectral Motional Stark Effect techniques based on line shift (MSE-LS) and line ratio (MSE-LR) to the H-alpha emission spectrum of the diagnostic neutral beam atoms. The high field of Alcator C-Mod allows measurements to be made close to ITER values of Stark splitting (~ Bv⊥) with similar background levels to those expected for ITER. Accurate modeling of the spectrum requires a non-statistical, collisional-radiative analysis of the excited beam population and quadratic and Zeeman corrections to the Stark shift. A detailed synthetic diagnostic was developed and used to estimate the performance of the diagnostic at C-Mod and ITER parameters. Our analysis includes the sensitivity to view and beam geometry, aperture and divergence broadening, magnetic field, pixel size, background noise, and signal levels. Analysis of preliminary experiments agrees with Kinetic+(polarization)MSE EFIT within ~2° in pitch angle, and simulations predict uncertainties of 20 mT in |B| and <2° in pitch angle. This material is based upon work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG03-96ER-54373 and DE-FC02-99ER54512.

  19. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data, based on the reuse of estimated data. The method is called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM, by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments, and in data sets comprising both time series and non-time series data. This is because the information in genes having MVs is used more efficiently, and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
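    The iterative reuse of estimates described above can be sketched as follows. This is a minimal re-implementation of the IKNNimpute idea, not the authors' code: the function name, Euclidean distance choice, row-mean initialization, and convergence test are our assumptions.

```python
import numpy as np

def iknn_impute(X, k=5, max_iter=10, tol=1e-4):
    """Sketch of iterative KNN imputation: start from row-mean
    imputation, then repeatedly re-estimate each missing entry from
    the k nearest rows, reusing previously estimated values."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    filled = X.copy()
    row_means = np.nanmean(X, axis=1)
    # initialize every missing entry with its row mean
    for i in range(X.shape[0]):
        filled[i, missing[i]] = row_means[i]
    for _ in range(max_iter):
        prev = filled.copy()
        for i in np.where(missing.any(axis=1))[0]:
            # distances to all other rows, using the current estimates
            d = np.linalg.norm(filled - filled[i], axis=1)
            d[i] = np.inf
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-12)  # inverse-distance weights
            filled[i, missing[i]] = (w @ filled[nn][:, missing[i]]) / w.sum()
        # stop once the estimates no longer change appreciably
        if np.linalg.norm(filled - prev) < tol * max(np.linalg.norm(prev), 1.0):
            break
    return filled
```

    Because estimated rows re-enter the distance computation, genes that themselves contain MVs still contribute to later estimates, which is the point of the iterative variant.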

  20. Prediction and realization of ITER-like pedestal pressure in the high-B tokamak Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Hughes, Jerry

    2017-10-01

    Fusion power in a burning plasma will scale as the square of the plasma pressure, which is increased in a straightforward way by increasing magnetic field: Pfus ∝ p^2 B^4. Experiments on Alcator C-Mod, a compact high-B tokamak, have tested predictive capability for pedestal pressure at toroidal field BT up to 8 T and poloidal field BP up to 1 T. These reactor-like fields enable C-Mod to approach an ITER predicted value of 90 kPa. This is expected if, as in the EPED model, the pedestal is constrained by onset of kinetic ballooning modes (KBMs) and peeling-ballooning modes (PBMs), yielding a pressure pedestal scaling approximately as pped ∝ BT × BP. One successful path to high confinement on C-Mod is the high-density (ne > 3 × 10^20 m^-3) approach, pursued using the enhanced D-alpha (EDA) H-mode. In EDA H-mode, transport regulates both the pedestal profiles and the core impurity content, holding the pedestal stationary just below the PBM stability boundary. We have extended this stationary ELM-suppressed regime to the highest magnetic fields achievable on C-Mod, and used it to approach the maximum pedestal predicted by EPED at high density: pped ≈ 60 kPa. Another approach to high pressure utilizes a pedestal limited by PBMs at low collisionality, where pressure increases with density and EPED predicts access to a higher 'Super H' solution for pped. Experiments at reduced density (ne < 2 × 10^20 m^-3) and strong plasma shaping (δ > 0.5) accessed these regimes on C-Mod, producing pedestals with world-record pped ≈ 80 kPa at Tped ≈ 2 keV. In both the high- and low-density approaches, the impact of the pedestal on core performance is substantial. Our exploration of high-pedestal regimes yielded a volume-averaged pressure 〈p〉 > 2 atm, a world-record value for a magnetic fusion device. The results hold promise for the projection of pedestal pressure and overall performance of high-field burning plasma devices. Supported by U.S. 
Department of Energy awards DE-FC02-99ER54512, DE-FG02-95ER54309, DE-FC02-06ER54873, DE-AC02-09CH11466, DE-SC0007880 using Alcator C-Mod, a DOE Office of Science User Facility.

  1. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package is accompanied by a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. 
DeNoGAP is freely available at https://sourceforge.net/projects/denogap/.
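    The linear-scaling idea, comparing each new protein against iteratively refined family models rather than against all previously seen proteins, can be illustrated with a toy sketch. DeNoGAP uses profile HMMs; here a family "model" is just the k-mer set of its members (a simplified stand-in for a profile), and all names, the Jaccard similarity, and the threshold are our assumptions.

```python
def kmers(seq, k=3):
    """K-mer set of a protein sequence (assumes len(seq) >= k)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def assign_families(genomes, k=3, threshold=0.5):
    """Toy linear-scaling homology assignment: each protein is compared
    only against the current family models, and the matched model is
    refined with the new member (iterative refinement stand-in)."""
    families = []  # each entry: [kmer_model, list of (genome, protein) ids]
    for g_idx, proteins in enumerate(genomes):
        for p_idx, seq in enumerate(proteins):
            km = kmers(seq, k)
            best, best_sim = None, 0.0
            for fam in families:
                model = fam[0]
                sim = len(km & model) / len(km | model)  # Jaccard similarity
                if sim > best_sim:
                    best, best_sim = fam, sim
            if best is not None and best_sim >= threshold:
                best[0] |= km                  # refine model with new member
                best[1].append((g_idx, p_idx))
            else:
                families.append([km, [(g_idx, p_idx)]])
    return families
```

    The cost per protein is proportional to the number of family models, not the number of proteins already processed, which is the scaling property the abstract highlights.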

  2. Effects of high-frequency damping on iterative convergence of implicit viscous solver

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Nakashima, Yoshitaka; Watanabe, Norihiko

    2017-11-01

    This paper discusses effects of high-frequency damping on iterative convergence of an implicit defect-correction solver for viscous problems. The study targets a finite-volume discretization with a one-parameter family of damped viscous schemes. The parameter α controls high-frequency damping: zero damping with α = 0, and larger damping for larger α (> 0). Convergence rates are predicted for a model diffusion equation by a Fourier analysis over a practical range of α. It is shown that the convergence rate attains its minimum at α = 1 on regular quadrilateral grids, and deteriorates for larger values of α. A similar behavior is observed for regular triangular grids. In both quadrilateral and triangular grids, the solver is predicted to diverge for α smaller than approximately 0.5. Numerical results are shown for the diffusion equation and the Navier-Stokes equations on regular and irregular grids. The study suggests that α = 1 and 4/3 are suitable values for robust and efficient computations, and α = 4/3 is recommended for the diffusion equation, for which it achieves higher-order accuracy on regular quadrilateral grids. Finally, a Jacobian-Free Newton-Krylov solver with the implicit solver (a low-order Jacobian approximately inverted by a multi-color Gauss-Seidel relaxation scheme) used as a variable preconditioner is recommended for practical computations, which provides robust and efficient convergence for a wide range of α.

  3. Rheologic effects of crystal preferred orientation in upper mantle flow near plate boundaries

    NASA Astrophysics Data System (ADS)

    Blackman, Donna; Castelnau, Olivier; Dawson, Paul; Boyce, Donald

    2016-04-01

    Observations of anisotropy provide insight into upper mantle processes. Flow-induced mineral alignment provides a link between mantle deformation patterns and seismic anisotropy. Our study focuses on the rheologic effects of crystal preferred orientation (CPO), which develops during mantle flow, in order to assess whether corresponding anisotropic viscosity could significantly impact the pattern of flow. We employ a coupled nonlinear numerical method to link CPO and the flow model via a local viscosity tensor field that quantifies the stress/strain-rate response of a textured mineral aggregate. For a given flow field, the CPO is computed along streamlines using a self-consistent texture model and is then used to update the viscosity tensor field. The new viscosity tensor field defines the local properties for the next flow computation. This iteration produces a coupled nonlinear model for which seismic signatures can be predicted. Results thus far confirm that CPO can impact flow pattern by altering rheology in directionally-dependent ways, particularly in regions of high flow gradient. Multiple iterations run for an initial, linear stress/strain-rate case (power law exponent n=1) converge to a flow field and CPO distribution that are modestly different from the reference, scalar viscosity case. Upwelling rates directly below the spreading axis are slightly reduced and flow is focused somewhat toward the axis. Predicted seismic anisotropy differences are modest. P-wave anisotropy is a few percent greater in the flow 'corner', near the spreading axis, below the lithosphere and extending 40-100 km off axis. Predicted S-wave splitting differences would be below seafloor measurement limits. Calculations with non-linear stress/strain-rate relation, which is more realistic for olivine, indicate that effects are stronger than for the linear case. 
For n=2-3, the distribution and strength of CPO for the first iteration are greater than for n=1, although the fast seismic axis directions are similar. The greatest differences in CPO for the nonlinear cases develop at the flow 'corner' at depths of 10-30 km and 20-100 km off-axis. J index values up to 10% greater than the linear case are predicted near the lithosphere base in that region. Viscosity tensor components are notably altered in the nonlinear cases. Iterations between the texture and flow calculations for the non-linear cases are underway this winter; results will be reported in the presentation.

  4. Iterated Stretching of Viscoelastic Jets

    NASA Technical Reports Server (NTRS)

    Chang, Hsueh-Chia; Demekhin, Evgeny A.; Kalaidin, Evgeny

    1999-01-01

    We examine, with asymptotic analysis and numerical simulation, the iterated stretching dynamics of FENE and Oldroyd-B jets of initial radius r_0, shear viscosity ν, Weissenberg number We, retardation number S, and capillary number Ca. The usual Rayleigh instability stretches the local uniaxial extensional flow region near a minimum in jet radius into a primary filament of radius [Ca(1 - S)/We]^{1/2} r_0 between two beads. The strain-rate within the filament remains constant while its radius (elastic stress) decreases (increases) exponentially in time with a long elastic relaxation time 3We(r_0^2/ν). Instabilities convected from the bead relieve the tension at the necks during this slow elastic drainage and trigger a filament recoil. Secondary filaments then form at the necks from the resulting stretching. This iterated stretching is predicted to occur successively, generating high-generation filaments of radius r_n with r_n/r_0 = √2 (r_{n-1}/r_0)^{3/2}, until finite-extensibility effects set in.
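    The recursion for the normalized filament radii can be iterated directly; a small sketch (the function name is ours). The map ρ_n = √2 ρ_{n-1}^{3/2} has fixed point ρ = 1/2, so normalized radii below 1/2 shrink from generation to generation.

```python
import math

def filament_radii(rho1, n_gen):
    """Iterate rho_n = sqrt(2) * rho_{n-1}**1.5, where rho_n = r_n / r_0
    is the normalized radius of the n-th generation filament
    (rho_1 < 1/2 guarantees successively thinner filaments)."""
    radii = [rho1]
    for _ in range(n_gen - 1):
        radii.append(math.sqrt(2) * radii[-1] ** 1.5)
    return radii
```

    For example, starting from ρ_1 = 0.1 the successive generations are strictly thinner, illustrating the rapid approach to the finite-extensibility limit.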

  5. Fast Time Response Electromagnetic Particle Injection System for Disruption Mitigation

    NASA Astrophysics Data System (ADS)

    Raman, Roger; Lay, W.-S.; Jarboe, T. R.; Menard, J. E.; Ono, M.

    2017-10-01

    Predicting and controlling disruptions is an urgent issue for ITER. In this proposed method, a radiative payload consisting of micro spheres of Be, BN, B, or other acceptable low-Z materials would be injected inside the q = 2 surface for thermal and runaway electron mitigation. The radiative payload would be accelerated to the required velocities (0.2 to >1 km/s) in an Electromagnetic Particle Injector (EPI). An important advantage of the EPI system is that it could be positioned very close to the reactor vessel. This has the added benefit that the external field near a high-field tokamak dramatically improves the injector performance, while simultaneously reducing the system response time. A NSTX-U / DIII-D scale system has been tested off-line to verify the critical parameters - the projected system response time and attainable velocities. Both are consistent with the model calculations, giving confidence that an ITER-scale system could be built to ensure safety of the ITER device. This work is supported by U.S. DOE Contracts: DE-AC02-09CH11466, DE-FG02-99ER54519 AM08, and DE-SC0006757.

  6. Brownian motion with adaptive drift for remaining useful life prediction: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement became available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption in the state space modelling was that, in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which contradicts the predicted drift-coefficient evolution driven by additive Gaussian process noise. In this paper, to relax this assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion is provided that theoretically explains why the constructed state space model can achieve high remaining useful life prediction accuracy. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
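    A minimal scalar Kalman filter for an adaptive drift coefficient, in the spirit of the model described above: the drift evolves as a random walk (process noise q) and each degradation increment is an observation of it. This is the standard predict/update recursion, not the paper's exact state space model; the parameter names are ours.

```python
import numpy as np

def track_drift(increments, dt=1.0, q=1e-4, sigma_b=0.1, eta0=0.0, p0=1.0):
    """Scalar Kalman filter for the drift of a Brownian degradation model.

    State:       eta_t = eta_{t-1} + w,        w ~ N(0, q)
    Observation: dx_t  = eta_t * dt + sigma_b * sqrt(dt) * noise
    Returns the posterior drift estimate after each increment.
    """
    eta, p = eta0, p0
    r = sigma_b ** 2 * dt          # observation noise variance
    history = []
    for dx in increments:
        p = p + q                  # predict: drift follows a random walk
        k = p * dt / (dt ** 2 * p + r)        # Kalman gain
        eta = eta + k * (dx - eta * dt)       # update with new increment
        p = (1.0 - k * dt) * p                # posterior variance
        history.append(eta)
    return np.array(history)
```

    Because the predicted drift at the current time is the previous posterior plus process noise, the estimate keeps adapting as new measurements arrive, which is the behaviour the revisited model formalizes.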

  7. The high-β{sub N} hybrid scenario for ITER and FNSF steady-state missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turco, F.; Petty, C. C.; Luce, T. C.

    2015-05-15

    New experiments on DIII-D have demonstrated the steady-state potential of the hybrid scenario, with 1 MA of plasma current driven fully non-inductively and β{sub N} up to 3.7 sustained for ∼3 s (∼1.5 current diffusion time, τ{sub R}, in DIII-D), providing the basis for an attractive option for steady-state operation in ITER and FNSF. Excellent confinement is achieved (H{sub 98y2} ∼ 1.6) without performance limiting tearing modes. The hybrid regime overcomes the need for off-axis current drive efficiency, taking advantage of poloidal magnetic flux pumping that is believed to be the result of a saturated 3/2 tearing mode. This allows for efficient current drive close to the axis, without deleterious sawtooth instabilities. In these experiments, the edge surface loop voltage is driven down to zero for >1 τ{sub R} when the poloidal β is increased above 1.9 at a plasma current of 1.0 MA and the ECH power is increased to 3.2 MW. Stationary operation of hybrid plasmas with all on-axis current drive is sustained at pressures slightly above the ideal no-wall limit, while the calculated ideal with-wall MHD limit is β{sub N} ∼ 4–4.5. Off-axis Neutral Beam Injection (NBI) power has been used to broaden the pressure and current profiles in this scenario, seeking to take advantage of higher predicted kink stability limits and lower values of the tearing stability index Δ′, as calculated by the DCON and PEST3 codes. Results based on measured profiles predict ideal limits at β{sub N} > 4.5, 10% higher than the cases with on-axis NBI. A 0-D model, based on the present confinement, β{sub N} and shape values of the DIII-D hybrid scenario, shows that these plasmas are consistent with the ITER 9 MA, Q = 5 mission and the FNSF 6.7 MA scenario with Q = 3.5. 
With collisionality and edge safety factor values comparable to those envisioned for ITER and FNSF, the high-β{sub N} hybrid represents an attractive high performance option for the steady-state missions of these devices.

  8. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. 
However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, such problems are of limited practical relevance, further limiting the instances in which it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of the scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of the number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times. Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. 
Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
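    The sub-domain iteration can be sketched with a serial emulation of parallel block Jacobi (PBJ). This is a generic PBJ sketch, not the ITMM code: the exact inverse of each diagonal block stands in for the stored sub-domain operators, and interface data is exchanged only between iterations, as it would be between processes.

```python
import numpy as np

def block_jacobi(A, b, blocks, iters=200):
    """Serial emulation of parallel block Jacobi: each sub-domain
    (index set in `blocks`) solves its own block exactly, with
    coupling to other sub-domains taken from the previous iterate."""
    x = np.zeros_like(b)
    # pre-factorize each sub-domain block once, like the stored operators
    inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]
    for _ in range(iters):
        x_new = x.copy()
        for idx, Ainv in zip(blocks, inv_blocks):
            # right-hand side minus off-block coupling at the old iterate
            rest = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
            x_new[idx] = Ainv @ rest
        x = x_new  # "communication step": all sub-domains update together
    return x
```

    Each inner solve is independent, so the loop over `blocks` is the part that would run concurrently, one sub-domain per process.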

  9. Space shuttle propulsion parameter estimation using optimal estimation techniques

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, e.g., aerodynamic coefficients, can be easily incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer improves significantly.

  10. Physics and Control of Locked Modes in the DIII-D Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volpe, Francesco

    This Final Technical Report summarizes an investigation, carried out under the auspices of the DOE Early Career Award, of the physics and control of non-rotating magnetic islands (“locked modes”) in tokamak plasmas. Locked modes are one of the main causes of disruptions in present tokamaks, and could be an even bigger concern in ITER, due to its relatively high beta (favoring the formation of Neoclassical Tearing Mode islands) and low rotation (favoring locking). For these reasons, this research had the goal of studying and learning how to control locked modes in the DIII-D National Fusion Facility under ITER-relevant conditions of high pressure and low rotation. Major results included: the first full suppression of locked modes and avoidance of the associated disruptions; the demonstration of error field detection from the interaction between locked modes, applied rotating fields and intrinsic errors; the analysis of a vast database of disruptive locked modes, which led to criteria for disruption prediction and avoidance.

  11. Fast wave direct electron heating in advanced inductive and ITER baseline scenario discharges in DIII-D

    DOE PAGES

    Pinsker, R. I.; Austin, M. E.; Diem, S. J.; ...

    2014-02-12

    Fast Wave (FW) heating and electron cyclotron heating (ECH) are used in the DIII-D tokamak to study plasmas with low applied torque and dominant electron heating characteristic of burning plasmas. FW heating via direct electron damping has reached the 2.5 MW level in high performance ELMy H-mode plasmas. In Advanced Inductive (AI) plasmas, core FW heating was found to be comparable to that of ECH, consistent with the excellent first-pass absorption of FWs predicted by ray-tracing models at high electron beta. FW heating at the ~2 MW level to ELMy H-mode discharges in the ITER Baseline Scenario (IBS) showed unexpectedly strong absorption of FW power by injected neutral beam (NB) ions, indicated by significant enhancement of the D-D neutron rate, while the intended absorption on core electrons appeared rather weak. As a result, the AI and IBS discharges are compared in an effort to identify the causes of the different response to FWs.

  12. Fast wave direct electron heating in advanced inductive and ITER baseline scenario discharges in DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsker, R. I.; Jackson, G. L.; Luce, T. C.

    Fast Wave (FW) heating and electron cyclotron heating (ECH) are used in the DIII-D tokamak to study plasmas with low applied torque and dominant electron heating characteristic of burning plasmas. FW heating via direct electron damping has reached the 2.5 MW level in high performance ELMy H-mode plasmas. In Advanced Inductive (AI) plasmas, core FW heating was found to be comparable to that of ECH, consistent with the excellent first-pass absorption of FWs predicted by ray-tracing models at high electron beta. FW heating at the ∼2 MW level to ELMy H-mode discharges in the ITER Baseline Scenario (IBS) showed unexpectedly strong absorption of FW power by injected neutral beam (NB) ions, indicated by significant enhancement of the D-D neutron rate, while the intended absorption on core electrons appeared rather weak. The AI and IBS discharges are compared in an effort to identify the causes of the different response to FWs.

  13. Fast wave direct electron heating in advanced inductive and ITER baseline scenario discharges in DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsker, R. I.; Austin, M. E.; Diem, S. J.

    Fast Wave (FW) heating and electron cyclotron heating (ECH) are used in the DIII-D tokamak to study plasmas with low applied torque and dominant electron heating characteristic of burning plasmas. FW heating via direct electron damping has reached the 2.5 MW level in high performance ELMy H-mode plasmas. In Advanced Inductive (AI) plasmas, core FW heating was found to be comparable to that of ECH, consistent with the excellent first-pass absorption of FWs predicted by ray-tracing models at high electron beta. FW heating at the ~2 MW level to ELMy H-mode discharges in the ITER Baseline Scenario (IBS) showed unexpectedly strong absorption of FW power by injected neutral beam (NB) ions, indicated by significant enhancement of the D-D neutron rate, while the intended absorption on core electrons appeared rather weak. As a result, the AI and IBS discharges are compared in an effort to identify the causes of the different response to FWs.

  14. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ2), and GCV (which does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observe that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
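    For a linear smoother the GCV principle reduces to a closed form, which the sketch below illustrates on a toy 1D denoising problem; the paper's contribution is the nonlinear case, where the hat matrix below is replaced by the Jacobian of the iterative reconstruction. The smoother, difference penalty, and candidate grid are our illustrative choices.

```python
import numpy as np

def gcv_smoothing(y, lambdas):
    """Select lam minimizing GCV(lam) = n*||(I-H)y||^2 / tr(I-H)^2
    for the linear smoother x_lam = H(lam) y, with
    H(lam) = (I + lam * D^T D)^{-1} and D the first-difference operator."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)  # first-difference matrix, shape (n-1, n)
    best = None
    for lam in lambdas:
        H = np.linalg.inv(np.eye(n) + lam * D.T @ D)  # hat matrix
        resid = y - H @ y
        gcv = n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam)
    return best[1]
```

    GCV needs no knowledge of the noise variance, only the residual and the "degrees of freedom" trace term, which is why the Jacobian (here simply H) is the key quantity.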

  15. Pressure-based high-order TVD methodology for dynamic stall control

    NASA Astrophysics Data System (ADS)

    Yang, H. Q.; Przekwas, A. J.

    1992-01-01

    The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study developed several new pressure-based methodologies for solving the 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the newly developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices in the wing-body flow with minimum numerical diffusion. It is demonstrated that the proposed Newton's iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The proposed methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stalls on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.

  16. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or "proxies"), which provide an inexact, but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy responses into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and the risk of overfitting, we use Leave-One-Out Cross-Validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and the performance of the method is assessed. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A. H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).
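    The iterative build-up of an error model can be sketched with a hypothetical 1D toy; the "exact" and "proxy" solvers, the linear error model, and the distance-based enrichment rule below are all illustrative choices, not the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical stand-ins: the "exact" solver is expensive, the "proxy" is cheap
exact = lambda r: np.sin(r) + 0.1 * r
proxy = lambda r: 0.8 * np.sin(r)           # inexact but correlated response

realizations = rng.uniform(0, 3, 200)
px = proxy(realizations)                    # proxy run on every realization (cheap)

# initialize the error model with a small learning set, then grow it
learn = list(range(5))
history = []
for it in range(30):
    a, b = np.polyfit(px[learn], exact(realizations[learn]), 1)  # linear error model
    pred = a * px + b                       # corrected responses, all realizations
    history.append(pred.mean())             # quantity of interest: ensemble mean
    # stopping rule: quantity of interest stable over the last few iterations
    if len(history) > 3 and np.ptp(history[-3:]) < 1e-3:
        break
    # enrich the learning set with the realization we are least sure about:
    # here, simply the one farthest (in proxy space) from the current set
    rest = [i for i in range(len(px)) if i not in learn]
    far = max(rest, key=lambda i: min(abs(px[i] - px[j]) for j in learn))
    learn.append(far)
```

    Only the handful of realizations in `learn` ever require the exact solver; the corrected ensemble mean is far closer to the exact one than the raw proxy mean.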

  17. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassemi, S.A.

    1988-04-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on a linearization technique and an analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  18. High Rayleigh number convection in rectangular enclosures with differentially heated vertical walls and aspect ratios between zero and unity

    NASA Technical Reports Server (NTRS)

    Kassemi, Siavash A.

    1988-01-01

    High Rayleigh number convection in a rectangular cavity with insulated horizontal surfaces and differentially heated vertical walls was analyzed for an arbitrary aspect ratio smaller than or equal to unity. Unlike previous analytical studies, a systematic method of solution based on a linearization technique and an analytical iteration procedure was developed to obtain approximate closed-form solutions for a wide range of aspect ratios. The predicted velocity and temperature fields are shown to be in excellent agreement with available experimental and numerical data.

  19. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with node degrees of at least 3. The main motivations are to gain linear minimum distance growth, in order to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degrees of at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  20. Iterating between lessons on concepts and procedures can improve mathematics knowledge.

    PubMed

    Rittle-Johnson, Bethany; Koedinger, Kenneth

    2009-09-01

    Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. The purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures. In two classroom experiments, sixth-grade students from two schools participated (N=77 and 26). Students completed six decimal lessons on an intelligent tutoring system. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons. In both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including the ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments. An iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.

  1. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  2. The Lessons Oscar Taught Us: Data Science and Media & Entertainment.

    PubMed

    Gold, Michael; McClarren, Ryan; Gaughan, Conor

    2013-06-01

    Farsite Group, a data science firm based in Columbus, Ohio, launched a highly visible campaign in early 2013 to use predictive analytics to forecast the winners of the 85th Annual Academy Awards. The initiative was fun and exciting for the millions of Oscar viewers, but it also illustrated how data science could be further deployed in the media and entertainment industries. This article explores the current and potential use cases for big data and predictive analytics in those industries. It further discusses how the Farsite Forecast was built, as well as how the model was iterated, how the projections performed, and what lessons were learned in the process.

  3. Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.

    PubMed

    Balfer, Jenny; Hu, Ye; Bajorath, Jürgen

    2014-08-01

    Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in flat sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of the geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  5. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    PubMed Central

    Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva

    2018-01-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
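    The ML/high-throughput iteration loop described above can be caricatured in a few lines; the 2D composition space, k-nearest-neighbour classifier, and acquisition rule below are hypothetical stand-ins for the authors' model and experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical ground truth: one "glass-forming" region in a 2D composition space
truth = lambda X: ((X[:, 0] - 0.4) ** 2 + (X[:, 1] - 0.6) ** 2 < 0.04).astype(int)

def fit_knn(Xtr, ytr, k=5):
    """Tiny k-nearest-neighbour classifier (stand-in for the authors' ML model)."""
    def predict(X):
        d = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
        nn = np.argsort(d, axis=1)[:, :k]
        return (ytr[nn].mean(axis=1) > 0.5).astype(int)
    return predict

Xpool = rng.uniform(0, 1, (2000, 2))      # candidate compositions to synthesize
Xtr = rng.uniform(0, 1, (40, 2))          # initial literature-like training data
ytr = truth(Xtr)
hold = rng.uniform(0, 1, (500, 2))        # held-out validation compositions

accs = []
for rnd in range(4):                      # ML <-> high-throughput iterations
    model = fit_knn(Xtr, ytr)
    accs.append((model(hold) == truth(hold)).mean())
    picked = Xpool[model(Xpool) == 1][:50]   # "HiTp run" on predicted glasses
    if len(picked) == 0:
        break
    # measured outcomes reveal the discrepancies; fold them back into training
    Xtr = np.vstack([Xtr, picked])
    ytr = np.concatenate([ytr, truth(picked)])
```

    Each round, the experiments concentrate labels where the model is making claims, so the retrained model's held-out accuracy improves with use, mirroring the paper's central observation.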

  6. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Ward, Logan; Williams, Travis

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  7. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE PAGES

    Ren, Fang; Ward, Logan; Williams, Travis; ...

    2018-04-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  8. MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes.

    PubMed

    Zhu, Huaiqiu; Hu, Gang-Qing; Yang, Yi-Fan; Wang, Jin; She, Zhen-Su

    2007-03-16

    Despite a remarkable success in the computational prediction of genes in Bacteria and Archaea, a lack of comprehensive understanding of prokaryotic gene structures prevents further elucidation of the differences among genomes. It remains of interest to develop new ab initio algorithms which not only predict genes accurately, but also facilitate comparative studies of prokaryotic genomes. This paper describes a new prokaryotic genefinding algorithm based on a comprehensive statistical model of protein coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence and the latter comprises several relevant features related to the translation initiation. They are combined to form a so-called Multivariate Entropy Distance (MED) algorithm, MED 2.0, that incorporates several strategies in the iterative program. The iterations enable us to develop a non-supervised learning process and to obtain a set of genome-specific parameters for the gene structure, before making the prediction of genes. Results of extensive tests show that MED 2.0 achieves competitively high performance in gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match with the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to the existing gene finders and the current GenBank annotation.
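    The entropy-of-coding-sequence intuition behind the EDP model can be illustrated with a simple in-frame codon entropy statistic. This is not the MED algorithm itself, just a sketch of why biased codon usage separates from random sequence; the sequences below are synthetic:

```python
import math, random
from collections import Counter

def entropy_profile(seq, k=3):
    """Shannon entropy (bits) of the in-frame k-mer distribution of a DNA string.
    Biased, repetitive composition (as in coding frames) scores lower than a
    random sequence: the intuition behind entropy-density-style statistics."""
    kmers = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
    n = len(kmers)
    return -sum((c / n) * math.log2(c / n) for c in Counter(kmers).values())

random.seed(4)
rand_seq = "".join(random.choice("ACGT") for _ in range(3000))
# crude stand-in for a coding ORF: strongly biased codon usage
orf_like = "".join(random.choices(["ATG", "GCT", "GAA", "AAA"],
                                  weights=[1, 4, 3, 2], k=1000))
```

    The biased "ORF" scores near the 4-codon limit of 2 bits, while the random sequence approaches log2(64) = 6 bits, so a threshold on this statistic already separates the two classes.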

  9. Closed-loop control of an artificial pancreatic beta-cell in type 1 diabetes mellitus using model predictive iterative learning control.

    PubMed

    Wang, Youqing; Dassau, Eyal; Doyle, Francis J

    2010-02-01

    A novel combination of iterative learning control (ILC) and model predictive control (MPC), referred to here as model predictive iterative learning control (MPILC), is proposed for glycemic control in type 1 diabetes mellitus. MPILC exploits two key factors: frequent glucose readings made possible by continuous glucose monitoring technology; and the repetitive nature of glucose-meal-insulin dynamics with a 24-h cycle. The proposed algorithm can learn from an individual's lifestyle, allowing the control performance to be improved from day to day. After less than 10 days, the blood glucose concentrations can be kept within a range of 90-170 mg/dL. Generally, control performance under MPILC is better than that under MPC. The proposed methodology is robust to random variations in meal timings within +/-60 min or meal amounts within +/-75% of the nominal value, which validates MPILC's superior robustness compared to run-to-run control. Moreover, to further improve the algorithm's robustness, an automatic scheme for setpoint update that ensures safe convergence is proposed. Furthermore, the proposed method does not require user intervention; hence, the algorithm should be of particular interest for glycemic control in children and adolescents.
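    The learning-from-repetition core of ILC can be sketched on a toy repetitive process; the first-order "glucose" plant, gains, and meal profile below are illustrative assumptions, not the clinical MPILC controller:

```python
import numpy as np

T = 48                                  # samples per "day" (30-minute grid)
a, b = 0.3, 0.5                         # hypothetical first-order plant dynamics
meal = np.zeros(T); meal[10] = 4.0; meal[26] = 3.0   # same meals every day

def run_day(u):
    """Simulate one day: glucose deviation g driven by meals, reduced by insulin u."""
    g, out = 0.0, []
    for t in range(T):
        g = a * g + b * (meal[t] - u[t])
        out.append(g)
    return np.array(out)

u = np.zeros(T)                         # day 1: no knowledge of the meal pattern
errs = []
for day in range(15):
    e = run_day(u)                      # deviation from setpoint (setpoint = 0)
    errs.append(np.abs(e).max())
    u = u + 0.5 * e / b                 # P-type ILC: learn from yesterday's error
```

    Because the disturbance repeats with a 24-hour cycle, the day-to-day update drives the peak excursion toward zero, the same day-to-day improvement the abstract reports.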

  10. The quiescent H-mode regime for high performance edge localized mode (ELM)-stable operation in future burning plasmas

    DOE PAGES

    Garofalo, Andrea M.; Burrell, Keith H.; Eldon, David; ...

    2015-05-26

    For the first time, DIII-D experiments have achieved stationary quiescent H-mode (QH-mode) operation for many energy confinement times at simultaneous ITER-relevant values of beta, confinement, and safety factor, in an ITER similar shape. QH-mode provides excellent energy confinement, even at very low plasma rotation, while operating without edge localized modes (ELMs) and with strong impurity transport via the benign edge harmonic oscillation (EHO). By tailoring the plasma shape to improve the edge stability, the QH-mode operating space has also been extended to densities exceeding 80% of the Greenwald limit, overcoming the long-standing low-density limit of QH-mode operation. In the theory, the density range over which the plasma encounters the kink-peeling boundary widens as the plasma cross-section shaping is increased, thus increasing the QH-mode density threshold. Here, the DIII-D results are in excellent agreement with these predictions, and nonlinear MHD analysis of reconstructed QH-mode equilibria shows unstable low n kink-peeling modes growing to a saturated level, consistent with the theoretical picture of the EHO. Furthermore, high density operation in the QH-mode regime has opened a path to a new, previously predicted region of parameter space, named “Super H-mode” because it is characterized by very high pedestals that can be more than a factor of two above the peeling-ballooning stability limit for similar ELMing H-mode discharges at the same density.

  11. Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan

    2005-01-01

    A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach successfully brings to light multiple solutions at post-stall angles of attack during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research focused on actuator saturation and flight transition between the low and high angle-of-attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantees.

  12. The quiescent H-mode regime for high performance edge localized mode-stable operation in future burning plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garofalo, Andrea M.; Burrell, Keith H.; Eldon, David

    For the first time, DIII-D experiments have achieved stationary quiescent H-mode (QH-mode) operation for many energy confinement times at simultaneous ITER-relevant values of beta, confinement, and safety factor, in an ITER similar shape. QH-mode provides excellent energy confinement, even at very low plasma rotation, while operating without edge localized modes (ELMs) and with strong impurity transport via the benign edge harmonic oscillation (EHO). By tailoring the plasma shape to improve the edge stability, the QH-mode operating space has also been extended to densities exceeding 80% of the Greenwald limit, overcoming the long-standing low-density limit of QH-mode operation. In the theory, the density range over which the plasma encounters the kink-peeling boundary widens as the plasma cross-section shaping is increased, thus increasing the QH-mode density threshold. Here, the DIII-D results are in excellent agreement with these predictions, and nonlinear MHD analysis of reconstructed QH-mode equilibria shows unstable low n kink-peeling modes growing to a saturated level, consistent with the theoretical picture of the EHO. Furthermore, high density operation in the QH-mode regime has opened a path to a new, previously predicted region of parameter space, named “Super H-mode” because it is characterized by very high pedestals that can be more than a factor of two above the peeling-ballooning stability limit for similar ELMing H-mode discharges at the same density.

  13. Overview of Recent DIII-D Experimental Results

    NASA Astrophysics Data System (ADS)

    Fenstermacher, Max; DIII-D Team

    2017-10-01

    Recent DIII-D experiments contributed to the ITER physics basis and to physics understanding for extrapolation to future devices. A predict-first analysis showed how shape can enhance access to RMP ELM suppression. 3D equilibrium changes from ELM-control RMPs were linked to density pumpout. Ion velocity imaging in the SOL showed 3D C2+ flow perturbations near RMP-induced n = 1 islands. Correlation ECE reveals a 40% increase in Te turbulence during QH-mode and 70% during RMP ELM suppression vs. ELMing H-mode. A long-lived predator-prey oscillation replaces edge MHD in recent low-torque QH-mode plasmas. Spatio-temporally resolved runaway electron measurements validate the importance of synchrotron and collisional damping on RE dissipation. A new small angle slot divertor achieves strong plasma cooling and facilitates detachment access. Fast ion confinement was improved in high q_min scenarios using variable beam energy optimization. First reproducible, stable ITER baseline scenarios were established. Studies have validated a model for edge momentum transport that predicts the pedestal main-ion intrinsic velocity value and direction. Work supported by the US DOE under DE-FC02-04ER54698 and DE-AC52-07NA27344.

  14. Inversion of sonobuoy data from shallow-water sites with simulated annealing.

    PubMed

    Lindwall, Dennis; Brozena, John

    2005-02-01

    An enhanced simulated annealing algorithm is used to invert sparsely sampled seismic data collected with sonobuoys to obtain seafloor geoacoustic properties at two littoral marine environments as well as for a synthetic data set. Inversion of field data from a 750-m water-depth site using a water-gun sound source found a good solution that included a pronounced subbottom reflector after 6483 iterations over seven variables. Field data from a 250-m water-depth site using an air-gun source required 35,421 iterations for a good inversion solution because 30 variables had to be solved for, including the shot-to-receiver offsets. The sonobuoy-derived compressional wave velocity-depth (Vp-Z) models compare favorably with Vp-Z models derived from nearby, high-quality, multichannel seismic data. There are, however, substantial differences between seafloor reflection coefficients calculated from the field models and seafloor reflection coefficients based on commonly used Vp regression curves (gradients). Reflection loss is higher at one field site and lower at the other than predicted from commonly used Vp gradients for terrigenous sediments. In addition, there are strong effects on reflection loss due to the subseafloor interfaces that are also not predicted by Vp gradients.
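    A bare-bones simulated annealing loop of the kind used for such inversions might look as follows; the two-parameter "misfit" function, step sizes, and cooling schedule are hypothetical stand-ins for the geoacoustic forward model and the paper's enhanced algorithm:

```python
import math, random

random.seed(3)

def anneal(cost, x0, lo, hi, iters=6000, t0=2.0):
    """Basic simulated annealing with geometric cooling and box constraints."""
    x, c = list(x0), cost(x0)
    best, bc = list(x), c
    for k in range(iters):
        t = t0 * 0.999 ** k                       # cooling schedule
        i = random.randrange(len(x))              # perturb one variable at a time
        cand = list(x)
        cand[i] = min(hi[i], max(lo[i], x[i] + random.gauss(0, 0.1 * (hi[i] - lo[i]))))
        cc = cost(cand)
        if cc < c or random.random() < math.exp((c - cc) / t):  # Metropolis rule
            x, c = cand, cc
            if c < bc:
                best, bc = list(x), c
    return best, bc

# hypothetical 2-layer "misfit": target velocities 1.6 and 2.1 km/s, with an
# oscillatory term creating local minima that defeat pure downhill search
def misfit(v):
    return (v[0] - 1.6) ** 2 + (v[1] - 2.1) ** 2 + 0.3 * math.cos(8 * v[0]) + 0.3

best, bc = anneal(misfit, [1.0, 1.0], lo=[0.8, 0.8], hi=[3.0, 3.0])
```

    The occasional uphill acceptances at high temperature let the search escape the local minima, which is why the abstract's inversions can succeed with tens of variables where gradient methods would stall.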

  15. Noise tolerant illumination optimization applied to display devices

    NASA Astrophysics Data System (ADS)

    Cassarly, William J.; Irving, Bruce

    2005-02-01

    Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.

  16. Pellet Injection in ITER with ∇B-induced Drift Effect using TASK/TR and HPI2 Codes

    NASA Astrophysics Data System (ADS)

    Kongkurd, R.; Wisitsorasak, A.

    2017-09-01

    The impact of pellet injection in the International Thermonuclear Experimental Reactor (ITER) is investigated using the integrated predictive modeling codes TASK/TR and HPI2. In the core, the plasma profiles are predicted by the TASK/TR code, in which the core transport model consists of a combination of the MMM95 anomalous transport model and NCLASS neoclassical transport. The pellet ablation in the plasma is described using the neutral gas shielding (NGS) model with inclusion of the ∇B-induced E×B drift of the ionized ablated pellet particles. It is found that high-field-side injection can deposit the pellet mass deeper than injection from the low-field side due to the advantage of the ∇B-induced drift. When pellets with a deuterium-tritium mixing ratio of unity are launched with a speed of 200 m/s and a radius of 3 mm, and injected at a frequency of 2 Hz, the line-averaged density and the plasma stored energy are increased by 80% and 25%, respectively. The pellet material is mostly deposited at a normalized minor radius of 0.5 from the edge.

  17. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.

  18. Some modifications of Newton's method for the determination of the steady-state response of nonlinear oscillatory circuits

    NASA Astrophysics Data System (ADS)

    Grosz, F. B., Jr.; Trick, T. N.

    1982-07-01

    It is proposed that nondominant states should be eliminated from the Newton algorithm in the steady-state analysis of nonlinear oscillatory systems. This technique not only improves convergence, but also reduces the size of the sensitivity matrix so that less computation is required for each iteration. One or more periods of integration should be performed after each periodic state estimation before the sensitivity computations are made for the next periodic state estimation. These extra periods of integration between Newton iterations are found to allow the fast states due to parasitic effects to settle, which enables the Newton algorithm to make a better prediction. In addition, the reliability of the algorithm is improved in high-Q oscillator circuits by both local and global damping, in which the amount of damping is proportional to the difference between the initial and final state values.
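    The damping-plus-settling idea above can be sketched in a minimal shooting-Newton loop for a simple forced oscillator. This is an illustrative reconstruction under assumed parameters, not the authors' algorithm: the test system, the finite-difference Jacobian, and the particular damping law (step length shrinking as the periodic-map residual grows) are all assumptions.

```python
import math

ZETA = 0.05  # damping ratio; forcing cos(t), unit natural frequency (assumed)

def f(t, u):
    # forced, damped oscillator: x' = v,  v' = cos(t) - 2*zeta*v - x
    return [u[1], math.cos(t) - 2.0 * ZETA * u[1] - u[0]]

def one_period(u, n=400, T=2.0 * math.pi):
    # classical RK4 over one forcing period: the "periodic map" Phi(u)
    h, t = T / n, 0.0
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + h / 2, [u[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(t + h / 2, [u[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(t + h, [u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        t += h
    return u

def periodic_state(u0=(0.0, 0.0), settle=1, max_iter=40, eps=1e-6):
    u = list(u0)
    for _ in range(max_iter):
        r = [a - b for a, b in zip(one_period(u), u)]  # residual Phi(u) - u
        rnorm = math.hypot(r[0], r[1])
        if rnorm < 1e-10:
            break
        # finite-difference Jacobian of the residual Phi(u) - u
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            up = u[:]
            up[j] += eps
            rp = [a - b for a, b in zip(one_period(up), up)]
            for i in range(2):
                J[i][j] = (rp[i] - r[i]) / eps
        # solve J d = -r with Cramer's rule (2x2 system)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        d = [(-r[0] * J[1][1] + r[1] * J[0][1]) / det,
             (-r[1] * J[0][0] + r[0] * J[1][0]) / det]
        # damping: shorter Newton steps while the residual is still large
        alpha = 1.0 / (1.0 + 0.1 * rnorm)
        u = [u[i] + alpha * d[i] for i in range(2)]
        for _ in range(settle):  # extra settling periods between Newton steps
            u = one_period(u)
    return u
```

    For this linear test case the converged state is (x, v) = (0, 1/(2ζ)) at t = 0; the extra `settle` periods play the role the abstract describes, letting fast transients decay before the next sensitivity computation.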

  19. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. MCPI applies Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators.
In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
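    The Picard part of the scheme can be illustrated with a minimal sketch. This simplification uses a uniform time grid and trapezoid quadrature instead of the Chebyshev-basis machinery of MCPI, and the `warm` argument is a stand-in for the warm start discussed above; all names and values are illustrative.

```python
import math

def picard(f, x0, t_end, n_nodes=65, iters=40, warm=None):
    # Picard iteration: x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds,
    # with the integral evaluated by a cumulative trapezoid rule.
    ts = [t_end * i / (n_nodes - 1) for i in range(n_nodes)]
    xs = warm[:] if warm else [x0] * n_nodes  # cold start: constant guess
    for _ in range(iters):
        fs = [f(t, x) for t, x in zip(ts, xs)]
        acc, new = x0, [x0]
        for i in range(1, n_nodes):
            acc += 0.5 * (ts[i] - ts[i - 1]) * (fs[i - 1] + fs[i])
            new.append(acc)
        xs = new
    return ts, xs

# dx/dt = -x, x(0) = 1 on [0, 1]; xs[-1] should approach exp(-1)
ts, xs = picard(lambda t, x: -x, 1.0, 1.0)
```

    Passing a previously computed trajectory as `warm` reduces the number of iterations needed, which is the role the continuation-series warm start plays for MCPI.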

  20. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
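    The fixed-point pre-image step can be sketched for a Gaussian kernel, in the style of the classic Mika-type iteration: the pre-image z of a feature-space expansion Σᵢ γᵢ φ(xᵢ) is updated as a kernel-weighted average of the training points. The data, kernel width, and starting point below are illustrative assumptions, not the paper's setup.

```python
import math

def gauss_k(a, b, sigma=1.0):
    # Gaussian (RBF) kernel between two points
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))

def preimage(gamma, X, z0, sigma=1.0, iters=200):
    # fixed-point iteration for the pre-image of sum_i gamma_i * phi(x_i):
    #   z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i)
    z = list(z0)
    dim = len(z)
    for _ in range(iters):
        w = [g * gauss_k(z, x, sigma) for g, x in zip(gamma, X)]
        s = sum(w)
        z = [sum(wi * x[d] for wi, x in zip(w, X)) / s for d in range(dim)]
    return z
```

    In the prediction framework above, the expansion coefficients would come from the predicted feature-space point; here they are simply set by hand to demonstrate the iteration.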

  1. ITER Simulations Using the PEDESTAL Module in the PTRANSP Code

    NASA Astrophysics Data System (ADS)

    Halpern, F. D.; Bateman, G.; Kritz, A. H.; Pankin, A. Y.; Budny, R. V.; Kessel, C.; McCune, D.; Onjun, T.

    2006-10-01

    PTRANSP simulations with a computed pedestal height are carried out for ITER scenarios including a standard ELMy H-mode (15 MA discharge) and a hybrid scenario (12 MA discharge). It has been found that fusion power production predicted in simulations of ITER discharges depends sensitively on the height of the H-mode temperature pedestal [1]. In order to study this effect, the NTCC PEDESTAL module [2] has been implemented in the PTRANSP code to provide boundary conditions used for the computation of the projected performance of ITER. The PEDESTAL module computes both the temperature and width of the pedestal at the edge of type I ELMy H-mode discharges once the threshold conditions for the H-mode are satisfied. The anomalous transport in the plasma core is predicted using the GLF23 or MMM95 transport models. To facilitate the steering of lengthy PTRANSP computations, the PTRANSP code has been modified to allow changes in the transport model when simulations are restarted. The PTRANSP simulation results are compared with corresponding results obtained using other integrated modeling codes. [1] G. Bateman, T. Onjun and A. H. Kritz, Plasma Physics and Controlled Fusion 45, 1939 (2003). [2] T. Onjun, G. Bateman, A. H. Kritz, and G. Hammett, Phys. Plasmas 9, 5018 (2002).

  2. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
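    For a single-degree-of-freedom Duffing-type system under white-noise excitation, the iterative rms computation described above reduces to alternating between an equivalent linear stiffness (via Gaussian closure, E[x⁴] = 3σ⁴) and the linear rms response. This is a textbook-style sketch with assumed parameter values, not the MSC/NASTRAN implementation.

```python
import math

def rms_duffing(k=1.0, c=0.1, eps=0.5, S0=0.01, tol=1e-10):
    # restoring force k*(x + eps*x^3); white noise with two-sided PSD S0.
    # Iterate:  k_eq = k * (1 + 3*eps*sigma^2),  sigma^2 = pi*S0 / (c*k_eq)
    var = math.pi * S0 / (c * k)  # linear system as the first guess
    for _ in range(100):
        k_eq = k * (1.0 + 3.0 * eps * var)
        new = math.pi * S0 / (c * k_eq)
        if abs(new - var) < tol:
            var = new
            break
        var = new
    return math.sqrt(var)
```

    The hardening spring (eps > 0) pulls the converged rms displacement below the purely linear value, which is the qualitative behavior the equivalent linearization loop captures.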

  3. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
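    A minimal member of the iterative family is the Kaczmarz/ART update, which cycles through the projection equations and projects the current image estimate onto one measurement hyperplane at a time. The tiny system below is purely illustrative of the mechanism.

```python
def art(A, b, iters=10, lam=1.0):
    # Kaczmarz / ART: for each measurement a_i . x = b_i, move the estimate
    # toward that hyperplane by the (relaxed) orthogonal projection.
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for a, bi in zip(A, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            nrm = sum(ai * ai for ai in a)
            corr = lam * (bi - dot) / nrm
            x = [xi + corr * ai for xi, ai in zip(x, a)]
    return x
```

    In a real tomography code, each row of A holds the path lengths of one ray through the image pixels; the same update then iteratively improves the image estimate, as the abstract describes.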

  4. Comparison of different filter methods for data assimilation in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Lange, Natascha; Berkhahn, Simon; Erdal, Daniel; Neuweiler, Insa

    2016-04-01

    The unsaturated zone is an important compartment, playing a role in the division of terrestrial water fluxes into surface runoff, groundwater recharge and evapotranspiration. For data assimilation in coupled systems it is therefore important to have a good representation of the unsaturated zone in the model. Flow processes in the unsaturated zone have all the typical features of flow in porous media: processes can have long memory, and as observations are scarce, hydraulic model parameters cannot be determined easily. However, they are important for the quality of model predictions. On top of that, the established flow models are highly nonlinear. For these reasons, the use of the popular Ensemble Kalman filter as a data assimilation method to estimate state and parameters in unsaturated zone models can be questioned. With respect to the long process memory in the subsurface, it has been suggested that iterative filters and smoothers may be more suitable for parameter estimation in unsaturated media. We test the performance of different iterative filters and smoothers for data assimilation with a focus on parameter updates in the unsaturated zone. In particular, we compare the Iterative Ensemble Kalman Filter and Smoother as introduced by Bocquet and Sakov (2013), as well as the Confirming Ensemble Kalman Filter and the modified Restart Ensemble Kalman Filter proposed by Song et al. (2014), to the original Ensemble Kalman Filter (Evensen, 2009). This is done with simple test cases generated numerically. We also consider test examples with a layering structure, as layering is often found in natural soils. We assume that observations are water content, obtained from TDR probes or other observation methods sampling relatively small volumes. Particularly in larger data assimilation frameworks, a reasonable balance between computational effort and quality of results has to be found. 
Therefore, we compare computational costs of the different methods as well as the quality of open loop model predictions and the estimated parameters. Bocquet, M. and P. Sakov, 2013: Joint state and parameter estimation with an iterative ensemble Kalman smoother, Nonlinear Processes in Geophysics 20(5): 803-818. Evensen, G., 2009: Data assimilation: The ensemble Kalman filter. Springer Science & Business Media. Song, X.H., L.S. Shi, M. Ye, J.Z. Yang and I.M. Navon, 2014: Numerical comparison of iterative ensemble Kalman filters for unsaturated flow inverse modeling. Vadose Zone Journal 13(2), 10.2136/vzj2013.05.0083.
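    The Ensemble Kalman Filter analysis step that all of these variants build on can be sketched for a scalar state. The perturbed-observation (stochastic) form and the parameter values below are assumptions for illustration, not the study's configuration.

```python
import random

def enkf_update(ensemble, y_obs, obs_var, h=lambda x: x, rng=random.Random(0)):
    # stochastic EnKF analysis step, scalar state / scalar observation
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm = sum(ensemble) / n
    hm = sum(hx) / n
    # sample covariances from the ensemble
    cov_xh = sum((x - xm) * (hv - hm) for x, hv in zip(ensemble, hx)) / (n - 1)
    var_h = sum((hv - hm) ** 2 for hv in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)
    # perturb the observation for each member (Burgers et al. form)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - hv)
            for x, hv in zip(ensemble, hx)]
```

    Iterative variants differ in how often, and with which intermediate model reruns, this update is applied; the single-pass form above is the common building block.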

  5. Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2016-01-01

    A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
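    For a set of just two columns (load components or bridge outputs), the variance inflation factor reduces to VIF = 1/(1 − r²), with r the Pearson correlation between the columns. The sketch below illustrates the threshold-of-five check on hypothetical calibration columns; real balance data sets have more components and use full regressions.

```python
def vif_two(x1, x2):
    # variance inflation factor for a two-regressor set:
    # VIF = 1 / (1 - r^2), r = Pearson correlation of the two columns
    n = len(x1)
    m1 = sum(x1) / n
    m2 = sum(x2) / n
    sxy = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    sxx = sum((a - m1) ** 2 for a in x1)
    syy = sum((b - m2) ** 2 for b in x2)
    r2 = sxy * sxy / (sxx * syy)
    return 1.0 / (1.0 - r2)
```

    Columns that are nearly proportional drive r² toward one and the VIF far above five, flagging the set as not linearly independent in the sense of the uniqueness test.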

  6. Automated Deep Learning-Based System to Identify Endothelial Cells Derived from Induced Pluripotent Stem Cells.

    PubMed

    Kusumoto, Dai; Lachmann, Mark; Kunihiro, Takeshi; Yuasa, Shinsuke; Kishino, Yoshikazu; Kimura, Mai; Katsuki, Toshiomi; Itoh, Shogo; Seki, Tomohisa; Fukuda, Keiichi

    2018-06-05

    Deep learning technology is rapidly advancing and is now used to solve complex problems. Here, we used deep learning in convolutional neural networks to establish an automated method to identify endothelial cells derived from induced pluripotent stem cells (iPSCs), without the need for immunostaining or lineage tracing. Networks were trained to predict whether phase-contrast images contain endothelial cells based on morphology only. Predictions were validated by comparison to immunofluorescence staining for CD31, a marker of endothelial cells. Method parameters were then automatically and iteratively optimized to increase prediction accuracy. We found that prediction accuracy was correlated with network depth and pixel size of images to be analyzed. Finally, K-fold cross-validation confirmed that optimized convolutional neural networks can identify endothelial cells with high performance, based only on morphology.

  7. ITER Status and Plans

    NASA Astrophysics Data System (ADS)

    Greenfield, Charles M.

    2017-10-01

    The US Burning Plasma Organization is pleased to welcome Dr. Bernard Bigot, who will give an update on progress in the ITER Project. Dr. Bigot took over as Director General of the ITER Organization in early 2015 following a distinguished career that included serving as Chairman and CEO of the French Alternative Energies and Atomic Energy Commission and as High Commissioner for ITER in France. During his tenure at ITER the project has moved into high gear, with rapid progress evident on the construction site and preparation of a staged schedule and a research plan leading from where we are today all the way through to full DT operation. In an unprecedented international effort, seven partners (China, the European Union, India, Japan, Korea, Russia and the United States) have pooled their financial and scientific resources to build the biggest fusion reactor in history. ITER will open the way to the next step: a demonstration fusion power plant. All DPP attendees are welcome to attend this ITER town meeting.

  8. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made in addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, the major plasma commissioning activities, and the needs of the R&D program accompanying ITER construction by the ITER parties.

  9. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ strongly between individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding of how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
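    A stripped-down version of such an iterative improvement heuristic (try a variant of the current circuit each foraging bout; keep it if the circuit is shorter) can be sketched as follows. The nest position, flower coordinates, swap move, and bout count are illustrative assumptions, not the published model.

```python
import math
import random

def route_len(route, flowers):
    # circuit length: nest (at the origin) -> flowers in order -> nest
    pts = [(0.0, 0.0)] + [flowers[i] for i in route] + [(0.0, 0.0)]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def trapline(flowers, bouts=300, rng=random.Random(0)):
    route = list(range(len(flowers)))
    best = route_len(route, flowers)
    for _ in range(bouts):
        cand = route[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]  # try a variant of the route
        d = route_len(cand, flowers)
        if d < best:                         # reinforce shorter circuits
            route, best = cand, d
    return route, best
```

    The published model is richer (probabilistic reinforcement of route segments and an explicit spatial search component), but this greedy skeleton shows how repeated bouts plus a keep-if-shorter rule can converge on a stable trapline.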

  10. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    NASA Astrophysics Data System (ADS)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  11. High performance computation of residual stress and distortion in laser welded 301L stainless sheets

    DOE PAGES

    Huang, Hui; Tsutsumi, Seiichiro; Wang, Jiandong; ...

    2017-07-11

    Transient thermo-mechanical simulation of a stainless plate laser welding process was performed with a highly efficient and accurate approach: a hybrid iterative-substructure and adaptive-mesh method. In particular, residual stress prediction was enhanced by considering various heat effects in the numerical model. The influence of laser welding heat input on residual stress and welding distortion of thin stainless sheets was investigated by experiment and simulation. X-ray diffraction (XRD) and the contour method were used to measure the surface and internal residual stress, respectively. The effect of strain hardening, annealing and melting on residual stress prediction was clarified through a parametric study. It was shown that these heat effects must be taken into account for accurate prediction of residual stresses in laser welded stainless sheets. Reasonable agreement among residual stresses obtained by the numerical method, XRD and the contour method was achieved. Buckling-type welding distortion was also well reproduced by the developed thermo-mechanical FEM.

  13. Improving the iterative Linear Interaction Energy approach using automated recognition of configurational transitions.

    PubMed

    Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P

    2016-01-01

    Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.
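    A simplified stand-in for the transition detection might look like the sketch below: moving-average smoothing in place of the paper's noise-canceling and spline fits, with a jump threshold as the Boolean selection criterion that splits the interaction-energy trajectory into stable segments. Window size and threshold are illustrative assumptions.

```python
def moving_avg(series, w=5):
    # centered moving average with shrinking windows at the edges
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - w // 2), min(len(series), i + w // 2 + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def stable_segments(series, jump=1.0, w=5):
    # split the trajectory wherever the smoothed signal jumps by more than
    # `jump` between consecutive frames (a configurational transition)
    sm = moving_avg(series, w)
    segments, start = [], 0
    for i in range(1, len(sm)):
        if abs(sm[i] - sm[i - 1]) > jump:
            segments.append((start, i))
            start = i
    segments.append((start, len(sm)))
    return segments
```

    Only the frames inside sufficiently long stable segments would then be fed into the LIE averages, mirroring the filtering step described above.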

  14. Predictive wall adjustment strategy for two-dimensional flexible walled adaptive wind tunnel: A detailed description of the first one-step method

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.; Goodyer, Michael J.

    1988-01-01

    Following the realization that a simple iterative strategy for bringing the flexible walls of two-dimensional test sections to streamline contours was too slow for practical use, Judd proposed, developed, and placed into service what was the first Predictive Strategy. The Predictive Strategy reduced by 75 percent or more the number of iterations of wall shapes, and therefore the tunnel run-time overhead attributable to the streamlining process, required to reach satisfactory streamlines. The procedures of the Strategy are embodied in the FORTRAN subroutine WAS (standing for Wall Adjustment Strategy) which is written in general form. The essentials of the test section hardware, followed by the underlying aerodynamic theory which forms the basis of the Strategy, are briefly described. The subroutine is then presented as the Appendix, broken down into segments with descriptions of the numerical operations underway in each, with definitions of variables.

  15. Beef quality grading using machine vision

    NASA Astrophysics Data System (ADS)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
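    The fuzzy c-means step used to separate fat from lean can be sketched for scalar pixel intensities. Cluster count, the fuzzifier m = 2, and the toy data are illustrative; the actual system clusters image color features.

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    # fuzzy c-means on scalar intensities: alternate membership and
    # centroid updates for a fixed number of iterations
    centers = [min(data), max(data)]
    u = [[0.0] * len(data) for _ in range(c)]
    for _ in range(iters):
        for k, x in enumerate(data):
            dists = [abs(x - v) + 1e-12 for v in centers]
            for i in range(c):
                u[i][k] = 1.0 / sum((dists[i] / dists[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        for i in range(c):
            num = sum((u[i][k] ** m) * x for k, x in enumerate(data))
            den = sum(u[i][k] ** m for k in range(len(data)))
            centers[i] = num / den
    return centers, u
```

    Thresholding the memberships then labels each pixel as fat or lean, which is the soft-clustering analogue of the differentiation step described above.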

  16. Novel aspects of plasma control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, D.; Jackson, G.; Walker, M.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  18. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
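    The stage iteration at the heart of PIRK-type PC methods can be sketched as follows. This is an illustrative toy, not the paper's TBTPIRKC scheme: the 2-stage Gauss collocation corrector, the trivial predictor, and the iteration count are our assumptions.

    ```python
    import numpy as np

    # Toy sketch of one parallel-iterated RK (PIRK) predictor-corrector step.
    # Corrector: 2-stage Gauss collocation. The fixed-point iteration
    #   Y_i <- y_n + h * sum_j A[i,j] * f(Y_j)
    # evaluates every stage derivative f(Y_j) independently, which is what
    # makes the method parallel across stages.
    A = np.array([[0.25, 0.25 - np.sqrt(3) / 6],
                  [0.25 + np.sqrt(3) / 6, 0.25]])
    b = np.array([0.5, 0.5])

    def pirk_step(f, y, h, m=5):
        """One step of size h: trivial predictor, m corrector iterations."""
        Y = np.repeat(y[None, :], 2, axis=0)      # predictor: Y_i = y_n
        for _ in range(m):                        # corrector (PC) iterations
            F = np.array([f(Yi) for Yi in Y])     # parallelizable evaluations
            Y = y + h * A @ F
        F = np.array([f(Yi) for Yi in Y])
        return y + h * b @ F

    f = lambda y: -y                              # test problem y' = -y
    y = np.array([1.0])
    for _ in range(10):
        y = pirk_step(f, y, 0.1)
    print(y[0])                                   # close to exp(-1)
    ```

    For nonstiff problems and modest step sizes the fixed-point corrector contracts quickly, so a handful of iterations recovers the accuracy of the implicit collocation corrector.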

  19. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction

    PubMed Central

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
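    The decreasing-Pa idea described above can be sketched as a simple schedule. The linear decay and the endpoint values Pa_max/Pa_min are our assumptions for illustration; the abstract only specifies a maximum at initialization and a minimum at the final iteration.

    ```python
    # Hypothetical sketch of the abandonment-fraction schedule: Pa starts at
    # its maximum (more exploration in early iterations) and decays to its
    # minimum at the final iteration (more exploitation late in the search).
    def pa_schedule(iteration, max_iter, pa_max=0.5, pa_min=0.05):
        """Fraction of nests abandoned/replaced at the given iteration."""
        frac = iteration / (max_iter - 1)      # 0.0 at start, 1.0 at the end
        return pa_max - (pa_max - pa_min) * frac

    print(pa_schedule(0, 100))                 # largest Pa at the start
    print(pa_schedule(99, 100))                # smallest Pa at the end
    ```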

  20. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction.

    PubMed

    Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda

    2017-01-01

    Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.

  1. Influence of iterative reconstruction on coronary calcium scores at multiple heart rates: a multivendor phantom study on state-of-the-art CT systems.

    PubMed

    van der Werf, N R; Willemink, M J; Willems, T P; Greuter, M J W; Leiner, T

    2017-12-28

    The objective of this study was to evaluate the influence of iterative reconstruction on coronary calcium scores (CCS) at different heart rates for four state-of-the-art CT systems. Within an anthropomorphic chest phantom, artificial coronary arteries were translated in a water-filled compartment. The arteries contained three different calcifications with low (38 mg), medium (80 mg) and high (157 mg) mass. Linear velocities were applied, corresponding to heart rates of 0, < 60, 60-75 and > 75 bpm. Data were acquired on four state-of-the-art CT systems (CT1-CT4) with routinely used CCS protocols. Filtered back projection (FBP) and three increasing levels of iterative reconstruction (L1-L3) were used for reconstruction. CCS were quantified as Agatston score and mass score. An iterative reconstruction susceptibility (IRS) index was used to assess the susceptibility of the Agatston score (IRS_AS) and mass score (IRS_MS) to iterative reconstruction. IRS values were compared between CT systems and between calcification masses. For each heart rate, differences in CCS of iteratively reconstructed images were evaluated with CCS of FBP images as reference, and indicated as small (< 5%), medium (5-10%) or large (> 10%). Statistical analysis was performed with repeated measures ANOVA tests. While subtle differences were found for Agatston scores of the low mass calcification, medium and high mass calcifications showed increased CCS up to 77% with increasing heart rates. IRS_AS of CT1-CT4 were 17, 41, 130 and 22% higher than IRS_MS. IRS differed significantly not only between all CT systems but also between calcification masses. Up to a fourfold increase in IRS was found for the low mass calcification in comparison with the high mass calcification. With increasing iterative reconstruction strength, maximum decreases of 21 and 13% for Agatston and mass score were found. In total, 21 large differences between Agatston scores from FBP and iterative reconstruction were found, while only five large differences were found between FBP and iterative reconstruction mass scores. Iterative reconstruction results in reduced CCS. The effect of iterative reconstruction on CCS is more prominent with low-density calcifications, high heart rates and increasing iterative reconstruction strength.

  2. Mechanical Properties of High Manganese Austenitic Stainless Steel JK2LB for ITER Central Solenoid Jacket Material

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Kawano, Katsumi; Yamazaki, Toru; Ozeki, Hidemasa; Isono, Takaaki; Hamada, Kazuya; Devred, Arnaud; Vostner, Alexander

    A suite of advanced austenitic stainless steels is used for the ITER TF, CS and PF coil systems. These materials will be exposed to cyclic stress at cryogenic temperature. Therefore, the high manganese austenitic stainless steel JK2LB, which has high tensile strength, high ductility and high resistance to fatigue at 4 K, has been chosen for the CS conductor. The cryogenic-temperature mechanical property data of this material are very important for the ITER magnet design. This study focuses on the mechanical characteristics of JK2LB and its weld joint.

  3. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). 
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. The amount of aberration recovered varies as the diversity defocus is updated for each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. 
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
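    The inner loop described above can be sketched numerically. This is our minimal illustration with assumed grid sizes, aberration, and diversity defocus values, not the flight algorithm: a Gerchberg-Saxton-style iterative transform is run against each defocus-diversity image, and the per-image pupil-phase estimates are averaged to form the next estimate.

    ```python
    import numpy as np

    # Assumed setup: 64x64 grid, circular pupil, a small astigmatism-like
    # aberration, and three diversity defocus values.
    N = 64
    yy, xx = np.mgrid[-1:1:1j * N, -1:1:1j * N]
    pupil = (xx**2 + yy**2 <= 1.0).astype(float)       # circular aperture
    true_phase = 0.3 * (xx**2 - yy**2) * pupil         # "unknown" aberration (rad)
    zdef = xx**2 + yy**2                               # defocus diversity shape
    defocus = [0.0, 1.0, -1.0]                         # assumed diversity values

    def focal_amplitude(phase, d):
        return np.abs(np.fft.fft2(pupil * np.exp(1j * (phase + d * zdef))))

    images = [focal_amplitude(true_phase, d) for d in defocus]   # "measured" data

    def pupil_rms(p):                                  # RMS over pupil, piston removed
        v = p[pupil > 0]
        return np.sqrt(np.mean((v - v.mean()) ** 2))

    est = np.zeros((N, N))
    err0 = pupil_rms(est - true_phase)
    for _ in range(30):                                # inner-loop iterations
        per_image = []
        for amp, d in zip(images, defocus):
            focal = np.fft.fft2(pupil * np.exp(1j * (est + d * zdef)))
            focal = amp * np.exp(1j * np.angle(focal)) # impose measured amplitude
            back = np.fft.ifft2(focal)
            per_image.append(np.angle(back) - d * zdef)  # strip diversity defocus
        est = pupil * np.mean(per_image, axis=0)       # uniform-weight average

    err = pupil_rms(est - true_phase)
    print(err0, err)                                   # phase error shrinks
    ```

    The outer loop of the actual algorithm (adaptive update of the diversity defocus and aberration fitting to avoid phase wrapping) is omitted; with the small sub-wavelength aberration assumed here, the inner loop alone reduces the phase error.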

  4. Progressing in cable-in-conduit for fusion magnets: from ITER to low cost, high performance DEMO

    NASA Astrophysics Data System (ADS)

    Uglietti, D.; Sedlak, K.; Wesche, R.; Bruzzone, P.; Muzzi, L.; della Corte, A.

    2018-05-01

    The performance of ITER toroidal field (TF) conductors still has a significant margin for improvement because the effective strain between ‑0.62% and ‑0.95% limits the strands’ critical current to between 15% and 45% of the maximum achievable. Prototype Nb3Sn cable-in-conduit conductors have been designed, manufactured and tested in the frame of the EUROfusion DEMO activities. In these conductors the effective strain has shown a clear improvement with respect to the ITER conductors, reaching values between ‑0.55% and ‑0.28%, resulting in a strand critical current which is two to three times higher than in ITER conductors. In terms of the amount of Nb3Sn strand required for the construction of the DEMO TF magnet system, such improvement may lead to a reduction of at least a factor of two with respect to a similar magnet built with ITER-type conductors; a further saving of Nb3Sn is possible if graded conductors/windings are employed. In the best case the DEMO TF magnet could require fewer Nb3Sn strands than the ITER one, despite the larger size of DEMO. Moreover, high-performance conductors could be operated at higher fields than ITER TF conductors, enabling the construction of low cost, compact, high field tokamaks.

  5. Modelling of the test of the JT-60SA HTS current leads

    NASA Astrophysics Data System (ADS)

    Zappatore, A.; Heller, R.; Savoldi, L.; Zanino, R.

    2017-07-01

    The CURLEAD code, which was developed at the Karlsruhe Institute of Technology (KIT), implements an integrated 1D transient model of a high temperature superconducting (HTS) current lead (CL) including the room-temperature termination (RT), the meander-flow type heat exchanger (HX), and the HTS module. CURLEAD was successfully used for the design of the 70 kA ITER demonstrator and of the W7-X and JT-60SA CLs. Recently, the code was successfully applied to the prediction and analysis of steady-state operation of the ITER correction coil (CC) HTS CLs. Here the steady-state and pulsed operation of the JT-60SA HTS CLs are analysed, which also requires modelling of the HX shell and of the vacuum shell, which were not present in the ITER CC model. The CURLEAD model extension is presented, and the capability of the new version of CURLEAD to reproduce the transient experimental data of the JT-60SA HTS CL is shown. The results obtained provide a better understanding of key parameters of the CL, including the temperature evolution at the HX-HTS interface, the GHe mass flow rate needed in the HX to achieve the target temperature at that location, and the heat load at the cold end.

  6. Predictions of high QDT in ITER H-mode plasmas

    NASA Astrophysics Data System (ADS)

    Budny, Robert

    2009-05-01

    Time-dependent integrated predictions of performance metrics such as the fusion power PDT, QDT≡ PDT/Pext, and alpha profiles are presented. The PTRANSP code (see R.V. Budny, R. Andre, G. Bateman, F. Halpern, C.E. Kessel, A. Kritz, and D. McCune, Nuclear Fusion 48 075005, and F. Halpern, A. Kritz, G. Bateman, R.V. Budny, and D. McCune, Phys. Plasmas 15 062505) is used, along with GLF23 to predict plasma profiles, NUBEAM for NNBI and alpha heating, TORIC for ICRH, and TORAY for ECRH. Effects of sawteeth mixing, beam steering, beam shine-through, radiation loss, ash accumulation, and toroidal rotation are included. A total heating of Pext=73MW is assumed to achieve H-mode during the density and current ramp-up phase. Various mixes of NNBI, ICRH, and ECRH heating schemes are compared. After steady state conditions are achieved, Pext is stepped down to lower values to explore high QDT. Physics and computation uncertainties lead to ranges in predictions for PDT and QDT. Physics uncertainties include the L->H and H->L threshold powers, pedestal height, impurity and ash transport, and recycling. There are considerably more uncertainties predicting the peak value for QDT than for PDT.

  7. Observation and analysis of pellet material ∇B drift on MAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzotti, L.; Baylor, Larry R; Köchl, F.

    2010-01-01

    Pellet material deposited in a tokamak plasma experiences a drift towards the low field side of the torus induced by the magnetic field gradient. Plasma fuelling in ITER relies on the beneficial effect of this drift to increase the pellet deposition depth and fuelling efficiency. It is therefore important to analyse this phenomenon in present machines to improve the understanding of the ∇B-induced drift and the accuracy of the predictions for ITER. This paper presents a detailed analysis of pellet material drift in MAST pellet injection experiments based on the unique diagnostic capabilities available on this machine and compares the observations with predictions of state-of-the-art ablation and deposition codes.

  8. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
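    The sparsity mechanism behind ℓ1-regularized kernel models can be illustrated with soft-thresholding (ISTA) iterations. This centralized sketch is our own illustration of why the learned coefficient vector comes out sparse, not the paper's distributed in-network KMSE algorithm; all data and parameter values are assumptions.

    ```python
    import numpy as np

    # l1-regularized least-squares fit on a Gaussian kernel matrix via ISTA.
    # The soft-threshold operator sets small coefficients exactly to zero,
    # which is what yields the sparse models exchanged between neighbours in
    # the paper's distributed setting.
    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 1))                       # synthetic inputs
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=60)
    K = np.exp(-((X - X.T) ** 2) / 0.5)                # Gaussian kernel matrix

    lam = 1.0                                          # l1 penalty weight
    alpha = 1.0 / np.linalg.norm(K.T @ K, 2)           # step size (1/Lipschitz)
    beta = np.zeros(60)
    for _ in range(500):                               # ISTA iterations
        beta = soft_threshold(beta - alpha * (K.T @ (K @ beta - y)), alpha * lam)

    print(np.count_nonzero(beta), "of", beta.size, "coefficients are nonzero")
    ```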

  9. Isotope and fast ions turbulence suppression effects: Consequences for high-β ITER plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Görler, T.; Jenko, F.

    2018-05-01

    The impact of isotope effects and fast ions on microturbulence is analyzed by means of non-linear gyrokinetic simulations for an ITER hybrid scenario at high beta, obtained from previous integrated modelling simulations with simplified assumptions. Simulations show that ITER might work very close to threshold, and in these conditions significant turbulence suppression is found from DD to DT plasmas. Electromagnetic effects are shown to play an important role in the onset of this isotope effect. Additionally, even external ExB flow shear, which is expected to be low in ITER, has a stronger impact on DT than on DD. The fast ions generated by fusion reactions can reduce turbulence even further, although the impact in ITER seems weaker than in present-day tokamaks.

  10. Integrated simulations of H-mode operation in ITER including core fuelling, divertor detachment and ELM control

    NASA Astrophysics Data System (ADS)

    Polevoi, A. R.; Loarte, A.; Dux, R.; Eich, T.; Fable, E.; Coster, D.; Maruyama, S.; Medvedev, S. Yu.; Köchl, F.; Zhogolev, V. E.

    2018-05-01

    ELM mitigation to avoid melting of the tungsten (W) divertor is one of the main factors affecting plasma fuelling and detachment control at full current for high Q operation in ITER. Here we derive the ITER operational space where ELM mitigation to avoid melting of the W divertor monoblocks' top surface is not required and appropriate control of W sources and radiation in the main plasma can be ensured through ELM control by pellet pacing. We apply the experimental scaling that relates the maximum ELM energy density deposited at the divertor with the pedestal parameters; this eliminates the uncertainty related to the ELM wetted area for energy deposition at the divertor and enables the definition of the ITER operating space through global plasma parameters. Our evaluation is thus based on this empirical scaling for ELM power loads together with the scaling for the pedestal pressure limit based on predictions from stability codes. In particular, our analysis has revealed that for the pedestal pressure predicted by the EPED1 + SOLPS scaling, ELM mitigation to avoid melting of the W divertor monoblocks' top surface may not be required for 2.65 T H-modes with normalized pedestal densities (to the Greenwald limit) larger than 0.5, up to a plasma current of 6.5–7.5 MA; the exact value depends on assumptions about the divertor power flux during and between ELMs that span the range of experimental uncertainties. The pellet and gas fuelling requirements compatible with control of plasma detachment, core plasma tungsten accumulation and H-mode operation (including post-ELM W transient radiation) have been assessed by 1.5D transport simulations for a range of assumptions regarding W re-deposition at the divertor, including the most conservative assumption of zero prompt re-deposition. With such conservative assumptions, the post-ELM W transient radiation imposes a very stringent limit on ELM energy losses and the associated minimum required ELM frequency. 
Depending on W transport assumptions during the ELM, a maximum ELM frequency is also identified above which core tungsten accumulation takes place.

  11. Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation

    PubMed Central

    Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.

    2013-01-01

    Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
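    The iteration that the approximate model avoids can be sketched directly. The exact single-diode equation is implicit in the current I, so each point of the I-V curve below is obtained by damped fixed-point iteration. All parameter values are generic assumptions for illustration, not those of any real module or of the paper's approximation.

    ```python
    import numpy as np

    # Exact single-diode model: I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1)
    #                               - (V+I*Rs)/Rsh
    # I appears on both sides, hence the iterative solve.
    Iph, I0, Rs, Rsh = 8.0, 1.2e-7, 0.005, 150.0        # A, A, ohm, ohm (assumed)
    n, Vt = 1.3, 0.02565                                # ideality, thermal voltage

    def current(V, iters=200, w=0.5):
        I = np.full_like(V, Iph)                        # initial guess: photocurrent
        for _ in range(iters):                          # damped fixed-point iteration
            g = Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) - (V + I * Rs) / Rsh
            I = (1.0 - w) * I + w * g                   # damping aids convergence
        return I

    V = np.linspace(0.0, 0.6, 7)
    I = current(V)
    print(np.round(I, 3))                               # falls toward open circuit
    ```

    The damping factor w is needed because the undamped fixed-point map diverges near open circuit, where the diode term changes rapidly; an explicit approximate model such as the paper's removes this per-point iteration entirely.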

  12. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Gulshan B., E-mail: gbsharma@ucalgary.ca; University of Pittsburgh, Swanson School of Engineering, Department of Bioengineering, Pittsburgh, Pennsylvania 15213; University of Calgary, Schulich School of Engineering, Department of Mechanical and Manufacturing Engineering, Calgary, Alberta T2N 1N4

    Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula’s material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element’s remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. 
High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.
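    The element-wise remodeling rule described above can be sketched with a toy model. This is our simplification, not the authors' scapula FE model: each element's strain-energy-density stimulus U/rho is compared with a reference value and the density adjusted until they match; the constants and the density-to-stiffness law are assumptions.

    ```python
    import numpy as np

    # Toy strain-energy-density remodeling loop for three elements carrying
    # different constant stresses. E = C * rho^3 maps density to stiffness;
    # equilibrium is reached when stimulus U/rho equals the reference S_ref.
    C, S_ref, B = 3800.0, 0.02, 1.0        # stiffness law, reference, rate (assumed)
    stress = np.array([2.0, 5.0, 10.0])    # constant element stresses (assumed)
    rho = np.full(3, 1.0)                  # start from homogeneous density

    for _ in range(200):                   # remodeling iterations
        E = C * rho**3                     # density -> Young's modulus
        U = stress**2 / (2.0 * E)          # strain-energy density per element
        stimulus = U / rho                 # remodeling stimulus
        rho += B * (stimulus - S_ref)      # adapt density toward equilibrium
        rho = np.clip(rho, 0.05, 2.0)      # keep within physiological bounds

    print(rho)                             # higher stress -> higher density
    ```

    At equilibrium the update vanishes, giving rho = (stress^2 / (2*C*S_ref))^(1/4): more heavily loaded elements settle at higher apparent density, which is the qualitative behavior the scapula simulation reproduces.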

  13. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    NASA Astrophysics Data System (ADS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-07-01

    Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. 
High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.

  14. Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Kiely, Aaron B.

    2014-01-01

    A family of schemes has been devised for organizing the output of an algorithm for predictive data compression of hyperspectral imagery so as to allow efficient parallelization in both the compressor and decompressor. In these schemes, the compressor performs a number of iterations, during each of which a portion of the data is compressed via parallel threads operating on independent portions of the data. The general idea is that for each iteration it is predetermined how much compressed data will be produced from each thread.
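    The organizing idea can be sketched in a few lines. This is our toy using generic zlib compression, not the NASA scheme or its predictive compressor: if each thread's compressed output size per iteration is fixed in advance, the decompressor can compute every thread's byte offset directly instead of parsing headers, so all threads can decompress independently and in parallel.

    ```python
    import zlib

    BUDGET = 256                      # assumed fixed compressed bytes per thread

    def compress_iteration(chunks):
        """Compress one chunk per thread, padding each stream to the budget."""
        out = bytearray()
        for chunk in chunks:
            c = zlib.compress(chunk)
            assert len(c) <= BUDGET, "budget too small for this chunk"
            out += c.ljust(BUDGET, b"\x00")    # pad: offsets become predictable
        return bytes(out)

    def decompress_thread(blob, idx):
        """Each thread reads its slice at a computed offset -- no scanning."""
        d = zlib.decompressobj()               # tolerates the zero padding
        return d.decompress(blob[idx * BUDGET:(idx + 1) * BUDGET])

    chunks = [bytes([i]) * 500 for i in range(4)]      # four threads' portions
    blob = compress_iteration(chunks)
    print(decompress_thread(blob, 2) == chunks[2])
    ```

    In the actual schemes the compressor instead adapts how much input each thread consumes so that the compressed output per thread hits its predetermined size; the fixed-offset property for the decompressor is the same.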

  15. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach that takes into account half-off-shell transition matrix (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, while the Fredholm inverse series is correct only for the first moment; the Volterra approach also provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  16. Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic

    NASA Astrophysics Data System (ADS)

    Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.

    2015-11-01

    Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g., ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson Scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.

  17. Solving the Schroedinger Equation of Atoms and Molecules without Analytical Integration Based on the Free Iterative-Complement-Interaction Wave Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510

    2007-12-14

    A local Schroedinger equation (LSE) method is proposed for solving the Schroedinger equation (SE) of general atoms and molecules without doing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume a flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2- to 5-electron atoms and molecules, giving an accuracy of 10⁻⁵ Hartree in total energy. The potential energy curves of the H₂ and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potential of the free ICI LSE method for developing accurate predictive quantum chemistry with the solutions of the SE.

  18. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  19. Understanding and predicting the dynamics of tokamak discharges during startup and rampdown

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, G. L.; Politzer, P. A.; Humphreys, D. A.

    Understanding the dynamics of plasma startup and termination is important for present tokamaks and for predictive modeling of future burning plasma devices such as ITER. We report on experiments in the DIII-D tokamak that explore the plasma startup and rampdown phases and on the benchmarking of transport models. Key issues have been examined such as plasma initiation and burnthrough with limited inductive voltage and achieving flattop and maximum burn within the technical limits of coil systems and their actuators while maintaining the desired q profile. Successful rampdown requires scenarios consistent with technical limits, including controlled H-L transitions, while avoiding vertical instabilities, additional Ohmic transformer flux consumption, and density limit disruptions. Discharges were typically initiated with an inductive electric field typical of ITER, 0.3 V/m, most with second harmonic electron cyclotron assist. A fast framing camera was used during breakdown and burnthrough of low Z impurity charge states to study the formation physics. An improved 'large aperture' ITER startup scenario was developed, and aperture reduction in rampdown was found to be essential to avoid instabilities. Current evolution using neoclassical conductivity in the CORSICA code agrees with rampup experiments, but the prediction of the temperature and internal inductance evolution using the Coppi-Tang model for electron energy transport is not yet accurate enough to allow extrapolation to future devices.

  20. Improving binding mode and binding affinity predictions of docking by ligand-based search of protein conformations: evaluation in D3R grand challenge 2015

    NASA Astrophysics Data System (ADS)

    Xu, Xianjin; Yan, Chengfei; Zou, Xiaoqin

    2017-08-01

    The growing number of protein-ligand complex structures, particularly the structures of proteins co-bound with different ligands, in the Protein Data Bank helps us tackle two major challenges in molecular docking studies: protein flexibility and the scoring function. Here, we introduce a systematic strategy that uses the information embedded in known protein-ligand complex structures to improve both binding mode and binding affinity predictions. Specifically, a ligand similarity calculation method was employed to search for a receptor structure whose bound ligand shares high similarity with the query ligand, for docking use. The strategy was applied to the two datasets (HSP90 and MAP4K4) in the recent D3R Grand Challenge 2015. In addition, for the HSP90 dataset, a system-specific scoring function (ITScore2_hsp90) was generated by recalibrating our statistical potential-based scoring function (ITScore2) using the known protein-ligand complex structures and the statistical mechanics-based iterative method. For the HSP90 dataset, better performances were achieved for both binding mode and binding affinity predictions compared with the original ITScore2 and with ensemble docking. For the MAP4K4 dataset, although there were only eight known protein-ligand complex structures, our docking strategy achieved performance comparable to ensemble docking. Our method for receptor conformational selection and our iterative method for developing system-specific statistical potential-based scoring functions can be easily applied to other protein targets with a number of available protein-ligand complex structures to improve predictions on binding.

  1. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
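
    The variable-augmentation trick described above can be sketched numerically. The toy below (synthetic data for a hypothetical two-gage, three-variable balance; all names and numbers are illustrative assumptions, not the paper's data) treats temperature as both an independent variable and the added dependent variable, so the variable counts match without changing the gage-output fit.

```python
import numpy as np

# Hypothetical balance: 3 independent variables (2 loads + temperature T)
# but only 2 measured gage outputs.  Following the abstract's idea, T is
# also appended as an extra *dependent* variable so that the number of
# independent and dependent variables is equal.
rng = np.random.default_rng(0)
n = 50
loads = rng.uniform(-1.0, 1.0, size=(n, 2))
T = rng.uniform(20.0, 80.0, size=(n, 1))
X = np.hstack([loads, T])                      # independent calibration variables

# synthetic gage outputs: linear in the loads plus a small T sensitivity
outputs = loads @ np.array([[1.0, 0.2], [0.3, 0.9]]) + 0.01 * T

# augmented dependent set: original gage outputs plus T itself
Y = np.hstack([outputs, T])

# least-squares regression Y ≈ Xa @ C (with intercept column)
Xa = np.hstack([X, np.ones((n, 1))])
C, *_ = np.linalg.lstsq(Xa, Y, rcond=None)

# the added column reproduces T exactly, so the gage-output coefficients
# are identical to those of the un-augmented regression
assert np.allclose(Xa @ C[:, 2], T.ravel())
```

    Because least-squares solves each dependent column independently, appending T as an extra output leaves the fit of the two real gage outputs untouched, which is exactly why the augmentation does not distort the calibration.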

  2. Synthetic Molecular Evolution of Membrane-Active Peptides

    NASA Astrophysics Data System (ADS)

    Wimley, William

    The physical chemistry of membrane partitioning largely determines the function of membrane active peptides. Membrane-active peptides have potential utility in many areas, including in the cellular delivery of polar compounds, cancer therapy, biosensor design, and in antibacterial, antiviral and antifungal therapies. Yet, despite decades of research on thousands of known examples, useful sequence-structure-function relationships are essentially unknown. Because peptide-membrane interactions within the highly fluid bilayer are dynamic and heterogeneous, accounts of mechanism are necessarily vague and descriptive, and have little predictive power. This creates a significant roadblock to advances in the field. We are bypassing that roadblock with synthetic molecular evolution: iterative peptide library design and orthogonal high-throughput screening. We start with template sequences that have at least some useful activity, and create small, focused libraries using structural and biophysical principles to design the sequence space around the template. Orthogonal high-throughput screening is used to identify gain-of-function peptides by simultaneously selecting for several different properties (e.g. solubility, activity and toxicity). Multiple generations of iterative library design and screening have enabled the identification of membrane-active sequences with heretofore unknown properties, including clinically relevant, broad-spectrum activity against drug-resistant bacteria and enveloped viruses as well as pH-triggered macromolecular poration.

  3. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  4. Towards a better understanding of critical gradients and near-marginal turbulence in burning plasma conditions

    NASA Astrophysics Data System (ADS)

    Holland, C.; Candy, J.; Howard, N. T.

    2017-10-01

    Developing accurate predictive transport models of burning plasma conditions is essential for confident prediction and optimization of next step experiments such as ITER and DEMO. Core transport in these plasmas is expected to be very small in gyroBohm-normalized units, such that the plasma should lie close to the critical gradients for onset of microturbulence instabilities. We present recent results investigating the scaling of linear critical gradients of ITG, TEM, and ETG modes as a function of parameters such as safety factor, magnetic shear, and collisionality for nominal conditions and geometry expected in ITER H-mode plasmas. A subset of these results is then compared against predictions from nonlinear gyrokinetic simulations, to quantify differences between linear and nonlinear thresholds. As part of this study, linear and nonlinear results from both GYRO and CGYRO codes will be compared against each other, as well as to predictions from the quasilinear TGLF model. Challenges arising from near-marginal turbulence dynamics are addressed. This work was supported by the US Department of Energy under US DE-SC0006957.

  5. Design of the DEMO Fusion Reactor Following ITER.

    PubMed

    Garabedian, Paul R; McFadden, Geoffrey B

    2009-01-01

    Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations, based on mathematical equations in conservation form, predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELM) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required, comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task.

  7. Competencies "plus": the nature of written comments on internal medicine residents' evaluation forms.

    PubMed

    Ginsburg, Shiphra; Gold, Wayne; Cavalcanti, Rodrigo B; Kurabi, Bochra; McDonald-Blumer, Heather

    2011-10-01

    Comments on residents' in-training evaluation reports (ITERs) may be more useful than scores in identifying trainees in difficulty. However, little is known about the nature of comments written by internal medicine faculty on residents' ITERs. Comments on 1,770 ITERs (from 180 residents in postgraduate years 1-3) were analyzed using constructivist grounded theory, beginning with an existing framework. Ninety-three percent of ITERs contained comments, which were frequently easy to map onto traditional competencies, such as knowledge base (n = 1,075 comments) to the CanMEDS Medical Expert role. Many comments, however, could be linked to several overlapping competencies. Also common were comments completely unrelated to competencies, for instance, the resident's impact on staff (n = 813) or personality issues (n = 450). Residents' "trajectory" was a major theme (performance in relation to expected norms [n = 494], improvement seen [n = 286], or future predictions [n = 286]). Faculty's assessments of residents are underpinned by factors related and unrelated to traditional competencies. Future evaluations should attempt to capture these holistic, integrated impressions.

  8. Stability of the iterative solutions of integral equations as one phase freezing criterion.

    PubMed

    Fantoni, R; Pastore, G

    2003-10-01

    A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard rods fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it is difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.
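
    As a hedged illustration of the stability diagnostic discussed above (a scalar stand-in, not the HNC/PY machinery): for a fixed-point iteration x ← g(x), the Lyapunov exponent is the orbit average of log|g′(x)|, and the iteration is numerically stable while the exponent stays negative.

```python
import numpy as np

def lyapunov(g, dg, x0, n_iter=2000, burn_in=200):
    """Average of log|g'(x)| along the orbit of x_{n+1} = g(x_n)."""
    x, acc = x0, 0.0
    for i in range(n_iter):
        x = g(x)
        if i >= burn_in:               # discard transient before averaging
            acc += np.log(abs(dg(x)))
    return acc / (n_iter - burn_in)

# contractive map g(x) = 0.5*cos(x): |g'(x)| <= 0.5 < 1 everywhere,
# so the Picard iteration converges and the exponent is negative
lam = lyapunov(lambda x: 0.5 * np.cos(x),
               lambda x: -0.5 * np.sin(x), x0=0.3)
assert lam < 0.0
```

    In the paper's setting the same quantity is computed for the functional iteration of the integral-equation closure, with the exponent crossing zero signalling the loss of numerical stability.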

  9. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  10. High-Level Performance Modeling of SAR Systems

    NASA Technical Reports Server (NTRS)

    Chen, Curtis

    2006-01-01

    SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling (see figure) the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
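
    The kind of closed-form metric such a tool evaluates can be sketched with the classic point-target radar equation. This is a generic textbook formula, not SAUSAGE's actual model, and every number below is an illustrative assumption.

```python
import math

def radar_snr_db(p_tx, gain_db, wavelength, sigma, rng, bandwidth,
                 noise_fig_db=3.0, losses_db=2.0, temp=290.0):
    """Single-pulse point-target SNR in dB from the radar equation."""
    k = 1.380649e-23                                  # Boltzmann constant [J/K]
    g = 10.0 ** (gain_db / 10.0)                      # antenna gain (linear)
    loss = 10.0 ** ((noise_fig_db + losses_db) / 10.0)
    snr = (p_tx * g**2 * wavelength**2 * sigma) / (
        (4.0 * math.pi) ** 3 * rng**4 * k * temp * bandwidth * loss)
    return 10.0 * math.log10(snr)

# illustrative L-band spaceborne-like numbers
snr = radar_snr_db(p_tx=5e3, gain_db=35.0, wavelength=0.24,
                   sigma=10.0, rng=700e3, bandwidth=40e6)
# halving the slant range removes an R^4 factor of 16, i.e. +12 dB
assert radar_snr_db(5e3, 35.0, 0.24, 10.0, 350e3, 40e6) > snr
```

    A design-tradeoff study then amounts to sweeping one parameter (power, range, bandwidth) while holding the others fixed and watching how the metric moves, which is the iterative exploration the abstract describes.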

  11. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    NASA Astrophysics Data System (ADS)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking into account the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting dynamic series, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction with a multilayer perceptron. To construct the neural network, the values of the series at its extremum points, together with the corresponding time values, formed by the sliding-window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamic series, or serve as one component of a forecasting system. The efficiency of predicting the evolution of a dynamic series for a short-term one-step and a long-term multi-step forecast is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.
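
    The input construction described above can be sketched as follows (a minimal interpretation of the abstract, with synthetic data; the window width and series are illustrative assumptions): extract the series' local extrema with their time indices, then form sliding windows of extrema where each window predicts the next extremum value.

```python
import numpy as np

def extrema(series):
    """Indices and values of interior local extrema (sign change of slope)."""
    idx = np.array([i for i in range(1, len(series) - 1)
                    if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0])
    return idx, series[idx]

def sliding_windows(values, times, width):
    """Each sample: `width` extremum values + their times -> next extremum value."""
    X, y = [], []
    for i in range(len(values) - width):
        X.append(np.concatenate([values[i:i + width], times[i:i + width]]))
        y.append(values[i + width])
    return np.array(X), np.array(y)

t = np.arange(200)
series = np.sin(0.3 * t)                     # synthetic oscillatory series
ti, vi = extrema(series)
X, y = sliding_windows(vi, ti.astype(float), width=4)
# X rows are ready to feed a multilayer perceptron; y holds the targets
```

    Feeding extremum (value, time) pairs rather than raw samples is what keeps the input dimension small while preserving the turning points the forecast cares about.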

  12. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A system for recognizing money's nominal value has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or may never reach the target) when the numbers of iterations, weights, and samples are large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error (E) is a parabolic function of each weight. The goal is to minimize the error gradient (E'). In our system, we use 5 money nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal was scanned and digitally processed, giving 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while Back Propagation required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
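
    The Quickprop update referred to above can be written in one line: treating the error as a parabola in each weight, the new step is dw = g / (g_prev − g) · dw_prev, where g is the current error gradient. The sketch below (a generic textbook form, not the paper's implementation) shows that for a truly quadratic error the method jumps to the minimum in a single step.

```python
def quickprop_step(g, g_prev, dw_prev):
    """Quickprop weight update from two successive gradients and the last step."""
    return g / (g_prev - g) * dw_prev

# quadratic error E(w) = (w - 3)^2 with gradient g(w) = 2*(w - 3)
grad = lambda w: 2.0 * (w - 3.0)

w_prev, w = 0.0, 1.0                 # two starting points bootstrap the secant
g_prev, g = grad(w_prev), grad(w)
w_new = w + quickprop_step(g, g_prev, dw_prev=w - w_prev)
assert abs(w_new - 3.0) < 1e-12      # lands exactly on the minimum
```

    Real error surfaces are only locally parabolic, so practical Quickprop implementations also cap the step growth, but the secant-through-two-gradients idea is what yields the large speedup over plain Back Propagation.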

  13. Prediction and verification of creep behavior in metallic materials and components for the space shuttle thermal protection system. Volume 2: Phase 2 subsize panel cyclic creep predictions

    NASA Technical Reports Server (NTRS)

    Cramer, B. A.; Davis, J. W.

    1975-01-01

    A method for predicting permanent cyclic creep deflections in stiffened panel structures was developed. The resulting computer program may be applied to either the time-hardening or strain-hardening theories of creep accumulation. Iterative techniques were used to determine structural rotations, creep strains, and stresses as a function of time. Deflections were determined by numerical integration of structural rotations along the panel length. The analytical approach was developed for analyzing thin-gage metallic thermal-protection-system panels of entry vehicles subjected to cyclic bending loads at high temperatures, but may be applied to any panel subjected to bending loads. Predicted panel creep deflections were compared with results from cyclic tests of subsize corrugation- and rib-stiffened panels. Empirical equations were developed for each material based on correlation with tensile cyclic creep data; both the subsize panels and the tensile specimens were fabricated from the same sheet material. For Vol. 1, see N75-21431.
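
    The deflection step described above can be sketched in a few lines: once the creep analysis yields a curvature distribution along the panel, rotations are the running integral of curvature and deflections the running integral of rotation (the trapezoid rule below and the constant-curvature example are illustrative assumptions, not the report's program).

```python
import numpy as np

def deflection_from_curvature(x, kappa):
    """Cantilever-style double integration: curvature -> rotation -> deflection."""
    theta = np.concatenate(
        [[0.0], np.cumsum(np.diff(x) * 0.5 * (kappa[1:] + kappa[:-1]))])
    w = np.concatenate(
        [[0.0], np.cumsum(np.diff(x) * 0.5 * (theta[1:] + theta[:-1]))])
    return theta, w

x = np.linspace(0.0, 1.0, 201)                 # panel length coordinate [m]
theta, w = deflection_from_curvature(x, kappa=np.full_like(x, 0.02))

# analytic check for constant curvature: w(L) = kappa * L**2 / 2
assert abs(w[-1] - 0.02 * 1.0**2 / 2) < 1e-9
```

    In the cyclic analysis the curvature itself comes from the accumulated creep strains at each time step, so this integration is repeated once per cycle.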

  14. Design for disassembly and sustainability assessment to support aircraft end-of-life treatment

    NASA Astrophysics Data System (ADS)

    Savaria, Christian

    Gas turbine engine design is a multidisciplinary and iterative process, and many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial for capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design; PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high pressure gas turbine housings and shroud segments towards a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations and thus reduces set-up and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e. geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment.
    Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.

  15. Conceptual design of the ITER fast-ion loss detector.

    PubMed

    Garcia-Munoz, M; Kocan, M; Ayllon-Guerola, J; Bertalot, L; Bonnet, Y; Casal, N; Galdon, J; Garcia Lopez, J; Giacomin, T; Gonzalez-Martin, J; Gunn, J P; Jimenez-Ramos, M C; Kiptily, V; Pinches, S D; Rodriguez-Ramos, M; Reichle, R; Rivero-Rodriguez, J F; Sanchis-Sanchez, L; Snicker, A; Vayakis, G; Veshchev, E; Vorpahl, Ch; Walsh, M; Walton, R

    2016-11-01

    A conceptual design of a reciprocating fast-ion loss detector for ITER has been developed and is presented here. Fast-ion orbit simulations in a 3D magnetic equilibrium and an up-to-date first wall have been carried out to revise the measurement requirements for the lost alpha monitor in ITER. In agreement with recent observations, the simulations presented here suggest that a pitch-angle resolution of ∼5° might be necessary to identify the loss mechanisms. Synthetic measurements including realistic lost alpha-particle as well as neutron and gamma fluxes predict scintillator signal-to-noise levels measurable with standard light acquisition systems with the detector aperture at ∼11 cm outside of the diagnostic first wall. At the measurement position, the heat load on the detector head is comparable to that in present devices.

  16. An analysis of the symmetry issue in the ℓ-distribution method of gas radiation in non-uniform gaseous media

    NASA Astrophysics Data System (ADS)

    André, Frédéric

    2017-03-01

    The recently proposed ℓ-distribution/ICE (Iterative Copula Evaluation) method of gas radiation suffers from symmetry issues when applied in highly non-isothermal and non-homogeneous gaseous media. This problem is studied in a detailed theoretical way. The objectives of the present paper are (1) to provide a mathematical analysis of this symmetry problem and (2) to suggest a decisive factor, defined in terms of the ratio between the narrow-band Planck and Rosseland mean absorption coefficients, to handle this issue. Comparisons of model predictions with reference LBL calculations show that the proposed criterion improves the accuracy of the intuitive ICE method for applications in highly non-uniform gases at high temperatures.
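
    The decisive ratio above is easy to illustrate. Over a narrow band the Planck weight is nearly constant, so the band Planck mean reduces to an arithmetic mean of the absorption coefficient and the Rosseland mean to a harmonic mean; their ratio is always ≥ 1, with equality only for a flat (grey) band. The spectrum below is an illustrative assumption.

```python
import numpy as np

# illustrative narrow-band line spectrum of absorption coefficients [1/m]
kappa = np.array([0.2, 1.0, 5.0, 0.5, 2.0])

kappa_planck = kappa.mean()                    # emission-weighted (arithmetic) mean
kappa_rosseland = 1.0 / np.mean(1.0 / kappa)   # transmission-weighted (harmonic) mean
ratio = kappa_planck / kappa_rosseland         # >= 1 by the AM-HM inequality
assert ratio >= 1.0
```

    A large ratio flags a strongly structured band, which is where the paper's criterion switches the treatment to avoid the ICE symmetry failure.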

  17. Fast Acting Eddy Current Driven Valve for Massive Gas Injection on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyttle, Mark S; Baylor, Larry R; Carmichael, Justin R

    2015-01-01

    Tokamak plasma disruptions present a significant challenge to ITER as they can result in intense heat flux, large forces from halo and eddy currents, and potential first-wall damage from the generation of multi-MeV runaway electrons. Massive gas injection (MGI) of high Z material using fast acting valves is being explored on existing tokamaks and is planned for ITER as a method to evenly distribute the thermal load of the plasma to prevent melting, control the rate of the current decay to minimize mechanical loads, and to suppress the generation of runaway electrons. A fast acting valve and accompanying power supply have been designed and first test articles produced to meet the requirements for a disruption mitigation system on ITER. The test valve incorporates a flyer plate actuator similar to designs deployed on TEXTOR, ASDEX Upgrade, and JET [1-3] of a size useful for ITER, with special considerations to mitigate the high mechanical forces developed during actuation due to high background magnetic fields. The valve includes a tip design and all-metal valve stem sealing for compatibility with tritium and high neutron and gamma fluxes.

  18. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  19. Adaptive strategies for materials design using uncertainties

    DOE PAGES

    Balachandran, Prasanna V.; Xue, Dezhen; Theiler, James; ...

    2016-01-21

    Here, we compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young’s (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don’t. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
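
    The two-step loop described above can be sketched generically (this is an illustrative stand-in, not the authors' regressors or data): an ensemble of quadratic fits on random data subsets supplies both a prediction and an uncertainty, and the selector picks the candidate maximising an upper confidence bound (prediction + one standard deviation).

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: -(x - 0.7) ** 2                  # hidden property to maximise
pool = np.linspace(0.0, 1.0, 101)              # candidate "materials"
tried = [0, 25, 50, 75, 100]                   # initially measured designs

for _ in range(10):                            # adaptive design loop
    X = pool[tried]
    y = f(X) + rng.normal(0.0, 0.01, len(X))   # noisy "measurements"
    preds = []
    for _ in range(20):                        # ensemble over random subsets
        idx = rng.choice(len(X), size=4, replace=False)
        preds.append(np.polyval(np.polyfit(X[idx], y[idx], 2), pool))
    mu, sd = np.mean(preds, axis=0), np.std(preds, axis=0)
    ucb = mu + sd                              # uncertainty-aware selector
    ucb[tried] = -np.inf                       # never re-select a design
    tried.append(int(np.argmax(ucb)))

best = pool[max(tried, key=lambda i: f(pool[i]))]
```

    Replacing `mu + sd` by `mu` alone gives the uncertainty-blind selector the paper finds inferior, so the comparison in the abstract amounts to swapping this one line.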

  1. Development of a real-time system for ITER first wall heat load control

    NASA Astrophysics Data System (ADS)

    Anand, Himank; de Vries, Peter; Gribov, Yuri; Pitts, Richard; Snipes, Joseph; Zabeo, Luca

    2017-10-01

    The steady-state heat flux on the ITER first wall (FW) panels is limited by the heat removal capacity of the water cooling system. In the case of off-normal events (e.g. plasma displacement during H-L transitions), the heat loads are predicted to exceed the design limits (2-4.7 MW/m2), and intense heat loads are predicted on the FW even well before the burning plasma phase. A real-time (RT) FW heat load control system is therefore mandatory from early plasma operation of the ITER tokamak. A heat load estimator based on RT equilibrium reconstruction has been developed for the plasma control system (PCS). A scheme is presented that estimates the energy state for prescribed gaps, defined as the distance between the last closed flux surface (LCFS)/separatrix and the FW. The RT energy state is determined by the product of a weighted function of the gap distance and the power crossing the plasma boundary. In addition, a heat load estimator assuming a simplified FW geometry and a parallel heat transport model in the scrape-off layer (SOL), benchmarked against a full 3-D magnetic field line tracer, is also presented.
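
The gap-weighted energy-state idea can be sketched in a few lines. The exponential weighting and its decay length below are placeholder assumptions (the abstract does not specify the PCS weighting function), as are the power level and alarm limit:

```python
import math

# Toy real-time energy-state estimator: weight each wall gap by an assumed
# exponential falloff (decay length lam_m is a placeholder, NOT an ITER value),
# multiply by the power crossing the plasma boundary, and integrate in time.
def gap_weight(gap_m, lam_m=0.05):
    """Assumed weighting: a smaller gap -> a larger fraction of P_SOL on the panel."""
    return math.exp(-gap_m / lam_m)

def update_energy_state(E, gap_m, p_sol_w, dt_s, lam_m=0.05):
    """One control cycle: accumulate weighted energy onto a first-wall panel."""
    return E + gap_weight(gap_m, lam_m) * p_sol_w * dt_s

# Simulate a plasma drifting toward the wall at a constant P_SOL = 20 MW.
E, dt = 0.0, 1e-3            # joules, 1 ms control cycle
for step in range(1000):     # 1 s of operation
    gap = 0.20 - 0.10 * (step / 1000.0)   # gap shrinks from 20 cm to 10 cm
    E = update_energy_state(E, gap, 20e6, dt)

alarm = E > 1.0e6            # would trigger mitigation above an assumed limit
```

A real controller would reset the accumulator on a thermal-diffusion timescale and act on the plasma shape; here the point is only the weighted-gap-times-power structure of the estimator.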

  2. Use of a Machine-learning Method for Predicting Highly Cited Articles Within General Radiology Journals.

    PubMed

    Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon

    2016-12-01

    This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had the highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintain predictive power. These features corresponded with topics relating to various imaging techniques (e.g., diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer, thyroid nodules, hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature.
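
The train-then-prune pattern the study describes can be sketched on synthetic data (a plain gradient-descent logistic regression and a greedy weakest-feature pruner stand in for the authors' Bayesian binary regression and their actual feature-reduction procedure; all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, n_iter=500, lr=0.5, l2=1e-3):
    """Plain gradient-descent L2-regularized logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (sigmoid(X @ w) - y) / len(y) + l2 * w
        w -= lr * g
    return w

def auc(y, scores):
    """Rank-based AUC: probability that a positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(y))
    ranks[order] = np.arange(1, len(y) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic "articles": 20 features, only the first 3 drive citation status.
X = rng.standard_normal((400, 20))
y = (sigmoid(2 * X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2]) > rng.random(400)).astype(float)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

w_full = fit_logreg(Xtr, ytr)
auc_full = auc(yte, Xte @ w_full)

# Iteratively drop the weakest feature while predictive power is maintained.
keep = list(range(20))
while len(keep) > 1:
    w = fit_logreg(Xtr[:, keep], ytr)
    weakest = keep[int(np.argmin(np.abs(w)))]
    trial = [j for j in keep if j != weakest]
    w_t = fit_logreg(Xtr[:, trial], ytr)
    if auc(yte, Xte[:, trial] @ w_t) < 0.9 * auc_full:
        break                      # stop: further pruning would hurt predictivity
    keep = trial
```

The loop mirrors the paper's reduction from 14,083 features to a small core list: keep shrinking the feature set as long as held-out discrimination stays near the full model's.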

  3. Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.

    2017-12-01

    The control of the deuterium-tritium (DT) fuel isotopic ratio has to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows the measurement of this ratio by analyzing the hydrogen isotope fluxes (performing neutral particle analysis (NPA)). The development and supply of the NPA diagnostics for ITER were delegated to the Russian Federation. The diagnostic system is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with a 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with a 0.1-4.0 MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (thermonuclear burn zone) and at the edge. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel, implying high tritium containment, strict requirements on the reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a common vacuum channel for hydrogen isotope atoms to pass to the analyzers that is safe with respect to thermal and mechanical loads; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide the remote control of the NPA diagnostic complex, in particular, connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.

  4. Particle-in-cell simulations of the plasma interaction with poloidal gaps in the ITER divertor outer vertical target

    NASA Astrophysics Data System (ADS)

    Komm, M.; Gunn, J. P.; Dejarnac, R.; Pánek, R.; Pitts, R. A.; Podolník, A.

    2017-12-01

    Predictive modelling of the heat flux distribution on ITER tungsten divertor monoblocks is a critical input to the design choice for component front surface shaping and for the understanding of power loading in the case of small-scale exposed edges. This paper presents results of particle-in-cell (PIC) simulations of plasma interaction in the vicinity of poloidal gaps between monoblocks in the high heat flux areas of the ITER outer vertical target. The main objective of the simulations is to assess the role of local electric fields which are accounted for in a related study using the ion orbit approach including only the Lorentz force (Gunn et al 2017 Nucl. Fusion 57 046025). Results of the PIC simulations demonstrate that even if in some cases the electric field plays a distinct role in determining the precise heat flux distribution, when heat diffusion into the bulk material is taken into account, the thermal responses calculated using the PIC or ion orbit approaches are very similar. This is a consequence of the small spatial scales over which the ion orbits distribute the power. The key result of this study is that the computationally much less intensive ion orbit approximation can be used with confidence in monoblock shaping design studies, thus validating the approach used in Gunn et al (2017 Nucl. Fusion 57 046025).

  5. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

    Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.

  6. Contribution of ASDEX Upgrade to disruption studies for ITER

    NASA Astrophysics Data System (ADS)

    Pautasso, G.; Zhang, Y.; Reiter, B.; Giannone, L.; Gruber, O.; Herrmann, A.; Kardaun, O.; Khayrutdinov, K. K.; Lukash, V. E.; Maraschek, M.; Mlynek, A.; Nakamura, Y.; Schneider, W.; Sias, G.; Sugihara, M.; ASDEX Upgrade Team

    2011-10-01

    This paper describes the most recent contributions of ASDEX Upgrade to ITER in the field of disruption studies. (1) The ITER specifications for the halo current magnitude are based on data collected from several tokamaks and summarized in the plot of the toroidal peaking factor versus the maximum halo current fraction. Even though the maximum halo current in ASDEX Upgrade can reach 50% of the plasma current, this maximum lasts only a fraction of a millisecond. (2) Long-lasting asymmetries of the halo current are rare and do not give rise to a large asymmetric component of the mechanical forces on the machine. Unlike at JET, these asymmetries are neither locked nor do they exhibit a stationary harmonic structure. (3) Recent work on disruption prediction has concentrated on the search for a simple function of the most relevant plasma parameters that is able to discriminate between the safe and pre-disruption phases of a discharge. For this purpose, the disruptions of the last four years have been classified into groups, and discriminant analysis is then used to select the most significant variables and to derive the discriminant function. (4) The attainment of the critical density for the collisional suppression of runaway electrons seems to be technically and physically possible on our medium-size tokamak. The CO2 interferometer and the AXUV diagnostic provide information on the highly 3D impurity transport process during the whole plasma quench.

  7. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples at the cost of more computation time compared with the 1D predictive filter. In this paper we first use a cross-correlation strategy to determine the limited supporting region of filters, i.e., the region of the filter coefficient space whose coefficients play the major role in multiple removal. To solve the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with a non-Gaussianity-maximization (L1-norm minimization) constraint on the primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method reduces the computational burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
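
The fast iterative shrinkage-thresholding step the authors build on can be written down for a generic L1-regularized least-squares problem (a generic FISTA sketch on a synthetic sparse-recovery problem, not the authors' multichannel deconvolution code):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the L1 norm (the 'shrinkage' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=300):
    """Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth term, then shrinkage on the L1 term.
        x_new = soft_threshold(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 100))             # underdetermined "filter" system
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.choice([-1.0, 1.0], 5)
b = A @ x_true
x_hat = fista(A, b, lam=0.05)
```

The momentum extrapolation is what makes FISTA converge faster than plain iterative shrinkage (ISTA), which matches the speed advantage over IRLS that the abstract reports.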

  8. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
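
The preconditioning idea is easy to sketch for the overdetermined case (a simplified illustration, not the actual LSRN implementation; a basic CGLS loop stands in for LSQR):

```python
import numpy as np

rng = np.random.default_rng(3)

def cgls(B, b, n_iter):
    """Conjugate gradient on the normal equations (stand-in for LSQR)."""
    x = np.zeros(B.shape[1])
    r = b - B @ x
    s = B.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = B @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = B.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

m, n, gamma_os = 2000, 50, 2.0        # strongly overdetermined, oversampling gamma = 2
A = rng.standard_normal((m, n)) * np.logspace(0, 3, n)   # badly scaled columns
b = rng.standard_normal(m)

# LSRN-style preconditioner: random normal projection, then an SVD of the sketch.
k = int(gamma_os * n)
G = rng.standard_normal((k, m))       # the embarrassingly parallel projection
_, S, Vt = np.linalg.svd(G @ A, full_matrices=False)
N = Vt.T / S                          # right preconditioner: A @ N is well-conditioned

y = cgls(A @ N, b, n_iter=50)         # few, predictable iterations suffice
x = N @ y                             # solution of min ||A x - b||_2
```

Because the sketch bounds the condition number of `A @ N` independently of `A`, the iteration count of the inner solver is predictable in advance, which is the property the abstract emphasizes.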

  9. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potentially harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.

  11. Taper Functions for Predicting Product Volumes in Natural Shortleaf Pines

    Treesearch

    Robert M. Farrar; Paul A. Murphy

    1987-01-01

    Taper (stem-profile) functions are presented for natural shortleaf pine (Pinus echinata Mill.) trees growing in the West Gulf area. These functions, when integrated, permit the prediction of volume between any two heights on a stem and, conversely by iteration, the volume between any two diameters on a stem. Examples are given of the use of the functions...
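
The integrate-for-volume and iterate-for-diameter idea can be illustrated with a deliberately simple linear (conic) taper standing in for the paper's fitted shortleaf pine functions (all dimensions here are arbitrary illustrations):

```python
import numpy as np

# Illustrative linear taper: diameter shrinks linearly from a base diameter D
# to zero at total height H (a stand-in, NOT Farrar and Murphy's fitted functions).
D, H = 0.40, 30.0   # base diameter (m) and total height (m)

def diameter(h):
    """Stem diameter (m) at height h (m)."""
    return D * (1.0 - h / H)

def volume_between(h1, h2, n=2001):
    """Integrate the cross-sectional area pi*(d/2)^2 between two heights (trapezoid rule)."""
    h = np.linspace(h1, h2, n)
    a = np.pi * (diameter(h) / 2.0) ** 2
    return float(np.sum((a[:-1] + a[1:]) * np.diff(h)) / 2.0)

def height_at_diameter(d_target, tol=1e-8):
    """Invert the taper by bisection: find h where diameter(h) = d_target."""
    lo, hi = 0.0, H
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diameter(mid) > d_target:   # diameter decreases monotonically with height
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Product volume between two merchantability diameters, e.g. 30 cm down to 10 cm:
v = volume_between(height_at_diameter(0.30), height_at_diameter(0.10))
```

Any monotone taper function can be dropped into `diameter` unchanged; the integration gives volume between heights, and the bisection supplies the "conversely by iteration" step for volume between diameters.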

  12. Predictions of H-mode performance in ITER

    NASA Astrophysics Data System (ADS)

    Budny, Robert

    2008-11-01

    Time-dependent integrated predictions of performance metrics such as the fusion power P_DT, the fusion gain Q_DT ≡ P_DT/P_ext, and alpha profiles are presented. The PTRANSP [1] code is used, along with GLF23 to predict plasma profiles, NUBEAM for NNBI and alpha heating, TORIC for ICRH, and TORAY for ECRH. Effects of sawtooth mixing, beam steering, beam shine-through, radiation loss, ash accumulation, and toroidal rotation are included. A total heating of P_ext = 73 MW is assumed to achieve H-mode during the density and current ramp-up phase. Various mixes of NNBI, ICRH, and ECRH heating schemes are compared. After steady-state conditions are achieved, P_ext is stepped down to lower values to explore high Q_DT. Physics and computation uncertainties lead to ranges in predictions for P_DT and Q_DT. Physics uncertainties include the L->H and H->L threshold powers, pedestal height, impurity and ash transport, and recycling. There are considerably more uncertainties predicting the peak value for Q_DT than for P_DT. [1] R.V. Budny, R. Andre, G. Bateman, F. Halpern, C.E. Kessel, A. Kritz, and D. McCune, Nuclear Fusion 48 (2008) 075005.

  13. The ITER Neutral Beam Test Facility towards SPIDER operation

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Gambetta, G.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Piovan, R.; Recchia, M.; Rizzolo, A.; Sartori, E.; Siragusa, M.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Fröschle, M.; Heinemann, B.; Kraus, W.; Nocentini, R.; Riedl, R.; Schiesko, L.; Wimmer, C.; Wünderlich, D.; Cavenago, M.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Hemsworth, R.

    2017-08-01

    SPIDER is one of two projects of the ITER Neutral Beam Test Facility under construction in Padova, Italy, at the Consorzio RFX premises. It will have a 100 keV beam source with a full-size prototype of the radiofrequency ion source for the ITER neutral beam injector (NBI) and also, similar to the ITER diagnostic neutral beam, it is designed to operate with a pulse length of up to 3600 s, featuring an ITER-like magnetic filter field configuration (for high extraction of negative ions) and caesium oven (for high production of negative ions) layout as well as a wide set of diagnostics. These features will allow a reproduction of the ion source operation in ITER, which cannot be done in any other existing test facility. SPIDER realization is well advanced and the first operation is expected at the beginning of 2018, with the mission of achieving the ITER heating and diagnostic NBI ion source requirements and of improving its performance in terms of reliability and availability. This paper mainly focuses on the preparation of the first SPIDER operations—integration and testing of SPIDER components, completion and implementation of diagnostics and control and formulation of operation and research plan, based on a staged strategy.

  14. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  15. Recent Progress on ECH Technology for ITER

    NASA Astrophysics Data System (ADS)

    Sirigiri, Jagadishwar

    2005-10-01

    The Electron Cyclotron Heating and Current Drive (ECH&CD) system for ITER is a critical ITER system that must be available for use on Day 1 of the ITER experimental program. The applications of the system include plasma start-up, plasma heating and suppression of Neoclassical Tearing Modes (NTMs). These applications are accomplished using 27 one megawatt continuous wave gyrotrons: 24 at a frequency of 170 GHz and 3 at a frequency of 120 GHz. There are DC power supplies for the gyrotrons, a transmission line system, one launcher at the equatorial plane and three upper port launchers. The US will play a major role in delivering parts of the ECH&CD system to ITER. The present state-of-the-art includes major advances in all areas of ECH technology. In the US, a major effort is underway to supply gyrotrons of up to 1.5 MW power level at 110 GHz to General Atomics for use in heating the DIII-D tokamak. This presentation will include a brief review of the state-of-the-art, worldwide, in ECH technology. The requirements for the ITER ECH&CD system will then be reviewed. ITER calls for gyrotrons capable of operating from a 50 kV power supply, after potential depression, with a minimum of 50% overall efficiency. This is a very significant challenge and some approaches to meeting this goal will be presented. Recent experimental results at MIT showing improved efficiency of high frequency, 1.5 MW gyrotrons will be described. These results will be incorporated into the planned development of gyrotrons for ITER. The ITER ECH&CD system will also be a challenge to the transmission lines, which must operate at high average power at up to 1000 seconds and with high efficiency. The technology challenges and efforts in the US and other ITER parties to solve these problems will be reviewed. *In collaboration with E. Choi, C. Marchewka, I. Mastovosky, M. A. Shapiro and R. J. Temkin. This work is supported by the Office of Fusion Energy Sciences of the U. S. Department of Energy.

  16. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
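
The coupled temperature-efficiency calculation the abstract describes is a classic fixed-point iteration. A minimal energy-balance sketch follows; every coefficient value (concentration, absorptance, emissivity, efficiency model) is an illustrative assumption, not JPL's model:

```python
# Fixed-point iteration for a concentrator solar cell: efficiency depends on
# temperature, and temperature depends on how much absorbed power is NOT
# converted to electricity.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0            # solar constant, W/m^2
C = 2.0               # assumed concentration ratio
ALPHA = 0.90          # assumed absorptance
EPS = 0.85            # assumed emissivity (cell radiates from both faces)

def efficiency(T):
    """Assumed linear efficiency falloff with temperature (reference 298 K)."""
    return 0.15 * (1.0 - 0.0045 * (T - 298.0))

def equilibrium_temperature(T0=350.0, n_iter=100):
    """Iterate until radiated power balances absorbed-minus-electrical power."""
    T = T0
    for _ in range(n_iter):
        q = (ALPHA - efficiency(T)) * C * S      # net heat into the cell, W/m^2
        T = (q / (2.0 * EPS * SIGMA)) ** 0.25    # radiative balance, deep-space sink
    return T

T_eq = equilibrium_temperature()
P_out = efficiency(T_eq) * C * S                 # predicted electrical output, W/m^2
```

Because the efficiency changes only weakly with temperature, the iteration is strongly contractive and converges in a handful of steps; per-cell outputs like `P_out` would then be combined circuit-by-circuit as the abstract describes.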

  17. Optimization of ITER Nb3Sn CICCs for coupling loss, transverse electromagnetic load and axial thermal contraction

    NASA Astrophysics Data System (ADS)

    Nijhuis, A.; van Lanen, E. P. A.; Rolando, G.

    2012-01-01

    The ITER cable-in-conduit conductors (CICCs) are built up from sub-cable bundles, wound in different stages, which are twisted to counter coupling loss caused by time-varying external magnetic fields. The selection of the twist pitch lengths has major implications for the performance of the cable in the case of strain-sensitive superconductors, i.e. Nb3Sn, where the electromagnetic and thermal contraction loads are large, as well as for the heat load from the AC coupling loss. At present, this is a great challenge for the ITER central solenoid (CS) CICCs, and the solution presented here could be a breakthrough not only for the ITER CS but also for CICC applications in general. After longer twist pitches were proposed in 2006 and later confirmed by short-sample tests, the ITER toroidal field (TF) conductor cable pattern was improved accordingly. As the restrictions for coupling loss are more demanding for the CS conductors than for the TF conductors, it was believed that longer pitches would not be applicable for the conductors in the CS coils. In this paper we explain how, with the use of the TEMLOP model and the newly developed models JackPot-ACDC and CORD, the design of a CICC can be improved appreciably, particularly for the CS conductor layout. For the first time, a large improvement is predicted, providing not only very low sensitivity to electromagnetic load and axial thermal cable stress variations but at the same time much lower AC coupling loss. Reduction of the transverse load and warm-up-cool-down degradation can be reached by applying longer twist pitches in a particular sequence for the sub-stages, offering a large cable transverse stiffness, adequate axial flexibility and maximum allowed lateral strand support. Analysis of short-sample (TF conductor) data reveals that increasing the twist pitch can lead to a gain in the effective axial compressive strain of more than 0.3% with practically no degradation from bending.
    This is probably explained by the distinct difference in the mechanical response of the cable during axial contraction for short and long pitches. For short pitches, periodic bending in different directions with a relatively short wavelength is imposed because of a lack of sufficient lateral restraint against radial pressure. This can lead to high bending strain and eventually buckling. For cables with long twist pitches, by contrast, the strands can only react as coherent bundles, tightly supported by the surrounding strands, which provides sufficient lateral restraint against radial pressure together with enough slippage to avoid single-strand bending at detrimental short wavelengths. Experimental evidence of good performance was already provided by the test of the long-pitch TFPRO2-OST2, which to this day shows the best cable-to-strand performance of any ITER-type conductor, without any cyclic-load (electromagnetic and thermal contraction) degradation. For reduction of the coupling loss, specific choices of the cabling twist sequence are needed to minimize the area of linked strands and bundles that are coupled and form loops with the applied changing magnetic field, instead of simply avoiding longer pitches. In addition, we recommend increasing the wrap coverage of the CS conductor from 50% to at least 70%; a larger wrap coverage fraction enhances the overall lateral restraint of the strand bundles. The long-pitch design seems the best solution for optimizing the ITER CS conductor within the given restrictions of the present coil design envelope, which allows only marginal changes. The models predict a significant reduction in strain sensitivity and a substantial decrease of the AC coupling loss in Nb3Sn CICCs; for NbTi CICCs, too, the coupling loss can be minimized. Although the benefit of long pitches against transverse-load degradation has already been demonstrated, the predicted combination with low coupling loss still needs to be validated by a short-sample test.

  18. Conceptual design of the ITER fast-ion loss detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Munoz, M., E-mail: mgm@us.es; Ayllon-Guerola, J.; Galdon, J.

    2016-11-15

    A conceptual design of a reciprocating fast-ion loss detector for ITER has been developed and is presented here. Fast-ion orbit simulations in a 3D magnetic equilibrium and an up-to-date first wall have been carried out to revise the measurement requirements for the lost alpha monitor in ITER. In agreement with recent observations, the simulations presented here suggest that a pitch-angle resolution of ∼5° might be necessary to identify the loss mechanisms. Synthetic measurements including realistic lost alpha-particle as well as neutron and gamma fluxes predict scintillator signal-to-noise levels measurable with standard light acquisition systems with the detector aperture at ∼11 cm outside of the diagnostic first wall. At the measurement position, the heat load on the detector head is comparable to that in present devices.

  19. Composite panel development at JPL

    NASA Technical Reports Server (NTRS)

    Mcelroy, Paul; Helms, Rich

    1988-01-01

    Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high-precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and guide design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.

  20. Highly undersampled contrast-enhanced MRA with iterative reconstruction: Integration in a clinical setting.

    PubMed

    Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O

    2015-12-01

    To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.

  1. Collapse of cooperation in evolving games.

    PubMed

    Stewart, Alexander J; Plotkin, Joshua B

    2014-12-09

    Game theory provides a quantitative framework for analyzing the behavior of rational agents. The Iterated Prisoner's Dilemma in particular has become a standard model for studying cooperation and cheating, with cooperation often emerging as a robust outcome in evolving populations. Here we extend evolutionary game theory by allowing players' payoffs as well as their strategies to evolve in response to selection on heritable mutations. In nature, many organisms engage in mutually beneficial interactions and individuals may seek to change the ratio of risk to reward for cooperation by altering the resources they commit to cooperative interactions. To study this, we construct a general framework for the coevolution of strategies and payoffs in arbitrary iterated games. We show that, when there is a tradeoff between the benefits and costs of cooperation, coevolution often leads to a dramatic loss of cooperation in the Iterated Prisoner's Dilemma. The collapse of cooperation is so extreme that the average payoff in a population can decline even as the potential reward for mutual cooperation increases. Depending upon the form of tradeoffs, evolution may even move away from the Iterated Prisoner's Dilemma game altogether. Our work offers a new perspective on the Prisoner's Dilemma and its predictions for cooperation in natural populations; and it provides a general framework to understand the coevolution of strategies and payoffs in iterated interactions.
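The tradeoff the abstract describes can be made concrete with the standard donation-game parameterization of the Prisoner's Dilemma (a common simplification, not the paper's full framework; the numbers below are illustrative):

```python
# Donation game: a cooperator pays cost c to confer benefit b on the
# opponent. With b > c > 0 the payoffs satisfy the Prisoner's Dilemma
# ordering T > R > P > S, so defection dominates despite mutual
# cooperation paying more than mutual defection.

def donation_payoffs(b: float, c: float):
    """Return (R, S, T, P) for benefit b and cost c."""
    R = b - c   # mutual cooperation
    S = -c      # cooperate against a defector
    T = b       # defect against a cooperator
    P = 0.0     # mutual defection
    return R, S, T, P

def expected_payoff(p: float, q: float, b: float, c: float) -> float:
    """Expected payoff to a player cooperating with prob. p vs opponent prob. q."""
    R, S, T, P = donation_payoffs(b, c)
    return p * q * R + p * (1 - q) * S + (1 - p) * q * T + (1 - p) * (1 - q) * P

R, S, T, P = donation_payoffs(b=3.0, c=1.0)
assert T > R > P > S                                   # PD ordering
assert expected_payoff(1, 1, 3, 1) > expected_payoff(0, 0, 3, 1)  # cooperation pays...
assert expected_payoff(0, 1, 3, 1) > expected_payoff(1, 1, 3, 1)  # ...but defection tempts
```

In the coevolutionary setting studied in the paper, b and c themselves become heritable traits, which is how the ratio of risk to reward for cooperation can shift under selection.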

  2. EDITORIAL: ECRH physics and technology in ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.

    2008-05-01

    It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: EC wave physics: current understanding and extrapolation to ITER; application of EC waves to confinement and stability studies, including active control techniques for ITER; transmission systems/launchers: state of the art and ITER relevant techniques; gyrotron development towards ITER needs; and system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O. 
Sauter: EC wave physics and applications, M. Thumm: Source and transmission line development, and S. Cirant: ITER specific system designs). These summaries are included in this issue to give a more complete view of the technical meeting. Finally, it is appropriate to mention the future of this meeting series. With the ratification of the ITER agreement and the formation of the ITER International Organization, it was recognized that meetings conducted by outside agencies with an exclusive focus on ITER would be somewhat unusual. However, the participants at this meeting felt that the gathering of international experts with diverse specialities within EC wave physics and technology to focus on using EC waves in future fusion devices like ITER was extremely valuable. It was therefore recommended that this series of meetings continue, but with the broader focus on the application of EC waves to steady-state and burning plasma experiments including demonstration power plants. As the papers in this special issue show, the EC community is already taking seriously the challenges of applying EC waves to fusion devices with high neutron fluence and continuous operation at high reliability.

  3. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
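A minimal sketch of the iterated-reaction-graph loop might look as follows; the molecule names, reactions, and probabilities are invented placeholders, not the paper's actual Maillard reaction base:

```python
import random

# A soup (multiset of molecules) is updated by probabilistic reactions
# drawn from a reaction base; each firing consumes reactants, feeds
# products back, and adds reactant -> product arcs to a growing graph.

REACTION_BASE = [
    # (name, reactants, products, firing probability ~ rate)
    ("condensation", ("glucose", "glycine"), ("amadori",), 0.8),
    ("fragmentation", ("amadori",), ("diacetyl", "glycine"), 0.5),
]

def iterate_soup(soup, n_iter, seed=0):
    rng = random.Random(seed)
    graph = []  # arcs: (reactant, reaction, product)
    for _ in range(n_iter):
        for name, reactants, products, prob in REACTION_BASE:
            if all(r in soup for r in reactants) and rng.random() < prob:
                for r in reactants:
                    soup.remove(r)          # take reactants from the soup
                soup.extend(products)       # feed products back
                graph += [(r, name, p) for r in reactants for p in products]
    return soup, graph

soup, graph = iterate_soup(["glucose", "glycine"], n_iter=10)
```

Treating rate kinetics as firing probabilities, as above, is the element the abstract highlights; the real system additionally restricts the reaction base so the graph converges to the desired volatiles.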

  4. Dynamic adaptive learning for decision-making supporting systems

    NASA Astrophysics Data System (ADS)

    He, Haibo; Cao, Yuan; Chen, Sheng; Desai, Sachi; Hohil, Myron E.

    2008-03-01

    This paper proposes a novel adaptive learning method for data mining in support of decision-making systems. Due to the inherent characteristics of information ambiguity/uncertainty, high dimensionality and noise in many homeland security and defense applications, such as surveillance, monitoring, net-centric battlefield, and others, it is critical to develop autonomous learning methods to efficiently learn useful information from raw data to help the decision-making process. The proposed method is based on a dynamic learning principle in the feature spaces. Generally speaking, conventional approaches to learning from high-dimensional data sets include various feature extraction (principal component analysis, wavelet transform, and others) and feature selection (embedded approach, wrapper approach, filter approach, and others) methods. However, only a limited understanding of adaptive learning from different feature spaces has been achieved. We propose an integrative approach that takes advantage of feature selection and hypothesis ensemble techniques to achieve our goal. Based on the training data distributions, a feature score function is used to provide a measurement of the importance of different features for learning purposes. Then multiple hypotheses are iteratively developed in different feature spaces according to their learning capabilities. Unlike the pre-set iteration steps in many of the existing ensemble learning approaches, such as the adaptive boosting (AdaBoost) method, the iterative learning process will automatically stop when the intelligent system cannot provide a better understanding than a random guess in that particular subset of feature spaces. Finally, a voting algorithm is used to combine all the decisions from different hypotheses to provide the final prediction results. Simulation analyses of the proposed method on classification of different US military aircraft databases show the effectiveness of this method.
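The described pipeline (score features, grow hypotheses in feature subspaces until one is no better than a random guess, then vote) can be sketched roughly as follows; the data are synthetic and a nearest-centroid rule stands in for the paper's actual learners:

```python
import numpy as np

# Toy sketch: rank features by a simple score, build hypotheses on
# successively larger feature subsets, stop when a hypothesis is no
# better than chance, then combine hypotheses by majority vote.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # only features 0 and 1 matter

def feature_scores(X, y):
    # |difference of class means| as a crude importance score
    return np.abs(X[y == 1].mean(0) - X[y == 0].mean(0))

def centroid_hypothesis(X, y, cols):
    c0, c1 = X[y == 0][:, cols].mean(0), X[y == 1][:, cols].mean(0)
    def predict(Z):
        d0 = ((Z[:, cols] - c0) ** 2).sum(1)
        d1 = ((Z[:, cols] - c1) ** 2).sum(1)
        return (d1 < d0).astype(int)
    return predict

order = np.argsort(feature_scores(X, y))[::-1]   # best features first
hypotheses = []
for k in range(1, X.shape[1] + 1):
    h = centroid_hypothesis(X, y, order[:k])
    if (h(X) == y).mean() <= 0.5:    # no better than a random guess: stop
        break
    hypotheses.append(h)

votes = np.mean([h(X) for h in hypotheses], axis=0)
ensemble_pred = (votes >= 0.5).astype(int)
```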

  5. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design

    PubMed Central

    2018-01-01

    Background Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. Objective The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. Methods An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent prototypes of the app. Results Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. Conclusions User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. 
Iterative design allowed us to develop a tool that helps users feel hope, engage in self-reflection, and become motivated to seek treatment. The tool is currently being evaluated in a randomized controlled trial. PMID:29685864

  6. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model to facilitate identification of chemicals that specifically bind ERβ or ERα, together with our previously developed ERα binding model. Decision Forest was used to train the ERβ binding prediction model based on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validations. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified through analysis of the frequency data of chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrate that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.
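The validation scheme (repeated 5-fold cross-validation, reporting mean and standard deviation of accuracy) can be sketched as follows; a toy nearest-centroid classifier stands in for Decision Forest, and the data are synthetic:

```python
import numpy as np

# Repeated k-fold cross-validation: shuffle, split into k folds, train on
# k-1 folds, score on the held-out fold, and aggregate over many repeats.

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5)); X[:50] += 1.5   # two separable classes
y = np.array([1] * 50 + [0] * 50)

def cv_accuracy(X, y, n_repeats=100, k=5, seed=0):
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_repeats):                 # the paper used 1000 repeats
        idx = rng.permutation(len(y))
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            c1 = X[train][y[train] == 1].mean(0)   # class centroids
            c0 = X[train][y[train] == 0].mean(0)
            pred = (((X[fold] - c1) ** 2).sum(1)
                    < ((X[fold] - c0) ** 2).sum(1)).astype(int)
            accs.append((pred == y[fold]).mean())
    return np.mean(accs), np.std(accs)

mean_acc, sd_acc = cv_accuracy(X, y)
```

A permutation test of the kind mentioned would rerun `cv_accuracy` with `y` shuffled and check that the resulting accuracies cluster near chance.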

  7. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
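The nesting idea (one tomographic EM update followed by several image-space EM, i.e. Richardson-Lucy, deconvolution updates) can be illustrated on a toy 1D problem; the sizes, kernel, and the trivial system matrix below are invented for illustration only:

```python
import numpy as np

# Toy 1D sketch: the outer MLEM loop solves the tomographic problem for
# the resolution-blurred image z; an inner Richardson-Lucy (image-based
# EM) loop then solves z = H x for the deblurred image x.

n = 32
P = np.eye(n)                          # trivial "tomographic" system matrix
kern = np.array([0.25, 0.5, 0.25])     # symmetric resolution kernel, H^T = H
def H(v):
    return np.convolve(v, kern, mode="same")

x_true = np.zeros(n); x_true[10] = 5.0; x_true[20] = 3.0
y = P @ H(x_true)                      # noiseless projection data

x = np.ones(n)                         # flat, positive initial image
for _ in range(50):                    # outer tomographic EM iterations
    z = H(x)
    z *= P.T @ (y / np.maximum(P @ z, 1e-12))      # MLEM update for z
    for _ in range(10):                # nested image-based EM iterations
        x *= H(z / np.maximum(H(x), 1e-12))        # Richardson-Lucy step
```

With a non-trivial system matrix the outer update is the expensive step, which is why piling extra iterations onto the cheap inner loop accelerates resolution recovery, as the abstract describes.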

  8. Multi-scale analysis and characterization of the ITER pre-compression rings

    NASA Astrophysics Data System (ADS)

    Foussat, A.; Park, B.; Rajainmaki, H.

    2014-01-01

    The toroidal field (TF) system of the ITER tokamak, composed of 18 D-shaped TF coils, experiences out-of-plane forces during an operating scenario caused by the interaction between the 68 kA operating TF current and the poloidal magnetic fields. In order to keep the induced static and cyclic stress range in the intercoil shear keys between coil cases within the ITER allowable limits [1], centripetal preload is introduced by means of S2 fiber-glass/epoxy composite pre-compression rings (PCRs). The PCRs consist of two sets of three rings, each 5 m in diameter and 337 × 288 mm in cross-section, installed at the top and bottom regions to apply a total resultant preload of 70 MN per TF coil, equivalent to about 400 MPa hoop stress. Recent developments of composites in the aerospace industry have accelerated the use of advanced composites as primary structural materials. The PCRs represent one of the most challenging composite applications: large dimensions and a highly stressed structure operating at 4 K over a long service life. Efficient design of these pre-compression composite structures requires a detailed understanding of both the failure behavior of the structure and the fracture behavior of the material. Owing to the inherent difficulties of carrying out a real-scale testing campaign, there is a need to develop simulation tools to predict the multiple complex failure mechanisms in pre-compression rings. A framework contract was placed by the ITER Organization with SENER Ingenieria y Sistemas SA to develop multi-scale models representative of the composite structure of the pre-compression rings based on experimental material data. The predictive modeling, based on ABAQUS FEM, provides the opportunity both to understand better how PCR composites behave in operating conditions and to support the development by the supplier of materials with enhanced performance to withstand the machine design lifetime of 30,000 cycles. 
The multi-scale stress analysis has revealed a complete picture of the stress levels within the fiber and the matrix regarding the static and fatigue performance of the ring structure, including the presence of a delamination defect of critical size. The analysis results for the composite material demonstrate that the rings' performance objectives are met under all loading and strength conditions.

  9. Varying execution discipline to increase performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, P.L.; Maccabe, A.B.

    1993-12-22

    This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; the heuristics then use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that vary execution discipline to increase performance.
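The structure of the loop-level heuristic (predict the time of one iteration under each discipline, predict the iteration count, and combine) can be sketched as follows; the per-iteration costs and loop overhead are hypothetical placeholders, not values from the study:

```python
# Loop-level prediction heuristic: T_loop ~ overhead + n_iter * T_iter,
# computed per execution discipline, then compared to pick the cheapest.
# All cost numbers below are invented for illustration.

PER_ITER_COST = {              # predicted time for ONE loop iteration
    "control-driven": 12.0,
    "data-driven": 9.0,
    "demand-driven": 15.0,
}

def predict_loop_time(discipline: str, n_iter: int, overhead: float = 2.0) -> float:
    """Predicted time for the whole loop under one discipline."""
    return overhead + n_iter * PER_ITER_COST[discipline]

def best_discipline(n_iter: int) -> str:
    """Discipline the heuristic would select for a loop of n_iter iterations."""
    return min(PER_ITER_COST, key=lambda d: predict_loop_time(d, n_iter))
```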

  10. Rapid tomographic reconstruction based on machine learning for time-resolved combustion diagnostics

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Cai, Weiwei; Liu, Yingzheng

    2018-04-01

    Optical tomography has recently attracted a surge of research effort due to progress in both imaging concepts and sensor and laser technologies. The high spatial and temporal resolutions achievable by these methods provide an unprecedented opportunity for diagnosis of complicated turbulent combustion. However, due to the high data throughput and the inefficiency of the prevailing iterative methods, the tomographic reconstructions, which are typically conducted off-line, are computationally formidable. In this work, we propose an efficient inversion method based on a machine learning algorithm, which can extract useful information from previous reconstructions and build efficient neural networks to serve as a surrogate model that rapidly predicts the reconstructions. The extreme learning machine is used here as an example for demonstration purposes simply due to its ease of implementation, fast learning speed, and good generalization performance. Extensive numerical studies were performed, and the results show that the new method can dramatically reduce the computational time compared with the classical iterative methods. This technique is expected to be an alternative to existing methods when sufficient training data are available. Although this work is discussed in the context of tomographic absorption spectroscopy, we expect it to be useful also for other high-speed tomographic modalities, such as volumetric laser-induced fluorescence and tomographic laser-induced incandescence, which have been demonstrated for combustion diagnostics.
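An extreme learning machine of the kind cited is simple to sketch: a fixed random hidden layer followed by a single least-squares solve for the output weights. The toy "projection to reconstruction" data below are invented; real training pairs would come from prior iterative reconstructions:

```python
import numpy as np

# Minimal extreme learning machine (ELM) regressor: hidden weights are
# random and never trained; only the linear output layer is fit, in one
# least-squares solve -- the source of ELM's fast learning speed.

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        r = np.random.default_rng(seed)
        self.W = r.normal(size=(n_in, n_hidden))   # fixed random weights
        self.b = r.normal(size=n_hidden)           # fixed random biases
        self.beta = None
    def _h(self, X):
        return np.tanh(X @ self.W + self.b)        # hidden activations
    def fit(self, X, Y):
        self.beta, *_ = np.linalg.lstsq(self._h(X), Y, rcond=None)
        return self
    def predict(self, X):
        return self._h(X) @ self.beta

# Toy surrogate task: learn to invert a small linear "projection" map.
rng = np.random.default_rng(0)
A = 0.3 * rng.normal(size=(8, 4))           # small scale keeps it well-conditioned
X_recon = rng.normal(size=(300, 4))         # stand-ins for reconstructions
Y_proj = X_recon @ A.T                      # stand-ins for measured data
elm = ELM(n_in=8, n_hidden=100).fit(Y_proj, X_recon)
err = np.abs(elm.predict(Y_proj) - X_recon).mean()
```

Once trained, prediction is two matrix products, which is the speed advantage over running an iterative solver per frame.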

  12. Overview of Alcator C-Mod Research

    NASA Astrophysics Data System (ADS)

    White, A. E.

    2017-10-01

    Alcator C-Mod, a compact (R = 0.68 m, a = 0.21 m), high magnetic field (Bt ≤ 8 T) tokamak, accesses a variety of naturally ELM-suppressed high-confinement regimes that feature extreme power density into the divertor, q|| ≤ 3 GW/m², with SOL heat flux widths λq < 0.5 mm, exceeding conditions expected in ITER and approaching those foreseen in power plants. The unique parameter range provides much of the physics basis of a high-field, compact tokamak reactor. Research spans the topics of core transport and turbulence, RF heating and current drive, pedestal physics, scrape-off layer, divertor and plasma-wall interactions. In the last experimental campaign, Super H-mode was explored and featured the highest pedestal pressures ever recorded, pped ≈ 90 kPa (90% of the ITER target), consistent with EPED predictions. Optimization of naturally ELM-suppressed EDA H-modes accessed the highest volume-averaged pressures ever achieved (〈p〉 > 2 atm), with pped ≈ 60 kPa. The SOL heat flux width has been measured at Bpol = 1.25 T, confirming the Eich scaling over a broader poloidal field range than before. Multi-channel transport studies focus on the relationship between momentum transport and heat transport with perturbative experiments, and new multi-scale gyrokinetic simulation validation techniques were developed. U.S. Department of Energy Grant No. DE-FC02-99ER54512.
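For context, the Eich multi-machine regression mentioned above scales the SOL heat flux width mainly with the outer-midplane poloidal field; with the commonly quoted fit coefficients (treated as approximate here, not taken from this abstract), the quoted C-Mod point indeed lands below 0.5 mm:

```python
# Eich heat-flux-width scaling, lambda_q[mm] ~ 0.63 * Bpol^-1.19, with
# coefficients as commonly quoted from the published multi-machine H-mode
# fit; treat the numbers as approximate for this back-of-envelope check.

def eich_lambda_q_mm(bpol_tesla: float) -> float:
    return 0.63 * bpol_tesla ** -1.19

lam = eich_lambda_q_mm(1.25)   # C-Mod measurement point, Bpol = 1.25 T
# lam comes out just under 0.5 mm, consistent with the lambda_q < 0.5 mm quoted above
```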

  13. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was proposed to predict pure component concentrations, because calibration methods such as partial least squares require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that realizes prediction of pure component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes the spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among existing IOT methods.
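The Beer-Lambert starting point of IOT can be sketched in a few lines: for an ideal mixture, the measured spectrum is a linear combination of pure-component spectra weighted by mole fraction, so the fractions follow from a constrained fit (IOT-VIS additionally models a concentration-dependent virtual interaction spectrum, omitted here; the spectra below are synthetic):

```python
import numpy as np

# Ideal-mixture Beer-Lambert model: mix = S @ x, where columns of S are
# pure-component spectra and x holds mole fractions. Here a plain
# least-squares solve followed by projection onto the simplex stands in
# for the iterative optimization step of IOT.

rng = np.random.default_rng(0)
n_wavelengths, n_components = 50, 3
S = np.abs(rng.normal(size=(n_wavelengths, n_components)))  # pure spectra

x_true = np.array([0.5, 0.3, 0.2])          # mole fractions, sum to 1
mix = S @ x_true                            # ideal, noiseless mixture spectrum

x_hat, *_ = np.linalg.lstsq(S, mix, rcond=None)
x_hat = np.clip(x_hat, 0, None)             # non-negativity
x_hat /= x_hat.sum()                        # mole fractions sum to 1
```

For a non-ideal mixture, `mix` would gain an interaction term that this linear model cannot absorb, which is exactly the gap IOT-VIS addresses.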

  14. EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granucci, G.; Ricci, D.; Farina, D.

    Breakdown and plasma start-up in ITER are well-known issues, studied over the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron (EC) power to assist plasma formation and current ramp-up has been foreseen. This has drawn attention to the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, due to the complexity of the problem. A simplified approach, extended from that proposed in ref [1], has been developed, including an impurity multi-species distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependence on impurity content and the neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) appears sufficient to significantly extend the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.

  15. Solving the electron and electron-nuclear Schroedinger equations for the excited states of helium atom with the free iterative-complement-interaction method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroyuki; Hijikata, Yuh; Nakatsuji, Hiroshi

    2008-04-21

    Very accurate variational calculations with the free iterative-complement-interaction (ICI) method for solving the Schroedinger equation were performed for the 1sNs singlet and triplet excited states of the helium atom up to N=24. This is the first extensive application of the free ICI method to the calculation of excited states to very high levels. We performed the calculations with the fixed-nucleus Hamiltonian and the moving-nucleus Hamiltonian. The latter case is the Schroedinger equation for the electron-nuclear Hamiltonian and includes the quantum effect of nuclear motion. This solution corresponds to the nonrelativistic limit and reproduced the experimental values to five decimal figures. The small differences from the experimental values are not theoretical errors but represent physical effects not included in the present calculations, such as the relativistic effect, the quantum electrodynamic effect, and even experimental errors. The present calculations constitute a small step toward an accurately predictive quantum chemistry.

  16. Ketide Synthase (KS) Domain Prediction and Analysis of Iterative Type II PKS Gene in Marine Sponge-Associated Actinobacteria Producing Biosurfactants and Antimicrobial Agents

    PubMed Central

    Selvin, Joseph; Sathiyanarayanan, Ganesan; Lipton, Anuj N.; Al-Dhabi, Naif Abdullah; Valan Arasu, Mariadhas; Kiran, George S.

    2016-01-01

    Marine actinobacteria producing important biological macromolecules, such as lipopeptide and glycolipid biosurfactants, were analyzed and their potential linkage to type II polyketide synthase (PKS) genes was explored. A unique feature of type II PKS genes is their high amino acid (AA) sequence homology and conserved gene organization. These enzymes mediate the biosynthesis of polyketide natural products of enormous structural complexity and chemical diversity through the combinatorial use of various domains. Therefore, deciphering how the order of the AA sequence encoded by PKS domains tailors the chemical structure of polyketide analogs remains a great challenge. The present work deals with an in vitro and in silico analysis of type II PKS genes from five actinobacterial species to correlate KS domain architecture and structural features. Our analysis reveals the unique protein domain organization of the iterative type II PKS and KS domains of marine actinobacteria. The findings of this study have implications for metabolic pathway reconstruction and the design of semi-synthetic genomes to achieve rational design of novel natural products. PMID:26903957

  17. Advanced simulation of mixed-material erosion/evolution and application to low and high-Z containing plasma facing components

    NASA Astrophysics Data System (ADS)

    Brooks, J. N.; Hassanein, A.; Sizyuk, T.

    2013-07-01

    Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the HEIGHTS package ITMC-DYN mixed-material formation/response code, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (˜4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (˜50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.

  18. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic equations and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selected linear, quadratic, and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, yielding a substantial reduction in both computer storage and CPU execution requirements while retaining solution accuracy.
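    The Richardson extrapolation step described above can be sketched numerically. The following is a minimal illustration, not the paper's finite element solver: a trapezoidal-rule model problem stands in for the discretization, and two successively refined solutions of observed order p are combined to cancel the leading truncation error term.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n subintervals (order-2 accurate)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Solutions on three successively refined grids (exact value is 2.0)
u_h  = trapezoid(math.sin, 0.0, math.pi, 8)
u_h2 = trapezoid(math.sin, 0.0, math.pi, 16)
u_h4 = trapezoid(math.sin, 0.0, math.pi, 32)

# Observed order of accuracy from the three-grid error ratio
p = math.log2((u_h - u_h2) / (u_h2 - u_h4))

# Richardson extrapolation: combine two grids to cancel the leading error term
u_extrap = (2**p * u_h2 - u_h) / (2**p - 1)
```

    The three-grid ratio recovers the method's convergence order without knowing the exact solution, and the extrapolated value is markedly more accurate than either grid solution, which is what makes it useful for isolating truncation error.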

  19. "Ask Ernö": a self-learning tool for assignment and prediction of nuclear magnetic resonance spectra.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien

    2016-01-01

    We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iteration on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available.Graphical abstractSelf-learning loop. Any progress in the prediction (forward problem) will improve the assignment ability (reverse problem) and vice versa.

  20. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2015-01-01

    An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is defined as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
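    The regression and load-iteration idea can be sketched as follows. This is a minimal illustration with invented coefficients and synthetic data, not the ARC-30K calibration: the gage output is fitted as a function of load terms plus the temperature difference and its square, and the fitted output model is then inverted iteratively to predict the load from a measured output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data for a hypothetical single-gage balance.
T_ref = 21.0                                 # global reference temperature
L = rng.uniform(-100.0, 100.0, size=300)     # applied calibration loads
T = rng.uniform(10.0, 40.0, size=300)        # uniform balance temperature
dT = T - T_ref                               # the new independent variable

# Gage output model: load terms plus first/second order temperature terms.
c_true = np.array([5.0, 0.05, 1e-5, 0.02, -4e-4])   # c0..c4 (invented)
X = np.column_stack([np.ones_like(L), L, L**2, dT, dT**2])
R = X @ c_true + rng.normal(0.0, 1e-4, size=L.size)

# Regression step: fit gage outputs as a function of load, dT, and dT^2.
c, *_ = np.linalg.lstsq(X, R, rcond=None)

def predict_load(R_meas, T_meas, n_iter=20):
    """Load iteration: solve the fitted output equation for the load."""
    dt = T_meas - T_ref
    Lk = 0.0
    for _ in range(n_iter):
        Lk = (R_meas - c[0] - c[2] * Lk**2 - c[3] * dt - c[4] * dt**2) / c[1]
    return Lk
```

    The fixed-point iteration converges quickly here because the nonlinear load term is small relative to the linear sensitivity, which is the regime in which a load iteration scheme of this kind is typically used.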

  1. Unsteady flow model for circulation-control airfoils

    NASA Technical Reports Server (NTRS)

    Rao, B. M.

    1979-01-01

    An analysis and a numerical lifting surface method are developed for predicting the unsteady airloads on two-dimensional circulation control airfoils in incompressible flow. The analysis and the computer program are validated by correlating the computed unsteady airloads with test data and with other theoretical solutions. Additionally, a mathematical model for predicting the bending-torsion flutter of a two-dimensional airfoil (a reference section of a wing or rotor blade) and a computer program using an iterative scheme are developed. The flutter program has a provision for using the CC airfoil airloads program or the Theodorsen hard flap solution to compute the unsteady lift and moment used in the flutter equations. The adopted mathematical model and iterative scheme are used to perform a flutter analysis of a typical CC rotor blade reference section. The program seems to work well within the basic assumption of incompressible flow.

  2. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  3. Fluid Intelligence and Cognitive Reflection in a Strategic Environment: Evidence from Dominance-Solvable Games

    PubMed Central

    Hanaki, Nobuyuki; Jacquemet, Nicolas; Luchini, Stéphane; Zylbersztejn, Adam

    2016-01-01

    Dominance solvability is one of the most straightforward solution concepts in game theory. It is based on two principles: dominance (according to which players always use their dominant strategy) and iterated dominance (according to which players always act as if others apply the principle of dominance). However, existing experimental evidence questions the empirical accuracy of dominance solvability. Here, we examine the relationships between the key facets of dominance solvability and two cognitive skills: cognitive reflection and fluid intelligence. We provide evidence that behavior in accordance with dominance and with one-step iterated dominance is predicted by one's fluid intelligence rather than cognitive reflection. Individual cognitive skills, however, explain only a small fraction of the observed failure of dominance solvability. The accuracy of theoretical predictions of strategic decision making thus depends not only on individual cognitive characteristics but also, perhaps more importantly, on the decision-making environment itself. PMID:27559324

  4. Integrative Analysis of High-throughput Cancer Studies with Contrasted Penalization

    PubMed Central

    Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Shia, BenChang; Ma, Shuangge

    2015-01-01

    In cancer studies with high-throughput genetic and genomic measurements, integrative analysis provides a way to effectively pool and analyze heterogeneous raw data from multiple independent studies and outperforms “classic” meta-analysis and single-dataset analysis. When marker selection is of interest, the genetic basis of multiple datasets can be described using the homogeneity model or the heterogeneity model. In this study, we consider marker selection under the heterogeneity model, which includes the homogeneity model as a special case and can be more flexible. Penalization methods have been developed in the literature for marker selection. This study advances beyond the published ones by introducing contrast penalties, which can accommodate the within- and across-dataset structures of covariates/regression coefficients and, by doing so, further improve marker selection performance. Specifically, we develop a penalization method that accommodates the across-dataset structures by smoothing over regression coefficients. An effective iterative algorithm, which calls an inner coordinate descent iteration, is developed. Simulation shows that the proposed method outperforms the benchmark with more accurate marker identification. The analysis of breast cancer and lung cancer prognosis studies with gene expression measurements shows that the proposed method identifies genes different from those found using the benchmark and has better prediction performance. PMID:24395534

  5. Erosion of newly developed CFCs and Be under disruption heat loads

    NASA Astrophysics Data System (ADS)

    Nakamura, K.; Akiba, M.; Araki, M.; Dairaku, M.; Sato, K.; Suzuki, S.; Yokoyama, K.; Linke, J.; Duwe, R.; Bolt, H.; Roedig, M.

    1996-10-01

    An evaluation of the erosion under disruption heat loads is very important to the lifetime prediction of divertor armour tiles of next-step fusion devices such as ITER. In particular, erosion data on CFCs (carbon fiber reinforced composites) and beryllium (Be) as armour materials are urgently required for the ITER design. For CFCs, high heat flux experiments on newly developed CFCs with high thermal conductivity have been performed at heat fluxes of around 800-2000 MW/m² and pulse lengths of 2-5 ms in the JAERI electron beam irradiation system (JEBIS). As a result, the weight losses of B₄C-doped CFCs after heating were almost the same as those of the non-doped CFC up to 5 wt% boron content. For Be, we have carried out our first disruption experiments on S65/C grade Be specimens in the Juelich divertor test facility in hot cells (JUDITH) in the framework of the J-EU collaboration. The heating conditions were heat loads of 1250-5000 MW/m² for 2-8 ms over a heated area of 3 × 3 mm². As a result, protuberances in the heated area of the Be were observed at the lower heat fluxes.

  6. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogeneous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogeneous thick objects. One example application space is the imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rock. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of a well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, in which the application, hardware, and reconstruction software are highly integrated and optimized. We are therefore developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  7. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.

  8. The ν₃, ν₄ and ν₆ bands of formaldehyde: A spectral catalog from 900 cm⁻¹ to 1580 cm⁻¹

    NASA Technical Reports Server (NTRS)

    Nadler, Shachar; Reuter, D. C.; Daunt, S. J.; Johns, J. W. C.

    1988-01-01

    The results of a complete high resolution study of the three vibration-rotation bands ν₃, ν₄, and ν₆ using both TDLs and FT-IR spectroscopy are presented. The results are given in the form of a table of over 8000 predicted transition frequencies and strengths. A plot of the predicted and calculated spectra is shown. Over 3000 transitions were assigned and used in the simultaneous analysis of the three bands. The simultaneous fit permitted a rigorous study of Coriolis and other interactions among the bands, yielding improved molecular constants. Line intensities of 28 transitions measured by a TDL and 20 transitions from FTS data were used, along with the eigenvectors from the frequency fitting, in a least squares analysis to evaluate the band strengths.

  9. The Chlamydomonas genome project: a decade on

    PubMed Central

    Blaby, Ian K.; Blaby-Haas, Crysten; Tourasse, Nicolas; Hom, Erik F. Y.; Lopez, David; Aksoy, Munevver; Grossman, Arthur; Umen, James; Dutcher, Susan; Porter, Mary; King, Stephen; Witman, George; Stanke, Mario; Harris, Elizabeth H.; Goodstein, David; Grimwood, Jane; Schmutz, Jeremy; Vallon, Olivier; Merchant, Sabeeha S.; Prochnik, Simon

    2014-01-01

    The green alga Chlamydomonas reinhardtii is a popular unicellular organism for studying photosynthesis, cilia biogenesis and micronutrient homeostasis. Ten years since its genome project was initiated, an iterative process of improvements to the genome and gene predictions has propelled this organism to the forefront of the “omics” era. Housed at Phytozome, the Joint Genome Institute’s (JGI) plant genomics portal, the most up-to-date genomic data include a genome arranged on chromosomes and high-quality gene models with alternative splice forms supported by an abundance of RNA-Seq data. Here, we present the past, present and future of Chlamydomonas genomics. Specifically, we detail progress on genome assembly and gene model refinement, discuss resources for gene annotations, functional predictions and locus ID mapping between versions and, importantly, outline a standardized framework for naming genes. PMID:24950814

  10. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists of alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero…
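    The record's keywords point to Gauss-Seidel/successive over-relaxation (SOR) iteration. The following is a minimal linear-system sketch of the SOR update itself, not the blind-deconvolution application; the test matrix is an invented diagonally dominant example.

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation: Gauss-Seidel sweeps blended with the
    previous iterate via the relaxation factor omega (omega = 1 recovers
    plain Gauss-Seidel; 1 < omega < 2 over-relaxes to speed convergence)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components x[:i] and old components beyond i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Diagonally dominant test system, for which SOR is guaranteed to converge.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b)
```

    In the blind-deconvolution setting of the record, updates of this Gauss-Seidel flavor are applied alternately to the object and PSF estimates rather than to a single linear system.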

  11. An ITPA joint experiment to study runaway electron generation and suppression

    DOE PAGES

    Granetz, Robert S.; Esposito, B.; Kim, J. H.; ...

    2014-07-11

    Recent results from an ITPA joint experiment to study the onset, growth, and decay of relativistic electrons (REs) indicate that loss mechanisms other than collisional damping may play a dominant role in the dynamics of the RE population, even during the quiescent Ip flattop. Understanding the physics of RE growth and mitigation is motivated by the theoretical prediction that disruptions of full-current (15 MA) ITER discharges could generate up to 10 MA of REs with 10-20 MeV energies. The ITPA MHD group is conducting a joint experiment to measure the RE detection threshold conditions on a number of tokamaks under quasi-steady-state conditions in which V_loop, n_e, and REs can be well-diagnosed and compared to collisional theory. Data from DIII-D, C-Mod, FTU, KSTAR, and TEXTOR have been obtained so far, and the consensus to date is that the threshold E-field is significantly higher than predicted by relativistic collisional theory, or conversely, the density required to damp REs is significantly less than predicted, which could have significant implications for RE mitigation on ITER.

  12. Benchmarking kinetic calculations of resistive wall mode stability

    NASA Astrophysics Data System (ADS)

    Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.

    2014-05-01

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum-Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particles' precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution was treated historically in the codes is identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction by MISK calculations of near-marginal growth rates for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].

  13. Amino Acid Distribution Rules Predict Protein Fold: Protein Grammar for Beta-Strand Sandwich-Like Structures

    PubMed Central

    Kister, Alexander

    2015-01-01

    We present an alternative approach to protein 3D folding prediction based on determination of rules that specify distribution of “favorable” residues, that are mainly responsible for a given fold formation, and “unfavorable” residues, that are incompatible with that fold, in polypeptide sequences. The process of determining favorable and unfavorable residues is iterative. The starting assumptions are based on the general principles of protein structure formation as well as structural features peculiar to a protein fold under investigation. The initial assumptions are tested one-by-one for a set of all known proteins with a given structure. The assumption is accepted as a “rule of amino acid distribution” for the protein fold if it holds true for all, or near all, structures. If the assumption is not accepted as a rule, it can be modified to better fit the data and then tested again in the next step of the iterative search algorithm, or rejected. We determined the set of amino acid distribution rules for a large group of beta sandwich-like proteins characterized by a specific arrangement of strands in two beta sheets. It was shown that this set of rules is highly sensitive (~90%) and very specific (~99%) for identifying sequences of proteins with specified beta sandwich fold structure. The advantage of the proposed approach is that it does not require that query proteins have a high degree of homology to proteins with known structure. So long as the query protein satisfies residue distribution rules, it can be confidently assigned to its respective protein fold. Another advantage of our approach is that it allows for a better understanding of which residues play an essential role in protein fold formation. It may, therefore, facilitate rational protein engineering design. PMID:25625198
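    The iterative accept/modify/reject rule search described above can be sketched on toy data. The sequences, the tested position, and the acceptance criterion below are all invented for illustration; the idea is only that a candidate distribution rule is tested against every positive example and relaxed until it holds, then scored against negatives.

```python
# Toy iterative rule search (hypothetical data, not the published rule set).
positives = ["AVLIF", "AVLMF", "GVLIF", "AVLVF"]   # sequences with the fold
negatives = ["AKDEF", "PQRST", "AVKIF"]            # sequences without it

def holds(rule, seq):
    """A rule is (position, allowed-residue set); it holds if the residue
    at that position belongs to the allowed set."""
    pos, allowed = rule
    return seq[pos] in allowed

# Start from the strictest assumption at position 3 and relax iteratively.
rule = (3, {"I"})
while True:
    misses = [s for s in positives if not holds(rule, s)]
    if not misses:                               # accepted: holds for all
        break
    pos, allowed = rule
    rule = (pos, allowed | {misses[0][pos]})     # modify: admit missed residue

# Score the accepted rule on sequences known to lack the fold.
specificity = sum(not holds(rule, s) for s in negatives) / len(negatives)
```

    A real search of this kind iterates over many positions and rule forms and demands near-perfect sensitivity before accepting a rule, but the accept/modify loop has this shape.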

  14. Development of two color laser diagnostics for the ITER poloidal polarimeter.

    PubMed

    Kawahata, K; Akiyama, T; Tanaka, K; Nakayama, K; Okajima, S

    2010-10-01

    Two color laser diagnostics using terahertz laser sources are under development for high performance operation of the Large Helical Device and for future fusion devices such as ITER. So far, we have achieved simultaneous high power laser oscillation at 57.2 and 47.7 μm using a twin optically pumped CH(3)OD laser, and confirmed the core function of the two color laser interferometer: compensation of mechanical vibration. In this article, the application of the two color laser diagnostics to the ITER poloidal polarimeter and recent hardware developments are described.

  15. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R. J., E-mail: lahaye@fusion.gat.com

    2015-12-10

    ITER is an international project to design and build an experimental fusion reactor based on the “tokamak” concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of “H-mode” and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the “missing” current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM “seeding” instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included.
However, a “wild card” may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  16. A fresh look at electron cyclotron current drive power requirements for stabilization of tearing modes in ITER

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.

    2015-12-01

    ITER is an international project to design and build an experimental fusion reactor based on the "tokamak" concept. ITER relies upon localized electron cyclotron current drive (ECCD) at the rational safety factor q=2 to suppress or stabilize the expected poloidal mode m=2, toroidal mode n=1 neoclassical tearing mode (NTM) islands. Such islands if unmitigated degrade energy confinement, lock to the resistive wall (stop rotating), cause loss of "H-mode" and induce disruption. The International Tokamak Physics Activity (ITPA) on MHD, Disruptions and Magnetic Control joint experiment group MDC-8 on Current Drive Prevention/Stabilization of Neoclassical Tearing Modes started in 2005, after which assessments were made for the requirements for ECCD needed in ITER, particularly that of rf power and alignment on q=2 [1]. Narrow well-aligned rf current parallel to and of order of one percent of the total plasma current is needed to replace the "missing" current in the island O-points and heal or preempt (avoid destabilization by applying ECCD on q=2 in absence of the mode) the island [2-4]. This paper updates the advances in ECCD stabilization on NTMs learned in DIII-D experiments and modeling during the last 5 to 10 years as applies to stabilization by localized ECCD of tearing modes in ITER. This includes the ECCD (inside the q=1 radius) stabilization of the NTM "seeding" instability known as sawteeth (m/n=1/1) [5]. Recent measurements in DIII-D show that the ITER-similar current profile is classically unstable, curvature stabilization must not be neglected, and the small island width stabilization effect from helical ion polarization currents is stronger than was previously thought [6]. The consequences of updated assumptions in ITER modeling of the minimum well-aligned ECCD power needed are all-in-all favorable (and well-within the ITER 24 gyrotron capability) when all effects are included. 
However, a "wild card" may be broadening of the localized ECCD by the presence of the island; various theories predict broadening could occur and there is experimental evidence for broadening in DIII-D. Wider than now expected ECCD in ITER would make alignment easier to do but weaken the stabilization and thus require more rf power. In addition to updated modeling for ITER, advances in the ITER-relevant DIII-D ECCD gyrotron launch mirror control system hardware and real-time plasma control system have been made [7] and there are plans for application in DIII-D ITER demonstration discharges.

  17. Collimator-free photon tomography

    DOEpatents

    Dilmanian, F. Avraham; Barbour, Randall L.

    1998-10-06

    A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison is iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image.
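    The predict-compare-update loop of this claim can be sketched with a generic linear forward model. A hypothetical random system matrix stands in for the gamma-camera physics, and a simple Landweber-style update stands in for the patent's iteration; the image prediction is refined until the difference between predicted and measured emissivity falls below a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: P maps a flattened source image to
# photon emissivity measured at external positions around the patient.
n_pix, n_meas = 16, 40
P = rng.uniform(0.0, 1.0, size=(n_meas, n_pix))
x_true = rng.uniform(0.0, 1.0, size=n_pix)
measured = P @ x_true

x = np.zeros(n_pix)                       # initial prediction of the image
step = 1.0 / np.linalg.norm(P, 2) ** 2    # gradient-descent step size
for _ in range(20_000):
    predicted = P @ x                     # predict emissivity at detectors
    diff = measured - predicted           # compare with measurements
    if np.linalg.norm(diff) < 1e-6:       # stop once below the threshold
        break
    x = x + step * (P.T @ diff)           # update the image prediction
```

    With more measurement positions than pixels and a well-conditioned forward model, this iteration converges to the source image; real SPECT reconstruction would use a physical system matrix incorporating the measured energy spectra.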

  18. Theoretical performance of foil journal bearings

    NASA Technical Reports Server (NTRS)

    Carpino, M.; Peng, J.-P.

    1991-01-01

    A modified forward iteration approach for the coupled solution of foil bearings is presented. The method is used to predict the steady state theoretical performance of a journal type gas bearing constructed from an inextensible shell supported by an elastic foundation. Bending effects are treated as negligible. Finite element methods are used to predict both the foil deflections and the pressure distribution in the gas film.

  19. Reduction of asymmetric wall force in ITER disruptions with fast current quench

    NASA Astrophysics Data System (ADS)

    Strauss, H.

    2018-02-01

    One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.

  20. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  1. A Parallel Fast Sweeping Method for the Eikonal Equation

    NASA Astrophysics Data System (ADS)

    Baker, B.

    2017-12-01

    Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al., 2014 is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
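The core fast sweeping iteration can be illustrated in miniature. The sketch below is 2D with the four Gauss-Seidel sweep orderings and a first-order Godunov stencil, whereas the proposed solver is 3D with eight orderings and a high-accuracy stencil; the function name is hypothetical:

```python
import numpy as np

def fast_sweep_eikonal(slowness, h, src, n_sweeps=8):
    """First-order Godunov fast sweeping solver for |grad T| = s on a 2D grid.
    slowness : 2D array of slowness values s
    h        : grid spacing
    src      : (i, j) index of the point source, where T = 0
    """
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    # The four alternating Gauss-Seidel sweep orderings (eight in 3D)
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ys, xs in orders:
            for i in ys:
                for j in xs:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue             # no causal neighbor yet
                    f = slowness[i, j] * h
                    if abs(a - b) >= f:      # causal update from one direction only
                        t_new = min(a, b) + f
                    else:                    # two-sided quadratic (Godunov) update
                        t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T
```

On a uniform-slowness grid the solution along a grid axis is exact, while diagonal values carry the usual first-order discretization error.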

  2. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  3. Towards Current Profile Control in ITER: Potential Approaches and Research Needs

    NASA Astrophysics Data System (ADS)

    Schuster, E.; Barton, J. E.; Wehner, W. P.

    2014-10-01

    Many challenging plasma control problems still need to be addressed in order for the ITER Plasma Control System (PCS) to be able to successfully achieve the ITER project goals. For instance, setting up a suitable toroidal current density profile is key for one possible advanced scenario characterized by noninductive sustainment of the plasma current and steady-state operation. The nonlinearity and high dimensionality exhibited by the plasma demand a model-based current-profile control synthesis procedure that can accommodate this complexity through embedding the known physics within the design. The development of a model capturing the dynamics of the plasma relevant for control design enables not only the design of feedback controllers for regulation or tracking but also the design of optimal feedforward controllers for a systematic model-based approach to scenario planning, the design of state estimators for a reliable real-time reconstruction of the plasma internal profiles based on limited and noisy diagnostics, and the development of a fast predictive simulation code for closed-loop performance evaluation before implementation. Progress towards control-oriented modeling of the current profile evolution and associated control design has been reported following both data-driven and first-principles-driven approaches. An overview of these two approaches will be provided, as well as a discussion on research needs associated with each one of the model applications described above. Supported by the US Department of Energy under DE-SC0001334 and DE-SC0010661.

  4. Assessment of conductor degradation in the ITER CS insert coil and implications for the ITER conductors

    NASA Astrophysics Data System (ADS)

    Mitchell, N.

    2007-01-01

    Nb3Sn cable-in-conduit conductors were expected to provide an efficient way of achieving large conductor currents at high field (up to 13 T) combined with good stability to electromagnetic disturbances due to the extensive helium contact area with the strands. Although ITER model coils successfully reached their design performance (Kato et al 2001 Fusion Eng. Des. 56/57 59-70), initial indications (Mitchell 2003 Fusion Eng. Des. 66-68 971-94) that there were unexplained performance shortfalls have been confirmed. Recent conductor tests (Pasztor et al 2004 IEEE Trans. Appl. Supercond. 14 1527-30) and modelling work (Mitchell 2005 Supercond. Sci. Technol. 18 396-404) suggest that the shortfalls are due to a combination of strand bending and filament fracture under the transverse magnetic loads. Using the new model, the extensive database from the ITER CS insert coil has been reassessed. A parametric fit based on a loss of filament area and n (the exponent of the power-law fit to the electric field) combined with a more rigorous consideration of the conductor field gradient has enabled the coil behaviour to be explained much more consistently than in earlier assessments, now fitting the Nb3Sn strain scaling laws when used with measurements of the conductor operating strain, including conditions when the insert coil current (and hence operating strain) was reversed. The coil superconducting performance also shows a fatigue-type behaviour consistent with recent measurements on conductor samples (Martovetsky et al 2005 IEEE Trans. Appl. Supercond. 15 1367-70). The ITER conductor design has already been modified compared to the CS insert, to increase the margin and provide increased resistance to the degradation, by using a steel jacket to provide thermal pre-compression to reduce tensile strain levels, reducing the void fraction from 36% to 33% and increasing the non-copper material by 25%. 
Test results are not yet available for the new design and performance predictions at present rely on models with limited verification.

  5. Thermo-mechanical analysis of ITER first mirrors and its use for the ITER equatorial visible/infrared wide angle viewing system optical design.

    PubMed

    Joanny, M; Salasca, S; Dapena, M; Cantone, B; Travère, J M; Thellier, C; Fermé, J J; Marot, L; Buravand, O; Perrollaz, G; Zeile, C

    2012-10-01

    ITER first mirrors (FMs), as the first components of most ITER optical diagnostics, will be exposed to high plasma radiation flux and neutron load. To reduce the FMs' heating and the optical surface deformation induced during ITER operation, the use of suitable materials and a cooling system is foreseen. The calculations performed for different materials and FMs designs and geometries (100 mm and 200 mm) show that the use of CuCrZr and TZM, together with a complex integrated cooling system, can efficiently limit the FMs' heating and reduce their optical surface deformation under plasma radiation flux and neutron load. These investigations were used to evaluate, for the ITER equatorial port visible/infrared wide angle viewing system, the impact of changes in the FMs' properties during operation on the instrument's main optical performances. The results obtained are presented and discussed.

  6. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  7. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm of this procedure was written and is called DONBOL. A description of the computer program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. The method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.
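The viscous/inviscid coupling loop amounts to an under-relaxed fixed-point iteration between two solvers. The scalar stand-in step functions in the usage below are hypothetical placeholders, not DONBOL's actual inviscid and boundary-layer solutions:

```python
def coupled_iteration(inviscid_step, viscous_step, x0, relax=0.5,
                      tol=1e-10, max_iter=200):
    """Under-relaxed fixed-point iteration between two solvers: each pass feeds
    the inviscid solution to the viscous correction, then blends the result
    back until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = viscous_step(inviscid_step(x))
        if abs(x_new - x) < tol:
            return x_new                 # converged: viscous and inviscid agree
        x = x + relax * (x_new - x)      # under-relaxed update for stability
    return x
```

For example, with toy steps `inviscid_step = lambda p: 0.5 * p + 1.0` and `viscous_step = lambda p: 0.9 * p`, the iteration converges to the fixed point 0.9/0.55.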

  8. Behaviour of the ASDEX pressure gauge at high neutral gas pressure and applications for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarabosio, A.; Haas, G.

    2008-03-12

    The ASDEX Pressure Gauge (APG) is, at present, the main candidate for in-vessel neutral pressure measurement in ITER. However, the APG output is found to saturate at around 15 Pa, below the ITER requirement of 20 Pa. We show here that, with small modifications of the gauge geometry and potential settings, we can achieve satisfactory behaviour up to 30 Pa at 6 T.

  9. SU-E-T-106: Development of a Collision Prediction Algorithm for Determining Problematic Geometry for SBRT Treatments Using a Stereotactic Body Frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagar, M; Friesen, S; Mannarino, E

    2014-06-01

    Purpose: Collision between the gantry and the couch or patient during radiotherapy is not a common concern for conventional RT (static fields or arc). With the increase in the application of stereotactic planning techniques to the body, collisions have become a greater concern. Non-coplanar beam geometry is desirable in stereotactic treatments in order to achieve sharp gradients and high conformality. Non-coplanar geometry is less intuitive in the body and often requires an iterative process of planning and dry runs to guarantee deliverability. Methods: Purpose-written software was developed in order to predict the likelihood of collision between the head of the gantry and the couch, patient or stereotactic body frame. Using the DICOM plan and structure set exported by the treatment planning system, this software is able to predict the possibility of a collision. Given the plan's isocenter, treatment geometry and exterior contours, the software is able to determine if a particular beam/arc is clinically deliverable or if collision is imminent. Results: The software was tested on real-world treatment plans with untreatable beam geometry. Both static non-coplanar and VMAT plans were tested. Of these, the collision prediction software could identify all as having potentially problematic geometry. Re-plans of the same cases were also tested and validated as deliverable. Conclusion: This software is capable of giving a good initial indication of deliverability for treatment plans that utilize complex geometry (SBRT) or have lateral isocenters. This software is not intended to replace the standard pre-treatment QA dry run. The effectiveness is limited to those portions of the patient and immobilization devices that have been included in the simulation CT and contoured in the planning system. It will, however, aid the planner in reducing the iterations required to create complex treatment geometries necessary to achieve ideal conformality and organ sparing.

  10. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
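The weighting scheme can be sketched as an ordinary weighted least squares solve. The specific weight formula (w = 1/n for n intentionally loaded components) is an illustrative choice that decreases with the load count as described, not necessarily the paper's exact assignment:

```python
import numpy as np

def weighted_ls_fit(X, y, n_loaded):
    """Weighted least-squares fit of regression matrix X to responses y.
    Each calibration point's weight shrinks as its count of intentionally
    loaded components grows. Solves the normal equations
    (X^T W X) beta = X^T W y."""
    w = 1.0 / np.asarray(n_loaded, dtype=float)  # more loaded components -> smaller weight
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With noise-free data any positive weighting recovers the true coefficients; the weights matter once real calibration scatter and load-schedule asymmetries are present.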

  11. Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.

  12. Collapse of cooperation in evolving games

    PubMed Central

    Stewart, Alexander J.; Plotkin, Joshua B.

    2014-01-01

    Game theory provides a quantitative framework for analyzing the behavior of rational agents. The Iterated Prisoner’s Dilemma in particular has become a standard model for studying cooperation and cheating, with cooperation often emerging as a robust outcome in evolving populations. Here we extend evolutionary game theory by allowing players’ payoffs as well as their strategies to evolve in response to selection on heritable mutations. In nature, many organisms engage in mutually beneficial interactions and individuals may seek to change the ratio of risk to reward for cooperation by altering the resources they commit to cooperative interactions. To study this, we construct a general framework for the coevolution of strategies and payoffs in arbitrary iterated games. We show that, when there is a tradeoff between the benefits and costs of cooperation, coevolution often leads to a dramatic loss of cooperation in the Iterated Prisoner’s Dilemma. The collapse of cooperation is so extreme that the average payoff in a population can decline even as the potential reward for mutual cooperation increases. Depending upon the form of tradeoffs, evolution may even move away from the Iterated Prisoner’s Dilemma game altogether. Our work offers a new perspective on the Prisoner’s Dilemma and its predictions for cooperation in natural populations; and it provides a general framework to understand the coevolution of strategies and payoffs in iterated interactions. PMID:25422421
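A minimal Iterated Prisoner's Dilemma with fixed payoffs (the classic R=3, T=5, P=1, S=0 values; the paper's framework additionally lets the payoffs themselves evolve) can be written as:

```python
# Standard one-shot payoffs (row player, column player)
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_ipd(strategy_a, strategy_b, payoffs, rounds=100):
    """Play an iterated Prisoner's Dilemma; each strategy is a function of the
    opponent's move history and returns 'C' (cooperate) or 'D' (defect)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = payoffs[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'
```

Over 100 rounds, tit-for-tat against always-defect loses only the first round (scores 99 vs 104), while two tit-for-tat players sustain mutual cooperation (300 each).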

  13. Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, L.; Cao, Z.; Zhou, H.

    2017-12-01

    Groundwater modeling calls for an effective and robust integration method to fill the gap between the model and data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all the data together and therefore has a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed that repeatedly updates the models by assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES and the iterative ES using a synthetic example in groundwater modeling. The hydraulic head data modeled on the basis of the reference conductivity field are utilized to inversely estimate conductivities at un-sampled locations. Results are evaluated in terms of the characterization of conductivity and groundwater flow and solute transport predictions. It is concluded that: (1) the iterative ES achieves a result comparable to the EnKF at a lower computational cost; (2) the iterative ES outperforms the ES through its repeated updating. These findings suggest that the iterative ES deserves more attention for data assimilation in groundwater modeling.
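A stochastic EnKF analysis step, the sequential update this record compares against the smoothers, can be sketched as follows (generic linear observation operator; a textbook formulation, not the authors' code):

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_err_std, rng):
    """Stochastic EnKF analysis step. ensemble: (n_state, n_members) array of
    model states; H: linear observation operator; obs: observed values."""
    n_members = ensemble.shape[1]
    R = obs_err_std ** 2 * np.eye(len(obs))
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # observation anomalies
    Pxy = X @ HXp.T / (n_members - 1)    # state/observation cross-covariance
    Pyy = HXp @ HXp.T / (n_members - 1) + R               # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                          # Kalman gain
    # Perturbed observations, one realization per ensemble member
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_members))
    return ensemble + K @ (perturbed - HX)
```

An ES applies the same algebra once with all observations stacked, and an iterative ES repeats that global update; the EnKF instead applies it sequentially at each observation time.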

  14. Modelling of edge localised modes and edge localised mode control [Modelling of ELMs and ELM control]

    DOE PAGES

    Huijsmans, G. T. A.; Chang, C. S.; Ferraro, N.; ...

    2015-02-07

    Edge Localised Modes (ELMs) in ITER Q = 10 H-mode plasmas are likely to lead to large transient heat loads to the divertor. In order to avoid an ELM induced reduction of the divertor lifetime, the large ELM energy losses need to be controlled. In ITER, ELM control is foreseen using magnetic field perturbations created by in-vessel coils and the injection of small D2 pellets. ITER plasmas are characterised by low collisionality at a high density (high fraction of the Greenwald density limit). These parameters cannot simultaneously be achieved in current experiments. Thus, the extrapolation of the ELM properties and the requirements for ELM control in ITER relies on the development of validated physics models and numerical simulations. Here, we describe the modelling of ELMs and ELM control methods in ITER. The aim of this paper is not a complete review on the subject of ELM and ELM control modelling but rather to describe the current status and discuss open issues.

  15. Effect of sample size on multi-parametric prediction of tissue outcome in acute ischemic stroke using a random forest classifier

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Fiehler, Jens

    2015-03-01

    The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require previous training and, until now, it is ambiguous how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
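The Dice coefficient used here to score each predicted outcome against the known final infarction is simply twice the overlap divided by the total size of the two binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0              # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

It ranges from 0 (no overlap) to 1 (identical masks), matching the 0.45-0.53 scores reported above.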

  16. Genetic polymorphisms to predict gains in maximal O2 uptake and knee peak torque after a high intensity training program in humans.

    PubMed

    Yoo, Jinho; Kim, Bo-Hyung; Kim, Soo-Hwan; Kim, Yangseok; Yim, Sung-Vin

    2016-05-01

    The study aimed to identify single nucleotide polymorphisms (SNPs) that significantly influenced the level of improvement of two kinds of training responses, maximal O2 uptake (V'O2max) and knee peak torque, in healthy adults participating in a high intensity training (HIT) program. The study also aimed to use these SNPs to develop prediction models for individual training responses. Seventy-nine healthy volunteers participated in the HIT program. A genome-wide association study, based on 2,391,739 SNPs, was performed to identify SNPs that were significantly associated with gains in V'O2max and knee peak torque following 9 weeks of the HIT program. To predict the two training responses, two independent SNP sets were determined using linear regression and iterative binary logistic regression analysis. False discovery rate analysis and permutation tests were performed to avoid false-positive findings. To predict gains in V'O2max, 7 SNPs were identified. These SNPs accounted for 26.0% of the variance in the increment of V'O2max, and discriminated the subjects into three subgroups (non-responders, medium responders, and high responders) with a prediction accuracy of 86.1%. For knee peak torque, 6 SNPs were identified, accounting for 27.5% of the variance in the increment of knee peak torque. The prediction accuracy discriminating the subjects into the three subgroups was estimated as 77.2%. The novel SNPs found in this study could explain and predict inter-individual variability in gains of V'O2max and knee peak torque. Furthermore, with these genetic markers, the methodology suggested in this study provides a sound approach for a personalized training program.

  17. Overview of the JET results in support to ITER

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. 
A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. 
T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. 
S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. 
J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. 
G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. 
J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. 
R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. 
W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors

    2017-10-01

The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active phases of operation. More than 60 h of plasma operation with the ITER first wall materials has taken place since their installation in 2011. A new multi-machine scaling of the type I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and of recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the influence of the first wall material on fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.

  18. Research infrastructure support to address ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Los, Wouter

    2014-05-01

Predicting the evolution of ecosystems under climate change or human pressures is a challenge. Even understanding past or current processes is complicated, as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these feed back on components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Models of the coupled processes on different spatial and temporal scales are sensitive to variations in the data and to parameter changes. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models, leading towards stable models that reduce uncertainty and improve the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis, and modeling, but it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. Cooperation in the ENVRI project (Common Operations of Environmental Research Infrastructures) is one of the initiatives fostering such multidisciplinary research.

  19. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.

    PubMed

    Cai, Binghuang; Jiang, Xia

    2014-04-01

Biomedical prediction based on clinical and genome-wide data has become increasingly important in disease diagnosis and classification. To solve the prediction problem effectively and thereby improve clinical care, we develop a novel Artificial Neural Network (ANN) method based on Matrix Pseudo-Inversion (MPI) for use in biomedical applications. The MPI-ANN is constructed as a three-layer (i.e., input, hidden, and output layers) feed-forward neural network, and the weights connecting the hidden and output layers are determined directly by MPI, without lengthy iterative learning. The LASSO (Least Absolute Shrinkage and Selection Operator) method is also presented for comparison. Simulated Single Nucleotide Polymorphism (SNP) data and real breast cancer data are employed to validate the performance of the MPI-ANN method via 5-fold cross-validation. Experimental results demonstrate the efficacy of the developed MPI-ANN for disease classification and prediction, in view of its significantly superior accuracy (i.e., rate of correct predictions) compared with LASSO. The results on the real breast cancer data also show that the MPI-ANN outperforms other machine learning methods, including the support vector machine (SVM), logistic regression (LR), and an iterative ANN. In addition, experiments demonstrate that the MPI-ANN can be used for biomarker selection as well.
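The pseudo-inversion step can be illustrated with a minimal sketch (assuming, for illustration only, a tanh hidden layer, fixed random input weights, and synthetic linearly separable data; the layer width, seed, and data below are assumptions, not from the paper):

```python
import numpy as np

# Hedged sketch: hidden-to-output weights solved in one shot with the
# Moore-Penrose pseudo-inverse instead of iterative training.
# The data, layer sizes, and seed are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # 200 samples, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # toy binary labels
W_in = rng.normal(size=(5, 20))                      # fixed random input weights
b = rng.normal(size=(1, 20))
H = np.tanh(X @ W_in + b)                            # hidden-layer activations
W_out = np.linalg.pinv(H) @ y                        # direct least-squares solve
accuracy = float(((H @ W_out > 0.5) == y).mean())
```

Because the hidden-to-output weights come from a single least-squares solve, no iterative weight updates are needed, which is the source of the training-time advantage claimed for such networks.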

  20. MICRA: an automatic pipeline for fast characterization of microbial genomes from high-throughput sequencing data.

    PubMed

    Caboche, Ségolène; Even, Gaël; Loywick, Alexandre; Audebert, Christophe; Hot, David

    2017-12-19

The increase in available sequence data has advanced the field of microbiology; however, making sense of these data without bioinformatics skills is still problematic. We describe MICRA, an automatic pipeline, available as a web interface, for microbial identification and characterization through read analysis. MICRA uses iterative mapping against reference genomes to identify genes and variations. Additional modules allow prediction of antibiotic susceptibility and resistance, and comparison of results across several samples. MICRA is fast and produces few false-positive annotations and variant calls compared to current methods, making it a tool of great interest for fully exploiting sequencing data.

  1. Kinetic modeling of plant metabolism and its predictive power: peppermint essential oil biosynthesis as an example.

    PubMed

    Lange, Bernd Markus; Rios-Estepa, Rigoberto

    2014-01-01

The integration of mathematical modeling with analytical experimentation, in an iterative fashion, is a powerful approach to advancing our understanding of the architecture and regulation of metabolic networks. Ultimately, such knowledge is highly valuable in supporting efforts to modulate flux through target pathways by molecular breeding and/or metabolic engineering. In this article we describe a kinetic mathematical model of peppermint essential oil biosynthesis, a pathway that has been studied extensively for more than two decades. Modeling assumptions and approximations are described in detail. We provide step-by-step instructions on how to run simulations of dynamic changes in pathway metabolite concentrations.
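As a toy illustration of this kind of kinetic simulation, the following sketch integrates a hypothetical two-step Michaelis-Menten pathway S → I → P by explicit Euler stepping; the pathway structure and all rate constants are illustrative assumptions, not the peppermint model:

```python
# Hedged toy example: explicit-Euler simulation of a hypothetical two-step
# Michaelis-Menten pathway S -> I -> P. All parameters are illustrative
# assumptions, not values from the article.
def simulate_pathway(s0=2.0, vmax1=1.0, km1=0.5, vmax2=0.8, km2=0.3,
                     dt=0.01, t_end=5.0):
    s, i, p = s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v1 = vmax1 * s / (km1 + s)   # first enzymatic step, S -> I
        v2 = vmax2 * i / (km2 + i)   # second enzymatic step, I -> P
        s, i, p = s - v1 * dt, i + (v1 - v2) * dt, p + v2 * dt
    return s, i, p
```

By construction each Euler step conserves total mass (s + i + p), which is a quick sanity check on this kind of pathway integrator.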

  2. Status of the ITER Electron Cyclotron Heating and Current Drive System

    NASA Astrophysics Data System (ADS)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio; Carannante, Giuseppe; Cavinato, Mario; Cismondi, Fabio; Denisov, Grigory; Farina, Daniela; Gagliardi, Mario; Gandini, Franco; Gassmann, Thibault; Goodman, Timothy; Hanson, Gregory; Henderson, Mark A.; Kajiwara, Ken; McElhaney, Karen; Nousiainen, Risto; Oda, Yasuhisa; Omori, Toshimichi; Oustinov, Alexander; Parmar, Darshankumar; Popov, Vladimir L.; Purohit, Dharmesh; Rao, Shambhu Laxmikanth; Rasmussen, David; Rathod, Vipal; Ronden, Dennis M. S.; Saibene, Gabriella; Sakamoto, Keishi; Sartori, Filippo; Scherer, Theo; Singh, Narinder Pal; Strauß, Dirk; Takahashi, Koji

    2016-01-01

The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER consists of 12 sets of high-voltage power supplies feeding 24 gyrotrons, connected through 24 transmission lines (TL) to five launchers: four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia, and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD, and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step beyond the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond.

  3. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming, a simple optical layout, and other advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective; nevertheless, it was insufficient for some images with very low contrast, especially at high magnification. In the present study, we applied a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.
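The abstract does not specify the enhancement method, but a percentile-based contrast stretch is one common pre-processing choice; the sketch below (with assumed 2nd/98th percentile cut-offs) only illustrates the kind of step that could be applied before the Fresnel iteration:

```python
import numpy as np

# Hedged sketch of a pre-iteration enhancement step: a percentile-based
# contrast stretch. The percentile cut-offs are illustrative assumptions,
# not the method used in the paper.
def stretch_contrast(img, low_pct=2.0, high_pct=98.0):
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                       # flat image: nothing to stretch
        return np.zeros_like(img, dtype=float)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```

Stretching a low-contrast image to the full [0, 1] range makes faint diffraction fringes easier for a subsequent iterative correction to lock onto.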

  4. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding the design and development of future devices. Prior attempts at this modeling have mostly used direct solvers for the formulated linear equations. Full-wave modeling of RF fields in hot plasma with 3D nonuniformities is largely prohibitive, because the memory demands of a direct solver place a significant limitation on spatial resolution. Iterative methods can significantly increase the achievable spatial resolution. We explore the feasibility of using iterative methods in 3D full-wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold-plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test-particle orbits. The wave equation is discretized using a finite-difference approach. The initial guess is important in iterative methods, and we examine different initial guesses, including the solution to the cold-plasma wave equation. Work is supported by the U.S. DOE SBIR program.
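The value of a good initial guess can be seen even in a toy stationary solver; the Jacobi iteration below (purely illustrative, and far simpler than the RF solvers discussed) converges in fewer iterations when started near the solution, analogous to seeding a hot-plasma solve with the cold-plasma result:

```python
import numpy as np

# Hedged illustration: a basic Jacobi iteration that also returns the
# iteration count, so the effect of the initial guess x0 can be measured.
# The matrix must be diagonally dominant for this simple scheme to converge.
def jacobi(A, b, x0, tol=1e-10, max_iter=10000):
    D = np.diag(A)                     # diagonal part
    R = A - np.diag(D)                 # off-diagonal remainder
    x, its = np.asarray(x0, dtype=float), 0
    while its < max_iter:
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, its
        x, its = x_new, its + 1
    return x, its
```

Starting from a guess close to the true solution cuts the iteration count, which is the motivation for examining different initial guesses in the abstract above.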

  5. A fast iterative scheme for the linearized Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.

    2017-06-01

Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme and then corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving in the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as by solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections, and through various porous media are calculated over the whole range of gas rarefaction. Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.
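A scalar caricature can convey the penalization idea (this is an illustrative analogy, not the scheme itself): for the fixed point x = c·x + s with c close to 1, rewriting the update as x ← ((c − N)x + s)/(1 − N) moves a penalty term N·x to the implicit side, and a well-chosen N accelerates convergence much as the tuning parameter N does above:

```python
# Hedged scalar caricature of the penalization trick. The contraction
# factor of the rewritten update is (c - N)/(1 - N), so choosing N close
# to c makes convergence fast even when c is close to 1 (the slow,
# near-continuum-like regime). All numbers here are illustrative.
def penalized_iteration(c, s, n_penalty=0.0, tol=1e-12, max_iter=100000):
    x, its = 0.0, 0
    while its < max_iter:
        x_new = ((c - n_penalty) * x + s) / (1.0 - n_penalty)
        if abs(x_new - x) < tol:
            return x_new, its
        x, its = x_new, its + 1
    return x, its
```

With n_penalty = 0 this is the plain fixed-point iteration; both variants reach the same limit s/(1 − c), but the penalized one does so in far fewer iterations.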

  6. Wake Vortex Inverse Model User's Guide

    NASA Technical Reports Server (NTRS)

    Lai, David; Delisi, Donald

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. 
An example of an inversion input file, with preferred parameters values, is given in Appendix A. An example of the plot generated at a normal completion of the inversion is shown in Appendix B.
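The stopping rule described above (improvement in the rms deviation below 1% for two consecutive iterations) can be sketched as:

```python
# Sketch of the stopping criterion described above: stop once the relative
# improvement in rms deviation stays below 1% for two consecutive iterations.
def should_stop(rms_history, rel_tol=0.01, consecutive=2):
    count = 0
    for prev, cur in zip(rms_history, rms_history[1:]):
        improvement = (prev - cur) / prev
        count = count + 1 if improvement < rel_tol else 0
        if count >= consecutive:
            return True
    return False
```

Requiring two consecutive small improvements guards against terminating on a single stalled iteration that might still be followed by real progress.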

  7. Radar cross-section reduction based on an iterative fast Fourier transform optimized metasurface

    NASA Astrophysics Data System (ADS)

    Song, Yi-Chuan; Ding, Jun; Guo, Chen-Jiang; Ren, Yu-Hui; Zhang, Jia-Kai

    2016-07-01

A novel polarization-insensitive metasurface with over 25 dB monostatic radar cross-section (RCS) reduction is introduced. The proposed metasurface comprises carefully arranged unit cells of spatially varying dimensions, which enables approximately uniform diffusion of incoming electromagnetic (EM) energy and reduces the threat from bistatic radar systems. An iterative fast Fourier transform (FFT) method from conventional antenna-array pattern synthesis is innovatively applied to find the best arrangement of unit-cell geometry parameters. Finally, a metasurface sample is fabricated and tested to validate the RCS reduction behavior predicted by the full-wave simulation software Ansys HFSS™, and excellent agreement is observed.
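In the spirit of the iterative FFT synthesis mentioned above, a Gerchberg-Saxton-style alternating projection between phase-only element excitations and a desired far-field magnitude can be sketched as follows (the 1D geometry, array size, and iteration count are illustrative assumptions):

```python
import numpy as np

# Hedged sketch of an iterative FFT pattern-synthesis loop: alternate
# between the element domain (enforce unit-amplitude, phase-only
# excitations) and the far-field domain (enforce the desired magnitude).
# This is an illustrative 1D analogue, not the paper's 2D metasurface code.
def iterative_fft_synthesis(desired_mag, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = desired_mag.size
    excitations = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
    for _ in range(n_iter):
        far_field = np.fft.fft(excitations)                    # element -> far field
        far_field = desired_mag * np.exp(1j * np.angle(far_field))  # impose target magnitude
        excitations = np.exp(1j * np.angle(np.fft.ifft(far_field))) # enforce phase-only elements
    return excitations
```

A flat desired magnitude corresponds to the diffuse, pattern-spreading behavior sought for RCS reduction; the returned excitations are unit-amplitude, so only the element phases (geometry parameters) carry the design.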

  8. A new approach for solving the three-dimensional steady Euler equations. I - General theory

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Adamczyk, J. J.

    1986-01-01

    The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.

  9. ITER L-mode confinement database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaye, S.M.

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated only (OH). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER.

  10. A new approach for solving the three-dimensional steady Euler equations. I - General theory

    NASA Astrophysics Data System (ADS)

    Chang, S.-C.; Adamczyk, J. J.

    1986-08-01

    The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.

  11. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). Computational efficiency and reliability of these procedures is examined. A modified trust region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested and favorable results based on this algorithm are presented.
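Broyden's Rank One method mentioned above can be sketched generically; this is the textbook quasi-Newton update, not the TIP implementation, and the initial Jacobian estimate here is simply the identity.

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=50):
    """Broyden's rank-one quasi-Newton method (generic sketch): the
    Jacobian estimate B gets a rank-one correction each iteration
    instead of being recomputed."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    B = np.eye(len(x))                 # initial Jacobian estimate
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)    # quasi-Newton step
        x_new = x + s
        fx_new = f(x_new)
        y = fx_new - fx
        # Rank-one update chosen so that B s = y holds exactly.
        B += np.outer(y - B @ s, s) / (s @ s)
        x, fx = x_new, fx_new
    return x
```

Compared with a full Newton-Raphson iteration, the per-step cost drops because no Jacobian evaluation is needed after the first step, which is the attraction for repeated engine-balance solves.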

  12. Non-axisymmetric ideal equilibrium and stability of ITER plasmas with rotating RMPs

    NASA Astrophysics Data System (ADS)

    Ham, C. J.; Cramp, R. G. J.; Gibson, S.; Lazerson, S. A.; Chapman, I. T.; Kirk, A.

    2016-08-01

    The magnetic perturbations produced by the resonant magnetic perturbation (RMP) coils will be rotated in ITER so that the spiral patterns due to strike point splitting, which are locked to the RMP, also rotate. This ensures even power deposition on the divertor plates. VMEC equilibria are calculated for different phases of the RMP rotation. It is demonstrated that the off harmonics rotate in the opposite direction to the main harmonic; controlling and optimizing this appropriately in ITER is an important topic for future research. High confinement mode (H-mode) is favourable for the economics of a potential fusion power plant and its use is planned in ITER. However, the high pressure gradient at the edge of the plasma can trigger periodic eruptions called edge localized modes (ELMs). ELMs have the potential to shorten the life of the divertor in ITER (Loarte et al 2003 Plasma Phys. Control. Fusion 45 1549), so methods for mitigating or suppressing ELMs in ITER will be important. Non-axisymmetric RMP coils will be installed in ITER for ELM control. Sampling theory is used to show that there will be a significant n_coils − n_rmp harmonic sideband. There are nine coils toroidally in ITER, so n_coils = 9. This results in a significant n = 6 component to the n_rmp = 3 applied field and a significant n = 5 component to the n_rmp = 4 applied field. Although the vacuum field has similar amplitudes of these harmonics, the plasma response to the various harmonics dictates the final equilibrium. Magnetic perturbations with toroidal mode number n = 3 and n = 4 are applied to a 15 MA, q95 ≈ 3 burning ITER plasma. We use a three-dimensional ideal magnetohydrodynamic model (VMEC) to calculate ITER equilibria with applied RMPs and to determine growth rates of infinite-n ballooning modes (COBRA). The n_rmp = 4 case shows little change in ballooning mode growth rate as the RMP is rotated; however, there is a change with rotation for the n_rmp = 3 case.
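The sampling-theory sideband argument reduces to a one-line aliasing rule: a field applied with a finite number of discrete coils per toroidal turn also drives the harmonic at the difference between the coil count and the applied mode number. A minimal sketch, with values matching the abstract's examples:

```python
def rmp_sideband(n_coils, n_rmp):
    """First aliased toroidal harmonic when an n_rmp perturbation is
    applied with n_coils discrete coils per toroidal turn (sampling-
    theory sketch)."""
    return abs(n_coils - n_rmp)
```

For ITER's nine toroidal coils, `rmp_sideband(9, 3)` gives the n = 6 sideband of the n_rmp = 3 field and `rmp_sideband(9, 4)` gives the n = 5 sideband of the n_rmp = 4 field.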

  13. Collimator-free photon tomography

    DOEpatents

    Dilmanian, F.A.; Barbour, R.L.

    1998-10-06

    A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison are iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image. 6 figs.
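The predict-compare-update loop claimed in the patent can be sketched generically. Here `forward` and `update` are hypothetical placeholders, not the patent's actual operators; the usage in the test pairs a linear forward model with a Landweber-style gradient correction.

```python
import numpy as np

def reconstruct(measured, forward, update, image0,
                threshold=1e-6, max_iter=500):
    """Generic predict-compare-update iteration (sketch of the scheme):
    refine an image estimate until the predicted emissivities match the
    measured ones to within a threshold. `forward` maps an image to
    predicted emissivities; `update` corrects the image from the
    residual."""
    image = image0.copy()
    for _ in range(max_iter):
        predicted = forward(image)           # predict emissivities
        residual = measured - predicted      # compare with measurements
        if np.max(np.abs(residual)) < threshold:
            break                            # differences below threshold
        image = update(image, residual)      # update the image prediction
    return image
```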

  14. The Chlamydomonas genome project: a decade on.

    PubMed

    Blaby, Ian K; Blaby-Haas, Crysten E; Tourasse, Nicolas; Hom, Erik F Y; Lopez, David; Aksoy, Munevver; Grossman, Arthur; Umen, James; Dutcher, Susan; Porter, Mary; King, Stephen; Witman, George B; Stanke, Mario; Harris, Elizabeth H; Goodstein, David; Grimwood, Jane; Schmutz, Jeremy; Vallon, Olivier; Merchant, Sabeeha S; Prochnik, Simon

    2014-10-01

    The green alga Chlamydomonas reinhardtii is a popular unicellular organism for studying photosynthesis, cilia biogenesis, and micronutrient homeostasis. In the ten years since its genome project was initiated, an iterative process of improvements to the genome and gene predictions has propelled this organism to the forefront of the omics era. Housed at Phytozome, the plant genomics portal of the Joint Genome Institute (JGI), the most up-to-date genomic data include a genome arranged on chromosomes and high-quality gene models with alternative splice forms supported by an abundance of whole transcriptome sequencing (RNA-Seq) data. We present here the past, present, and future of Chlamydomonas genomics. Specifically, we detail progress on genome assembly and gene model refinement, discuss resources for gene annotations, functional predictions, and locus ID mapping between versions, and, importantly, outline a standardized framework for naming genes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. In silico design of smart binders to anthrax PA

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Hurley, Margaret M.

    2012-06-01

    The development of smart peptide binders requires an understanding of the fundamental mechanisms of recognition, which has remained an elusive grail of the research community for decades. Recent advances in automated discovery and synthetic library science provide a wealth of information to probe fundamental details of binding and facilitate the development of improved models for a priori prediction of affinity and specificity. Here we present the modeling portion of an iterative experimental/computational study to produce high affinity peptide binders to the Protective Antigen (PA) of Bacillus anthracis. The result is a general-usage, HPC-oriented, Python-based toolkit built upon powerful third-party freeware, which is designed to provide a better understanding of peptide-protein interactions and ultimately predict and measure new smart peptide binder candidates. We present an improved simulation protocol with flexible peptide docking to the Anthrax Protective Antigen, reported within the context of experimental data presented in a companion work.

  16. Railway track geometry degradation due to differential settlement of ballast/subgrade - Numerical prediction by an iterative procedure

    NASA Astrophysics Data System (ADS)

    Nielsen, Jens C. O.; Li, Xin

    2018-01-01

    An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).
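The adaptive step-length idea above can be sketched in a few lines. The settlement law below is an illustrative stand-in, not the paper's calibrated empirical model, and the random per-sleeper rates stand in for the short-term dynamic vehicle-track simulation; the point is only the step-size rule, in which the number of load cycles per iteration is capped so the fastest-settling sleeper moves by at most a prescribed increment.

```python
import numpy as np

def settle(n_sleepers=10, total_cycles=1_000_000, max_increment=0.1):
    """Adaptive-step settlement iteration (sketch): each step covers as
    many load cycles as possible without any sleeper settling by more
    than `max_increment` (mm)."""
    rng = np.random.default_rng(1)
    settlement = np.zeros(n_sleepers)   # accumulated settlement [mm]
    cycles = 0.0
    while cycles < total_cycles:
        # Stand-in for the short-term simulation: per-sleeper settlement
        # rate [mm/cycle], here drawn at random each iteration.
        rate = 1e-6 * (1.0 + 0.5 * rng.random(n_sleepers))
        # Adaptive step length: cycles allowed before the maximum
        # settlement increment would be exceeded.
        step = min(max_increment / rate.max(), total_cycles - cycles)
        settlement += rate * step
        cycles += step
    return settlement
```

In the paper's procedure the updated track geometry would be fed back into the next short-term simulation, closing the long-term degradation loop.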

  17. Surface heat loads on the ITER divertor vertical targets

    NASA Astrophysics Data System (ADS)

    Gunn, J. P.; Carpentier-Chouchana, S.; Escourbiac, F.; Hirai, T.; Panayotis, S.; Pitts, R. A.; Corre, Y.; Dejarnac, R.; Firdaouss, M.; Kočan, M.; Komm, M.; Kukushkin, A.; Languille, P.; Missirlian, M.; Zhao, W.; Zhong, G.

    2017-04-01

    The heating of tungsten monoblocks at the ITER divertor vertical targets is calculated using the heat flux predicted by three-dimensional ion orbit modelling. The monoblocks are beveled to a depth of 0.5 mm in the toroidal direction to provide magnetic shadowing of the poloidal leading edges within the range of specified assembly tolerances, but this increases the magnetic field incidence angle resulting in a reduction of toroidal wetted fraction and concentration of the local heat flux to the unshadowed surfaces. This shaping solution successfully protects the leading edges from inter-ELM heat loads, but at the expense of (1) temperatures on the main loaded surface that could exceed the tungsten recrystallization temperature in the nominal partially detached regime, and (2) melting and loss of margin against critical heat flux during transient loss of detachment control. During ELMs, the risk of monoblock edge melting is found to be greater than the risk of full surface melting on the plasma-wetted zone. Full surface and edge melting will be triggered by uncontrolled ELMs in the burning plasma phase of ITER operation if current models of the likely ELM ion impact energies at the divertor targets are correct. During uncontrolled ELMs in pre-nuclear deuterium or helium plasmas at half the nominal plasma current and magnetic field, full surface melting should be avoided, but edge melting is predicted.

  18. An overview of ITER diagnostics (invited)

    NASA Astrophysics Data System (ADS)

    Young, Kenneth M.; Costley, A. E.; ITER-JCT Home Team; ITER Diagnostics Expert Group

    1997-01-01

    The requirements for plasma measurements for operating and controlling the ITER device have now been determined. Initial criteria for the measurement quality have been set, and the diagnostics that might be expected to achieve these criteria have been chosen. The design of the first set of diagnostics to achieve these goals is now well under way. The design effort is concentrating on the components that interact most strongly with the other ITER systems, particularly the vacuum vessel, blankets, divertor modules, cryostat, and shield wall. The relevant details of the ITER device and facility design and specific examples of diagnostic design to provide the necessary measurements are described. These designs have to take account of the issues associated with very high 14 MeV neutron fluxes and fluences, nuclear heating, high heat loads, and high mechanical forces that can arise during disruptions. The design work is supported by an extensive research and development program, which to date has concentrated on the effects these levels of radiation might cause on diagnostic components. A brief outline of the organization of the diagnostic development program is given.

  19. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  20. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.
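The Newton-plus-regularization ingredient can be sketched as a Tikhonov-damped Gauss-Newton iteration. This is a generic sketch of that combination only; the TNM-CKF algorithm's Kalman filtering step for handling measurement noise is omitted.

```python
import numpy as np

def regularized_newton(residual, jac, theta0, alpha=1e-3,
                       tol=1e-10, max_iter=100):
    """Tikhonov-regularized Gauss-Newton iteration (sketch): the damped
    normal equations stabilize each Newton step."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual(theta)
        J = jac(theta)
        # Damped normal equations: (J^T J + alpha I) d = -J^T r
        d = np.linalg.solve(J.T @ J + alpha * np.eye(len(theta)),
                            -J.T @ r)
        theta = theta + d
        if np.linalg.norm(d) < tol:
            break
    return theta
```

The regularization parameter `alpha` trades step aggressiveness for stability; with noisy data it would be chosen (or scheduled) alongside the filtering step.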

  1. Artificial neural network modeling using clinical and knowledge independent variables predicts salt intake reduction behavior

    PubMed Central

    Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein

    2015-01-01

    Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least squares model (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients’ behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was performed using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed and included eight knowledge items found to result in the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy overall in the full and reduced models is 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate the patient’s behavior. This will help implementation in future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
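The bootstrap accuracy comparison (200 resamples, as in the study) can be sketched as follows; the labels in the test below are synthetic stand-ins, not the study's patient data.

```python
import numpy as np

def bootstrap_accuracy(y_true, y_pred, n_boot=200, seed=0):
    """Bootstrap estimate of classification accuracy with a 95%
    percentile confidence interval (sketch of the resampling step)."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)       # resample with replacement
        accs.append(np.mean(y_true[idx] == y_pred[idx]))
    lo, hi = np.percentile(accs, [2.5, 97.5])
    return float(np.mean(accs)), (float(lo), float(hi))
```

Running the same resampling indices through both classifiers' predictions makes their accuracy difference directly comparable, which is the point of the ANN-versus-LSM comparison.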

  2. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
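A single perturbed-observation ensemble analysis step of the kind used here can be sketched as follows. This is a generic ES-MDA-style update under simplifying assumptions (diagonal observation error, no localization), not the MIKE SHE workflow; `inflation` is the ES-MDA observation-error multiplier.

```python
import numpy as np

def es_mda_step(params, preds, obs, obs_err, inflation=2.0, seed=0):
    """One perturbed-observation ensemble smoother analysis step
    (sketch of the Emerick & Reynolds scheme). `params` is n_par x
    n_ens, `preds` is n_obs x n_ens; each ensemble member is pulled
    toward its own perturbed copy of the observations."""
    params = np.asarray(params, float)
    preds = np.asarray(preds, float)
    obs = np.atleast_1d(np.asarray(obs, float))
    obs_err = np.atleast_1d(np.asarray(obs_err, float))
    n_obs, n_ens = preds.shape
    rng = np.random.default_rng(seed)
    # Ensemble anomalies (deviations from the ensemble mean).
    A = params - params.mean(axis=1, keepdims=True)
    D = preds - preds.mean(axis=1, keepdims=True)
    C_md = A @ D.T / (n_ens - 1)        # parameter-prediction covariance
    C_dd = D @ D.T / (n_ens - 1)        # prediction covariance
    R = inflation * np.diag(obs_err ** 2)
    K = C_md @ np.linalg.inv(C_dd + R)  # Kalman gain
    # Perturb observations with inflated noise, one draw per member.
    obs_pert = obs[:, None] + np.sqrt(inflation) * obs_err[:, None] \
        * rng.standard_normal((n_obs, n_ens))
    return params + K @ (obs_pert - preds)
```

In the multiple-data-assimilation variant this step is repeated several times with inflation factors whose inverses sum to one, re-running the forward model between steps.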

  3. Overview of the JET results in support to ITER

    DOE PAGES

    Litaudon, X.; Abduallev, S.; Abhangi, M.; ...

    2017-06-15

    Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials successfully took place since its installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA during 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and 14 MeV neutron calibration strategy are reviewed.

  4. Physics and technology in the ion-cyclotron range of frequency on Tore Supra and TITAN test facility: implication for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X; Bernard, J. M.; Colas, L.

    2013-01-01

    To support the design of an ITER ion-cyclotron range of frequency heating (ICRH) system and to mitigate risks of operation in ITER, CEA has initiated an ambitious Research & Development program accompanied by experiments on Tore Supra and test-bed facilities, together with a significant modelling effort. The paper summarizes the recent results in the following areas: comprehensive characterization (experiments and modelling) of a new Faraday screen concept tested on the Tore Supra antenna; a new model for calculating the ICRH sheath rectification in the antenna vicinity, applied to calculate the local heat flux on Tore Supra and ITER ICRH antennas; full-wave modelling of ITER ICRH heating and current drive scenarios with the EVE code, showing that with 20 MW of power a current of 400 kA could be driven on axis in the DT scenario, with a comparison between the DT and DT(3He) scenarios for heating and current drive efficiencies; first operation of the CW test-bed facility TITAN, designed for testing ITER ICRH components and able to host up to a quarter of an ITER antenna; and R&D on high-permittivity materials to improve the load of test facilities and better simulate ITER plasma antenna loading conditions.

  5. Overview of the JET results in support to ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litaudon, X.; Abduallev, S.; Abhangi, M.

    Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials successfully took place since its installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA during 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and 14 MeV neutron calibration strategy are reviewed.

  6. Theoretical Prediction of Pressure Distributions on Nonlifting Airfoils at High Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Spreiter, John R; Alksne, Alberta

    1955-01-01

    Theoretical pressure distributions on nonlifting circular-arc airfoils in two-dimensional flows with high subsonic free-stream velocity are found by determining approximate solutions, through an iteration process, of an integral equation for transonic flow proposed by Oswatitsch. The integral equation stems directly from the small-disturbance theory for transonic flow. This method of analysis possesses the advantage of remaining in the physical, rather than the hodograph, variable and can be applied to airfoils having curved surfaces. After discussion of the derivation of the integral equation and qualitative aspects of the solution, results of calculations carried out for circular-arc airfoils in flows with free-stream Mach numbers up to unity are described. These results indicate most of the principal phenomena observed in experimental studies.

  7. A network extension of species occupancy models in a patchy environment applied to the Yosemite toad (Anaxyrus canorus)

    USGS Publications Warehouse

    Berlow, Eric L.; Knapp, Roland A.; Ostoja, Steven M.; Williams, Richard J.; McKenny, Heather; Matchett, John R.; Guo, Qinghau; Fellers, Gary M.; Kleeman, Patrick; Brooks, Matthew L.; Joppa, Lucas

    2013-01-01

    A central challenge of conservation biology is using limited data to predict rare species occurrence and identify conservation areas that play a disproportionate role in regional persistence. Where species occupy discrete patches in a landscape, such predictions require data about the environmental quality of individual patches and the connectivity among high quality patches. We present a novel extension to species occupancy modeling that blends traditional predictions of individual patch environmental quality with network analysis to estimate connectivity characteristics using limited survey data. We demonstrate this approach using environmental and geospatial attributes to predict observed occupancy patterns of the Yosemite toad (Anaxyrus (= Bufo) canorus) across >2,500 meadows in Yosemite National Park (USA). A. canorus, a Federal Proposed Species, breeds in shallow water associated with meadows. Our generalized linear model (GLM) accurately predicted ~84% of true presence-absence data on a subset of data withheld for testing. The predicted environmental quality of each meadow was iteratively ‘boosted’ by the quality of neighbors within dispersal distance. We used this park-wide meadow connectivity network to estimate the relative influence of an individual meadow’s ‘environmental quality’ versus its ‘network quality’ to predict: a) clusters of high quality breeding meadows potentially linked by dispersal, b) breeding meadows with high environmental quality that are isolated from other such meadows, c) breeding meadows with lower environmental quality where long-term persistence may critically depend on the network neighborhood, and d) breeding meadows with the biggest impact on park-wide breeding patterns. Combined with targeted data on dispersal, genetics, disease, and other potential stressors, these results can guide designation of core conservation areas for A. canorus in Yosemite National Park.

  8. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
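The FWHM measurement itself is straightforward once a 1-D profile through the reconstructed point source is extracted. A minimal sketch using linear interpolation at the half-maximum crossings (assumes the peak and both crossings lie in the interior of the profile):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """FWHM of a 1-D point-source profile: locate the left and right
    half-maximum crossings by linear interpolation between samples."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the crossings just outside the above-half region.
    left = i0 - (profile[i0] - half) / (profile[i0] - profile[i0 - 1])
    right = i1 + (profile[i1] - half) / (profile[i1] - profile[i1 + 1])
    return (right - left) * spacing
```

For a Gaussian profile this recovers the familiar FWHM = 2.3548 σ relation, which is a convenient sanity check before applying it to reconstructed images.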

  9. Development of a Mobile Clinical Prediction Tool to Estimate Future Depression Severity and Guide Treatment in Primary Care: User-Centered Design.

    PubMed

    Wachtler, Caroline; Coe, Amy; Davidson, Sandra; Fletcher, Susan; Mendoza, Antonette; Sterling, Leon; Gunn, Jane

    2018-04-23

    Around the world, depression is both under- and overtreated. The diamond clinical prediction tool was developed to assist with appropriate treatment allocation by estimating the 3-month prognosis among people with current depressive symptoms. Delivering clinical prediction tools in a way that will enhance their uptake in routine clinical practice remains challenging; however, mobile apps show promise in this respect. To increase the likelihood that an app-delivered clinical prediction tool can be successfully incorporated into clinical practice, it is important to involve end users in the app design process. The aim of the study was to maximize patient engagement in an app designed to improve treatment allocation for depression. An iterative, user-centered design process was employed. Qualitative data were collected via 2 focus groups with a community sample (n=17) and 7 semistructured interviews with people with depressive symptoms. The results of the focus groups and interviews were used by the computer engineering team to modify subsequent protoypes of the app. Iterative development resulted in 3 prototypes and a final app. The areas requiring the most substantial changes following end-user input were related to the iconography used and the way that feedback was provided. In particular, communicating risk of future depressive symptoms proved difficult; these messages were consistently misinterpreted and negatively viewed and were ultimately removed. All participants felt positively about seeing their results summarized after completion of the clinical prediction tool, but there was a need for a personalized treatment recommendation made in conjunction with a consultation with a health professional. User-centered design led to valuable improvements in the content and design of an app designed to improve allocation of and engagement in depression treatment. 
Iterative design allowed us to develop a tool that helps users feel hope, engage in self-reflection, and feel motivated to seek treatment. The tool is currently being evaluated in a randomized controlled trial. ©Caroline Wachtler, Amy Coe, Sandra Davidson, Susan Fletcher, Antonette Mendoza, Leon Sterling, Jane Gunn. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 23.04.2018.

  10. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.

    1990-01-01

The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
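The abstract pairs a diagonal preconditioner with conjugate gradients; as a minimal sketch of that combination (the test matrix, tolerance, and sizes below are illustrative, not from the paper):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner,
    echoing the first of the two preconditioners described above."""
    M_inv = 1.0 / np.diag(A)          # diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x, k + 1

# a small SPD system standing in for a stiffness matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)         # diagonal shift keeps it well conditioned
b = rng.standard_normal(50)
x, iters = pcg(A, b)
```

With a well-conditioned system, the preconditioned iteration converges in far fewer iterations than the matrix dimension; a sparse incomplete-factorization preconditioner (the second variant above) would further reduce the count at higher cost per iteration.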

  11. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.

    1992-01-01

The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.

  12. Overview of the JET results

    NASA Astrophysics Data System (ADS)

    Romanelli, F.; JET Contributors,

    2015-10-01

    Since the installation of an ITER-like wall, the JET programme has focused on the consolidation of ITER design choices and the preparation for ITER operation, with a specific emphasis given to the bulk tungsten melt experiment, which has been crucial for the final decision on the material choice for the day-one tungsten divertor in ITER. Integrated scenarios have been progressed with the re-establishment of long-pulse, high-confinement H-modes by optimizing the magnetic configuration and the use of ICRH to avoid tungsten impurity accumulation. Stationary discharges with detached divertor conditions and small edge localized modes have been demonstrated by nitrogen seeding. The differences in confinement and pedestal behaviour before and after the ITER-like wall installation have been better characterized towards the development of high fusion yield scenarios in DT. Post-mortem analyses of the plasma-facing components have confirmed the previously reported low fuel retention obtained by gas balance and shown that the pattern of deposition within the divertor has changed significantly with respect to the JET carbon wall campaigns due to the absence of thermally activated chemical erosion of beryllium in contrast to carbon. Transport to remote areas is almost absent and two orders of magnitude less material is found in the divertor.

  13. Modeling Complex Dynamic Interactions of Nonlinear, Aeroelastic, Multistage, and Localization Phenomena in Turbine Engines

    DTIC Science & Technology

    2011-02-25

fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history...interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the...problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not

  14. Iterative projection algorithms for ab initio phasing in virus crystallography.

    PubMed

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2016-12-01

Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allow high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.
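The iterative-projection idea can be illustrated in one dimension with Fienup-style error reduction, alternating a Fourier-magnitude projection with a real-space support projection. This is a toy stand-in only: the crystallographic setting works with structure factors and non-crystallographic symmetry averaging, and the signal and support below are invented for illustration.

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=500, seed=0):
    """Alternate projections onto the Fourier-magnitude constraint set
    and a known real-space support, starting from random phases."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(magnitudes))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = magnitudes * np.exp(1j * np.angle(X))  # impose measured magnitudes
        x = np.fft.ifft(X).real
        x[~support] = 0.0                          # impose the support envelope
    return x

# toy object: nonzero only on a known support (cf. the spherical shell envelope)
x_true = np.zeros(64)
x_true[10:20] = np.hanning(10)
support = x_true != 0
mags = np.abs(np.fft.fft(x_true))
recon = error_reduction(mags, support)
```

The algorithm only ever sees the magnitudes and the envelope, which mirrors the paper's point that no initial phase information is required.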

  15. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Clayton, Dwight A; Kisner, Roger A

Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
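The core MBIR idea, reconstructing by minimising a model-based cost (data fidelity under a forward model plus a regularising prior), can be sketched on a toy 1-D deconvolution. The blur kernel, penalty, and step size below are illustrative assumptions, not the paper's acoustic model:

```python
import numpy as np

# toy forward model: a symmetric blur standing in for acoustic propagation
kernel = np.array([0.25, 0.5, 0.25])
def H(v):
    return np.convolve(v, kernel, mode="same")

rng = np.random.default_rng(0)
n = 120
x_true = np.zeros(n); x_true[40:45] = 1.0; x_true[80] = 2.0
y = H(x_true) + 0.01 * rng.standard_normal(n)   # synthetic measurements

# MBIR-style cost: ||y - Hx||^2 + lam*||x||^2, minimised by gradient descent
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    grad = -2 * H(y - H(x)) + 2 * lam * x   # symmetric kernel => H is self-adjoint
    x -= 0.2 * grad
```

Real MBIR replaces the quadratic prior with an edge-preserving one and uses a far more detailed forward model, but the structure of the optimisation is the same.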

  16. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy

    NASA Astrophysics Data System (ADS)

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-01

    A novel strategy which combines iteratively cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, using the combination of spectra preprocessing iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, the DPLS classification can discriminate the class assignment of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with ICSF baseline correction method and exploratory analysis methodology DPLS classification can be potentially used for distinguishing the banned food additives in field of food safety.

  17. Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.

    PubMed

    Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh

    2017-07-03

    Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best datatypes to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for the broader application to other cellular processes of interest. In the particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy and predict other unexplored connections.

  18. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  19. Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Ho; Kim, Minsung

    2017-12-01

This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence it is well suited for photovoltaic power applications. However, it exhibits non-minimum-phase behaviour, because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from the time-varying grid voltage disturbance. Thus, a conventional control scheme results in inaccurate output tracking. To overcome these problems, the ILC is first developed and applied to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take into account the nonlinear averaged model and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noises, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype are carried out to demonstrate its practical feasibility.
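The trial-to-trial learning update at the heart of an ILC can be shown on a toy one-step-delay plant. The plant, gains, and reference below are illustrative assumptions, not the paper's averaged inverter model:

```python
import numpy as np

# toy stable first-order plant standing in for the averaged converter model
a, b = 0.3, 1.0
T = 100
ref = np.sin(2 * np.pi * np.arange(T) / T)   # periodic reference, ref[0] = 0

def run_plant(u):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

# P-type learning law: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1)
gamma = 0.5   # learning gain chosen so the trial-domain iteration contracts
u = np.zeros(T)
for k in range(60):
    e = ref - run_plant(u)
    u[:-1] += gamma * e[1:]   # learn from the previous trial's error

final_err = np.max(np.abs(ref - run_plant(u)))
```

Each repetition of the trajectory refines the stored input, so the tracking error shrinks geometrically across trials even though no model inverse is ever computed; the paper's controller adds a predictive term and handles the RHP-zero dynamics on top of this basic mechanism.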

  20. Drift effects on the tokamak power scrape-off width

    NASA Astrophysics Data System (ADS)

    Meier, E. T.; Goldston, R. J.; Kaveeva, E. G.; Mordijck, S.; Rozhansky, V. A.; Senichenkov, I. Yu.; Voskoboynikov, S. P.

    2015-11-01

    Recent experimental analysis suggests that the scrape-off layer (SOL) heat flux width (λq) for ITER will be near 1 mm, sharply narrowing the planned operating window. In this work, motivated by the heuristic drift (HD) model, which predicts the observed inverse plasma current scaling, SOLPS-ITER is used to explore drift effects on λq. Modeling focuses on an H-mode DIII-D discharge. In initial results, target recycling is set to 90%, resulting in sheath-limited SOL conditions. SOL particle diffusivity (DSOL) is varied from 0.1 to 1 m2/s. When drifts are included, λq is insensitive to DSOL, consistent with the HD model, with λq near 3 mm; in no-drift cases, λq varies from 2 to 5 mm. Drift effects depress near-separatrix potential, generating a channel of strong electron heat convection that is insensitive to DSOL. Sensitivities to thermal diffusivities, plasma current, toroidal magnetic field, and device size are also assessed. These initial results will be discussed in detail, and progress toward modeling experimentally relevant high-recycling conditions will be reported. Supported by U.S. DOE Contract DE-SC0010434.

  1. Tracking the dynamics of divergent thinking via semantic distance: Analytic methods and theoretical implications.

    PubMed

    Hass, Richard W

    2017-02-01

    Divergent thinking has often been used as a proxy measure of creative thinking, but this practice lacks a foundation in modern cognitive psychological theory. This article addresses several issues with the classic divergent-thinking methodology and presents a new theoretical and methodological framework for cognitive divergent-thinking studies. A secondary analysis of a large dataset of divergent-thinking responses is presented. Latent semantic analysis was used to examine the potential changes in semantic distance between responses and the concept represented by the divergent-thinking prompt across successive response iterations. The results of linear growth modeling showed that although there is some linear increase in semantic distance across response iterations, participants high in fluid intelligence tended to give more distant initial responses than those with lower fluid intelligence. Additional analyses showed that the semantic distance of responses significantly predicted the average creativity rating given to the response, with significant variation in average levels of creativity across participants. Finally, semantic distance does not seem to be related to participants' choices of their own most creative responses. Implications for cognitive theories of creativity are discussed, along with the limitations of the methodology and directions for future research.
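Semantic distance between a response and the prompt is typically operationalised as one minus the cosine similarity of their vector representations. A minimal bag-of-words sketch (LSA additionally applies an SVD-based dimension reduction, omitted here to stay self-contained; the prompt and responses are invented examples):

```python
import numpy as np
from collections import Counter

def cosine_distance(text_a, text_b):
    """1 - cosine similarity of word-count vectors over the joint vocabulary."""
    wa, wb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    vocab = sorted(set(wa) | set(wb))
    va = np.array([wa[w] for w in vocab], float)
    vb = np.array([wb[w] for w in vocab], float)
    return 1.0 - (va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))

prompt = "uses for a brick"
near = "build a brick wall"                # conventional response
far = "grind it into pigment for paint"    # semantically distant response
d_near = cosine_distance(prompt, near)
d_far = cosine_distance(prompt, far)
```

Under this measure the distant response scores higher, which is the property the growth models in the study track across successive response iterations.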

  2. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and its resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and the plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE-modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels taking into account the damping effects and the different mode frequencies have been calculated with the VENUS code for both ballooning and antiballooning TAE-modes.

  3. Overview of ASDEX Upgrade results

    NASA Astrophysics Data System (ADS)

    Zohm, H.; Adamek, J.; Angioni, C.; Antar, G.; Atanasiu, C. V.; Balden, M.; Becker, W.; Behler, K.; Behringer, K.; Bergmann, A.; Bertoncelli, T.; Bilato, R.; Bobkov, V.; Boom, J.; Bottino, A.; Brambilla, M.; Braun, F.; Brüdgam, M.; Buhler, A.; Chankin, A.; Classen, I.; Conway, G. D.; Coster, D. P.; de Marné, P.; D'Inca, R.; Drube, R.; Dux, R.; Eich, T.; Engelhardt, K.; Esposito, B.; Fahrbach, H.-U.; Fattorini, L.; Fink, J.; Fischer, R.; Flaws, A.; Foley, M.; Forest, C.; Fuchs, J. C.; Gál, K.; García Muñoz, M.; Gemisic Adamov, M.; Giannone, L.; Görler, T.; Gori, S.; da Graça, S.; Granucci, G.; Greuner, H.; Gruber, O.; Gude, A.; Günter, S.; Haas, G.; Hahn, D.; Harhausen, J.; Hauff, T.; Heinemann, B.; Herrmann, A.; Hicks, N.; Hobirk, J.; Hölzl, M.; Holtum, D.; Hopf, C.; Horton, L.; Huart, M.; Igochine, V.; Janzer, M.; Jenko, F.; Kallenbach, A.; Kálvin, S.; Kardaun, O.; Kaufmann, M.; Kick, M.; Kirk, A.; Klingshirn, H.-J.; Koscis, G.; Kollotzek, H.; Konz, C.; Krieger, K.; Kurki-Suonio, T.; Kurzan, B.; Lackner, K.; Lang, P. T.; Langer, B.; Lauber, P.; Laux, M.; Leuterer, F.; Likonen, J.; Liu, L.; Lohs, A.; Lunt, T.; Lyssoivan, A.; Maggi, C. 
F.; Manini, A.; Mank, K.; Manso, M.-E.; Mantsinen, M.; Maraschek, M.; Martin, P.; Mayer, M.; McCarthy, P.; McCormick, K.; Meister, H.; Meo, F.; Merkel, P.; Merkel, R.; Mertens, V.; Merz, F.; Meyer, H.; Mlynek, A.; Monaco, F.; Müller, H.-W.; Münich, M.; Murmann, H.; Neu, G.; Neu, R.; Neuhauser, J.; Nold, B.; Noterdaeme, J.-M.; Pautasso, G.; Pereverzev, G.; Poli, E.; Potzel, S.; Püschel, M.; Pütterich, T.; Pugno, R.; Raupp, G.; Reich, M.; Reiter, B.; Ribeiro, T.; Riedl, R.; Rohde, V.; Roth, J.; Rott, M.; Ryter, F.; Sandmann, W.; Santos, J.; Sassenberg, K.; Sauter, P.; Scarabosio, A.; Schall, G.; Schilling, H.-B.; Schirmer, J.; Schmid, A.; Schmid, K.; Schneider, W.; Schramm, G.; Schrittwieser, R.; Schustereder, W.; Schweinzer, J.; Schweizer, S.; Scott, B.; Seidel, U.; Sempf, M.; Serra, F.; Sertoli, M.; Siccinio, M.; Sigalov, A.; Silva, A.; Sips, A. C. C.; Speth, E.; Stäbler, A.; Stadler, R.; Steuer, K.-H.; Stober, J.; Streibl, B.; Strumberger, E.; Suttrop, W.; Tardini, G.; Tichmann, C.; Treutterer, W.; Tröster, C.; Urso, L.; Vainonen-Ahlgren, E.; Varela, P.; Vermare, L.; Volpe, F.; Wagner, D.; Wigger, C.; Wischmeier, M.; Wolfrum, E.; Würsching, E.; Yadikin, D.; Yu, Q.; Zasche, D.; Zehetbauer, T.; Zilker, M.

    2009-10-01

ASDEX Upgrade was operated with a fully W-covered wall in 2007 and 2008. Stationary H-modes at the ITER target values and improved H-modes with H up to 1.2 were run without any boronization. The boundary conditions set by the full W wall (high enough ELM frequency, high enough central heating and low enough power density arriving at the target plates) require significant scenario development, but will apply to ITER as well. D retention has been reduced and stationary operation with saturated wall conditions has been found. Concerning confinement, impurity ion transport across the pedestal is neoclassical, explaining the strong inward pinch of high-Z impurities in between ELMs. In improved H-mode, the width of the temperature pedestal increases with heating power, consistent with a \beta_{pol,ped}^{1/2} scaling. In the area of MHD instabilities, disruption mitigation experiments using massive Ne injection reach volume averaged values of the total electron density close to those required for runaway suppression in ITER. ECRH at the q = 2 surface was successfully applied to delay density limit disruptions. The characterization of fast particle losses due to MHD has shown the importance of different loss mechanisms for NTMs, TAEs and also beta-induced Alfven eigenmodes (BAEs). Specific studies addressing the first ITER operational phase show that O1 ECRH at the HFS assists reliable low-voltage breakdown. During ramp-up, additional heating can be used to vary li to fit within the ITER range. Confinement and power threshold in He are more favourable than in H, suggesting that He operation could allow us to assess H-mode operation in the non-nuclear phase of ITER operation.

  4. Evaluating the effect of increased pitch, iterative reconstruction and dual source CT on dose reduction and image quality.

    PubMed

    Gariani, Joanna; Martin, Steve P; Botsikas, Diomidis; Becker, Christoph D; Montet, Xavier

    2018-06-14

To compare radiation dose and image quality of thoracoabdominal scans obtained with a high-pitch protocol (pitch 3.2) and iterative reconstruction (Sinogram Affirmed Iterative Reconstruction) in comparison to standard pitch reconstructed with filtered back projection (FBP) using dual source CT. 114 CT scans (Somatom Definition Flash, Siemens Healthineers, Erlangen, Germany), 39 thoracic scans, 54 thoracoabdominal scans and 21 abdominal scans were performed. Analysis of three protocols was undertaken: a pitch of 1 reconstructed with FBP, a pitch of 3.2 reconstructed with SAFIRE, and a pitch of 3.2 with stellar detectors reconstructed with SAFIRE. Objective and subjective image analysis were performed. Dose differences of the protocols used were compared. Dose was reduced when comparing scans with a pitch of 1 reconstructed with FBP to high-pitch scans with a pitch of 3.2 reconstructed with SAFIRE, with a reduction of volume CT dose index of 75% for thoracic scans, 64% for thoracoabdominal scans and 67% for abdominal scans. There was a further reduction after the implementation of stellar detectors, reflected in a reduction of 36% of the dose-length product for thoracic scans. This was not to the detriment of image quality: contrast-to-noise ratio and signal-to-noise ratio were maintained, and the qualitative image analysis revealed superior image quality in the high-pitch protocols. The combination of a high pitch protocol with iterative reconstruction allows significant dose reduction in routine chest and abdominal scans whilst maintaining or improving diagnostic image quality, with a further reduction in thoracic scans with stellar detectors. Advances in knowledge: High pitch imaging with iterative reconstruction is a tool that can be used to reduce dose without sacrificing image quality.

  5. High school dropouts: interactions between social context, self-perceptions, school engagement, and student dropout.

    PubMed

    Fall, Anna-Mária; Roberts, Greg

    2012-08-01

    Research suggests that contextual, self-system, and school engagement variables influence dropping out from school. However, it is not clear how different types of contextual and self-system variables interact to affect students' engagement or contribute to decisions to dropout from high school. The self-system model of motivational development represents a promising theory for understanding this complex phenomenon. The self-system model acknowledges the interactive and iterative roles of social context, self-perceptions, school engagement, and academic achievement as antecedents to the decision to dropout of school. We analyzed data from the Education Longitudinal Study of 2002-2004 in the context of the self-system model, finding that perception of social context (teacher support and parent support) predicts students' self-perceptions (perception of control and identification with school), which in turn predict students' academic and behavioral engagement, and academic achievement. Further, students' academic and behavioral engagement and achievement in 10th grade were associated with decreased likelihood of dropping out of school in 12th grade. Published by Elsevier Ltd.

  6. Use of artificial intelligence in the design of small peptide antibiotics effective against a broad spectrum of highly antibiotic-resistant superbugs.

    PubMed

    Cherkasov, Artem; Hilpert, Kai; Jenssen, Håvard; Fjell, Christopher D; Waldbrook, Matt; Mullaly, Sarah C; Volkmer, Rudolf; Hancock, Robert E W

    2009-01-16

    Increased multiple antibiotic resistance in the face of declining antibiotic discovery is one of society's most pressing health issues. Antimicrobial peptides represent a promising new class of antibiotics. Here we ask whether it is possible to make small broad spectrum peptides employing minimal assumptions, by capitalizing on accumulating chemical biology information. Using peptide array technology, two large random 9-amino-acid peptide libraries were iteratively created using the amino acid composition of the most active peptides. The resultant data was used together with Artificial Neural Networks, a powerful machine learning technique, to create quantitative in silico models of antibiotic activity. On the basis of random testing, these models proved remarkably effective in predicting the activity of 100,000 virtual peptides. The best peptides, representing the top quartile of predicted activities, were effective against a broad array of multidrug-resistant "Superbugs" with activities that were equal to or better than four highly used conventional antibiotics, more effective than the most advanced clinical candidate antimicrobial peptide, and protective against Staphylococcus aureus infections in animal models.
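The pipeline above maps peptide sequences to composition features and trains a model to predict activity. A heavily simplified sketch of that idea: the labels follow an invented "cationic plus hydrophobic" rule standing in for measured activities, and a one-layer logistic model stands in for the artificial neural network; nothing here reproduces the study's data or architecture.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """9-mer -> 20-dim amino-acid composition feature vector."""
    return np.array([seq.count(a) for a in AMINO], float) / len(seq)

# synthetic library: label a peptide "active" when rich in cationic (K/R)
# and hydrophobic (W/F/L) residues -- an illustrative rule only
rng = np.random.default_rng(1)
seqs = ["".join(rng.choice(list(AMINO), 9)) for _ in range(2000)]
def label(s):
    return int(sum(s.count(a) for a in "KR") >= 2
               and sum(s.count(a) for a in "WFL") >= 2)
X = np.array([composition(s) for s in seqs])
y = np.array([label(s) for s in seqs], float)

# minimal stand-in for the ANN: logistic model fit by gradient descent
w, b = np.zeros(20), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Once trained, such a model can score arbitrarily many virtual sequences cheaply, which is what makes screening 100,000 in silico peptides feasible.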

  7. High school dropouts: Interactions between social context, self-perceptions, school engagement, and student dropout☆

    PubMed Central

    Fall, Anna-Mária; Roberts, Greg

    2012-01-01

    Research suggests that contextual, self-system, and school engagement variables influence dropping out from school. However, it is not clear how different types of contextual and self-system variables interact to affect students’ engagement or contribute to decisions to dropout from high school. The self-system model of motivational development represents a promising theory for understanding this complex phenomenon. The self-system model acknowledges the interactive and iterative roles of social context, self-perceptions, school engagement, and academic achievement as antecedents to the decision to dropout of school. We analyzed data from the Education Longitudinal Study of 2002–2004 in the context of the self-system model, finding that perception of social context (teacher support and parent support) predicts students’ self-perceptions (perception of control and identification with school), which in turn predict students’ academic and behavioral engagement, and academic achievement. Further, students’ academic and behavioral engagement and achievement in 10th grade were associated with decreased likelihood of dropping out of school in 12th grade. PMID:22153483

  8. Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-05-01

    We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels ensuring that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other, our formulation allows for a more direct use to solve a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.
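For orientation, the ordinary multiple polylogarithms that these elliptic iterated integrals generalise are defined recursively through integration kernels with simple poles:

```latex
G(a_1,\ldots,a_n;z) \;=\; \int_0^z \frac{\mathrm{d}t}{t - a_1}\, G(a_2,\ldots,a_n;t),
\qquad G(;z) \equiv 1 .
```

The elliptic construction replaces the kernels 1/(t - a_i) by linearly independent kernels on the elliptic curve while preserving the at-most-simple-pole property, which is what keeps the singularities of the resulting iterated integrals at most logarithmic.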

  9. PredicT-ML: a tool for automating machine learning model building with big clinical data.

    PubMed

    Luo, Gang

    2016-01-01

    Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
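The labor-intensive manual iterations described above are what automated hyper-parameter search replaces. A minimal sketch of the idea on a synthetic task: random search over a ridge penalty scored by cross-validation (the data, search range, and model are illustrative, not part of PredicT-ML):

```python
import numpy as np

# synthetic regression task standing in for a clinical prediction problem
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = rng.standard_normal(10)
y = X @ beta + 0.5 * rng.standard_normal(200)

def cv_error(lam, k=5):
    """k-fold cross-validated MSE of ridge regression with penalty lam."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        A = X[tr].T @ X[tr] + lam * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[tr].T @ y[tr])
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

# random search over the hyper-parameter, replacing manual trial and error
candidates = 10.0 ** rng.uniform(-4, 3, size=30)
best_lam = min(candidates, key=cv_error)
```

Systems like the one described scale this loop across many algorithms and hyper-parameters at once, so a non-expert never has to pick the values by hand.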

  10. Cross-reactions vs co-sensitization evaluated by in silico motifs and in vitro IgE microarray testing.

    PubMed

    Pfiffner, P; Stadler, B M; Rasi, C; Scala, E; Mari, A

    2012-02-01

    Using an in silico allergen clustering method, we have recently shown that allergen extracts are highly cross-reactive. Here we used serological data from a multi-array IgE test based on recombinant or highly purified natural allergens to evaluate whether co-reactions are true cross-reactions or co-sensitizations by allergens with the same motifs. The serum database consisted of 3142 samples, each tested against 103 highly purified natural or recombinant allergens. Cross-reactivity was predicted by an iterative motif-finding algorithm through sequence motifs identified in 2708 known allergens. Allergen proteins containing the same motifs cross-reacted as predicted. However, proteins with identical motifs revealed a hierarchy in the degree of cross-reaction: The more frequent an allergen was positive in the allergic population, the less frequently it was cross-reacting and vice versa. Co-sensitization was analyzed by splitting the dataset into patient groups that were most likely sensitized through geographical occurrence of allergens. Interestingly, most co-reactions are cross-reactions but not co-sensitizations. The observed hierarchy of cross-reactivity may play an important role for the future management of allergic diseases. © 2011 John Wiley & Sons A/S.

  11. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
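The non-homogeneous-update idea, spending updates where they help most, can be shown in miniature with greedy coordinate descent on a least-squares objective. The problem and sizes are illustrative, and a real NH-ICD implementation selects voxels from cheap local statistics rather than recomputing a full gradient each step:

```python
import numpy as np

def greedy_coordinate_descent(A, b, n_updates=2000):
    """Coordinate descent on f(x) = ||Ax - b||^2 that always updates the
    coordinate with the largest-magnitude gradient, focusing computation
    where it is most needed."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                        # residual b - A x
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_updates):
        grad = -A.T @ r                 # gradient of 0.5*||Ax - b||^2
        j = int(np.argmax(np.abs(grad)))
        step = (A[:, j] @ r) / col_sq[j]   # exact 1-D minimiser along x_j
        x[j] += step
        r -= step * A[:, j]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
b = rng.standard_normal(80)
x = greedy_coordinate_descent(A, b)
```

Because each step solves its 1-D subproblem exactly, the objective decreases monotonically, and concentrating steps on the worst coordinates is what buys the convergence speedup over a fixed homogeneous sweep order.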

  12. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has also been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
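
    The need for acceleration can be seen in the infinite-medium limit, where each unaccelerated source-iteration sweep contracts the error only by the scattering ratio c, so iteration counts blow up as c approaches 1 — exactly the large-scattering-ratio regime mentioned above. A toy sketch with hypothetical numbers:

```python
def source_iteration(c, q=1.0, tol=1e-8, max_it=100000):
    """Infinite-medium toy: phi_{k+1} = c*phi_k + q, fixed point q/(1-c).
    Each iteration stands in for one transport sweep; the error shrinks
    by exactly a factor c per sweep."""
    phi, exact = 0.0, q / (1.0 - c)
    for k in range(1, max_it + 1):
        phi = c * phi + q
        if abs(phi - exact) <= tol * exact:
            return k
    return max_it

for c in (0.5, 0.9, 0.99, 0.999):
    print(c, source_iteration(c))   # sweeps needed grow like ln(tol)/ln(c)
```

Synthetic acceleration and Krylov methods such as GMRES exist precisely to break this c-to-1 degradation.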

  13. Using habitat suitability models to target invasive plant species surveys

    USGS Publications Warehouse

    Crall, Alycia W.; Jarnevich, Catherine S.; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m2 resolution for Wisconsin and 1-km2 resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling detected more species than nontargeted sampling with less sampling effort (chi2 = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be highly useful tools for guiding invasive species monitoring, and we support the use of an iterative sampling design for guiding such efforts.

  14. Using habitat suitability models to target invasive plant species surveys.

    PubMed

    Crall, Alycia W; Jarnevich, Catherine S; Panke, Brendon; Young, Nick; Renz, Mark; Morisette, Jeffrey

    2013-01-01

    Managers need new tools for detecting the movement and spread of nonnative, invasive species. Habitat suitability models are a popular tool for mapping the potential distribution of current invaders, but the ability of these models to prioritize monitoring efforts has not been tested in the field. We tested the utility of an iterative sampling design (i.e., models based on field observations used to guide subsequent field data collection to improve the model), hypothesizing that model performance would increase when new data were gathered from targeted sampling using criteria based on the initial model results. We also tested the ability of habitat suitability models to predict the spread of invasive species, hypothesizing that models would accurately predict occurrences in the field, and that the use of targeted sampling would detect more species with less sampling effort than a nontargeted approach. We tested these hypotheses on two species at the state scale (Centaurea stoebe and Pastinaca sativa) in Wisconsin (USA), and one genus at the regional scale (Tamarix) in the western United States. These initial data were merged with environmental data at 30-m2 resolution for Wisconsin and 1-km2 resolution for the western United States to produce our first iteration models. We stratified these initial models to target field sampling and compared our models and success at detecting our species of interest to other surveys being conducted during the same field season (i.e., nontargeted sampling). Although more data did not always improve our models based on correct classification rate (CCR), sensitivity, specificity, kappa, or area under the curve (AUC), our models generated from targeted sampling data always performed better than models generated from nontargeted data. 
For Wisconsin species, the model described actual locations in the field fairly well (kappa = 0.51, 0.19, P < 0.01), and targeted sampling did detect more species than nontargeted sampling with less sampling effort (chi2 = 47.42, P < 0.01). From these findings, we conclude that habitat suitability models can be highly useful tools for guiding invasive species monitoring, and we support the use of an iterative sampling design for guiding such efforts.
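
    The iterative targeted-sampling design can be caricatured in a few lines: fit a crude suitability model to a first random survey, then spend the second round of effort in cells the model ranks highest. Everything below is a toy stand-in (one synthetic environmental layer, a binned-occupancy "model" instead of a real habitat suitability model, arbitrary sizes), not the authors' workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10000
env = rng.uniform(0, 1, n_cells)                  # one environmental layer
p_occ = 1 / (1 + np.exp(-(8 * env - 6)))          # true suitability (logistic)
occupied = rng.random(n_cells) < p_occ

# Round 1: small random survey used to fit a crude suitability "model"
idx1 = rng.choice(n_cells, 200, replace=False)
bins = np.clip((env * 10).astype(int), 0, 9)      # coarse env bins
suit = np.zeros(10)
for b in range(10):
    in_bin = bins[idx1] == b
    suit[b] = occupied[idx1][in_bin].mean() if in_bin.any() else 0.0
predicted = suit[bins]                            # per-cell predicted suitability

# Round 2: equal effort, targeted vs. nontargeted sampling
effort = 300
unvisited = np.setdiff1d(np.arange(n_cells), idx1)
targeted = unvisited[np.argsort(predicted[unvisited])[-effort:]]
randomly = rng.choice(unvisited, effort, replace=False)
print(occupied[targeted].sum(), occupied[randomly].sum())  # detections per design
```

With the same second-round effort, the targeted design concentrates surveys where predicted suitability is high and so detects far more occupied cells, which is the effect the chi-squared comparison above quantifies in the field.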

  15. Recombination of open-f-shell tungsten ions

    NASA Astrophysics Data System (ADS)

    Krantz, C.; Badnell, N. R.; Müller, A.; Schippers, S.; Wolf, A.

    2017-03-01

    We review experimental and theoretical efforts aimed at a detailed understanding of the recombination of electrons with highly charged tungsten ions characterised by an open 4f sub-shell. Highly charged tungsten occurs as a plasma contaminant in ITER-like tokamak experiments, where it acts as an unwanted cooling agent. Modelling of the charge state populations in a plasma requires reliable thermal rate coefficients for charge-changing electron collisions. The electron recombination of medium-charged tungsten species with open 4f sub-shells is especially challenging to compute reliably. Storage-ring experiments have been conducted that yielded recombination rate coefficients at high energy resolution and well-understood systematics. Significant deviations compared to simplified, but prevalent, computational models have been found. A new class of ab initio numerical calculations has been developed that provides reliable predictions of the total plasma recombination rate coefficients for these ions.

  16. Calibration of ultra-high frequency (UHF) partial discharge sensors using FDTD method

    NASA Astrophysics Data System (ADS)

    Ishak, Asnor Mazuan; Ishak, Mohd Taufiq

    2018-02-01

    Ultra-high frequency (UHF) partial discharge (PD) sensors are widely used for condition monitoring and defect location in the insulation systems of high-voltage equipment. Designing sensors for specific applications often requires an iterative process of manufacturing, testing and mechanical modification. This paper demonstrates the use of the finite-difference time-domain (FDTD) technique as a tool to predict the frequency response of UHF PD sensors. Using this approach, the design process can be simplified and parametric studies can be conducted to assess the influence of component dimensions and material properties on the sensor response. The modelling approach is validated using a gigahertz transverse electromagnetic (GTEM) calibration system. The use of a transient excitation source is particularly suitable for FDTD modelling, which is able to simulate the step-response output voltage of the sensor, from which the frequency response is obtained using the same post-processing applied to the physical measurement.
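
    The post-processing mentioned at the end — recovering a frequency response from a step response — can be sketched as: differentiate the step response to get the impulse response, then take its FFT. The toy below uses a first-order lag as a stand-in for a sensor (the time constant and sample rate are arbitrary, not values from the paper) and checks the recovered magnitude against the analytic response.

```python
import numpy as np

fs = 1e9                        # sample rate, Hz (hypothetical)
t = np.arange(4096) / fs
tau = 5e-9                      # first-order time constant standing in for a sensor
step = 1 - np.exp(-t / tau)     # simulated step-response "measurement"

h = np.diff(step, prepend=0.0)  # impulse response = derivative of step response
H = np.fft.rfft(h)              # frequency response of the discrete system
f = np.fft.rfftfreq(len(h), 1 / fs)

H_true = 1 / (1 + 2j * np.pi * f * tau)   # analytic first-order response
# compare magnitudes over the low-frequency band (away from DC and Nyquist)
err = np.max(np.abs(np.abs(H[1:100]) - np.abs(H_true[1:100])))
print(err)
```

The same differentiate-then-FFT pipeline applies whether the step response comes from an FDTD simulation or from a GTEM cell measurement, which is what makes the two directly comparable.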

  17. Eliminating Unpredictable Variation through Iterated Learning

    ERIC Educational Resources Information Center

    Smith, Kenny; Wonnacott, Elizabeth

    2010-01-01

    Human languages may be shaped not only by the (individual psychological) processes of language acquisition, but also by population-level processes arising from repeated language learning and use. One prevalent feature of natural languages is that they avoid unpredictable variation. The current work explores whether linguistic predictability might…
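
    The population-level process described above can be caricatured with a one-line "frequency boosting" learner: if each generation slightly exaggerates the majority variant, unpredictable free variation is driven out of the language within a few generations. The boosting map below is a hypothetical illustration (sampling noise is omitted to keep the dynamics deterministic), not the paper's experimental design.

```python
def regularize(p):
    """Expected production probability of a learner that boosts the
    majority variant (a toy 'frequency boosting' bias)."""
    return p ** 2 / (p ** 2 + (1 - p) ** 2)

def chain(p0, generations=25):
    """Iterated learning: each generation learns from the previous one."""
    p, history = p0, [p0]
    for _ in range(generations):
        p = regularize(p)
        history.append(p)
    return history

print(chain(0.45)[-1])   # drifts toward 0: variation eliminated
print(chain(0.5)[-1])    # perfectly balanced input is an (unstable) fixed point
```

Any initial asymmetry, however small, is amplified generation after generation until one variant is used categorically — the signature regularization effect of iterated learning.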

  18. High-order dynamic modeling and parameter identification of structural discontinuities in Timoshenko beams by using reflection coefficients

    NASA Astrophysics Data System (ADS)

    Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue

    2013-02-01

    Properties of discontinuities, such as bolt joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and these parameters are then identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series, and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through a least-squares fitting iteration method, in which the undetermined model parameters are updated iteratively to fit the dynamic reflection coefficient curve to the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, the effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately, with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict a complex discontinuity much more accurately than the one-order discontinuity model.

  19. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise-constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is applied as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists of the Ministry of Science and Technology of China (Grant No. 2015AA020917).
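
    The warm-start idea can be sketched in 1D: build a piecewise-constant template by thresholding a noisy first-pass signal and use it as the initial value for a smoothed-TV objective. The sizes, weights and the simple two-class segmentation below are illustrative assumptions, not the authors' CBCT implementation; the point is only that the template starts the optimization at a much lower objective value.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, eps = 200, 0.1, 1e-4
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # piecewise-constant object
y = clean + 0.1 * rng.standard_normal(n)            # noisy "first-pass" image

def cost_grad(x):
    """Smoothed-TV objective 0.5||x-y||^2 + lam*sum(sqrt(dx^2+eps))."""
    d = np.diff(x)
    s = np.sqrt(d ** 2 + eps)          # Charbonnier-smoothed |dx|
    f = 0.5 * np.sum((x - y) ** 2) + lam * np.sum(s)
    g = x - y
    t = lam * d / s
    g[:-1] -= t
    g[1:] += t
    return f, g

def descend(x0, step=0.02, tol=1e-6, max_it=50000):
    x = x0.copy()
    for k in range(1, max_it + 1):
        f, g = cost_grad(x)
        if np.linalg.norm(g) < tol:
            return x, k, f
        x -= step * g
    return x, max_it, f

# Segmentation template: threshold, then assign each class its mean
template = np.where(y > 0.5, y[y > 0.5].mean(), y[y <= 0.5].mean())
f_plain0 = cost_grad(y)[0]          # objective at the noisy (cold) start
f_seg0 = cost_grad(template)[0]     # objective at the template (warm) start
x1, k1, f1 = descend(y)
x2, k2, f2 = descend(template)
print(f_plain0, f_seg0, k1, k2)     # initial costs and iteration counts
```

Because the objective is convex, both starts reach the same minimizer; the template simply begins far closer to it.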

  20. Evaluation of the Fretting Resistance of the High Voltage Insulation on the ITER Magnet Feeder Busbars

    NASA Astrophysics Data System (ADS)

    Clayton, N.; Crouchen, M.; Evans, D.; Gung, C.-Y.; Su, M.; Devred, A.; Piccin, R.

    2017-12-01

    The high voltage (HV) insulation on the ITER magnet feeder superconducting busbars and current leads will be prepared from S-glass fabric, pre-impregnated with an epoxy resin, which is interleaved with polyimide film, wrapped onto the components and cured during feeder manufacture. The insulation architecture consists of nine half-lapped layers of glass/Kapton, which is then enveloped in a ground-screen, and two further half-lapped layers of glass pre-preg for mechanical protection. The integrity of the HV insulation is critical in order to inhibit electrical arcs within the feeders. The insulation over the entire length of the HV components (busbars, current leads and joints) must provide a voltage isolation level of 30 kV. In operation, the insulation on ITER busbars will be subjected to high mechanical loads arising from Lorentz forces, and in addition will be subjected to fretting erosion against stainless steel clamps, as the pulsed nature of some magnets results in longitudinal movement of the busbar. This work was aimed at assessing the wear on the insulation, and the changes in its electrical properties, when it is subjected to typical ITER operating conditions. High voltage tests demonstrated that the electrical isolation of the insulation was intact after the fretting test.

  1. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    PubMed

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
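
    For intuition, a minimal Nussinov-style dynamic program shows how per-nucleotide SHAPE reactivities can be folded into a base-pairing score (reactive positions, which tend to be unpaired, are penalized for pairing). This toy is not the authors' nearest-neighbor energy model, and, being a strictly nested recursion, it also illustrates why pseudoknots need a special search: crossing pairs are unreachable by this DP.

```python
def nussinov_shape(seq, shape=None, w=0.5, min_loop=3):
    """Toy nested-structure DP: maximize (base pairs - w * SHAPE penalty).
    High SHAPE reactivity ~ flexible/unpaired, so pairing a reactive
    nucleotide is penalized. Not the authors' energy model."""
    n = len(seq)
    shape = [0.0] * n if shape is None else shape
    pairs = {"AU", "UA", "GC", "CG", "GU", "UG"}
    S = [[0.0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = S[i + 1][j]                      # option: i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if seq[i] + seq[k] in pairs:        # option: i pairs with k
                    bonus = 1.0 - w * (shape[i] + shape[k])
                    left = S[i + 1][k - 1]
                    right = S[k + 1][j] if k + 1 <= j else 0.0
                    best = max(best, left + right + bonus)
            S[i][j] = best
    return S[0][n - 1]

print(nussinov_shape("GGGAAACCC"))   # three nested G-C pairs -> score 3.0
```

A pseudoknot would require pairs (i, k) and (i', k') with i < i' < k < k', which this nested recursion can never produce; pseudoknot-capable methods must enlarge the search space, at the cost the abstract describes.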

  2. On the breakdown modes and parameter space of Ohmic Tokamak startup

    NASA Astrophysics Data System (ADS)

    Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni

    2017-10-01

    Tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into a fully ionized plasma is called tokamak startup. Even after more than 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulation but by trial and error. In recent years, however, the problem has drawn much attention due to one of the challenges faced by ITER: the maximum electric field available for startup cannot exceed 0.3 V/m, which narrows the parameter range for successful startup. Moreover, the underlying physical mechanism is far from being understood either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, including ITER. We have found three regimes during the discharge, as a function of the initial parameters: no breakdown, breakdown and runaway. Moreover, breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown in ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, in good agreement with ITER's design.

  3. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output can follow an arbitrary trajectory, including one not initially representable by the analytic reference model. To overcome the interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to this well-designed model, the paper develops a digital decentralized adaptive tracker based on optimal analog control and a prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance in both the transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
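
    The core ILC update — reusing the previous trial's tracking error to correct the next trial's input — can be sketched on a scalar first-order plant. The plant, gains and horizon below are hypothetical and far simpler than the paper's decentralized sampled-data setting; they only demonstrate that the trial-to-trial error shrinks as the repeated trajectory is learned.

```python
import numpy as np

a, b = 0.2, 1.0                                # toy plant: x[t+1] = a*x[t] + b*u[t]
T = 50
ref = np.sin(np.linspace(0, 2 * np.pi, T))     # trajectory to track (ref[0] = 0)

def run_trial(u):
    x = np.zeros(T)
    for t in range(T - 1):
        x[t + 1] = a * x[t] + b * u[t]
    return x

u = np.zeros(T)
errors = []
for trial in range(30):
    yv = run_trial(u)
    e = ref - yv
    errors.append(np.max(np.abs(e)))
    u[:-1] += 0.5 * e[1:]     # P-type ILC: correct u[t] with next-step error
print(errors[0], errors[-1])  # tracking error on the first and last trial
```

The gain 0.5 satisfies the usual P-type convergence condition |1 - gamma*b| < 1 for this plant, so the same reference is tracked ever more tightly with each repeated trial — the mechanism the paper exploits at the sampling instants of its digital tracker.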

  4. Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    .R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team

    Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that (ICRF and LH) heating in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile versus Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), and the Coppi-Tang energy transport model is used with modified settings to provide the best fit to the experimental electron temperature profile. Other transport models have been examined as well.

  5. Overview of the hydraulic characteristics of the ITER Central Solenoid Model Coil conductors after 15 years of test campaigns

    NASA Astrophysics Data System (ADS)

    Brighenti, A.; Bonifetto, R.; Isono, T.; Kawano, K.; Russo, G.; Savoldi, L.; Zanino, R.

    2017-12-01

    The ITER Central Solenoid Model Coil (CSMC) is a superconducting magnet, layer-wound two-in-hand using Nb3Sn cable-in-conduit conductors (CICCs) with the central channel typical of ITER magnets, cooled with supercritical He (SHe) at ∼4.5 K and 0.5 MPa, operating for approximately 15 years at the National Institutes for Quantum and Radiological Science and Technology in Naka, Japan. The aim of this work is to give an overview of the issues related to the hydraulic performance of the three different CICCs used in the CSMC based on the extensive experimental database put together during the past 15 years. The measured hydraulic characteristics are compared for the different test campaigns and compared also to those coming from the tests of short conductor samples when available. It is shown that the hydraulic performance of the CSMC conductors did not change significantly in the sequence of test campaigns with more than 50 cycles up to 46 kA and 8 cooldown/warmup cycles from 300 K to 4.5 K. The capability of the correlations typically used to predict the friction factor of the SHe for the design and analysis of ITER-like CICCs is also shown.

  6. Effect of thick blanket modules on neoclassical tearing mode locking in ITER

    DOE PAGES

    La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.

    2016-11-03

    The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER with large inertia and low applied torque (m and n are poloidal and toroidal mode numbers respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.

  7. Effect of thick blanket modules on neoclassical tearing mode locking in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.

    The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER with large inertia and low applied torque (m and n are poloidal and toroidal mode numbers respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.

  8. Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.

    PubMed

    Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E

    2017-08-24

    The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of G-BLUP and Bayes-C methods for GP. Its relative performance was compared to those of G-BLUP and Bayes-C. We used an imputed 50 K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was very much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve accuracy of GP over the commonly used method: G-BLUP. Generally, GBC did not statistically perform better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. 
    Nevertheless, it is a flexible tool, in the sense that it simultaneously incorporates aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.
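
    The G-BLUP half of such a hybrid can be sketched as ridge regression on centered SNP codes (SNP-BLUP, which is equivalent to G-BLUP), here on simulated data with the sparse genetic architecture that Bayes-C targets. The sizes, heritability and ridge parameter below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test, n_snp = 400, 100, 300
X = rng.binomial(2, 0.5, size=(n_train + n_test, n_snp)).astype(float)
X -= X.mean(axis=0)                          # center 0/1/2 genotype codes

beta = np.zeros(n_snp)
qtl = rng.choice(n_snp, 20, replace=False)   # sparse architecture: 20 causal SNPs
beta[qtl] = rng.standard_normal(20)
g = X @ beta                                 # true genetic values
yv = g + rng.standard_normal(len(g)) * g.std()   # phenotypes, h2 = 0.5

Xt, yt = X[:n_train], yv[:n_train]
lam = float(n_snp)                           # ridge parameter ~ sigma_e^2/sigma_b^2
bhat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snp), Xt.T @ yt)

pred = X[n_train:] @ bhat                    # genomic predictions for test animals
acc = np.corrcoef(pred, g[n_train:])[0, 1]   # accuracy vs. true genetic value
print(acc)
```

Ridge regression shrinks every SNP effect equally; Bayes-C-style components instead let a subset of SNPs carry large effects, and a GBC-like hybrid tries to keep both behaviors in one model.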

  9. Cognitive Model of Trust Dynamics Predicts Human Behavior within and between Two Games of Strategic Interaction with Computerized Confederate Agents

    PubMed Central

    Collins, Michael G.; Juvina, Ion; Gluck, Kevin A.

    2016-01-01

    When playing games of strategic interaction, such as iterated Prisoner's Dilemma and iterated Chicken Game, people exhibit specific within-game learning (e.g., learning a game's optimal outcome) as well as transfer of learning between games (e.g., a game's optimal outcome occurring at a higher proportion when played after another game). The reciprocal trust players develop during the first game is thought to mediate transfer of learning effects. Recently, a computational cognitive model using a novel trust mechanism has been shown to account for human behavior in both games, including the transfer between games. We present the results of a study in which we evaluate the model's a priori predictions of human learning and transfer in 16 different conditions. The model's predictive validity is compared against five model variants that lacked a trust mechanism. The results suggest that a trust mechanism is necessary to explain human behavior across multiple conditions, even when a human plays against a non-human agent. The addition of a trust mechanism to the other learning mechanisms within the cognitive architecture, such as sequence learning, instance-based learning, and utility learning, leads to better prediction of the empirical data. It is argued that computational cognitive modeling is a useful tool for studying trust development, calibration, and repair. PMID:26903892
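
    A stripped-down version of a trust mechanism can be sketched as an accumulator that rises slowly with a partner's cooperation, falls sharply after defection, and gates the agent's own cooperation. The parameters and the asymmetry below are hypothetical and far simpler than the ACT-R model evaluated here; the sketch only shows how such a variable separates behavior against cooperative and defecting confederates.

```python
def play(partner_moves, trust0=0.5, up=0.1, down=0.3, threshold=0.4):
    """Toy trust accumulator: trust rises slowly with cooperation and
    drops sharply after defection (parameters are hypothetical)."""
    trust, my_coops = trust0, 0
    for partner_cooperates in partner_moves:
        if trust >= threshold:
            my_coops += 1                    # agent cooperates when trust is high
        trust += up if partner_cooperates else -down
        trust = min(max(trust, 0.0), 1.0)    # keep trust in [0, 1]
    return trust, my_coops

t_c, c_c = play([True] * 20)    # vs. an always-cooperating confederate
t_d, c_d = play([False] * 20)   # vs. an always-defecting confederate
print(t_c, c_c, t_d, c_d)
```

Because trust carries over between interactions, the same accumulator also provides a natural vehicle for between-game transfer: trust earned in a first game changes the opening moves of the second.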

  10. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. 
Their support and encouragement for the preparation of the PIPB were fundamental to its completion. I am pleased to witness the extensive collaborations, the excellent working relationships and the free exchange of views that have been developed among scientists working on magnetic fusion, and I would particularly like to acknowledge the importance which they assign to ITER in their research. This close collaboration and the spirit of free discussion will be essential to the success of ITER. Finally, the PIPB identifies issues which remain in the projection of burning plasma performance to the ITER scale and in the control of burning plasmas. Continued R&D is therefore called for to reduce the uncertainties associated with these issues and to ensure the efficient operation and exploitation of ITER. It is important that the international fusion community maintains a high level of collaboration in the future to address these issues and to prepare the physics basis for ITER operation. ITPA Coordination Committee R. Stambaugh (Chair of ITPA CC, General Atomics, USA) D.J. Campbell (Previous Chair of ITPA CC, European Fusion Development Agreement—Close Support Unit, ITER Organization) M. Shimada (Co-Chair of ITPA CC, ITER Organization) R. Aymar (ITER International Team, CERN) V. Chuyanov (ITER Organization) J.H. Han (Korea Basic Science Institute, Korea) Y. Huo (Zengzhou University, China) Y.S. Hwang (Seoul National University, Korea) N. Ivanov (Kurchatov Institute, Russia) Y. Kamada (Japan Atomic Energy Agency, Naka, Japan) P.K. Kaw (Institute for Plasma Research, India) S. Konovalov (Kurchatov Institute, Russia) M. Kwon (National Fusion Research Center, Korea) J. Li (Academy of Science, Institute of Plasma Physics, China) S. Mirnov (TRINITI, Russia) Y. Nakamura (National Institute for Fusion Studies, Japan) H. Ninomiya (Japan Atomic Energy Agency, Naka, Japan) E. Oktay (Department of Energy, USA) J. Pamela (European Fusion Development Agreement—Close Support Unit) C. 
Pan (Southwestern Institute of Physics, China) F. Romanelli (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, Italy and European Fusion Development Agreement—Close Support Unit) N. Sauthoff (Princeton Plasma Physics Laboratory, USA and Oak Ridge National Laboratories, USA) Y. Saxena (Institute for Plasma Research, India) Y. Shimomura (ITER Organization) R. Singh (Institute for Plasma Research, India) S. Takamura (Nagoya University, Japan) K. Toi (National Institute for Fusion Studies, Japan) M. Wakatani (Kyoto University, Japan (deceased)) H. Zohm (Max-Planck-Institut für Plasmaphysik, Garching, Germany)

  11. Coarse-grained modeling of crystal growth and polymorphism of a model pharmaceutical molecule.

    PubMed

    Mandal, Taraknath; Marson, Ryan L; Larson, Ronald G

    2016-10-04

We describe a systematic coarse-graining method to study crystallization and predict possible polymorphs of small organic molecules. In this method, a coarse-grained (CG) force field is obtained by inverse-Boltzmann iteration from the radial distribution function of atomistic simulations of the known crystal. With the force field obtained by this method, we show that CG simulations of the drug phenytoin predict growth of a crystalline slab from a melt of phenytoin, allowing determination of the fastest-growing surface as well as giving the correct lattice parameters and crystal morphology. By applying metadynamics to the coarse-grained model, a new crystalline form of phenytoin (monoclinic, space group P2₁) was predicted which differs from the experimentally known crystal structure (orthorhombic, space group Pna2₁). Atomistic simulations and quantum calculations then showed the polymorph to be metastable at ambient temperature and pressure, and thermodynamically more stable than the conventional orthorhombic crystal at high pressure. The results suggest an efficient route to studying crystal growth of small organic molecules that could also be useful for identifying possible polymorphs.
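The inverse-Boltzmann iteration at the heart of this coarse-graining scheme can be sketched in a few lines. This is an illustrative NumPy version, not the authors' code; the damping factor `alpha`, the regularizer `eps`, and the grid handling are assumptions.

```python
import numpy as np

def ibi_update(V, g_cg, g_target, kT=1.0, alpha=0.2, eps=1e-12):
    """One inverse-Boltzmann iteration: nudge the CG pair potential V(r)
    so that the CG radial distribution function g_cg approaches g_target.
    All arrays are sampled on the same radial grid."""
    correction = kT * np.log((g_cg + eps) / (g_target + eps))
    return V + alpha * correction  # damped update for stability

# A common starting guess is the potential of mean force:
#   V0(r) = -kT * ln(g_target(r))
```

In practice each iteration requires running a fresh CG simulation with the updated potential to measure `g_cg` before the next update.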

  12. An analysis for high Reynolds number inviscid/viscid interactions in cascades

    NASA Technical Reports Server (NTRS)

    Barnett, Mark; Verdon, Joseph M.; Ayer, Timothy C.

    1993-01-01

    An efficient steady analysis for predicting strong inviscid/viscid interaction phenomena such as viscous-layer separation, shock/boundary-layer interaction, and trailing-edge/near-wake interaction in turbomachinery blade passages is needed as part of a comprehensive analytical blade design prediction system. Such an analysis is described. It uses an inviscid/viscid interaction approach, in which the flow in the outer inviscid region is assumed to be potential, and that in the inner or viscous-layer region is governed by Prandtl's equations. The inviscid solution is determined using an implicit, least-squares, finite-difference approximation, the viscous-layer solution using an inverse, finite-difference, space-marching method which is applied along the blade surfaces and wake streamlines. The inviscid and viscid solutions are coupled using a semi-inverse global iteration procedure, which permits the prediction of boundary-layer separation and other strong-interaction phenomena. Results are presented for three cascades, with a range of inlet flow conditions considered for one of them, including conditions leading to large-scale flow separations. Comparisons with Navier-Stokes solutions and experimental data are also given.
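The semi-inverse global iteration can be caricatured with a toy fixed-point problem: a displacement-thickness parameter is relaxed until the inviscid and inverse viscous solutions return the same edge velocity. Both response functions below are made-up algebraic stand-ins, not the cascade physics; only the coupling structure is illustrated.

```python
def inviscid_ue(delta):
    # Hypothetical inviscid edge-velocity response to displacement thickness.
    return 1.0 + 0.5 * delta

def viscous_ue(delta):
    # Hypothetical inverse viscous-layer response (edge velocity for given delta).
    return 1.2 - 2.0 * delta

def semi_inverse(delta=0.05, omega=0.3, tol=1e-10, max_iter=200):
    """Under-relaxed semi-inverse coupling: adjust delta until the two
    solutions agree on the edge velocity."""
    for _ in range(max_iter):
        mismatch = viscous_ue(delta) - inviscid_ue(delta)
        if abs(mismatch) < tol:
            break
        delta += omega * mismatch  # relaxation keeps the iteration stable
    return delta

delta = semi_inverse()
```

For these toy relations the fixed point is delta = 0.08, where both models return the same edge velocity; the real analysis performs this relaxation along blade surfaces and wake streamlines.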

  13. Numerical Simulation of Molten Flow in Directed Energy Deposition Using an Iterative Geometry Technique

    NASA Astrophysics Data System (ADS)

    Vincent, Timothy J.; Rumpfkeil, Markus P.; Chaudhary, Anil

    2018-03-01

    The complex, multi-faceted physics of laser-based additive metals processing tends to demand high-fidelity models and costly simulation tools to provide predictions accurate enough to aid in selecting process parameters. Of particular difficulty is the accurate determination of melt pool shape and size, which are useful for predicting lack-of-fusion, as this typically requires an adequate treatment of thermal and fluid flow. In this article we describe a novel numerical simulation tool which aims to achieve a balance between accuracy and cost. This is accomplished by making simplifying assumptions regarding the behavior of the gas-liquid interface for processes with a moderate energy density, such as Laser Engineered Net Shaping (LENS). The details of the implementation, which is based on the solver simpleFoam of the well-known software suite OpenFOAM, are given here and the tool is verified and validated for a LENS process involving Ti-6Al-4V. The results indicate that the new tool predicts width and height of a deposited track to engineering accuracy levels.

  15. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the ``exact'' solution corresponding to the unpolarized case. We show then how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
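For readers unfamiliar with the splitting schemes named above, here is a minimal NumPy sketch of SOR for a generic linear system (omega = 1 reduces to Gauss-Seidel, and sweeping with the previous iterate instead of in-place updates would give Jacobi); the radiative-transfer operators themselves are far more involved, and the paper's methods avoid forming any matrix explicitly.

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for Ax = b; omega = 1 is Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated components (in-place sweep)
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# Diagonally dominant test system (convergence guaranteed)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])

x_gs, n_gs = sor(A, b, omega=1.0)    # Gauss-Seidel
x_sor, n_sor = sor(A, b, omega=1.2)  # over-relaxed
```

Choosing omega above 1 can markedly reduce the iteration count, which is the effect exploited by the SOR scheme in the paper.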

  16. ELM Mitigation in Low-rotation ITER Baseline Scenario Plasmas on DIII-D with Deuterium Pellet Injection

    NASA Astrophysics Data System (ADS)

    Baylor, L. R.

    2016-10-01

ELM mitigation using high frequency D2 pellet ELM pacing has been demonstrated in ITER baseline scenario plasmas on DIII-D with low rotation obtained with low NBI input torque. The ITER burning plasmas will have relatively low input torque and are expected to have low rotation. ELM mitigation by on-demand pellet ELM triggering has not been observed before in these conditions. New experiments on DIII-D in these conditions with 90 Hz D2 pellets have shown that significant mitigation of the divertor ELM peak heat flux by a factor of 8 is possible without detrimental effects to the plasma confinement. High-Z impurity accumulation is dramatically reduced at all input torques from 0.1 to 2.5 N-m. Fueling with high field side injection of D2 pellets has been employed to demonstrate that density buildup can be obtained simultaneously with ELM mitigation. The implications are that rapid pellet injection remains a promising technique to trigger on-demand ELMs in low-rotation plasmas with greatly reduced peak flux while preventing impurity accumulation in ITER. Supported by the US DOE under DE-AC05-00OR22725, DE-FC02-04ER54698.

  17. Evaluation of power transfer efficiency for a high power inductively coupled radio-frequency hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Jain, P.; Recchia, M.; Cavenago, M.; Fantz, U.; Gaio, E.; Kraus, W.; Maistrello, A.; Veltri, P.

    2018-04-01

Neutral beam injection (NBI) for plasma heating and current drive is necessary for the International Thermonuclear Experimental Reactor (ITER) tokamak. Due to its various advantages, a radio frequency (RF) driven plasma source was selected as the reference ion source for the ITER heating NBI. The ITER-relevant RF negative ion sources are inductively coupled (IC) devices whose operational working frequency has been chosen to be 1 MHz and which are characterized by high RF power density (˜9.4 W cm-3) and low operational pressure (around 0.3 Pa). The RF field is produced by a coil in a cylindrical chamber, generating a plasma that then expands inside the chamber. This paper recalls the different concepts on which a methodology is developed to evaluate the efficiency of the RF power transfer to hydrogen plasma. This efficiency is then analyzed as a function of the working frequency and of other operating source and plasma parameters. The study is applied to a high power IC RF hydrogen ion source which is similar to one simplified driver of the ELISE source (half the size of the ITER NBI source).
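A common lumped-element way to define the quantity being evaluated: referring the plasma to the coil circuit as an equivalent series resistance, the power transfer efficiency is the plasma's share of the total RF dissipation. This one-liner is only the textbook circuit definition, not the paper's methodology, and the resistance values below are arbitrary placeholders.

```python
def power_transfer_efficiency(r_plasma_eq, r_coil):
    """eta = P_plasma / (P_plasma + P_coil) for a series circuit carrying
    the same RF current, i.e. eta = R_plasma_eq / (R_plasma_eq + R_coil)."""
    return r_plasma_eq / (r_plasma_eq + r_coil)

eta = power_transfer_efficiency(r_plasma_eq=9.0, r_coil=1.0)  # 0.9
```

Within this picture, raising the equivalent plasma resistance (e.g. through the working frequency or source parameters) or lowering coil and network losses both push the efficiency toward unity.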

  18. Designing hydrologic monitoring networks to maximize predictability of hydrologic conditions in a data assimilation system: a case study from South Florida, U.S.A

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Pathak, C. S.; Senarath, S. U.; Bras, R. L.

    2009-12-01

    Robust hydrologic monitoring networks represent a critical element of decision support systems for effective water resource planning and management. Moreover, process representation within hydrologic simulation models is steadily improving, while at the same time computational costs are decreasing due to, for instance, readily available high performance computing resources. The ability to leverage these increasingly complex models together with the data from these monitoring networks to provide accurate and timely estimates of relevant hydrologic variables within a multiple-use, managed water resources system would substantially enhance the information available to resource decision makers. Numerical data assimilation techniques provide mathematical frameworks through which uncertain model predictions can be constrained to observational data to compensate for uncertainties in the model forcings and parameters. In ensemble-based data assimilation techniques such as the ensemble Kalman Filter (EnKF), information in observed variables such as canal, marsh and groundwater stages are propagated back to the model states in a manner related to: (1) the degree of certainty in the model state estimates and observations, and (2) the cross-correlation between the model states and the observable outputs of the model. However, the ultimate degree to which hydrologic conditions can be accurately predicted in an area of interest is controlled, in part, by the configuration of the monitoring network itself. In this proof-of-concept study we developed an approach by which the design of an existing hydrologic monitoring network is adapted to iteratively improve the predictions of hydrologic conditions within an area of the South Florida Water Management District (SFWMD). 
The objective of the network design is to minimize prediction errors of key hydrologic states and fluxes produced by the spatially distributed Regional Simulation Model (RSM), developed specifically to simulate the hydrologic conditions in several intensively managed and hydrologically complex watersheds within the SFWMD system. In a series of synthetic experiments RSM is used to generate the notionally true hydrologic state and the relevant observational data. The EnKF is then used as the mechanism to fuse RSM hydrologic estimates with data from the candidate network. The performance of the candidate network is measured by the prediction errors of the EnKF estimates of hydrologic states, relative to the notionally true scenario. The candidate network is then adapted by relocating existing observational sites to unobserved areas where predictions of local hydrologic conditions are most uncertain and the EnKF procedure repeated. Iteration of the monitoring network continues until further improvements in EnKF-based predictions of hydrologic conditions are negligible.
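The EnKF analysis step used in the synthetic experiments above can be sketched compactly. This is a generic stochastic-EnKF update in NumPy; the state dimension, observation operator, and error covariance are placeholders with no connection to the actual RSM state vector.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic ensemble Kalman filter analysis step.
    X: (n_state, n_ens) forecast ensemble
    y: (n_obs,) observation vector
    H: (n_obs, n_state) linear observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    n_obs, n_ens = len(y), X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                     # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturb the observations (the "stochastic" flavor of the EnKF)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)                     # analysis ensemble
```

The gain K encodes exactly the two factors named in the abstract: the relative certainty of states versus observations (through P and R) and the state-observation cross-correlations (through P H^T).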

  19. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In, Y.; Park, J. -K.; Jeon, Y. M.

Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4×10 –5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 ± 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  20. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    NASA Astrophysics Data System (ADS)

    In, Y.; Park, J.-K.; Jeon, Y. M.; Kim, J.; Park, G. Y.; Ahn, J.-W.; Loarte, A.; Ko, W. H.; Lee, H. H.; Yoo, J. W.; Juhn, J. W.; Yoon, S. W.; Park, H.; Physics Task Force in KSTAR, 3D

    2017-11-01

    An extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L-H transition. The n  =  1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4  ×  10-5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n  =  1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x   =  1.44+/- 0.02 m) proved to be quite critical to reach full n  =  1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95  =  5 +/- 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n  =  1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the ‘wet’ areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  1. Enhanced understanding of non-axisymmetric intrinsic and controlled field impacts in tokamaks

    DOE PAGES

    In, Y.; Park, J. -K.; Jeon, Y. M.; ...

    2017-08-24

Here, an extensive study of intrinsic and controlled non-axisymmetric field (δB) impacts in KSTAR has enhanced the understanding about non-axisymmetric field physics and its implications, in particular, on resonant magnetic perturbation (RMP) physics and power threshold (P th) for L–H transition. The n=1 intrinsic non-axisymmetric field in KSTAR was measured to remain as low as δB/B 0 ~ 4×10 –5 even at high-beta plasmas (β N ~ 2), which corresponds to approximately 20% below the targeted ITER tolerance level. As for the RMP edge-localized-modes (ELM) control, robust n=1 RMP ELM-crash-suppression has been not only sustained for more than ~90 τ E, but also confirmed to be compatible with rotating RMP. An optimal window of radial position of lower X-point (i.e. R x = 1.44 ± 0.02 m) proved to be quite critical to reach full n=1 RMP-driven ELM-crash-suppression, while a constraint of the safety factor could be relaxed (q 95 = 5 ± 0.25). A more encouraging finding was that even when R x cannot be positioned in the optimal window, another systematic scan in the vicinity of the previously optimal R x allows for a new optimal window with relatively small variations of plasma parameters. Also, we have addressed the importance of optimal phasing (i.e. toroidal phase difference between adjacent rows) for n=1 RMP-driven ELM control, consistent with an ideal plasma response modeling which could predict phasing-dependent ELM suppression windows. In support of ITER RMP study, intentionally misaligned RMPs have been found to be quite effective during ELM-mitigation stage in lowering the peaks of divertor heat flux, as well as in broadening the 'wet' areas. Besides, a systematic survey of P th dependence on non-axisymmetric field has revealed the potential limit of the merit of low intrinsic non-axisymmetry. 
Considering that the ITER RMP coils are composed of 3-rows, just like in KSTAR, further 3D physics study in KSTAR is expected to help us minimize the uncertainties of the ITER RMP coils, as well as establish an optimal 3D configuration for ITER and future reactors.

  2. Integrated modeling of high βN steady state scenario on DIII-D

    DOE PAGES

    Park, Jin Myung; Ferron, J. R.; Holcomb, Christopher T.; ...

    2018-01-10

Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with β N > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state ( d/dt = 0) solutions and reproduces most features of DIII-D high β N discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high q min > 2 scenario achieves stable operation at β N as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low- n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high β N steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  3. Integrated modeling of high βN steady state scenario on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jin Myung; Ferron, J. R.; Holcomb, Christopher T.

Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with β N > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state ( d/dt = 0) solutions and reproduces most features of DIII-D high β N discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high q min > 2 scenario achieves stable operation at β N as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low- n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high β N steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  4. Integrated modeling of high βN steady state scenario on DIII-D

    NASA Astrophysics Data System (ADS)

    Park, J. M.; Ferron, J. R.; Holcomb, C. T.; Buttery, R. J.; Solomon, W. M.; Batchelor, D. B.; Elwasif, W.; Green, D. L.; Kim, K.; Meneghini, O.; Murakami, M.; Snyder, P. B.

    2018-01-01

    Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with βN > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state (d/dt = 0) solutions and reproduces most features of DIII-D high βN discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high qmin > 2 scenario achieves stable operation at βN as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low-n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high βN steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  5. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (airPLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly developed custom baseline removal (BLR) technique. We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts-per-million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new custom BLR technique produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to the best prediction accuracy (multivariate analysis). Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. 
The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
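Of the methods compared, asymmetric least squares is easy to state: a smoothness-penalized fit in which points above the current baseline estimate get a small weight p and points below get 1 - p, iterated to convergence. Below is a dense NumPy sketch of this idea; published implementations use sparse matrices, and the values of `lam` and `p` here are arbitrary, not the optimized parameters the study calls for.

```python
import numpy as np

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate for a spectrum y.
    Minimizes sum_i w_i (y_i - z_i)^2 + lam * ||D2 z||^2, reweighting
    asymmetrically so peaks barely influence the fit."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)        # second-difference operator
    w = np.ones(n)
    z = y.copy()
    for _ in range(n_iter):
        W = np.diag(w)
        z = np.linalg.solve(W + lam * D.T @ D, w * y)
        w = np.where(y > z, p, 1 - p)        # asymmetric reweighting
    return z

# Continuum-removed spectrum: y - als_baseline(y)
```

Subtracting the returned `z` from `y` leaves the peaks sitting on a flat baseline, which is the "smooth underlying signal" objective described above.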

  6. Pigeons ("Columba Livia") Approach Nash Equilibrium in Experimental Matching Pennies Competitions

    ERIC Educational Resources Information Center

    Sanabria, Federico; Thrailkill, Eric

    2009-01-01

The game of Matching Pennies (MP), a simplified version of the more popular Rock, Paper, Scissors, schematically represents competitions between organisms with incentives to predict each other's behavior. Optimal performance in iterated MP competitions involves the production of random choice patterns and the detection of nonrandomness in the…

  7. Cross Sectional Study of Agile Software Development Methods and Project Performance

    ERIC Educational Resources Information Center

    Lambert, Tracy

    2011-01-01

    Agile software development methods, characterized by delivering customer value via incremental and iterative time-boxed development processes, have moved into the mainstream of the Information Technology (IT) industry. However, despite a growing body of research which suggests that a predictive manufacturing approach, with big up-front…

  8. First Steps in Computational Systems Biology: A Practical Session in Metabolic Modeling and Simulation

    ERIC Educational Resources Information Center

    Reyes-Palomares, Armando; Sanchez-Jimenez, Francisca; Medina, Miguel Angel

    2009-01-01

    A comprehensive understanding of biological functions requires new systemic perspectives, such as those provided by systems biology. Systems biology approaches are hypothesis-driven and involve iterative rounds of model building, prediction, experimentation, model refinement, and development. Developments in computer science are allowing for ever…

  9. Strategic by Design: Iterative Approaches to Educational Planning

    ERIC Educational Resources Information Center

    Chance, Shannon

    2010-01-01

Linear planning and decision-making models assume a level of predictability that is uncommon today. Such models inadequately address the complex variables found in higher education. When academic organizations adopt pared-down business strategies, they restrict their own vision. They fail to harness emerging opportunities or learn from their own…

  10. Experimental search for high-temperature ferroelectric perovskites guided by two-step machine learning.

    PubMed

    Balachandran, Prasanna V; Kowalski, Benjamin; Sehirlioglu, Alp; Lookman, Turab

    2018-04-26

Experimental search for high-temperature ferroelectric perovskites is a challenging task due to the vast chemical space and lack of predictive guidelines. Here, we demonstrate a two-step machine learning approach to guide experiments in search of xBi(Me'yMe″1-y)O3-(1 - x)PbTiO3-based perovskites with high ferroelectric Curie temperature. These involve classification learning to screen for compositions in the perovskite structure, and regression coupled to active learning to identify promising perovskites for synthesis and feedback. The problem is challenging because the search space is vast, spanning ~61,500 compositions, of which only 167 have been experimentally studied. Furthermore, not every composition can be synthesized in the perovskite phase. In this work, we predict x, y, Me', and Me″ such that the resulting compositions have both high Curie temperature and form in the perovskite structure. Outcomes from both successful and failed experiments then iteratively refine the machine learning models via an active learning loop. Our approach finds six perovskites out of ten compositions synthesized, including three previously unexplored {Me'Me″} pairs, with 0.2Bi(Fe0.12Co0.88)O3-0.8PbTiO3 showing the highest measured Curie temperature of 898 K among them.
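The measure-refine-synthesize loop described above can be caricatured in a few lines. The surrogate below is a 1-nearest-neighbour lookup and the oracle is a stand-in for synthesis and measurement, so this only illustrates the loop structure, not the paper's classification/regression models or acquisition strategy.

```python
import numpy as np

def active_learning(candidates, oracle, n_init=5, n_rounds=10, seed=0):
    """Greedy design loop: measure a few candidates, fit a crude surrogate,
    then repeatedly 'synthesize' the candidate with the best predicted value
    and feed the result back in. Returns all measured values in order."""
    rng = np.random.default_rng(seed)
    measured = list(rng.choice(len(candidates), n_init, replace=False))
    values = [oracle(candidates[i]) for i in measured]
    for _ in range(n_rounds):
        remaining = [i for i in range(len(candidates)) if i not in measured]
        preds = []
        for i in remaining:  # 1-NN surrogate: value of nearest measured point
            d = np.linalg.norm(
                candidates[i] - np.asarray([candidates[j] for j in measured]),
                axis=1)
            preds.append(values[int(np.argmin(d))])
        pick = remaining[int(np.argmax(preds))]
        measured.append(pick)
        values.append(oracle(candidates[pick]))  # experimental feedback
    return values
```

In the paper the "oracle" is an actual synthesis attempt, and failed syntheses are just as informative for retraining the classifier as successful ones.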

  11. Adaptive iterated function systems filter for images highly corrupted with fixed-value impulse noise

    NASA Astrophysics Data System (ADS)

    Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.

    2014-06-01

    The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has an outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use a measure of statistics and Iterated Function Systems (IFS) respectively. The performance of the AIFS filter is assessed by three metrics, namely Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index Matrix (MSSIM) and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and details/edge preservation respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherever images are highly degraded by fixed-value impulse noise.

  12. Evaluation of the cryogenic mechanical properties of the insulation material for ITER Feeder superconducting joint

    NASA Astrophysics Data System (ADS)

    Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng

    2017-12-01

    Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high voltage electrical insulation and mechanical support for the superconducting joints and current leads of the ITER Feeder system. To evaluate the cryogenic mechanical properties of the GFRP, properties including the short beam strength (SBS), the tensile strength, and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.

  13. iCI: Iterative CI toward full CI.

    PubMed

    Liu, Wenjian; Hoffmann, Mark R

    2016-03-08

    It is shown both theoretically and numerically that the minimal multireference configuration interaction (CI) approach [Liu, W.; Hoffmann, M. R. Theor. Chem. Acc. 2014, 133, 1481] converges quickly and monotonically from above to full CI by updating the primary, external, and secondary states that describe the respective static, dynamic, and again static components of correlation iteratively, even when starting with a rather poor description of a strongly correlated system. In short, the iterative CI (iCI) is a very effective means toward highly correlated wave functions and, ultimately, full CI.

  14. Validation of the model for ELM suppression with 3D magnetic fields using low torque ITER baseline scenario discharges in DIII-D

    NASA Astrophysics Data System (ADS)

    Moyer, R. A.; Paz-Soldan, C.; Nazikian, R.; Orlov, D. M.; Ferraro, N. M.; Grierson, B. A.; Knölker, M.; Lyons, B. C.; McKee, G. R.; Osborne, T. H.; Rhodes, T. L.; Meneghini, O.; Smith, S.; Evans, T. E.; Fenstermacher, M. E.; Groebner, R. J.; Hanson, J. M.; La Haye, R. J.; Luce, T. C.; Mordijck, S.; Solomon, W. M.; Turco, F.; Yan, Z.; Zeng, L.; DIII-D Team

    2017-10-01

    Experiments have been executed in the DIII-D tokamak to extend suppression of Edge Localized Modes (ELMs) with Resonant Magnetic Perturbations (RMPs) to ITER-relevant levels of beam torque. The results support the hypothesis for RMP ELM suppression based on transition from an ideal screened response to a tearing response at a resonant surface that prevents expansion of the pedestal to an unstable width [Snyder et al., Nucl. Fusion 51, 103016 (2011) and Wade et al., Nucl. Fusion 55, 023002 (2015)]. In ITER baseline plasmas with I/aB = 1.4 and pedestal ν* ≈ 0.15, ELMs are readily suppressed with co-Ip neutral beam injection. However, reducing the beam torque from 5 Nm to ≤ 3.5 Nm results in loss of ELM suppression and a shift of the zero-crossing of the electron perpendicular rotation (ω⊥e ≈ 0) deeper into the plasma. The change in radius of ω⊥e ≈ 0 is due primarily to changes in the electron diamagnetic rotation frequency ωe*. Linear plasma response modeling with the resistive MHD code M3D-C1 indicates that the tearing response location tracks the inward shift of ω⊥e ≈ 0. At pedestal ν* ≈ 1, ELM suppression is also lost when the beam torque is reduced, but the ω⊥e change is dominated by collapse of the toroidal rotation vT. The hypothesis predicts that it should be possible to obtain ELM suppression at reduced beam torque by also reducing the height and width of the ωe* profile. This prediction has been confirmed experimentally with RMP ELM suppression at 0 Nm of beam torque and plasma normalized pressure βN ≈ 0.7. This opens the possibility of accessing ELM suppression in low torque ITER baseline plasmas by establishing suppression at low beta and then increasing beta while relying on the strong RMP-island coupling to maintain suppression.

  15. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, CEk, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown CEk from the residuals during model calibration. The inferred CEk was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using CEk resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using CEk obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
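
A minimal sketch of the two-stage alternation, on a toy linear model with AR(1)-correlated total errors: stage 1 infers the serial correlation from the current residuals, stage 2 refits the model under that error structure. Quasi-differencing (Cochrane-Orcutt style) stands in for the full covariance-based maximum-likelihood inversion described in the abstract, and all data are synthetic.

```python
# Hedged sketch: alternate between estimating error correlation and refitting.
import random
random.seed(1)

n = 200
t = [i / n for i in range(n)]
e = [0.0]
for _ in range(n - 1):                      # AR(1) "total errors", true rho = 0.8
    e.append(0.8 * e[-1] + random.gauss(0, 0.1))
y = [2.0 * ti + ei for ti, ei in zip(t, e)]  # true slope = 2.0

def ols_slope(x, z):
    """No-intercept least-squares slope of z on x."""
    return sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)

rho, a = 0.0, ols_slope(t, y)               # stage 0: assume uncorrelated errors
for _ in range(10):
    r = [yi - a * ti for yi, ti in zip(y, t)]
    rho = ols_slope(r[:-1], r[1:])          # stage 1: infer error autocorrelation
    ts = [t[0]] + [t[i] - rho * t[i - 1] for i in range(1, n)]
    ys = [y[0]] + [y[i] - rho * y[i - 1] for i in range(1, n)]
    a = ols_slope(ts, ys)                   # stage 2: refit under estimated covariance

print(round(a, 3), round(rho, 3))
```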

  16. SU-F-BRCD-09: Total Variation (TV) Based Fast Convergent Iterative CBCT Reconstruction with GPU Acceleration.

    PubMed

    Xu, Q; Yang, D; Tan, J; Anastasio, M

    2012-06-01

    To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction that sought to minimize a weighted least squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
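
The cost function being minimized can be illustrated on a toy 1-D problem. This plain subgradient-descent sketch assumes an identity system matrix (pure denoising) and is unrelated to the authors' optimized multi-GPU 3-D implementation; it only shows the least-squares-plus-TV objective at work.

```python
# Hedged sketch: subgradient descent on ||x - b||^2 + lam * TV(x) in 1-D.
import random
random.seed(2)

n = 64
x_true = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]  # piecewise-constant phantom
b = [v + random.gauss(0, 0.05) for v in x_true]            # noisy data (system matrix = identity)

lam, step = 0.05, 0.2
x = [0.0] * n
for _ in range(300):
    g = [2.0 * (x[i] - b[i]) for i in range(n)]            # gradient of the data-fidelity term
    for i in range(n - 1):                                 # subgradient of lam * |x[i+1] - x[i]|
        s = lam * ((x[i + 1] > x[i]) - (x[i + 1] < x[i]))
        g[i] -= s
        g[i + 1] += s
    x = [xi - step * gi for xi, gi in zip(x, g)]

rmse = (sum((a - c) ** 2 for a, c in zip(x, x_true)) / n) ** 0.5
print(round(rmse, 4))
```

The TV term penalizes jumps between neighbouring samples, which suppresses noise while preserving the phantom's edges.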

  17. Prediction of overall and blade-element performance for axial-flow pump configurations

    NASA Technical Reports Server (NTRS)

    Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.

    1973-01-01

    A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial-flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.

  18. Discrete Logic Modelling Optimization to Contextualize Prior Knowledge Networks Using PRUNET

    PubMed Central

    Androsova, Ganna; del Sol, Antonio

    2015-01-01

    High-throughput technologies have led to the generation of an increasing amount of data in different areas of biology. Datasets capturing the cell's response to its intra- and extra-cellular microenvironment can be represented as signed and directed graphs, or influence networks. These prior knowledge networks (PKNs) represent our current knowledge of the causality of cellular signal transduction. New signalling data are often examined and interpreted in conjunction with PKNs. However, different biological contexts, such as cell type or disease states, may have distinct variants of signalling pathways, resulting in the misinterpretation of new data. The identification of inconsistencies between measured data and signalling topologies, as well as the training of PKNs using context-specific datasets (PKN contextualization), are necessary conditions for constructing reliable, predictive models; both remain current challenges in the systems biology of cell signalling. Here we present PRUNET, a user-friendly software tool designed to address the contextualization of a PKN to specific experimental conditions. As input, the algorithm takes a PKN and the expression profiles of two given stable steady states or cellular phenotypes. The PKN is iteratively pruned using an evolutionary algorithm to perform an optimization process. This optimization rests on a match between predicted attractors in a discrete logic (Boolean) model and a Booleanized representation of the phenotypes, within a population of alternative subnetworks that evolves iteratively. We validated the algorithm by applying PRUNET to four biological examples and using the resulting contextualized networks to predict missing expression values and to simulate well-characterized perturbations. PRUNET constitutes a tool for the automatic curation of a PKN to make it suitable for describing biological processes under particular experimental conditions. The general applicability of the implemented algorithm makes PRUNET suitable for a variety of biological processes, for instance cellular reprogramming or transitions between healthy and disease states. PMID:26058016
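
The prune-and-match idea can be sketched with a tiny Boolean network: evolve edge subsets of a prior knowledge network so that the pruned model's steady state matches a target phenotype. The 3-node PKN, threshold update rule, and fitness below are illustrative assumptions, not PRUNET's actual encoding.

```python
# Hedged sketch of evolutionary PKN pruning on a toy 3-node Boolean network.
import random
random.seed(3)

# PKN edges: (source node, target node, sign); a mask bit decides whether to keep each.
EDGES = [(0, 1, +1), (2, 1, -1), (1, 2, +1), (0, 2, +1), (2, 0, -1), (1, 0, +1)]
TARGET = (1, 1, 0)                            # Booleanized phenotype to reproduce

def step(state, mask):
    new = list(state)
    for n in range(3):
        incoming = [(sg, state[src]) for keep, (src, dst, sg) in zip(mask, EDGES)
                    if keep and dst == n]
        if incoming:                           # nodes with no kept inputs hold their value
            s = sum(sg * val for sg, val in incoming)
            new[n] = 1 if s > 0 else 0
    return tuple(new)

def fitness(mask):
    s = TARGET
    for _ in range(10):                        # let the network settle
        s = step(s, mask)
    if step(s, mask) != s:                     # not a stable steady state: no match credit
        return sum(mask)
    return 10 * sum(a == b for a, b in zip(s, TARGET)) + sum(mask)  # prefer keeping edges

pop = [tuple(random.randint(0, 1) for _ in EDGES) for _ in range(20)]
for _ in range(60):                            # simple elitist evolutionary loop
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(EDGES))
        child = list(a[:cut] + b[cut:])        # crossover
        child[random.randrange(len(EDGES))] ^= 1  # one-bit mutation
        children.append(tuple(child))
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```

The optimizer learns to prune the two edges driving node 2, after which the target phenotype is a stable attractor of the subnetwork.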

  19. Recognizing short coding sequences of prokaryotic genome using a novel iteratively adaptive sparse partial least squares algorithm

    PubMed Central

    2013-01-01

    Background Significant efforts have been made to address the problem of identifying short genes in prokaryotic genomes. However, most known methods are not effective in detecting short genes. Because of the limited information contained in short DNA sequences, it is very difficult to accurately distinguish between protein-coding and non-coding sequences in prokaryotic genomes. We have developed a new Iteratively Adaptive Sparse Partial Least Squares (IASPLS) algorithm as the classifier to improve the accuracy of the identification process. Results For testing, we chose the short coding and non-coding sequences from seven prokaryotic organisms. We used seven feature sets (including GC content, Z-curve, etc.) of short genes. In comparison with the GeneMarkS, Metagene, Orphelia, and Heuristic Approach methods, our model achieved the best prediction performance in identification of short prokaryotic genes. Even when we focused on the very short length group (60–100 nt), our model provided sensitivity as high as 83.44% and specificity as high as 92.8%. These values are two or three times higher than those of three of the other methods, while Metagene fails to recognize genes in this length range. The experiments also showed that IASPLS can improve the identification accuracy in comparison with other widely used classifiers, i.e. logistic regression, Random Forest (RF) and K nearest neighbors (KNN). The accuracy using IASPLS was improved by 5.90% or more in comparison with the other methods. In addition to the improvements in accuracy, IASPLS required ten times less computer time than KNN or RF. Conclusions We conclude that our method is preferable for application as an automated method of short gene classification. Its linearity and easily optimized parameters make it practicable for predicting short genes of newly-sequenced or under-studied species. 
Reviewers This article was reviewed by Alexey Kondrashov, Rajeev Azad (nominated by Dr J.Peter Gogarten) and Yuriy Fofanov (nominated by Dr Janet Siefert). PMID:24067167

  20. Overview of ASDEX Upgrade results

    DOE PAGES

    Aguiam, D.

    2017-06-28

    Here, the ASDEX Upgrade (AUG) programme is directed towards physics input to critical elements of the ITER design and the preparation of ITER operation, as well as addressing physics issues for a future DEMO design. Since 2015, AUG has been equipped with a new pair of 3-strap ICRF antennas, which were designed to reduce tungsten release during ICRF operation. As predicted, a factor of two reduction in the ICRF-induced W plasma content could be achieved by the reduction of the sheath voltage at the antenna limiters via the compensation of the image currents of the central and side straps in the antenna frame. There are two main operational scenario lines in AUG. Experiments with low collisionality, which comprise current drive, ELM mitigation/suppression and fast ion physics, are mainly done with freshly boronized walls to reduce the tungsten influx at these high edge temperature conditions. Full ELM suppression and non-inductive operation up to a plasma current of Ip = 0.8 MA could be obtained at low plasma density. Plasma exhaust is studied under conditions of high neutral divertor pressure and separatrix electron density, where a fresh boronization is not required. Substantial progress could be achieved in the understanding of the confinement degradation by strong D puffing and the improvement with nitrogen or carbon seeding. Inward/outward shifts of the electron density profile relative to the temperature profile affect the edge stability via the pressure profile changes and lead to improved/decreased pedestal performance. Seeding and D gas puffing are found to affect the core fueling via changes in a region of high density on the high field side (HFSHD).

  1. Active Player Modeling in the Iterated Prisoner's Dilemma

    PubMed Central

    Park, Hyunsoo; Kim, Kyung-Joong

    2016-01-01

    The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions. PMID:26989405
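
A minimal sketch of the active-versus-passive data collection contrast: the observer estimates P(opponent cooperates | observer's previous move) and either probes the context it has observed least, or passively plays its preferred move. The two-context probabilistic opponent is an illustrative assumption, not the paper's twelve opponent types.

```python
# Hedged sketch of active opponent modeling in the IPD.
import random
random.seed(4)

P_TRUE = {"C": 0.9, "D": 0.2}          # opponent mostly reciprocates cooperation

def play(strategy, n=60):
    counts = {"C": [0, 0], "D": [0, 0]}  # per context: [opponent cooperations, trials]
    for _ in range(n):
        if strategy == "active":       # probe the less-observed context
            prev = min(counts, key=lambda m: counts[m][1])
        else:                          # passive: mostly play the preferred move "C"
            prev = "C" if random.random() < 0.9 else "D"
        coop = random.random() < P_TRUE[prev]
        counts[prev][0] += coop
        counts[prev][1] += 1
    est = {m: c / t if t else 0.5 for m, (c, t) in counts.items()}
    return sum(abs(est[m] - P_TRUE[m]) for m in "CD")  # total model error

err_active = sum(play("active") for _ in range(200)) / 200
err_passive = sum(play("passive") for _ in range(200)) / 200
print(round(err_active, 3), round(err_passive, 3))
```

The passive observer rarely visits the "D" context and therefore models it poorly, mirroring the paper's finding that actively collected data yield a more accurate opponent model than data gathered incidentally.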

  2. Active Player Modeling in the Iterated Prisoner's Dilemma.

    PubMed

    Park, Hyunsoo; Kim, Kyung-Joong

    2016-01-01

    The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions.

  3. Predicting Silk Fiber Mechanical Properties through Multiscale Simulation and Protein Design.

    PubMed

    Rim, Nae-Gyune; Roberts, Erin G; Ebrahimi, Davoud; Dinjaski, Nina; Jacobsen, Matthew M; Martín-Moldes, Zaira; Buehler, Markus J; Kaplan, David L; Wong, Joyce Y

    2017-08-14

    Silk is a promising material for biomedical applications, and much research is focused on how application-specific, mechanical properties of silk can be designed synthetically through proper amino acid sequences and processing parameters. This protocol describes an iterative process between research disciplines that combines simulation, genetic synthesis, and fiber analysis to better design silk fibers with specific mechanical properties. Computational methods are used to assess the protein polymer structure as it forms an interconnected fiber network through shearing and how this process affects fiber mechanical properties. Model outcomes are validated experimentally with the genetic design of protein polymers that match the simulation structures, fiber fabrication from these polymers, and mechanical testing of these fibers. Through iterative feedback between computation, genetic synthesis, and fiber mechanical testing, this protocol will enable a priori prediction capability of recombinant material mechanical properties via insights from the resulting molecular architecture of the fiber network based entirely on the initial protein monomer composition. This style of protocol may be applied to other fields where a research team seeks to design a biomaterial with biomedical application-specific properties. This protocol highlights when and how the three research groups (simulation, synthesis, and engineering) should be interacting to arrive at the most effective method for predictive design of their material.

  4. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods which often achieve fast convergence rates suffer high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.
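
The symmetric Gauss-Seidel relaxation at the heart of such a scheme can be illustrated on a small diagonally dominant linear system; this is a stand-in for the factored implicit operator, not the Navier-Stokes Jacobian the actual LU-SGS method sweeps through.

```python
# Hedged sketch: symmetric Gauss-Seidel (forward then backward) sweeps on A x = b.
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = [0.0, 0.0, 0.0]
n = 3

for _ in range(25):
    # one symmetric sweep: forward order 0..n-1, then backward n-1..0
    for i in list(range(n)) + list(range(n - 1, -1, -1)):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) for i in range(n))
print([round(v, 6) for v in x], residual)
```

The exact solution is x = (1, 2, 3); diagonal dominance guarantees the sweeps converge, which is the low-cost-per-iteration behavior the abstract attributes to the LU/SGS combination.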

  5. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonability and effectiveness.
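
A sketch of the canonical ABC template (employed, onlooker, and scout phases), here minimizing a simple test function rather than the paper's decoupling objective; the parameter values and the sphere objective are illustrative, not the authors' variant.

```python
# Hedged sketch of a basic artificial bee colony (ABC) search.
import random
random.seed(5)

DIM, SN, LIMIT, CYCLES = 2, 10, 20, 200   # dimensions, food sources, abandonment limit, cycles
LO, HI = -5.0, 5.0

def f(x):                                  # objective: sphere function (minimum 0 at origin)
    return sum(v * v for v in x)

def rand_source():
    return [random.uniform(LO, HI) for _ in range(DIM)]

sources = [rand_source() for _ in range(SN)]
trials = [0] * SN
best_val = min(f(s) for s in sources)

def neighbor(i):
    """Perturb one coordinate of source i toward/away from a random other source."""
    k = random.choice([j for j in range(SN) if j != i])
    d = random.randrange(DIM)
    v = list(sources[i])
    v[d] += random.uniform(-1, 1) * (v[d] - sources[k][d])
    v[d] = min(HI, max(LO, v[d]))
    return v

def try_replace(i, v):
    """Greedy selection: keep the better of the old source and the candidate."""
    if f(v) < f(sources[i]):
        sources[i], trials[i] = v, 0
    else:
        trials[i] += 1

for _ in range(CYCLES):
    for i in range(SN):                    # employed-bee phase
        try_replace(i, neighbor(i))
    fits = [1.0 / (1.0 + f(s)) for s in sources]
    total = sum(fits)
    for _ in range(SN):                    # onlooker phase: fitness-proportional choice
        r, acc = random.random() * total, 0.0
        for i, ft in enumerate(fits):
            acc += ft
            if acc >= r:
                break
        try_replace(i, neighbor(i))
    for i in range(SN):                    # scout phase: abandon stagnant sources
        if trials[i] > LIMIT:
            sources[i], trials[i] = rand_source(), 0
    best_val = min(best_val, min(f(s) for s in sources))

print(best_val)
```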

  6. Radioactivity measurements of ITER materials using the TFTR D-T neutron field

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Abdou, M. A.; Barnes, C. W.; Kugel, H. W.

    1994-06-01

    The availability of high D-T fusion neutron yields at TFTR has provided a useful opportunity to directly measure D-T neutron-induced radioactivity in a realistic tokamak fusion reactor environment for materials of vital interest to ITER. These measurements are valuable for characterizing radioactivity in various ITER candidate materials, for validating complex neutron transport calculations, and for meeting fusion reactor licensing requirements. The radioactivity measurements at TFTR involve potential ITER materials including stainless steel 316, vanadium, titanium, chromium, silicon, iron, cobalt, nickel, molybdenum, aluminum, copper, zinc, zirconium, niobium, and tungsten. Small samples of these materials were irradiated close to the plasma and just outside the vacuum vessel wall of TFTR, locations of different neutron energy spectra. Saturation activities for both threshold and capture reactions were measured. Data from dosimetric reactions have been used to obtain preliminary neutron energy spectra. Spectra from the first wall were compared to calculations from ITER and to measurements from accelerator-based tests.

  7. Progress in the Design and Development of the ITER Low-Field Side Reflectometer (LFSR) System

    NASA Astrophysics Data System (ADS)

    Doyle, E. J.; Wang, G.; Peebles, W. A.; US LFSR Team

    2015-11-01

    The US has formed a team, comprising personnel from PPPL, ORNL, GA and UCLA, to develop the LFSR system for ITER. The LFSR system will contribute to the measurement of a number of plasma parameters on ITER, including edge plasma electron density profiles, monitoring of Edge Localized Modes (ELMs) and L-H transitions, and physics measurements relating to high frequency instabilities, plasma flows, and other density transients. An overview of the status of design activities and component testing for the system will be presented. Since the 2011 conceptual design review, the number of microwave transmission lines (TLs) and antennas has been reduced from twelve (12) to seven (7) due to space constraints in the ITER Tokamak Port Plug. This change has required a reconfiguration and recalculation of the performance of the front-end antenna design, which now includes use of monostatic transmission lines and antennas. Work supported by US ITER/PPPL Subcontracts S013252-C and S012340, and PO 4500051400 from GA to UCLA.

  8. Design and first plasma measurements of the ITER-ECE prototype radiometer.

    PubMed

    Austin, M E; Brookman, M W; Rowan, W L; Danani, S; Bryerton, E W; Dougherty, P

    2016-11-01

    On ITER, second harmonic optically thick electron cyclotron emission (ECE) in the range of 220-340 GHz will supply the electron temperature (Te). To investigate the requirements and capabilities prescribed for the ITER system, a prototype radiometer covering this frequency range has been developed by Virginia Diodes, Inc. The first plasma measurements with this instrument have been carried out on the DIII-D tokamak, with lab bench tests and measurements of third through fifth harmonic ECE from high-Te plasmas. At DIII-D the instrument shares the transmission line of the Michelson interferometer and can simultaneously acquire data. Comparison of the ECE radiation temperature from the absolutely calibrated Michelson and the prototype receiver shows that the ITER radiometer provides accurate measurements of the millimeter radiation across the instrument band.

  9. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  10. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    NASA Astrophysics Data System (ADS)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
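
The structural core of a block eigensolver, iterating the matrix on a block of vectors and re-orthonormalizing, can be sketched as follows. The real solver adds preconditioning, informed starting guesses, and distributed parallelism, none of which appear here; this toy computes the two largest eigenvalues of a small symmetric matrix.

```python
# Hedged sketch: block (subspace) iteration with modified Gram-Schmidt.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthonormalize(block):
    """Modified Gram-Schmidt on a list of vectors."""
    out = []
    for v in block:
        w = list(v)
        for q in out:
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = dot(w, w) ** 0.5
        out.append([wi / nrm for wi in w])
    return out

A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]    # eigenvalues: 2 - 2*cos(k*pi/5), k = 1..4

block = orthonormalize([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
for _ in range(200):
    block = orthonormalize([matvec(A, v) for v in block])

ritz = [dot(v, matvec(A, v)) for v in block]   # Rayleigh quotients -> eigenvalues
print([round(r, 4) for r in ritz])
```

The two Rayleigh quotients converge to the two largest eigenvalues, 2 + 2cos(π/5) ≈ 3.618 and 2 + 2cos(2π/5) ≈ 2.618; convergence rate depends on the eigenvalue gaps, which is what preconditioning and good starting vectors improve in the full method.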

  11. Oxytocin attenuates trust as a subset of more general reinforcement learning, with altered reward circuit functional connectivity in males.

    PubMed

    Ide, Jaime S; Nedic, Sanja; Wong, Kin F; Strey, Shmuel L; Lawson, Elizabeth A; Dickerson, Bradford C; Wald, Lawrence L; La Camera, Giancarlo; Mujica-Parodi, Lilianne R

    2018-07-01

    Oxytocin (OT) is an endogenous neuropeptide that, while originally thought to promote trust, has more recently been found to be context-dependent. Here we extend experimental paradigms previously restricted to de novo decision-to-trust, to a more realistic environment in which social relationships evolve in response to iterative feedback over twenty interactions. In a randomized, double blind, placebo-controlled within-subject/crossover experiment of human adult males, we investigated the effects of a single dose of intranasal OT (40 IU) on Bayesian expectation updating and reinforcement learning within a social context, with associated brain circuit dynamics. Subjects participated in a neuroeconomic task (Iterative Trust Game) designed to probe iterative social learning while their brains were scanned using ultra-high field (7T) fMRI. We modeled each subject's behavior using Bayesian updating of belief-states ("willingness to trust") as well as canonical measures of reinforcement learning (learning rate, inverse temperature). Behavioral trajectories were then used as regressors within fMRI activation and connectivity analyses to identify corresponding brain network functionality affected by OT. Behaviorally, OT reduced feedback learning, without bias with respect to positive versus negative reward. Neurobiologically, reduced learning under OT was associated with muted communication between three key nodes within the reward circuit: the orbitofrontal cortex, amygdala, and lateral (limbic) habenula. Our data suggest that OT, rather than inspiring feelings of generosity, instead attenuates the brain's encoding of prediction error and therefore its ability to modulate pre-existing beliefs. This effect may underlie OT's putative role in promoting what has typically been reported as 'unjustified trust' in the face of information that suggests likely betrayal, while also resolving apparent contradictions with regard to OT's context-dependent behavioral effects. 
Copyright © 2018 Elsevier Inc. All rights reserved.
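
The reinforcement-learning quantities named in the record above (learning rate, inverse temperature) enter a standard model of this kind roughly as follows; this is a generic sketch of a Rescorla-Wagner update with a softmax choice rule, not the paper's exact parameterization:

```python
import math

def rw_update(value, reward, alpha):
    """Rescorla-Wagner update: move the belief ("willingness to trust")
    toward the received feedback by a fraction alpha (the learning rate)."""
    prediction_error = reward - value
    return value + alpha * prediction_error

def p_trust(value_trust, value_withhold, beta):
    """Softmax choice rule; beta is the inverse temperature
    (higher beta -> more deterministic choices)."""
    num = math.exp(beta * value_trust)
    return num / (num + math.exp(beta * value_withhold))

# One trial: the partner reciprocates (reward = 1) after the subject invests.
v = 0.5                                  # prior willingness to trust
v = rw_update(v, reward=1.0, alpha=0.3)  # -> 0.65
p = p_trust(v, 0.5, beta=2.0)            # probability of trusting next trial
```

A reduced learning rate, as reported under OT, would shrink the prediction-error term and leave `v` closer to its prior after each trial.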

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, W. M., E-mail: solomon@fusion.gat.com; Bortolon, A.; Grierson, B. A.

    A new high pedestal regime (“Super H-mode”) has been predicted and accessed on DIII-D. Super H-mode was first achieved on DIII-D using a quiescent H-mode edge, enabling a smooth trajectory through pedestal parameter space. By exploiting Super H-mode, it has been possible to access high pedestal pressures at high normalized densities. While elimination of edge-localized modes (ELMs) is beneficial for Super H-mode, it may not be a requirement, as recent experiments have maintained high pedestals with ELMs triggered by lithium granule injection. Simulations using TGLF for core transport and the EPED model for the pedestal find that ITER can benefit from the improved performance associated with Super H-mode, with increased values of fusion power and gain possible. Similar studies demonstrate that the Super H-mode pedestal can be advantageous for a steady-state power plant, by providing a path to increasing the bootstrap current while simultaneously reducing the demands on the core physics performance.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, W. M.; Snyder, P. B.; Bortolon, A.

    A new high pedestal regime ("Super H-mode") has been predicted and accessed on DIII-D. Super H-mode was first achieved on DIII-D using a quiescent H-mode edge, enabling a smooth trajectory through pedestal parameter space. By exploiting Super H-mode, it has been possible to access high pedestal pressures at high normalized densities. While elimination of edge-localized modes (ELMs) is beneficial for Super H-mode, it may not be a requirement, as recent experiments have maintained high pedestals with ELMs triggered by lithium granule injection. Simulations using TGLF for core transport and the EPED model for the pedestal find that ITER can benefit from the improved performance associated with Super H-mode, with increased values of fusion power and gain possible. Similar studies demonstrate that the Super H-mode pedestal can be advantageous for a steady-state power plant, by providing a path to increasing the bootstrap current while simultaneously reducing the demands on the core physics performance.

  14. PROGRAM VSAERO: A computer program for calculating the non-linear aerodynamic characteristics of arbitrary configurations: User's manual

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1982-01-01

    VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.

  15. Static shape of an acoustically levitated drop with wave-drop interaction

    NASA Astrophysics Data System (ADS)

    Lee, C. P.; Anilkumar, A. V.; Wang, T. G.

    1994-11-01

    The static shape of a drop levitated and flattened by an acoustic standing wave field in air is calculated, requiring self-consistency between the drop shape and the wave. The wave is calculated for a given shape using the boundary integral method. From the resulting radiation stress on the drop surface, the shape is determined by solving the Young-Laplace equation, completing an iteration cycle. The iteration is continued until both the shape and the wave converge. Of particular interest are the shapes of large drops that sustain equilibrium, beyond a certain degree of flattening, by becoming more flattened at a decreasing sound pressure level. The predictions for flattening versus acoustic radiation stress, for drops of different sizes, compare favorably with experimental data.

  16. Altered predictive capability of the brain network EEG model in schizophrenia during cognition.

    PubMed

    Gomez-Pilar, Javier; Poza, Jesús; Gómez, Carlos; Northoff, Georg; Lubeiro, Alba; Cea-Cañas, Benjamín B; Molina, Vicente; Hornero, Roberto

    2018-05-12

    The study of the mechanisms involved in cognition is of paramount importance for the understanding of the neurobiological substrates in psychiatric disorders. Hence, this research is aimed at exploring the brain network dynamics during a cognitive task. Specifically, we analyze the predictive capability of the pre-stimulus theta activity to ascertain the functional brain dynamics during cognition in both healthy and schizophrenia subjects. Firstly, EEG recordings were acquired during a three-tone oddball task from fifty-one healthy subjects and thirty-five schizophrenia patients. Secondly, phase-based coupling measures were used to generate the time-varying functional network for each subject. Finally, pre-stimulus network connections were iteratively modified according to different models of network reorganization. This adjustment was applied by minimizing the prediction error through recurrent iterations, following the predictive coding approach. Most subjects, both controls and schizophrenia patients, showed a reinforcement of the secondary neural pathways (i.e., pathways between cortical brain regions weakly connected during pre-stimulus), though the proportion of controls that exhibited this behavior was statistically significantly higher than the proportion of patients. These findings suggest that schizophrenia is associated with an impaired ability to modify brain network configuration during cognition. Furthermore, we provide direct evidence that the changes in phase-based brain network parameters from pre-stimulus to cognitive response in the theta band are closely related to performance in important cognitive domains. Our findings not only contribute to the understanding of healthy brain dynamics, but also shed light on the altered predictive neuronal substrates in schizophrenia. Copyright © 2018 Elsevier B.V. All rights reserved.
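
The iterative adjustment described above, minimizing prediction error through recurrent iterations, can be caricatured with a generic error-reducing update; the state vector and linear readout below are illustrative stand-ins for the study's phase-coupling networks:

```python
import numpy as np

def minimize_prediction_error(w, x, target, lr=0.1, n_iter=200):
    """Iteratively adjust connection weights w so that the predicted
    response w @ x approaches the observed response, reducing the
    prediction error on every recurrent iteration."""
    for _ in range(n_iter):
        error = target - w @ x      # prediction error for this iteration
        w = w + lr * error * x      # gradient step that shrinks the error
    return w

x = np.array([0.5, -0.3, 0.8, 0.2])   # stand-in pre-stimulus network state
w = np.zeros(4)                        # initial connection weights
w = minimize_prediction_error(w, x, target=1.0)
print(abs(1.0 - w @ x) < 1e-3)         # True: error driven near zero
```

An "impaired ability to modify network configuration" corresponds, in this caricature, to a smaller effective `lr` or fewer effective iterations.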

  17. Fusion Power measurement at ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertalot, L.; Barnsley, R.; Krasilnikov, V.

    2015-07-01

    Nuclear fusion research aims to provide energy for the future in a sustainable way, and the scope of the ITER project is to demonstrate the feasibility of nuclear fusion energy. ITER is an experimental nuclear reactor based on a large scale fusion plasma (tokamak type) device generating Deuterium - Tritium (DT) fusion reactions with emission of 14 MeV neutrons, producing up to 700 MW fusion power. The measurement of fusion power, i.e. total neutron emissivity, will play an important role in achieving ITER goals, in particular the fusion gain factor Q related to the reactor performance. Particular attention is given also to the development of the neutron calibration strategy, whose main scope is to achieve the required accuracy of 10% for the measurement of fusion power. Neutron Flux Monitors located in diagnostic ports and inside the vacuum vessel will measure ITER total neutron emissivity, expected to range from 10^14 n/s in Deuterium - Deuterium (DD) plasmas up to almost 10^21 n/s in DT plasmas. The neutron detection systems, as well as all other ITER diagnostics, have to withstand high nuclear radiation and electromagnetic fields, as well as ultrahigh vacuum and thermal loads. (authors)
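
The quoted figures can be cross-checked with simple arithmetic: each DT reaction releases about 17.6 MeV and emits one 14 MeV neutron, so total neutron emissivity maps directly onto fusion power. A back-of-envelope sketch (idealized, ignoring the DD contribution and the actual calibration chain):

```python
# Energy bookkeeping for DT fusion, idealized.
MEV = 1.602e-13          # joules per MeV
E_DT = 17.6 * MEV        # energy released per DT reaction, J

def fusion_power(neutron_rate):
    """Fusion power in watts for a given DT neutron emission rate (n/s),
    assuming one neutron per reaction."""
    return neutron_rate * E_DT

# 700 MW corresponds to a few 1e20 n/s, i.e. approaching the quoted 1e21 n/s:
rate = 700e6 / E_DT
print(f"{rate:.1e} n/s")   # about 2.5e20 n/s
```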

  18. ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics

    NASA Astrophysics Data System (ADS)

    Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.

    2017-04-01

    The ECE Diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and for physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70 to 1000 GHz) transmission lines; a high temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70 to 1000 GHz); and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US Domestic Agency and the ITER Organization (IO). The design needs to conform to the ITER Organization’s strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of the various subsystems and components, considering the engineering challenges and their solutions, will be discussed in this paper. This paper will also highlight how various ECE measurements can enhance understanding of plasma physics in ITER.

  19. Explaining Cooperation in Groups: Testing Models of Reciprocity and Learning

    ERIC Educational Resources Information Center

    Biele, Guido; Rieskamp, Jorg; Czienskowski, Uwe

    2008-01-01

    What are the cognitive processes underlying cooperation in groups? This question is addressed by examining how well a reciprocity model, two learning models, and social value orientation can predict cooperation in two iterated n-person social dilemmas with continuous contributions. In the first of these dilemmas, the public goods game,…

  20. FLEXWAL: A computer program for predicting the wall modifications for two-dimensional, solid, adaptive-wall tunnels

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1983-01-01

    A program called FLEXWAL for calculating wall modifications for solid, adaptive-wall wind tunnels is presented. The method used is the iterative technique of NASA TP-2081 and is applicable to subsonic and transonic test conditions. The program usage, program listing, and a sample case are given.

  1. Optimization design combined with coupled structural-electrostatic analysis for the electrostatically controlled deployable membrane reflector

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Yang, Guigeng; Zhang, Yiqun

    2015-01-01

    The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs that accounts for the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying a uniform prestress to the membrane design shape and optimizing the voltages, where the optimal voltages are computed via a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared, and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.
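
The Newton-Raphson solution of a residual formulation can be sketched generically. The toy two-unknown residual below is illustrative only (it stands in for the structural-electrostatic residual of the paper), and the finite-difference Jacobian is an assumption of this sketch:

```python
import numpy as np

def newton_raphson(residual, u0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve residual(u) = 0 by Newton-Raphson with a finite-difference
    Jacobian: linearize, solve for the update, repeat until the
    residual norm is below tol."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # numerical Jacobian, column by column
        J = np.empty((len(r), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = h
            J[:, j] = (residual(u + du) - r) / h
        u = u - np.linalg.solve(J, r)
    return u

# toy coupled "fields": a displacement-like unknown and a voltage-like unknown
res = lambda u: np.array([u[0]**2 + u[1] - 2.0, u[0] - u[1]])
x = newton_raphson(res, [1.5, 0.5])
print(np.round(x, 6))   # both components converge to 1.0
```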

  2. Modeling the Lyα Forest in Collisionless Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Lukić, Zarija

    2016-08-11

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
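
The core of the PDF-matching step can be illustrated with a simple monotonic rank mapping. The field names below are synthetic placeholders, and the companion power-spectrum matching iteration of IMS is omitted from this sketch:

```python
import numpy as np

def match_pdf(pseudo_field, reference_field):
    """Monotonic rank mapping: replace each value of the pseudo field by
    the reference value of the same rank, so the output exactly inherits
    the reference one-point PDF while preserving the spatial ordering of
    the input (one half of an IMS-like iteration)."""
    order = np.argsort(pseudo_field)
    matched = np.empty_like(pseudo_field)
    matched[order] = np.sort(reference_field)
    return matched

rng = np.random.default_rng(1)
pseudo = rng.normal(size=1000)         # stands in for the N-body pseudo-flux
reference = rng.lognormal(size=1000)   # stands in for the hydro flux PDF
out = match_pdf(pseudo, reference)
print(np.allclose(np.sort(out), np.sort(reference)))  # True: PDFs now match
```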

  3. MODELING THE Ly α FOREST IN COLLISIONLESS SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, Daniele; Oñorbe, José; Hennawi, Joseph F.

    2016-08-20

    Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present “Iteratively Matched Statistics” (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. In addition, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic “mock” skies for Lyα forest surveys.

  4. Experimental and Theoretical Research on the Compression Performance of CFRP Sheet Confined GFRP Short Pipe

    PubMed Central

    Zhao, Qilin; Chen, Li; Shao, Guojian

    2014-01-01

    The axial compressive strength of unidirectional FRP made by pultrusion is generally much lower than its axial tensile strength. This fact reduces the advantages of FRP as a main load-bearing member in engineering structures. A theoretical iterative calculation approach was previously suggested to predict the ultimate axial compressive stress of the combined structure and to analyze the influences of geometrical parameters on it. In this paper, the experimental and theoretical research on the CFRP sheet confined GFRP short pole is extended to the CFRP sheet confined GFRP short pipe, namely a hollow-section pole. Experiments show that the bearing capacity of the GFRP short pipe can also be increased appreciably by confining it with CFRP sheet. The theoretical iterative calculation approach of the previous paper is amended to predict the ultimate axial compressive stress of the CFRP sheet confined GFRP short pipe, and its results agree with the experiment. Lastly, the influences of geometrical parameters on the new combined structure are analyzed. PMID:24672288

  5. Multivariate qualitative analysis of banned additives in food safety using surface enhanced Raman scattering spectroscopy.

    PubMed

    He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei

    2015-02-25

    A novel strategy which combines an iterative cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining spectral preprocessing by iteratively cubic spline fitting (ICSF) baseline correction with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification, respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
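
The iterative baseline idea behind ICSF can be sketched as fit, clip, refit: points rising above the current fit (the Raman bands) are progressively excluded from the baseline estimate. A polynomial is used below in place of the paper's cubic spline, and all signal shapes are synthetic stand-ins:

```python
import numpy as np

def iterative_baseline(y, x, degree=3, n_iter=20):
    """Iterative baseline estimation: fit a smooth curve, clip the
    signal to the fit so that peaks are suppressed, and refit.
    A polynomial stands in for the cubic spline of ICSF."""
    baseline = y.copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, baseline, degree)
        fit = np.polyval(coeffs, x)
        baseline = np.minimum(baseline, fit)  # drop points above the fit
    return fit

x = np.linspace(0, 10, 500)
peaks = 5.0 * np.exp(-0.5 * ((x - 4.0) / 0.1) ** 2)   # sharp Raman-like band
background = 0.2 * x + 1.0                            # slowly varying baseline
y = background + peaks
corrected = y - iterative_baseline(y, x)              # band survives, slope removed
```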

  6. Iterated intracochlear reflection shapes the envelopes of basilar-membrane click responses

    PubMed Central

    Shera, Christopher A.

    2015-01-01

    Multiple internal reflection of cochlear traveling waves has been argued to provide a plausible explanation for the waxing and waning and other temporal structures often exhibited by the envelopes of basilar-membrane (BM) and auditory-nerve responses to acoustic clicks. However, a recent theoretical analysis of a BM click response measured in chinchilla concludes that the waveform cannot have arisen via any equal, repetitive process, such as iterated intracochlear reflection [Wit and Bell (2015), J. Acoust. Soc. Am. 138, 94–96]. Reanalysis of the waveform contradicts this conclusion. The measured BM click response is used to derive the frequency-domain transfer function characterizing every iteration of the loop. The selfsame transfer function that yields waxing and waning of the BM click response also captures the spectral features of ear-canal stimulus-frequency otoacoustic emissions measured in the same animal, consistent with the predictions of multiple internal reflection. Small shifts in transfer-function phase simulate results at different measurement locations and reproduce the heterogeneity of BM click response envelopes observed experimentally. PMID:26723327

  7. Finite element analysis of heat load of tungsten relevant to ITER conditions

    NASA Astrophysics Data System (ADS)

    Zinovev, A.; Terentyev, D.; Delannay, L.

    2017-12-01

    A computational procedure is proposed to predict the initiation of intergranular cracks in tungsten with the ITER specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of cracking initiation. The simulated heat load cycle is representative of edge-localized modes, which are anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the direction of grain orientation with respect to the heat flux direction and to attain higher values if the flux is perpendicular to the elongated grains, where it apparently promotes crack initiation.

  8. Plasma-surface interaction in the Be/W environment: Conclusions drawn from the JET-ILW for ITER

    NASA Astrophysics Data System (ADS)

    Brezinsek, S.; JET-EFDA contributors

    2015-08-01

    The JET ITER-Like Wall experiment (JET-ILW) provides an ideal test bed to investigate plasma-surface interaction (PSI) and plasma operation with the ITER plasma-facing material selection, employing beryllium in the main chamber and tungsten in the divertor. The main PSI processes: (a) material erosion and migration, (b) fuel recycling and retention, and (c) impurity concentration and radiation, have been studied and compared between JET-C and JET-ILW. The current physics understanding of these key processes in the JET-ILW revealed that both the interpretation of previously obtained carbon results (JET-C) and predictions to ITER need to be revisited. The impact of the first-wall material on the plasma was underestimated. Main observations are: (a) a low primary erosion source in H-mode plasmas and a reduction of the material migration from the main chamber to the divertor (factor 7) as well as within the divertor from plasma-facing to remote areas (factor 30 - 50). The energetic threshold for beryllium sputtering minimises the primary erosion source and inhibits multi-step re-erosion in the divertor. The physical sputtering yield of tungsten is as low as 10^-5 and is determined by beryllium ions. (b) Reduction of the long-term fuel retention (factor 10 - 20) in JET-ILW with respect to JET-C. The remaining retention is caused by implantation and co-deposition with beryllium and residual impurities. Outgassing has gained importance and impacts the recycling properties of beryllium and tungsten. (c) The low effective plasma charge (Zeff = 1.2) and low radiation capability of beryllium reveal the bare deuterium plasma physics. Moderate nitrogen seeding, reaching Zeff = 1.6, restores in particular the confinement and the L-H threshold behaviour. ITER-compatible divertor conditions with stable semi-detachment were obtained owing to a higher density limit with the ILW.
Overall JET demonstrated successful plasma operation in the Be/W material combination and confirms its advantageous PSI behaviour and gives strong support to the ITER material selection.

  9. High power millimeter wave experiment of ITER relevant electron cyclotron heating and current drive system.

    PubMed

    Takahashi, K; Kajiwara, K; Oda, Y; Kasugai, A; Kobayashi, N; Sakamoto, K; Doane, J; Olstad, R; Henderson, M

    2011-06-01

    High power, long pulse millimeter (mm) wave experiments were performed on the RF test stand (RFTS) of the Japan Atomic Energy Agency (JAEA). The system has an ITER-relevant configuration, consisting of a 1 MW/170 GHz gyrotron, a long- and short-distance transmission line (TL), and an equatorial launcher (EL) mock-up. The TL is composed of a matching optics unit, evacuated circular corrugated waveguides, six miter bends, an in-line waveguide switch, and an isolation valve. The EL mock-up is fabricated according to the current design of the ITER launcher. Gaussian-like beam radiation with a steering capability of 20°-40° from the EL mock-up was also successfully demonstrated. The high power, long pulse power transmission test was conducted with the metallic load replaced by the EL mock-up, and transmission of 1 MW/800 s and 0.5 MW/1000 s was successfully demonstrated with no arcing and no damage. The transmission efficiency of the TL was 96%. The results prove the feasibility of the ITER electron cyclotron heating and current drive system. © 2011 American Institute of Physics

  10. The motional Stark effect diagnostic for ITER using a line-shift approach.

    PubMed

    Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E

    2008-10-01

    The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.

  11. X-Divertors on ITER - with no hardware changes

    NASA Astrophysics Data System (ADS)

    Valanju, Prashant; Covele, Brent; Kotschenreuther, Mike; Mahajan, Swadesh; Kessel, Charles

    2014-10-01

    Using CORSICA, we have discovered that X-Divertor (XD) equilibria are possible on ITER - without any extra PF coils inside the TF coils, and with no changes to ITER's poloidal field (PF) coil set, divertor cassette, strike points, or first wall. Starting from the Standard Divertor (SD), a sequence of XD configurations (with increasing flux expansions at the divertor plate) can be made by reprogramming ITER PF coil currents while keeping them all under their design limits (Lackner and Zohm have shown this to be impossible for Snowflakes). The strike point is held fixed, so no changes in the divertor or pumping hardware will be needed. The main plasma shape is kept very close to the SD case, so no hardware changes to the main chamber will be needed. Time-dependent ITER-XD operational scenarios are being checked using TSC. This opens the possibility that many XDs could be tested and used to assist in high-power operation on ITER. Because of the toroidally segmented ITER divertor plates, strongly detached operation may be critical for making use of the largest XD flux expansion possible. The flux flaring in XDs is expected to increase the stability of detachment, so that H-mode confinement is not affected. Detachment stability is being examined with SOLPS. This work supported by US DOE Grants DE-FG02-04ER54742 and DE-FG02-04ER54754 and by TACC at UT Austin.

  12. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations on examples with exact solutions, as well as on examples where the additional condition is specified with random errors, are presented. The results show the high efficiency of the conjugate gradient iterative method for the numerical solution of this inverse problem.
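
The conjugate gradient iteration itself is standard; a textbook sketch on a generic symmetric positive-definite system (the article's finite-difference operator and adjoint-based gradient are not reproduced here):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    """Textbook conjugate-gradient iteration for A x = b with
    symmetric positive-definite A: search directions are made
    A-conjugate, so convergence is rapid for well-conditioned systems."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))             # True
```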

  13. Bragg x-ray survey spectrometer for ITER.

    PubMed

    Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S

    2012-10-01

    Several potential impurity ions in the ITER plasmas will lead to loss of confined energy through line and continuum emission. For real time monitoring of impurities, a seven channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents design and analysis of the spectrometer, including x-ray tracing by the Shadow-XOP code, sensitivity calculations for reference H-mode plasma and neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements of impurity monitoring in 10 ms integration time at the minimum levels for low-Z to high-Z impurity ions can largely be met.

  14. Gaussian beam and physical optics iteration technique for wideband beam waveguide feed design

    NASA Technical Reports Server (NTRS)

    Veruttipong, W.; Chen, J. C.; Bathker, D. A.

    1991-01-01

    The Gaussian beam technique has become increasingly popular for wideband beam waveguide (BWG) design. However, the Gaussian solution is less accurate for smaller mirrors (less than roughly 30 lambda in diameter). Therefore, a high-performance wideband BWG design cannot be achieved using the Gaussian beam technique alone. This article demonstrates a new design approach that iterates Gaussian beam and BWG parameters simultaneously at various frequencies to obtain a wideband BWG. The result is further improved by comparing it with physical optics results and repeating the iteration.

  15. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with the unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.

  16. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in the implementation of the various combinations of relaxation schemes, discretizations, and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
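The Jacobi-versus-Gauss-Seidel comparison above can be illustrated on a generic model problem. The sketch below is not the paper's discontinuous Galerkin setting; it applies the two relaxations to a 1D Poisson (diffusion) matrix, for which the classical theory predicts Gauss-Seidel converging about twice as fast as Jacobi:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: update every unknown from the previous iterate."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: use already-updated values within the sweep."""
    x = x.copy()
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# 1D diffusion (Poisson) model problem: tridiagonal [-1, 2, -1] matrix
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

def iterations_to_tol(step, tol=1e-8, max_iter=100000):
    x = np.zeros(n)
    for k in range(max_iter):
        x = step(A, b, x)
        if np.linalg.norm(b - A @ x) < tol:
            return k + 1
    return max_iter

kj = iterations_to_tol(jacobi_step)
kg = iterations_to_tol(gauss_seidel_step)
print(kj, kg)  # Gauss-Seidel needs roughly half as many sweeps as Jacobi here
```

Note that each Gauss-Seidel sweep is sequential over unknowns, which mirrors the cost caveat in the abstract: fewer iterations do not automatically mean lower total cost.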

  17. Developing a predictive understanding of landscape importance to the Punan-Pelancau of East Kalimantan, Borneo.

    PubMed

    Cunliffe, Robert N; Lynam, Timothy J P; Sheil, Douglas; Wan, Meilinda; Salim, Agus; Basuki, Imam; Priyadi, Hari

    2007-11-01

    In order for local community views to be incorporated into new development initiatives, their perceptions need to be clearly understood and documented in a format that is readily accessible to planners and developers. The current study sought to develop a predictive understanding of how the Punan Pelancau community, living in a forested landscape in East Kalimantan, assigns importance to its surrounding landscapes and to present these perceptions in the form of maps. The approach entailed the iterative use of a combination of participatory community evaluation methods and more formal modeling and geographic information system techniques. Results suggest that landscape importance is largely dictated by potential benefits, such as inputs to production, health, and houses. Neither land types nor distance were good predictors of landscape importance. The grid-cell method, developed as part of the study, appears to offer a simple technique to capture and present the knowledge of local communities, even where their relationship to the land is highly complex, as was the case for this particular community.

  18. Predicting Persuasion-Induced Behavior Change from the Brain

    PubMed Central

    Falk, Emily B.; Berkman, Elliot T.; Mann, Traci; Harrison, Brittany; Lieberman, Matthew D.

    2011-01-01

    Although persuasive messages often alter people’s self-reported attitudes and intentions to perform behaviors, these self-reports do not necessarily predict behavior change. We demonstrate that neural responses to persuasive messages can predict variability in behavior change in the subsequent week. Specifically, an a priori region of interest (ROI) in medial prefrontal cortex (MPFC) was reliably associated with behavior change (r = 0.49, p < 0.05). Additionally, an iterative cross-validation approach using activity in this MPFC ROI predicted an average 23% of the variance in behavior change beyond the variance predicted by self-reported attitudes and intentions. Thus, neural signals can predict behavioral changes that are not predicted from self-reported attitudes and intentions alone. Additionally, this is the first functional magnetic resonance imaging study to demonstrate that a neural signal can predict complex real world behavior days in advance. PMID:20573889

  19. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    PubMed

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
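For reference, a contrast-to-noise ratio of the kind compared above is commonly computed as the HU difference between a tissue ROI and a reference region divided by the noise (standard deviation) in the reference. This is a minimal sketch of that common definition; the HU samples and the exact ROI protocol are illustrative assumptions, not the study's data:

```python
import numpy as np

def contrast_to_noise_ratio(roi_hu, background_hu):
    """CNR = |mean(ROI HU) - mean(reference HU)| / SD of reference noise."""
    roi = np.asarray(roi_hu, dtype=float)
    bg = np.asarray(background_hu, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

# Made-up HU samples for illustration only: liver ROI vs adjacent fat
liver = [62.0, 60.5, 63.1, 61.4]
fat = [-95.0, -98.2, -96.4, -97.1]
print(round(contrast_to_noise_ratio(liver, fat), 1))
```

Because iterative reconstruction lowers the noise term in the denominator, stronger iterative blending (e.g., ASIR-V 90%) tends to raise CNR even when the HU contrast itself is unchanged.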

  20. Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers

    NASA Astrophysics Data System (ADS)

    Green, David; Berry, Lee; RF-SciDAC Collaboration

    2017-10-01

    The hot nature of fusion plasmas requires a wave-vector-dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method, such that the wave-vector-dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction for high power RF antenna designs in fusion applications requires resolving the geometric details of the antenna and other plasma facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers was presented previously, in which an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  1. Eight channel transmit array volume coil using on-coil radiofrequency current sources

    PubMed Central

    Kurpad, Krishna N.; Boskamp, Eddy B.

    2014-01-01

    Background At imaging frequencies associated with high-field MRI, the combined effects of increased load-coil interaction and shortened wavelength result in degradation of circular polarization and B1 field homogeneity in the imaging volume. Radio frequency (RF) shimming is known to mitigate the problem of B1 field inhomogeneity. Transmit arrays with well-decoupled transmitting elements enable accurate B1 field pattern control using simple, non-iterative algorithms. Methods An eight channel transmit array was constructed. Each channel consisted of a transmitting element driven by a dedicated on-coil RF current source. The coil current distributions of characteristic transverse electromagnetic (TEM) coil resonant modes were non-iteratively set up on each transmitting element and 3T MRI images of a mineral oil phantom were obtained. Results B1 field patterns of several linear and quadrature TEM coil resonant modes that typically occur at different resonant frequencies were replicated at 128 MHz without having to retune the transmit array. The generated B1 field patterns agreed well with simulation in most cases. Conclusions Independent control of current amplitude and phase on each transmitting element was demonstrated. The transmit array with on-coil RF current sources enables B1 field shimming in a simple and predictable manner. PMID:24834418

  2. Predicting knee replacement damage in a simulator machine using a computational model with a consistent wear factor.

    PubMed

    Zhao, Dong; Sakoda, Hideyuki; Sawyer, W Gregory; Banks, Scott A; Fregly, Benjamin J

    2008-02-01

    Wear of ultrahigh molecular weight polyethylene remains a primary factor limiting the longevity of total knee replacements (TKRs). However, wear testing on a simulator machine is time consuming and expensive, making it impractical for iterative design purposes. The objectives of this paper were, first, to evaluate whether a computational model using a wear factor consistent with the TKR material pair can accurately predict TKR damage measured in a simulator machine, and second, to investigate how the choice of surface evolution method (fixed or variable step) and material model (linear or nonlinear) affects the prediction. An iterative computational damage model was constructed for a commercial knee implant in an AMTI simulator machine. The damage model combined a dynamic contact model with a surface evolution model to predict how wear plus creep progressively alter tibial insert geometry over multiple simulations. The computational framework was validated by predicting wear in a cylinder-on-plate system for which an analytical solution was derived. The implant damage model was evaluated for 5 million cycles of simulated gait using damage measurements made on the same implant in an AMTI machine. Using a pin-on-plate wear factor for the same material pair as the implant, the model predicted tibial insert wear volume to within 2% error and damage depths and areas to within 18% and 10% error, respectively. Choice of material model had little influence, while inclusion of surface evolution affected damage depth and area but not wear volume predictions. The surface evolution method was important only during the initial cycles, where a variable step was needed to capture rapid geometry changes due to creep. Overall, our results indicate that accurate TKR damage predictions can be made with a computational model using a constant wear factor obtained from pin-on-plate tests for the same material pair, and furthermore, that the surface evolution method matters only during the initial "break in" period of the simulation.
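The constant-wear-factor approach above follows an Archard-type linear wear law, in which local wear depth scales with the wear factor, the contact pressure, and the sliding distance. A minimal sketch of that relation follows; the magnitudes below are plausible orders of magnitude for UHMWPE-on-metal assumed for illustration, not the paper's data:

```python
def archard_wear_depth(wear_factor, contact_pressure, sliding_distance):
    """Archard-type linear wear law: depth = k * p * s (per load application)."""
    return wear_factor * contact_pressure * sliding_distance

# Hypothetical values, roughly in range for UHMWPE-on-metal (not the paper's data)
k = 1.0e-9          # wear factor, mm^3 per (N*mm), i.e. mm depth per (MPa*mm)
p = 5.0             # nominal contact pressure, MPa
s_per_cycle = 20.0  # sliding distance per gait cycle, mm
cycles = 5_000_000  # matches the 5 million simulated gait cycles in the study

depth = archard_wear_depth(k, p, s_per_cycle) * cycles
print(depth)  # accumulated linear wear depth in mm
```

In the paper's damage model this per-cycle update is coupled to a contact solver and a surface evolution step, so pressure and sliding distance are recomputed as the insert geometry changes rather than held fixed as in this sketch.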

  3. Integrated simulations of saturated neoclassical tearing modes in DIII-D, Joint European Torus, and ITER plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halpern, Federico D.; Bateman, Glenn; Kritz, Arnold H.

    2006-06-15

    A revised version of the ISLAND module [C. N. Nguyen et al., Phys. Plasmas 11, 3604 (2004)] is used in the BALDUR code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)] to carry out integrated modeling simulations of DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)], Joint European Torus (JET) [P. H. Rebut et al., Nucl. Fusion 25, 1011 (1985)], and ITER [R. Aymar et al., Plasma Phys. Control. Fusion 44, 519 (2002)] tokamak discharges in order to investigate the adverse effects of multiple saturated magnetic islands driven by neoclassical tearing modes (NTMs). Simulations are carried out with a predictive model for the temperature and density pedestal at the edge of the high confinement mode (H-mode) plasma and with core transport described using the Multi-Mode model. The ISLAND module, which is used to compute magnetic island widths, includes the effects of an arbitrary aspect ratio and plasma cross sectional shape, the effect of the neoclassical bootstrap current, and the effect of the distortion in the shape of each magnetic island caused by the radial variation of the perturbed magnetic field. Radial transport is enhanced across the width of each magnetic island within the BALDUR integrated modeling simulations in order to produce a self-consistent local flattening of the plasma profiles. It is found that the main consequence of the NTM magnetic islands is a decrease in the central plasma temperature and total energy. For the DIII-D and JET discharges, it is found that inclusion of the NTMs typically results in a decrease in total energy of the order of 15%. In simulations of ITER, it is found that the saturated magnetic island widths normalized by the plasma minor radius, for the lowest order individual tearing modes, are approximately 24% for the 2/1 mode and 12% for the 3/2 mode. As a result, the ratio of ITER fusion power to heating power (fusion Q) is reduced from Q=10.6 in simulations with no NTM islands to Q=2.6 in simulations with fully saturated NTM islands.

  4. IDC Re-Engineering Phase 2 Iteration E2 Use Case Realizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, James M.; Burns, John F.; Hamlet, Benjamin R.

    2016-06-01

    This architecturally significant use case describes how the System acquires meteorological data to build atmospheric models used in automatic and interactive processing of infrasound data. The System requests the latest available high-resolution global meteorological data from external data centers and puts it into the correct formats for generation of infrasound propagation models. The System moves the meteorological data from the Data Acquisition Partition to the Data Processing Partition and stores the meteorological data. The System builds a new atmospheric model based on the meteorological data. This use case is architecturally significant because it describes acquiring meteorological data from various sources and creating a dynamic atmospheric transmission model to support the prediction of infrasonic signal detection.

  5. Diffusive molecular dynamics simulations of lithiation of silicon nanopillars

    NASA Astrophysics Data System (ADS)

    Mendez, J. P.; Ponga, M.; Ortiz, M.

    2018-06-01

    We report diffusive molecular dynamics simulations concerned with the lithiation of Si nano-pillars, i.e., nano-sized Si rods held at both ends by rigid supports. The duration of the lithiation process is of the order of milliseconds, well outside the range of molecular dynamics but readily accessible to diffusive molecular dynamics. The simulations predict a Li15Si4 alloy in the fully lithiated phase, exceedingly large transient volume increments of up to 300% due to the weakening of Si-Si interactions, a crystalline-to-amorphous-to-lithiation phase transition governed by interface kinetics, and high misfit strains and residual stresses resulting in surface cracks and severe structural degradation in the form of extensive porosity, among other effects.

  6. Rapid Speech Transmission Index predictions and auralizations of unusual instructional spaces at MIT's new Stata Center

    NASA Astrophysics Data System (ADS)

    Conant, David A.

    2005-04-01

    The Stata Center for Computer, Information and Intelligence Sciences, recently opened at the Massachusetts Institute of Technology, includes a variety of oddly-shaped seminar rooms in addition to lecture spaces of somewhat more conventional form. The architect's design approach precluded reliance on conventional, well-understood room-acoustical behavior, yet MIT and the design team were keenly interested in ensuring that these spaces functioned exceptionally well acoustically. CATT-Acoustic room modeling was employed to assess RASTI through multiple design iterations for all of these spaces. Presented here are computational and descriptive results achieved for these rooms, which are highly regarded by faculty. They all sound peculiarly good, given their unusual form. In addition, binaural auralizations for selected spaces are provided.

  7. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    NASA Technical Reports Server (NTRS)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about + or - 20 deg.

  8. Dynamics of internal models in game players

    NASA Astrophysics Data System (ADS)

    Taiji, Makoto; Ikegami, Takashi

    1999-10-01

    A new approach for the study of social games and communications is proposed. Games are simulated between cognitive players who build the opponent’s internal model and decide their next strategy from predictions based on the model. In this paper, internal models are constructed by the recurrent neural network (RNN), and the iterated prisoner’s dilemma game is performed. The RNN allows us to express the internal model in a geometrical shape. The complicated transients of actions are observed before the stable mutually defecting equilibrium is reached. During the transients, the model shape also becomes complicated and often experiences chaotic changes. These new chaotic dynamics of internal models reflect the dynamical and high-dimensional rugged landscape of the internal model space.

  9. Progress and supercomputing in computational fluid dynamics; Proceedings of U.S.-Israel Workshop, Jerusalem, Israel, December 1984

    NASA Technical Reports Server (NTRS)

    Murman, E. M. (Editor); Abarbanel, S. S. (Editor)

    1985-01-01

    Current developments and future trends in the application of supercomputers to computational fluid dynamics are discussed in reviews and reports. Topics examined include algorithm development for personal-size supercomputers, a multiblock three-dimensional Euler code for out-of-core and multiprocessor calculations, simulation of compressible inviscid and viscous flow, high-resolution solutions of the Euler equations for vortex flows, algorithms for the Navier-Stokes equations, and viscous-flow simulation by FEM and related techniques. Consideration is given to marching iterative methods for the parabolized and thin-layer Navier-Stokes equations, multigrid solutions to quasi-elliptic schemes, secondary instability of free shear flows, simulation of turbulent flow, and problems connected with weather prediction.

  10. Iterative Design and Classroom Evaluation of Automated Formative Feedback for Improving Peer Feedback Localization

    ERIC Educational Resources Information Center

    Nguyen, Huy; Xiong, Wenting; Litman, Diane

    2017-01-01

    A peer-review system that automatically evaluates and provides formative feedback on free-text feedback comments of students was iteratively designed and evaluated in college and high-school classrooms. Classroom assignments required students to write paper drafts and submit them to a peer-review system. When student peers later submitted feedback…

  11. Enhancement of First Wall Damage in Iter Type Tokamak due to Lenr Effects

    NASA Astrophysics Data System (ADS)

    Lipson, Andrei G.; Miley, George H.; Momota, Hiromu

    In recent experiments with a pulsed periodic high-current (J ~ 300-500 mA/cm²) D2 glow discharge at deuteron energies as low as 0.8-2.45 keV, a large DD-reaction yield has been obtained. Thick-target yield measurements show an unusually high DD-reaction enhancement (at Ed = 1 keV the yield is about nine orders of magnitude larger than that deduced from the standard Bosch and Hale extrapolation of the DD-reaction cross-section to lower energies). The results obtained in these LENR experiments with glow discharge suggest non-negligible edge plasma effects in the ITER tokamak that were previously ignored. In the case of the ITER DT plasma core, we here estimate the DT reaction yield at the metal edge due to plasma ion bombardment of the first wall and/or divertor materials.

  12. Design and first plasma measurements of the ITER-ECE prototype radiometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Austin, M. E.; Brookman, M. W.; Rowan, W. L.

    2016-11-15

    On ITER, second harmonic optically thick electron cyclotron emission (ECE) in the range of 220-340 GHz will supply the electron temperature (Te). To investigate the requirements and capabilities prescribed for the ITER system, a prototype radiometer covering this frequency range has been developed by Virginia Diodes, Inc. The first plasma measurements with this instrument have been carried out on the DIII-D tokamak, with lab bench tests and measurements of third through fifth harmonic ECE from high Te plasmas. At DIII-D the instrument shares the transmission line of the Michelson interferometer and can simultaneously acquire data. Comparison of the ECE radiation temperature from the absolutely calibrated Michelson and the prototype receiver shows that the ITER radiometer provides accurate measurements of the millimeter radiation across the instrument band.

  13. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    PubMed

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source (DSCT) computed tomography coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using thresholds for significant stenosis of ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
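The per-patient predictive values quoted above follow directly from Bayes' rule applied to sensitivity, specificity, and disease prevalence. As a short check, the abstract's per-patient figures (sensitivity 100%, specificity 83%, prevalence 30%) reproduce the reported 71.6% positive and 100% negative predictive values:

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Predictive values from test characteristics via Bayes' rule."""
    tp = sensitivity * prevalence                  # true positive fraction
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positive fraction
    tn = specificity * (1.0 - prevalence)          # true negative fraction
    fn = (1.0 - sensitivity) * prevalence          # false negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Per-patient figures from the abstract: sens 100%, spec 83%, prevalence 30%
ppv, npv = ppv_npv(1.00, 0.83, 0.30)
print(round(ppv * 100, 1), round(npv * 100, 1))  # prints: 71.6 100.0
```

This also makes the clinical conclusion concrete: with perfect per-patient sensitivity, NPV is 100% regardless of the moderate specificity, which is why CTCA works well as a rule-out test.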

  14. Predictions of toroidal rotation and torque sources arising in non-axisymmetric perturbed magnetic fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Honda, M.; Satake, S.; Suzuki, Y.; Shinohara, K.; Yoshida, M.; Narita, E.; Nakata, M.; Aiba, N.; Shiraishi, J.; Hayashi, N.; Matsunaga, G.; Matsuyama, A.; Ide, S.

    2017-11-01

    Capabilities of the integrated framework consisting of TOPICS, OFMC, VMEC, and FORTEC-3D have been extended to calculate toroidal rotation in fully non-axisymmetric perturbed magnetic fields for demonstrating operation scenarios in actual tokamak geometry and conditions. The toroidally localized perturbed fields due to the test blanket modules and the tangential neutral beam ports in ITER augment the neoclassical toroidal viscosity (NTV) substantially, while not significantly influencing losses of beam ions and alpha particles in an ITER L-mode discharge. The NTV takes up a large portion of the total torque in ITER and considerably decelerates toroidal rotation, but the change in toroidal rotation may have limited effectiveness against turbulent heat transport. The error field correction coils installed in JT-60SA can externally apply perturbed fields, which may alter the NTV and the resultant toroidal rotation profiles. However, the non-resonant n=18 components of the magnetic fields arising from the toroidal field ripple mainly contribute to the NTV, regardless of the presence of the field applied by the coil current of 10 kA, where n is the toroidal mode number. The theoretical model of the intrinsic torque due to the fluctuation-induced residual stress is calibrated against JT-60U data. For five JT-60U discharges, the sign of the calibration factor conformed to the gyrokinetic linear stability analysis, and a range of its amplitude was revealed. This semi-empirical approach opens up access to an attempt at predicting toroidal rotation in H-mode plasmas.

  15. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak, providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO), and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors and a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months, followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built, with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  16. Predicting the evolution of spreading on complex networks

    PubMed Central

    Chen, Duan-Bing; Xiao, Rui; Zeng, An

    2014-01-01

    Due to the wide applications, spreading processes on complex networks have been intensively studied. However, one of the most fundamental problems has not yet been well addressed: predicting the evolution of spreading based on a given snapshot of the propagation on networks. With this problem solved, one can accelerate or slow down the spreading in advance if the predicted propagation result is narrower or wider than expected. In this paper, we propose an iterative algorithm to estimate the infection probability of the spreading process and then apply it to a mean-field approach to predict the spreading coverage. The validation of the method is performed in both artificial and real networks. The results show that our method is accurate in both infection probability estimation and spreading coverage prediction. PMID:25130862
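The mean-field idea in this abstract, propagating node infection probabilities instead of simulating many stochastic realizations, can be sketched for a discrete-time susceptible-infected (SI) process. This toy version assumes a uniform per-contact infection probability beta is already known; it is an illustration, not the authors' algorithm, which additionally estimates the infection probability from the observed snapshot:

```python
def mean_field_si_prediction(adj, infected, beta, steps):
    """Discrete-time mean-field SI: evolve each node's infection probability.
    adj maps node -> list of neighbors; infected is the snapshot's seed set."""
    p = {v: (1.0 if v in infected else 0.0) for v in adj}
    for _ in range(steps):
        nxt = {}
        for v in adj:
            # probability that v escapes infection from all neighbors this step
            escape = 1.0
            for u in adj[v]:
                escape *= (1.0 - beta * p[u])
            nxt[v] = p[v] + (1.0 - p[v]) * (1.0 - escape)
        p = nxt
    return p

# Toy network: a path 0-1-2-3 with node 0 infected in the snapshot
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
p = mean_field_si_prediction(adj, infected={0}, beta=0.5, steps=3)
coverage = sum(p.values())  # expected number of infected nodes after 3 steps
print(coverage)
```

Summing the node probabilities gives the predicted spreading coverage in one pass, which is the quantity one would compare against an ensemble of stochastic simulations.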

  17. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  18. Directed evolution of a synthetic phylogeny of programmable Trp repressors.

    PubMed

    Ellefson, Jared W; Ledbetter, Michael P; Ellington, Andrew D

    2018-04-01

    As synthetic regulatory programs expand in sophistication, an ever-increasing number of biological components with predictable phenotypes is required. Regulators are often 'part mined' from a diverse, but uncharacterized, array of genomic sequences, often leading to idiosyncratic behavior. Here, we generate an entire synthetic phylogeny from the canonical allosteric transcription factor TrpR. Iterative rounds of positive and negative compartmentalized partnered replication (CPR) led to the exponential amplification of variants that responded with high affinity and specificity to halogenated tryptophan analogs and novel operator sites. Fourteen repressor variants were evolved with unique regulatory profiles across five operators and three ligands. The logic of individual repressors can be modularly programmed by creating heterodimeric fusions, resulting in single proteins that display logic functions, such as 'NAND'. Despite the evolutionarily limited regulatory role of TrpR, vast functional spaces exist around this highly conserved protein scaffold and can be harnessed to create synthetic regulatory programs.

  19. Construction of hydrodynamic bead models from high-resolution X-ray crystallographic or nuclear magnetic resonance data.

    PubMed Central

    Byron, O

    1997-01-01

    Computer software such as HYDRO, based upon a comprehensive body of theoretical work, permits the hydrodynamic modeling of macromolecules in solution, which are represented to the computer interface as an assembly of spheres. The uniqueness of any satisfactory resultant model is optimized by incorporating into the modeling procedure the maximal possible number of criteria to which the bead model must conform. An algorithm (AtoB, for atoms to beads) that permits the direct construction of bead models from high resolution x-ray crystallographic or nuclear magnetic resonance data has now been formulated and tested. Models so generated then act as informed starting estimates for the subsequent iterative modeling procedure, thereby hastening the convergence to reasonable representations of solution conformation. Successful application of this algorithm to several proteins shows that predictions of hydrodynamic parameters, including those concerning solvation, can be confirmed. PMID:8994627
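The atoms-to-beads reduction described above can be sketched as a grid-binning pass: assign each atom to a cubic cell and replace the atoms in each occupied cell by one bead at their centre of mass. The Python sketch below is illustrative only; the bead-radius rule and all names are assumptions for the example, not the published AtoB algorithm.

```python
import math
from collections import defaultdict

def atoms_to_beads(atoms, cell_size):
    """Bin atoms (x, y, z, mass) into cubic grid cells and replace each
    occupied cell by a single bead (cx, cy, cz, radius) at the cell's
    centre of mass.  Illustrative sketch of an AtoB-style reduction."""
    cells = defaultdict(list)
    for x, y, z, m in atoms:
        key = (math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))
        cells[key].append((x, y, z, m))
    beads = []
    for members in cells.values():
        total = sum(m for _, _, _, m in members)
        cx = sum(x * m for x, _, _, m in members) / total
        cy = sum(y * m for _, y, _, m in members) / total
        cz = sum(z * m for _, _, z, m in members) / total
        # Radius chosen so bead volume scales with the mass it holds
        # (an assumption for illustration, not the published rule).
        radius = (3.0 * total / (4.0 * math.pi)) ** (1.0 / 3.0)
        beads.append((cx, cy, cz, radius))
    return beads

# Example: four unit-mass atoms collapse into two beads on a coarse grid.
atoms = [(0.1, 0.1, 0.1, 1.0), (0.4, 0.2, 0.3, 1.0),
         (5.0, 5.1, 5.0, 1.0), (5.2, 5.1, 5.1, 1.0)]
beads = atoms_to_beads(atoms, cell_size=1.0)
```

The resulting beads would then serve as the "informed starting estimates" for the subsequent iterative hydrodynamic modeling step.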

  20. Three-dimensional drift kinetic response of high-β plasmas in the DIII-D tokamak.

    PubMed

    Wang, Z R; Lanctot, M J; Liu, Y Q; Park, J-K; Menard, J E

    2015-04-10

    A quantitative interpretation of the experimentally measured high-pressure plasma response to externally applied three-dimensional (3D) magnetic field perturbations, across the no-wall Troyon β limit, is achieved. The self-consistent inclusion of the drift kinetic effects in magnetohydrodynamic (MHD) modeling [Y. Q. Liu et al., Phys. Plasmas 15, 112503 (2008)] successfully resolves an outstanding issue of the ideal MHD model, which significantly overpredicts the plasma-induced field amplification near the no-wall limit, as compared to experiments. The model leads to quantitative agreement not only for the measured field amplitude and toroidal phase but also for the measured internal 3D displacement of the plasma. The results can be important to the prediction of the reliable plasma behavior in advanced fusion devices, such as ITER [K. Ikeda, Nucl. Fusion 47, S1 (2007)].

  1. Analytic approximations of Von Kármán plate under arbitrary uniform pressure—equations in integral form

    NASA Astrophysics Data System (ADS)

    Zhong, XiaoXu; Liao, ShiJun

    2018-01-01

Analytic approximations of the Von Kármán's plate equations in integral form for a circular plate under external uniform pressure to arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure to arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure to arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán's plate equations in differential form is just a special case of the HAM for the Von Kármán's plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
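For readers unfamiliar with the interpolation iterative method, it behaves like a relaxed fixed-point iteration x ← (1 - θ)x + θG(x) with interpolation parameter θ. The Python sketch below is a generic illustration of such a damped iteration on a toy equation, not the HAM machinery or the plate equations themselves; the function names are invented.

```python
import math

def relaxed_fixed_point(G, x0, theta=0.5, tol=1e-12, max_iter=1000):
    """Iterate x <- (1 - theta) * x + theta * G(x) until successive
    iterates agree to within tol (a generic relaxation iteration)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = (1.0 - theta) * x + theta * G(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("relaxation iteration did not converge")

# Example: solve x = cos(x).  The relaxation parameter theta plays the
# same damping role as the interpolation iterative parameter discussed
# in the abstract (compare c1 = -theta in the first-order HAM iteration).
root, iters = relaxed_fixed_point(math.cos, x0=1.0, theta=0.7)
```

Choosing θ (like choosing the convergence-control parameters c1 and c2) trades per-step progress against stability of the iteration.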

  2. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  3. Control of Warm Compression Stations Using Model Predictive Control: Simulation and Experimental Results

    NASA Astrophysics Data System (ADS)

    Bonne, F.; Alamir, M.; Bonnay, P.

    2017-02-01

This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or a compressor stop, a key aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances, such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors, notably the International Thermonuclear Experimental Reactor (ITER) and the Japan Torus-60 Super Advanced fusion experiment (JT-60SA). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBTs WCS. This work is partially supported through the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
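The receding-horizon idea behind such a controller can be sketched with a toy scalar example: at each step, search over constrained input sequences on a short horizon, apply only the first input, and repeat. This brute-force Python sketch is purely illustrative (the WCS controller is a multivariable constrained optimization, not a grid search); the model x⁺ = 0.9x + 0.5u, the weights, and all names are invented for the example.

```python
import itertools

def mpc_step(x, a=0.9, b=0.5, horizon=3, u_grid=None, q=1.0, r=0.1):
    """One toy receding-horizon (MPC) step for x+ = a*x + b*u.
    The input constraint |u| <= 1 is encoded by the candidate grid;
    the best constrained sequence is found by exhaustive search and
    only its first input is returned (illustrative only)."""
    if u_grid is None:
        u_grid = [i / 5.0 for i in range(-5, 6)]  # u in [-1, 1]
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u                 # predicted dynamics
            cost += q * xk * xk + r * u * u     # quadratic stage cost
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: drive the state toward zero while honoring |u| <= 1.
x = 4.0
for _ in range(15):
    u = mpc_step(x)
    x = 0.9 * x + 0.5 * u
```

A production MPC solves a constrained quadratic program at each step instead of enumerating a grid, but the receding-horizon structure is the same.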

  4. A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.

    2017-03-01

    Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.

  5. High-throughput prediction of eucalypt lignin syringyl/guaiacyl content using multivariate analysis: a comparison between mid-infrared, near-infrared, and Raman spectroscopies for model development

    PubMed Central

    2014-01-01

    Background In order to rapidly and efficiently screen potential biofuel feedstock candidates for quintessential traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies was combined with S/G ratios, obtained using pyrolysis molecular beam mass spectrometry, from 245 different eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). Results The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root mean standard errors of prediction for validation sets were lowest for models comprised of Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of S/G ratio in a diverse consortium of feedstocks. Conclusion Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits. 
This research has demonstrated the validity of vibrational spectroscopy to efficiently partition different potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global, predictive models encompassing a diverse array of feedstocks. PMID:24955114

  6. Status of US ITER Diagnostics

    NASA Astrophysics Data System (ADS)

    Stratton, B.; Delgado-Aparicio, L.; Hill, K.; Johnson, D.; Pablant, N.; Barnsley, R.; Bertschinger, G.; de Bock, M. F. M.; Reichle, R.; Udintsev, V. S.; Watts, C.; Austin, M.; Phillips, P.; Beiersdorfer, P.; Biewer, T. M.; Hanson, G.; Klepper, C. C.; Carlstrom, T.; van Zeeland, M. A.; Brower, D.; Doyle, E.; Peebles, A.; Ellis, R.; Levinton, F.; Yuh, H.

    2013-10-01

The US is providing 7 diagnostics to ITER: the Upper Visible/IR cameras, the Low Field Side Reflectometer, the Motional Stark Effect diagnostic, the Electron Cyclotron Emission diagnostic, the Toroidal Interferometer/Polarimeter, the Core Imaging X-Ray Spectrometer, and the Diagnostic Residual Gas Analyzer. The front-end components of these systems must operate with high reliability in conditions of long pulse operation, high neutron and gamma fluxes, very high neutron fluence, significant neutron heating (up to 7 MW/m³), large radiant and charge exchange heat flux (0.35 MW/m²), and high electromagnetic loads. Opportunities for repair and maintenance of these components will be limited. These conditions lead to significant challenges for the design of the diagnostics. Space constraints, provision of adequate radiation shielding, and development of repair and maintenance strategies are challenges for diagnostic integration into the port plugs that also affect diagnostic design. The current status of design of the US ITER diagnostics is presented and R&D needs are identified. Supported by DOE contracts DE-AC02-09CH11466 (PPPL) and DE-AC05-00OR22725 (UT-Battelle, LLC).

  7. The Role of Combined ICRF and NBI Heating in JET Hybrid Plasmas in Quest for High D-T Fusion Yield

    NASA Astrophysics Data System (ADS)

    Mantsinen, Mervi; Challis, Clive; Frigione, Domenico; Graves, Jonathan; Hobirk, Joerg; Belonohy, Eva; Czarnecka, Agata; Eriksson, Jacob; Gallart, Dani; Goniche, Marc; Hellesen, Carl; Jacquet, Philippe; Joffrin, Emmanuel; King, Damian; Krawczyk, Natalia; Lennholm, Morten; Lerche, Ernesto; Pawelec, Ewa; Sips, George; Solano, Emilia R.; Tsalas, Maximos; Valisa, Marco

    2017-10-01

Combined ICRF and NBI heating played a key role in achieving the world-record fusion yield in the first deuterium-tritium campaign at the JET tokamak in 1997. The current plans for JET include new experiments with deuterium-tritium (D-T) plasmas under more ITER-like conditions, given the recently installed ITER-like wall (ILW). In the 2015-2016 campaigns, significant efforts have been devoted to the development of high-performance plasma scenarios compatible with the ILW in preparation for the forthcoming D-T campaign. Good progress was made in both the inductive (baseline) and the hybrid scenario: a new record JET ILW fusion yield with a significantly extended duration of the high-performance phase was achieved. This paper reports on the progress with the hybrid scenario, which is a candidate for ITER long-pulse operation (~1000 s) thanks to its improved normalized confinement, reduced plasma current and higher plasma beta with respect to the ITER reference baseline scenario. The combined NBI+ICRF power in the hybrid scenario was increased to 33 MW and the record fusion yield, averaged over 100 ms, to 2.9×10¹⁶ neutrons/s from the 2014 ILW fusion record of 2.3×10¹⁶ neutrons/s. Impurity control with ICRF waves was one of the key means for extending the duration of the high-performance phase. The main results are reviewed covering both key core and edge plasma issues.

  8. Spectroscopic Investigations of Highly Charged Tungsten Ions - Atomic Spectroscopy and Fusion Plasma Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clementson, Joel

    2010-05-01

The spectra of highly charged tungsten ions have been investigated using x-ray and extreme ultraviolet spectroscopy. These heavy ions are of interest in relativistic atomic structure theory, where high-precision wavelength measurements benchmark theoretical approaches, and in magnetic fusion research, where the ions may serve to diagnose high-temperature plasmas. The work details spectroscopic investigations of highly charged tungsten ions measured at the Livermore electron beam ion trap (EBIT) facility. Here, the EBIT-I and SuperEBIT electron beam ion traps have been employed to create, trap, and excite tungsten ions of M- and L-shell charge states. The emitted spectra have been studied in high resolution using crystal, grating, and x-ray calorimeter spectrometers. In particular, wavelengths of Δn = 0 M-shell transitions in K-like W55+ through Ne-like W64+, and intershell transitions in Zn-like W44+ through Co-like W47+ have been measured. Special attention is given to the Ni-like W46+ ion, which has two strong electric-dipole forbidden transitions that are of interest for plasma diagnostics. The EBIT measurements are complemented by spectral modeling using the Flexible Atomic Code (FAC), and predictions for tokamak spectra are presented. The L-shell tungsten ions have been studied at electron-beam energies of up to 122 keV and transition energies measured in Ne-like W64+ through Li-like W71+. These spectra constitute the physics basis in the design of the ion-temperature crystal spectrometer for the ITER tokamak. Tungsten particles have furthermore been introduced into the Sustained Spheromak Physics Experiment (SSPX) spheromak in Livermore in order to investigate diagnostic possibilities of extreme ultraviolet tungsten spectra for the ITER divertor. The spheromak measurement and spectral modeling using FAC suggest that tungsten ions in charge states around Er-like W6+ could be useful for plasma diagnostics.

  9. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited).

    PubMed

    Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C

    2012-10-01

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  10. Low pressure and high power rf sources for negative hydrogen ions for fusion applications (ITER neutral beam injection).

    PubMed

    Fantz, U; Franzen, P; Kraus, W; Falter, H D; Berger, M; Christ-Koch, S; Fröschle, M; Gutser, R; Heinemann, B; Martens, C; McNeely, P; Riedl, R; Speth, E; Wünderlich, D

    2008-02-01

The international fusion experiment ITER requires for the plasma heating and current drive a neutral beam injection system based on negative hydrogen ion sources at 0.3 Pa. The ion source must deliver a current of 40 A D⁻ for up to 1 h with an accelerated current density of 200 A/m² and a ratio of coextracted electrons to ions below 1. The extraction area is 0.2 m² from an aperture array with an envelope of 1.5 × 0.6 m². A high power rf-driven negative ion source has been successfully developed at the Max-Planck Institute for Plasma Physics (IPP) at three test facilities in parallel. Current densities of 330 and 230 A/m² have been achieved for hydrogen and deuterium, respectively, at a pressure of 0.3 Pa and an electron/ion ratio below 1 for a small extraction area (0.007 m²) and short pulses (<4 s). In the long pulse experiment, equipped with an extraction area of 0.02 m², the pulse length has been extended to 3600 s. A large rf source, with the width and half the height of the ITER source but without extraction system, is intended to demonstrate the size scaling and plasma homogeneity of rf ion sources. The source operates routinely now. First results on plasma homogeneity obtained from optical emission spectroscopy and Langmuir probes are very promising. Based on the success of the IPP development program, the high power rf-driven negative ion source has been chosen recently for the ITER beam systems in the ITER design review process.

  12. Design of a Rail Gun System for Mitigating Disruptions in Fusion Reactors

    NASA Astrophysics Data System (ADS)

    Lay, Wei-Siang

Magnetic fusion devices, such as the tokamak, that carry a large amount of current to generate the plasma confining magnetic fields have the potential to lose magnetic stability control. This can lead to a major plasma disruption, which can cause most of the stored plasma energy to be lost to localized regions on the walls, causing severe damage. This is the most important issue for the $20B ITER device (International Thermonuclear Experimental Reactor) that is under construction in France. By injecting radiative materials deep into the plasma, the plasma energy could be dispersed more evenly on the vessel surface, thus mitigating the harmful consequences of a disruption. Methods currently planned for ITER rely on the slow expansion of gases to propel the radiative payloads, and they also need to be located far away from the reactor vessel, which further slows down the response time of the system. Rail guns are being developed for aerospace applications, such as for mass transfer from the surface of the moon and asteroids to low earth orbit. A miniaturized version of this aerospace technology seems to be particularly well suited to meet the fast time response needs of an ITER disruption mitigation system. Mounting this device close to the reactor vessel is also possible, which substantially increases its performance because the stray magnetic fields near the vessel walls could be used to augment the rail gun generated magnetic fields. In this thesis, the potential viability of a rail gun based DMS is studied, and its projected fast time response capability is investigated through the design, fabrication, and testing of an NSTX-U sized rail gun system. Material- and geometry-based tests are used to find the most suitable armature design for this system, for which the desirable attributes are high specific stiffness and high electrical conductivity. 
With the best material in these studies being aluminum 7075, the experimental Electromagnetic Particle Injector (EPI) system has propelled an aluminum armature (weighing 3g) to a velocity more than 150 m/s within two milliseconds post trigger, consistent with the predicted projection for a system with those parameters. Fixed magnetic field probes and high-speed images capture the velocity profile. To propel the armatures, a 20 mF capacitor bank charged to 2 kV and augmented with external field coils powers the rails. These studies indicate that an EPI based system can indeed operate with a fast response time of less than three milliseconds after an impending disruption is detected, and thus warrants further studies to more fully develop the concept as a back-up option for an ITER DMS.

  13. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted

  14. Item Structural Properties as Predictors of Item Difficulty and Item Association.

    ERIC Educational Resources Information Center

    Solano-Flores, Guillermo

    1993-01-01

    Studied the ability of logical test design (LTD) to predict student performance in reading Roman numerals for 211 sixth graders in Mexico City tested on Roman numeral items varying on LTD-related and non-LTD-related variables. The LTD-related variable item iterativity was found to be the best predictor of item difficulty. (SLD)

  15. Development of a HEC-RAS temperature model for the North Santiam River, northwestern Oregon

    USGS Publications Warehouse

    Stonewall, Adam J.; Buccola, Norman L.

    2015-01-01

    Much of the error in temperature predictions resulted from the model’s inability to accurately simulate the full range of diurnal fluctuations during the warmest months. Future iterations of the model could be improved by the collection and inclusion of additional streamflow and temperature data, especially near the mouth of the South Santiam River. Presently, the model is able to predict hourly and daily water temperatures under a wide variety of conditions with a typical error of 0.8 and 0.7 °C, respectively.

  16. Low-temperature tensile strength of the ITER-TF model coil insulation system after reactor irradiation

    NASA Astrophysics Data System (ADS)

    Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.

The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes, vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.

  17. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method are used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584
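The employed/onlooker/scout structure of the artificial bee colony (ABC) algorithm mentioned above can be sketched in a few dozen lines. The following is a minimal illustrative implementation on a toy objective, not the hybrid iteration model of the paper; all parameter values and names are assumptions.

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, max_cycles=200, seed=1):
    """Minimal artificial bee colony (ABC) minimizer (illustrative sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def neighbor(i):
        # Perturb one coordinate toward/away from a random partner source.
        j = rng.randrange(dim)
        k = rng.choice([p for p in range(n_food) if p != i])
        cand = foods[i][:]
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        lo, hi = bounds[j]
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    def try_improve(i):
        cand = neighbor(i)
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    foods = [rand_source() for _ in range(n_food)]
    fit = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_f = None, float("inf")
    for _ in range(max_cycles):
        # Employed bees: one local move per food source.
        for i in range(n_food):
            try_improve(i)
        # Onlooker bees: bias extra moves toward better sources.
        weights = [1.0 / (1.0 + v) for v in fit]
        for _ in range(n_food):
            try_improve(rng.choices(range(n_food), weights=weights)[0])
        # Remember the best source before scouts may abandon it.
        for i in range(n_food):
            if fit[i] < best_f:
                best_f, best_x = fit[i], foods[i][:]
        # Scout bees: re-seed sources that stopped improving.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = rand_source()
                fit[i] = f(foods[i])
                trials[i] = 0
    return best_x, best_f

# Example: minimize a 2-D sphere function on [-5, 5]^2.
x_best, f_best = abc_minimize(lambda x: sum(v * v for v in x),
                              bounds=[(-5, 5), (-5, 5)])
```

In the decoupling application, the objective f would score a candidate decoupling scheme rather than a continuous test function.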

  18. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potential wrongly received bits; after that an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, BER and near far resistance performance are much better than traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
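The three-stage receive loop summarized above can be illustrated with a toy detector. The paper's actual differential characteristics function is not reproduced here; this sketch substitutes a matched-filter confidence measure and a reconstruction-residual test for the flag-and-correct stages, and every name and parameter below is an assumption for illustration.

```python
import random

def residual(y, codes, bits):
    """Squared error between y and the reconstructed multiuser signal."""
    r = 0.0
    for n in range(len(y)):
        est = sum(b * c[n] for b, c in zip(bits, codes))
        r += (y[n] - est) ** 2
    return r

def iterative_mud(y, codes, n_iter=5, n_flag=2):
    # Stage 1: matched-filter soft statistics and hard decisions.
    z = [sum(yn * cn for yn, cn in zip(y, c)) for c in codes]
    bits = [1 if zk >= 0 else -1 for zk in z]
    for _ in range(n_iter):
        # Stage 2: flag the least-confident decisions (small |z|).
        order = sorted(range(len(bits)), key=lambda k: abs(z[k]))
        flagged = order[:n_flag]
        improved = False
        # Stage 3: flip a flagged bit only if it lowers the residual.
        for k in flagged:
            trial = bits[:]
            trial[k] = -trial[k]
            if residual(y, codes, trial) < residual(y, codes, bits):
                bits = trial
                improved = True
        if not improved:
            break
    return bits

# Example: 3 users with random 8-chip spreading codes and mild noise.
rng = random.Random(0)
codes = [[rng.choice([-1, 1]) for _ in range(8)] for _ in range(3)]
true_bits = [1, -1, 1]
y = [sum(b * c[n] for b, c in zip(true_bits, codes)) + rng.gauss(0, 0.2)
     for n in range(8)]
detected = iterative_mud(y, codes)
```

By construction the corrected decisions can never fit the received signal worse than the initial matched-filter decisions, which mirrors the BER improvement the abstract reports over the non-iterative detector.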

  19. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

The ICRF system for ITER is designed to respect the high voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, the analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns to extrapolate the results from basic experiments and present machines to the ITER scale ICRF system and to conduct a relevant risk analysis.

  20. Modeling of the ITER-like wide-angle infrared thermography view of JET.

    PubMed

    Aumeunier, M-H; Firdaouss, M; Travère, J-M; Loarer, T; Gauthier, E; Martin, V; Chabaud, D; Humbert, E

    2012-10-01

    Infrared (IR) thermography systems are mandatory to ensure safe plasma operation in fusion devices. However, IR measurements are made much more complicated in a metallic environment because of the spurious contributions of reflected fluxes. This paper presents a fully predictive photonic simulation able to accurately assess the surface temperature measured with classical IR thermography for a given plasma scenario, taking into account the optical properties of the PFC materials. This simulation has been carried out for the ITER-like wide-angle infrared camera view of JET and compared with experimental data. The consequences of the low emissivity and of the bidirectional reflectivity distribution function used in the model for the metallic PFCs on the contribution of the reflected flux in the analysis are discussed.

  1. ITER L-Mode Confinement Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.M. Kaye and the ITER Confinement Database Working Group

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated (OH) only. Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER. The L-mode thermal confinement time scaling was determined from a subset of 1312 entries for which the thermal confinement time was provided.

  2. Singular value decomposition for collaborative filtering on a GPU

    NASA Astrophysics Data System (ADS)

    Kato, Kimikazu; Hosino, Tikara

    2010-06-01

    Collaborative filtering predicts customers' unknown preferences from known preferences. In a collaborative filtering computation, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a roughly approximated factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) showed an effective algorithm to compute the SVD for the open competition called the "Netflix Prize". The algorithm uses an iterative method in which the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
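    Webb's iterative method is stochastic gradient descent on the regularized factorization error; a minimal CPU NumPy sketch of the core update follows (the GPU/CUDA mapping of the paper is not reproduced, and the hyperparameters are illustrative assumptions):

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
             epochs=500, seed=0):
    """SGD matrix factorization in the style of Webb/Funk's 'SVD'.

    ratings: list of (user, item, rating) triples.
    Returns factor matrices P (n_users x k) and Q (n_items x k) such
    that P[u] @ Q[i] approximates rating r.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                  # prediction error
            pu = P[u].copy()
            P[u] += lr * (e * Q[i] - reg * P[u]) # regularized gradient
            Q[i] += lr * (e * pu - reg * Q[i])   # steps on both factors
    return P, Q
```

    Each observed rating contributes one cheap rank-1 update per epoch, which is what makes the method attractive for massively parallel (GPU) execution.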

  3. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of the vertical magnetic field. These fifteen nonlinear constraints are included as data in an iterative least-squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC(12/83). Convergence is reached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that of the unconstrained model.
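    Including nonlinear constraints "as data" in an iterative least-squares fit can be sketched as Gauss-Newton on an augmented residual vector (a toy illustration with a line fit and an invented constraint a*b = 1; this is a stand-in for the geomagnetic flux constraints, not the actual model):

```python
import numpy as np

def constrained_fit(x, y, w=100.0, iters=20):
    """Gauss-Newton least squares with a nonlinear constraint appended
    as a heavily weighted pseudo-observation.

    Fits y = a + b*x while softly enforcing a*b = 1; w is the
    constraint weight (hypothetical example values).
    """
    a, b = 1.0, 1.0
    for _ in range(iters):
        # residuals: data misfits plus the weighted constraint misfit
        r = np.concatenate([y - (a + b * x), [w * (1.0 - a * b)]])
        # Jacobian of r with respect to (a, b)
        J = np.vstack([np.column_stack([-np.ones_like(x), -x]),
                       [[-w * b, -w * a]]])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        a, b = a + step[0], b + step[1]
    return a, b
```

    On data that satisfies the constraint exactly, the iteration settles within a few steps, mirroring the "convergence within three iterations" reported in the abstract.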

  4. Sensitivity of alpha-particle-driven Alfvén eigenmodes to q-profile variation in ITER scenarios

    NASA Astrophysics Data System (ADS)

    Rodrigues, P.; Figueiredo, A. C. A.; Borba, D.; Coelho, R.; Fazendeiro, L.; Ferreira, J.; Loureiro, N. F.; Nabais, F.; Pinches, S. D.; Polevoi, A. R.; Sharapov, S. E.

    2016-11-01

    A perturbative hybrid ideal-MHD/drift-kinetic approach to assess the stability of alpha-particle-driven Alfvén eigenmodes in burning plasmas is used to show that certain foreseen ITER scenarios, namely the Ip = 15 MA baseline scenario with very low and broad core magnetic shear, are sensitive to small changes in the background magnetic equilibrium. Slight variations (of the order of 1%) of the safety-factor value on axis are seen to cause large changes in the growth rate, toroidal mode number, and radial location of the most unstable eigenmodes found. The observed sensitivity is shown to proceed from the very low magnetic shear values attained throughout the plasma core, raising issues about reliable predictions of alpha-particle transport in burning plasmas.

  5. Architectural Specialization for Inter-Iteration Loop Dependence Patterns

    DTIC Science & Technology

    2015-10-01

    [Extraction residue from the report front matter and a figure: author Christopher Batten, Computer Systems Laboratory, School of Electrical and…; a figure on trends in computer architecture plotting transistors (thousands), frequency (MHz), and typical power (W) for the MIPS R2K, DEC Alpha 21264, and Intel P4 ("Data collected by M…"), contrasting simple processor designs under a power constraint with high-performance and embedded architectures.]

  6. Lessons from the Reading Brain for Reading Development and Dyslexia

    ERIC Educational Resources Information Center

    Wolf, Maryanne; Ullman-Shade, Catherine; Gottwald, Stephanie

    2016-01-01

    This essay is about the improbable emergence of written language six millennia ago that gave rise to the even more improbable, highly sophisticated reading brain of the twenty-first century. How it emerged and what it comprises--both in its most basic iteration in the very young reader and in its most elaborated iteration in the expert reader--is…

  7. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE PAGES

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao; ...

    2017-09-14

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.

  8. Evaluating the iterative development of VR/AR human factors tools for manual work.

    PubMed

    Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna

    2012-01-01

    This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements achieved throughout the design cycles, observable in the trends of the quantitative results from three successive trials of the applications and in the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence for the effectiveness of the particular set of complementary evaluation methods used, which incorporate a common inquiry structure - particularly in facilitating triangulation of the data.

  10. Performance assessment of the antenna setup for the ITER plasma position reflectometry in-vessel systems.

    PubMed

    Varela, P; Belo, J H; Quental, P B

    2016-11-01

    The design of the in-vessel antennas for the ITER plasma position reflectometry diagnostic is very challenging due to the need to cope both with the space restrictions inside the vacuum vessel and with the high mechanical and thermal loads during ITER operation. Here, we present the work carried out to assess and optimise the design of the antenna. We show that the blanket modules surrounding the antenna strongly modify its characteristics and need to be considered from the early phases of the design. We also show that it is possible to optimise the antenna performance, within the design restrictions.

  11. Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.

    PubMed

    Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris

    2010-07-15

    The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
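    The iterate-align-correct idea can be sketched in miniature (a toy consensus corrector with fixed, error-free read placements; real iCORN realigns reads each round and also evaluates accuracy, which this sketch omits):

```python
from collections import Counter

def consensus_correct(ref, alignments, min_cov=2, max_rounds=10):
    """Toy sketch of iterative reference correction.

    alignments: list of (start, read) pairs placing each read on the
    reference (placements assumed correct, unlike real read mapping).
    A base is corrected when enough aligned reads outvote the reference.
    """
    seq = list(ref)
    for _ in range(max_rounds):
        votes = [Counter() for _ in seq]
        for start, read in alignments:
            for offset, base in enumerate(read):
                votes[start + offset][base] += 1
        changed = False
        for i, c in enumerate(votes):
            if not c:
                continue
            base, n = c.most_common(1)[0]
            if n >= min_cov and base != seq[i] and n > c[seq[i]]:
                seq[i] = base
                changed = True
        if not changed:            # converged: no corrections this round
            break
    return "".join(seq)
```

    For example, a reference with a single wrong base is repaired once enough overlapping reads agree on the alternative.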

  12. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
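    The poll-and-refine mechanism of a GPS method can be sketched as follows (plain coordinate poll directions on a simple nonsmooth function; the paper's machinery for conforming the directions to the nondifferentiability set is not reproduced):

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=100000):
    """Minimal generalized pattern search with coordinate polls.

    Polls x +/- step*e_i; an improving poll moves the iterate, a failed
    poll halves the mesh size, as in basic GPS convergence theory.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    for _ in range(max_iter):
        if step <= tol:
            break
        for d in dirs:
            y = x + step * d
            fy = f(y)
            if fy < fx:              # successful poll: move
                x, fx = y, fy
                break
        else:                        # failed poll: refine the mesh
            step *= 0.5
    return x, fx
```

    On the nonsmooth function |x - 1| + |y + 2| (kinks along two hyperplanes, as in the class of problems considered) the search walks to the minimizer and then shrinks the mesh to tolerance.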

  13. 1.5 MW RF Load for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ives, Robert Lawrence; Marsden, David; Collins, George

    Calabazas Creek Research, Inc. developed a 1.5 MW RF load for the ITER fusion research facility currently under construction in France. This program leveraged technology developed in two previous SBIR programs that successfully developed high power RF loads for fusion research applications. This program specifically focused on modifications required by revised technical performance, materials, and assembly specifications for ITER. The program implemented an innovative approach to actively distribute the RF power inside the load to avoid excessive heating or arcing associated with constructive interference. The new design implemented the materials and assembly changes required to meet specifications. Critical components were built and successfully tested during the program.

  14. Assessment of acquisition protocols for routine imaging of Y-90 using PET/CT

    PubMed Central

    2013-01-01

    Background Despite the early theoretical prediction of the 0+-0+ transition of 90Zr, 90Y-PET has only recently attracted growing interest for imaging the radioembolization of liver tumors. The aim of this work was to determine the minimum detectable activity (MDA) of 90Y by PET imaging and the impact of time-of-flight (TOF) reconstruction on detectability and quantitative accuracy according to the lesion size. Methods The study was conducted using a Siemens Biograph® mCT with a large 22 cm axial field of view. An IEC torso-shaped phantom containing five coplanar spheres was uniformly filled to achieve sphere-to-background ratios of 40:1. The phantom was imaged nine times over 14 days, for 30 min each time. Sinograms were reconstructed with and without TOF information. A contrast-to-noise ratio (CNR) index was calculated using the Rose criterion, taking partial volume effects into account. The impact of reconstruction parameters on quantification accuracy, detectability, and spatial localization of the signal was investigated. Finally, six patients with hepatocellular carcinoma and four patients included in different 90Y-based radioimmunotherapy protocols were enrolled for the evaluation of the imaging parameters in a clinical situation. Results The highest CNR was achieved with one iteration for both TOF and non-TOF reconstructions. The MDA, however, was found to be lower with TOF than with non-TOF reconstruction. There was no gain from adding TOF information in terms of CNR for concentrations higher than 2 to 3 MBq mL−1, except for infra-centimetric lesions. Recovered activity was highly underestimated when a single iteration or non-TOF reconstruction was used (10% to 150% less depending on the lesion size). The MDA was estimated at 1 MBq mL−1 for a TOF reconstruction and infra-centimetric lesions. Images from patients treated with microspheres were clinically relevant, unlike those of patients who received systemic injections of 90Y.
Conclusions Only one iteration and TOF were necessary to achieve an MDA around 1 MBq mL−1 and the most accurate localization of lesions. For precise quantification, at least three iterations gave the best performance, using TOF reconstruction and keeping an MDA of roughly 1 MBq mL−1. One and three iterations were mandatory to prevent false positive results for quantitative analysis of clinical data. Trial registration http://IDRCB 2011-A00043-38 P101103 PMID:23414629

  15. Rocketdyne LOX bearing tester program

    NASA Technical Reports Server (NTRS)

    Keba, J. E.; Beatty, R. F.

    1988-01-01

    The cause, or causes, of the Space Shuttle Main Engine ball bearing wear were unknown; however, several mechanisms were suspected. Two testers were designed and built for operation in liquid oxygen to empirically gain insight into the problems and iterate solutions in a timely and cost-efficient manner independent of engine testing. Schedules and test plans were developed that defined a test matrix consisting of parametric variations of loading, cooling or vapor margin, cage lubrication, material, and geometry studies. Initial test results indicated that the low pressure pump thrust bearing surface distress is a function of high axial load. Initial high pressure turbopump bearing tests produced the wear phenomenon observed in the turbopump and identified an inadequate vapor margin problem and a coolant flowrate sensitivity issue. These tests provided calibration data for analytical model predictions, giving high confidence in the positive impact of future turbopump design modifications for flight. Various modifications will be evaluated in these testers, since similar turbopump conditions can be produced and the benefit of each modification can be quantified by measured wear life comparisons.

  16. Design of High Altitude Long Endurance UAV: Structural Analysis of Composite Wing using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kholish Rumayshah, Khodijah; Prayoga, Aditya; Mochammad Agoes Moelyadi, Ing., Dr.

    2018-04-01

    Research on a High Altitude Long Endurance (HALE) Unmanned Aerial Vehicle (UAV) is currently being conducted at Bandung Institute of Technology (ITB). Previously, the 1st generation of the HALE UAV ITB used balsa wood for most of its structure. A flight test resulted in broken wings due to an extreme side-wind that caused large bending of its high-aspect-ratio wing. This paper presents a study on the design of the 2nd generation of the HALE UAV ITB, which uses composite materials to substitute for balsa wood at some critical parts of the wing’s structure. The finite element software ABAQUS/CAE is used to predict the stress and deformation that occur. The Tsai-Wu and Von Mises failure criteria were applied to check whether the structure fails. The initial configuration gave the result that the structure experienced material failure. A second iteration was done by proposing a new configuration, which was proven safe against the given load.

  17. Modeling the Distribution and Type of High-Latitude Natural Wetlands for Methane Studies

    NASA Astrophysics Data System (ADS)

    Romanski, J.; Matthews, E.

    2017-12-01

    High latitude (>50N) natural wetlands emit a substantial amount of methane to the atmosphere, and are located in a region of amplified warming. Northern hemisphere high latitudes are characterized by cold climates, extensive permafrost, poor drainage, short growing seasons, and slow decay rates. Under these conditions, organic carbon accumulates in the soil, sequestering CO2 from the atmosphere. Methanogens produce methane from this carbon reservoir, converting stored carbon into a powerful greenhouse gas. Methane emission from wetland ecosystems depends on vegetation type, climate characteristics (e.g., precipitation amount and seasonality, temperature, snow cover, etc.), and geophysical variables (e.g., permafrost, soil type, and landscape slope). To understand how wetland methane dynamics in this critical region will respond to climate change, we must first understand how wetlands themselves will change and, therefore, what the primary controllers of wetland distribution and type are. Understanding these relationships permits data-anchored, physically-based modeling of wetland distribution and type in other climate scenarios, such as paleoclimates or future climates, a necessary first step toward modeling wetland methane emissions in these scenarios. We investigate techniques and datasets for predicting the distribution and type of high latitude (>50N) natural wetlands from a suite of geophysical and climate predictors. Hierarchical clustering is used to derive an empirical methane-centric wetland model. The model is applied in a multistep process - first to predict the distribution of wetlands from relevant geophysical parameters, and then, given the predicted wetland distribution, to classify the wetlands into methane-relevant types using an expanded suite of climate and biogeophysical variables. 
As the optimum set of predictor variables is not known a priori, the model is applied iteratively, and each simulation is evaluated with respect to observed high-latitude wetlands.

  18. A machine learning heuristic to identify biologically relevant and minimal biomarker panels from omics data

    PubMed Central

    2015-01-01

    Background Investigations into novel biomarkers using omics techniques generate large amounts of data. Due to their size and numbers of attributes, these data are suitable for analysis with machine learning methods. A key component of typical machine learning pipelines for omics data is feature selection, which is used to reduce the raw high-dimensional data into a tractable number of features. Feature selection needs to balance the objective of using as few features as possible, while maintaining high predictive power. This balance is crucial when the goal of data analysis is the identification of highly accurate but small panels of biomarkers with potential clinical utility. In this paper we propose a heuristic for the selection of very small feature subsets, via an iterative feature elimination process that is guided by rule-based machine learning, called RGIFE (Rule-guided Iterative Feature Elimination). We use this heuristic to identify putative biomarkers of osteoarthritis (OA), articular cartilage degradation and synovial inflammation, using both proteomic and transcriptomic datasets. Results and discussion Our RGIFE heuristic increased the classification accuracies achieved for all datasets compared with using no feature selection, and performed well in a comparison with other feature selection methods. Using this method the datasets were reduced to a smaller number of genes or proteins, including those known to be relevant to OA, cartilage degradation and joint inflammation. The results have shown the RGIFE feature reduction method to be suitable for analysing both proteomic and transcriptomic data. Methods that generate large ‘omics’ datasets are increasingly being used in the area of rheumatology. Conclusions Feature reduction methods are advantageous for the analysis of omics data in the field of rheumatology, as the applications of such techniques are likely to result in improvements in diagnosis, treatment and drug discovery. PMID:25923811
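    The iterative-elimination idea can be sketched with stand-in components (a nearest-centroid classifier scored by leave-one-out accuracy and a class-mean-difference importance measure; RGIFE itself is guided by rule-based machine learning, which is not reproduced here):

```python
import numpy as np

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xt, yt = X[mask][:, feats], y[mask]
        c0 = Xt[yt == 0].mean(axis=0)
        c1 = Xt[yt == 1].mean(axis=0)
        xi = X[i, feats]
        pred = int(np.linalg.norm(xi - c1) < np.linalg.norm(xi - c0))
        correct += int(pred == y[i])
    return correct / len(y)

def iterative_feature_elimination(X, y):
    """Greedily drop the least important feature while accuracy holds."""
    feats = list(range(X.shape[1]))
    best = loo_accuracy(X, y, feats)
    # importance: absolute difference of class means per feature
    imp = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    while len(feats) > 1:
        candidate = min(feats, key=lambda f: imp[f])
        trial = [f for f in feats if f != candidate]
        acc = loo_accuracy(X, y, trial)
        if acc >= best:            # removal did not hurt: keep it
            feats, best = trial, acc
        else:                      # accuracy dropped: stop eliminating
            break
    return feats, best
```

    On synthetic data with a couple of strongly informative features among noise, the loop discards the noise features (and redundant informative ones) while accuracy is maintained, yielding the kind of minimal panel the heuristic targets.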

  19. Suppression of turbulent transport in NSTX internal transport barriers

    NASA Astrophysics Data System (ADS)

    Yuh, Howard

    2008-11-01

    Electron transport will be important for ITER, where fusion alphas and high-energy beam ions will primarily heat electrons. In NSTX, internal transport barriers (ITBs) are observed in reversed (negative) shear discharges where diffusivities for the electron and ion thermal channels and momentum are reduced. While neutral beam heating can produce ITBs in both electron and ion channels, High Harmonic Fast Wave (HHFW) heating can produce electron thermal ITBs under reversed magnetic shear conditions without momentum input. Interestingly, the location of the electron ITB does not necessarily match that of the ion ITB: the electron ITB correlates well with the minimum in the magnetic shear determined by Motional Stark Effect (MSE) [1] constrained equilibria, whereas the ion ITB better correlates with the maximum ExB shearing rate. Measured electron temperature gradients can exceed critical linear thresholds for ETG instability calculated by linear gyrokinetic codes in the ITB confinement region. The high-k microwave scattering diagnostic [2] shows reduced local density fluctuations at wavenumbers characteristic of electron turbulence for discharges with strongly negative magnetic shear versus weakly negative or positive magnetic shear. Fluctuation reductions are found to be spatially and temporally correlated with the local magnetic shear. These results are consistent with non-linear gyrokinetic simulation predictions showing the reduction of electron transport in negative magnetic shear conditions despite being linearly unstable [3]. Electron transport improvement via negative magnetic shear rather than ExB shear highlights the importance of current profile control in ITER and future devices. [1] F.M. Levinton, H. Yuh et al., PoP 14, 056119 [2] D.R. Smith, E. Mazzucato et al., RSI 75, 3840 [3] Jenko, F. and Dorland, W., PRL 89 225001

  20. Generation of High Resolution Water Vapour Fields from GPS Observations and Integration With ECMWF and MODIS

    NASA Astrophysics Data System (ADS)

    Yu, C.; Li, Z.; Penna, N. T.

    2016-12-01

    Precipitable water vapour (PWV) can be routinely retrieved from ground-based GPS arrays in all-weather conditions and in real time. However, to provide maps with dense spatial coverage, for example for calibrating SAR images, for correcting atmospheric effects in Network RTK GPS positioning, or for numerical weather prediction, the pointwise GPS PWV measurements must be interpolated. Several previous interpolation studies have addressed the importance of the elevation dependency of water vapour, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. We present a tropospheric turbulence iterative decomposition model that decouples the total PWV into (i) a stratified component highly correlated with topography, which therefore delineates the vertical troposphere profile, and (ii) a turbulent component resulting from disturbance processes (e.g., severe weather) in the troposphere which trigger uncertain patterns in space and time. We will demonstrate that the iterative decoupled interpolation model generates improved dense tropospheric water vapour fields compared with elevation-dependent models, with similar accuracies obtained over both flat and mountainous terrain, as well as for both inland and coastal areas. We will also show that our GPS-based model may be enhanced with ECMWF zenith tropospheric delay and MODIS PWV, producing high temporal-spatial resolution PWV fields from multiple data sources. These fields were applied to Sentinel-1 SAR interferograms over the Los Angeles region, for which the maximum noise reduction due to atmospheric artifacts reached 85%. The results reveal that the turbulent troposphere noise, especially that in a SAR image, often accounts for more than 50% of the total zenith tropospheric delay and exerts systematic, rather than random, patterns.
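    The decoupling step can be sketched as an alternating fit (a 1-D toy: a linear elevation regression for the stratified part, with a moving-average smooth standing in for the spatial interpolation of the turbulent part; the model in the abstract is 2-D and more elaborate):

```python
import numpy as np

def decompose_pwv(pwv, elev, n_iter=5, win=5):
    """Iteratively split PWV into a stratified (elevation-correlated)
    component and a smooth turbulent component."""
    turb = np.zeros_like(pwv)
    A = np.column_stack([np.ones_like(elev), elev])
    coef = np.zeros(2)
    for _ in range(n_iter):
        # stratified part: regress (pwv - turbulence) on elevation
        coef, *_ = np.linalg.lstsq(A, pwv - turb, rcond=None)
        resid = pwv - A @ coef
        # turbulent part: smoothed residual (moving average stands in
        # for the spatial interpolation used in practice)
        turb = np.convolve(resid, np.ones(win) / win, mode="same")
    return coef, A @ coef, turb
```

    On synthetic data with a known lapse rate plus a smooth "turbulent" signal, the recovered elevation slope stays close to the true value because the turbulence estimate is progressively removed before each regression.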

  1. SteadyCom: Predicting microbial abundances while ensuring community stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Siu Hung Joshua; Simons, Margaret N.; Maranas, Costas D.

    Genome-scale metabolic modeling has become widespread for analyzing microbial metabolism. Extending this established paradigm to more complex microbial communities is emerging as a promising way to unravel the interactions and biochemical repertoire of these omnipresent systems. While several modeling techniques have been developed for microbial communities, little emphasis has been placed on the need to impose a time-averaged constant growth rate across all members of a community to ensure co-existence and stability. In the absence of this constraint, the faster growing organism will ultimately displace all other microbes in the community. This is particularly important for predicting steady-state microbiota composition as it imposes significant restrictions on the allowable community membership, composition and phenotypes. In this study, we introduce the SteadyCom optimization framework for predicting metabolic flux distributions consistent with the steady-state requirement. SteadyCom converges rapidly by iteratively solving a linear programming (LP) problem, and the number of iterations is independent of the number of organisms. A significant advantage of SteadyCom is compatibility with flux variability analysis. SteadyCom is first demonstrated for a community of four E. coli double auxotrophic mutants and is then applied to a gut microbiota model consisting of nine species, with representatives from the phyla Bacteroidetes, Firmicutes, Actinobacteria and Proteobacteria. In contrast to the direct use of FBA, SteadyCom is able to predict the change in species abundance in response to changes in diet with minimal additional imposed constraints on the model. Furthermore, by randomizing the uptake rates of microbes, an abundance profile in good agreement with the experimental gut microbiota is inferred. SteadyCom provides an important step towards the cross-cutting task of predicting the composition of a microbial community in a given environment.
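    The iterative LP solve can be illustrated by searching over a common community growth rate and checking LP feasibility at each trial rate (a deliberately tiny toy: two organisms sharing one substrate, solved with `scipy.optimize.linprog`; the real SteadyCom formulation couples full genome-scale models and uses a more efficient update than plain bisection):

```python
from scipy.optimize import linprog

def feasible(mu, yields, supply):
    """Can abundances X_i and uptakes u_i support common growth rate mu?

    Variables: [X_1, X_2, u_1, u_2], all nonnegative.
    Growth demand mu*X_i <= yield_i*u_i, total uptake <= supply,
    abundances sum to 1 (toy stand-in for the SteadyCom constraints)."""
    A_ub = [[mu, 0.0, -yields[0], 0.0],
            [0.0, mu, 0.0, -yields[1]],
            [0.0, 0.0, 1.0, 1.0]]
    b_ub = [0.0, 0.0, supply]
    A_eq = [[1.0, 1.0, 0.0, 0.0]]
    res = linprog(c=[0.0] * 4, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, None)] * 4, method="highs")
    return res.status == 0

def max_growth(yields, supply, lo=0.0, hi=10.0, iters=50):
    """Bisection on mu: the largest feasible community growth rate."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if feasible(mid, yields, supply):
            lo = mid
        else:
            hi = mid
    return lo
```

    With yields (2, 1) and one unit of substrate, the maximum community growth rate is 2 (all biomass allocated to the higher-yield organism), which the feasibility search recovers.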

  3. Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone exhibits some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are damped very slowly. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regions, but slow convergence is noted for near-sonic and very-low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation. Certain convergence difficulties have been encountered within stagnation regions. Nonetheless, for the airfoil flow with a sharp trailing edge, residuals converged quickly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
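
    The single-grid defect-correction iteration discussed above has a compact generic form: the residual of a high-order target operator is corrected by solving with a cheaper low-order driver. A minimal 1D sketch using standard 2nd- and 4th-order Laplacian stencils (illustrative only, no multigrid, no flow physics):

```python
import numpy as np

n = 64
I = np.eye(n)
N1 = np.eye(n, k=1)  # superdiagonal shift
N2 = np.eye(n, k=2)

# Target operator A: 4th-order 1D Laplacian stencil (-1, 16, -30, 16, -1)/12
A = (30 * I - 16 * (N1 + N1.T) + (N2 + N2.T)) / 12.0
# Driver A_hat: the cheap 2nd-order stencil (-1, 2, -1)
A_hat = 2 * I - (N1 + N1.T)

b = np.ones(n)
x = np.zeros(n)
for it in range(60):
    defect = b - A @ x                   # residual of the high-order target
    x += np.linalg.solve(A_hat, defect)  # correction from the low-order driver
    if np.linalg.norm(defect) < 1e-12:
        break

err = np.linalg.norm(x - np.linalg.solve(A, b))
```

    For this pair of stencils, a local Fourier argument gives the iteration symbol 1 - A(θ)/Â(θ) = (cos θ - 1)/6, i.e. an asymptotic convergence factor of about 1/3 per sweep; this is the kind of smoothing-rate estimate that the modified Fourier analysis in the paper formalizes.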

  4. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    DOE PAGES

    de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...

    2017-11-22

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  5. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-Mod Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; JET contributors; the KSTAR Team; the NSTX-U Team; the TCV Team; ITPA IOS members and experts

    2018-02-01

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequential increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode will allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  6. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by a further enhancement over the high plasma confinement mode (H-mode), associated with reduced magnetohydrodynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation currently aims at a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and the achievable range of plasma parameters have been investigated considering uncertainties in the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as plasma current density profile tailoring and enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1, added to the CORSICA code, has been applied to the hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  7. Progress on the application of ELM control schemes to ITER scenarios from the non-active phase to DT operation

    NASA Astrophysics Data System (ADS)

    Loarte, A.; Huijsmans, G.; Futatani, S.; Baylor, L. R.; Evans, T. E.; Orlov, D. M.; Schmitz, O.; Becoulet, M.; Cahyna, P.; Gribov, Y.; Kavin, A.; Sashala Naik, A.; Campbell, D. J.; Casper, T.; Daly, E.; Frerichs, H.; Kischner, A.; Laengner, R.; Lisgo, S.; Pitts, R. A.; Saibene, G.; Wingen, A.

    2014-03-01

    Progress in the definition of the requirements for edge localized mode (ELM) control and the application of ELM control methods both for high fusion performance DT operation and non-active low-current operation in ITER is described. Evaluation of the power fluxes for low plasma current H-modes in ITER shows that uncontrolled ELMs will not lead to damage to the tungsten (W) divertor target, unlike for high-current H-modes in which divertor damage by uncontrolled ELMs is expected. Despite the lack of divertor damage at lower currents, ELM control is found to be required in ITER under these conditions to prevent an excessive contamination of the plasma by W, which could eventually lead to an increased disruptivity. Modelling with the non-linear MHD code JOREK of the physics processes determining the flow of energy from the confined plasma onto the plasma-facing components during ELMs at the ITER scale shows that the relative contribution of conductive and convective losses is intrinsically linked to the magnitude of the ELM energy loss. Modelling of the triggering of ELMs by pellet injection for DIII-D and ITER has identified the minimum pellet size required to trigger ELMs and, from this, the required fuel throughput for the application of this technique to ITER is evaluated and shown to be compatible with the installed fuelling and tritium re-processing capabilities in ITER. The evaluation of the capabilities of the ELM control coil system in ITER for ELM suppression is carried out (in the vacuum approximation) and found to have a factor of ~2 margin in terms of coil current to achieve its design criterion, although such a margin could be substantially reduced when plasma shielding effects are taken into account.
The consequences for the spatial distribution of the power fluxes at the divertor of ELM control by three-dimensional (3D) fields are evaluated and found to lead to substantial toroidal asymmetries in zones of the divertor target away from the separatrix. Therefore, specifications for the rotation of the 3D perturbation applied for ELM control in order to avoid excessive localized erosion of the ITER divertor target are derived. It is shown that a rotation frequency in excess of 1 Hz for the whole toroidally asymmetric divertor power flux pattern is required (corresponding to n Hz frequency in the variation of currents in the coils, where n is the toroidal symmetry of the perturbation applied) in order to avoid unacceptable thermal cycling of the divertor target for the highest power fluxes and worst toroidal power flux asymmetries expected. The possible use of the in-vessel vertical stability coils for ELM control as a back-up to the main ELM control systems in ITER is described and the feasibility of its application to control ELMs in low plasma current H-modes, foreseen for initial ITER operation, is evaluated and found to be viable for plasma currents up to 5-10 MA depending on modelling assumptions.

  8. Carbon fiber composites application in ITER plasma facing components

    NASA Astrophysics Data System (ADS)

    Barabash, V.; Akiba, M.; Bonal, J. P.; Federici, G.; Matera, R.; Nakamura, K.; Pacher, H. D.; Rödig, M.; Vieider, G.; Wu, C. H.

    1998-10-01

    Carbon Fiber Composites (CFCs) are one of the candidate armour materials for the plasma facing components of the International Thermonuclear Experimental Reactor (ITER). For the present reference design, CFC has been selected as armour for the divertor target near the plasma strike point mainly because of unique resistance to high normal and off-normal heat loads. It does not melt under disruptions and might have higher erosion lifetime in comparison with other possible armour materials. Issues related to CFC application in ITER are described in this paper. They include erosion lifetime, tritium codeposition with eroded material and possible methods for the removal of the codeposited layers, neutron irradiation effect, development of joining technologies with heat sink materials, and thermomechanical performance. The status of the development of new advanced CFCs for ITER application is also described. Finally, the remaining R&D needs are critically discussed.

  9. Design and first plasma measurements of the ITER-ECE prototype radiometer

    DOE PAGES

    Austin, M. E.; Brookman, M. W.; Rowan, W. L.; ...

    2016-08-09

    On ITER, second-harmonic optically thick electron cyclotron emission (ECE) in the range of 220-340 GHz will supply the electron temperature (Te). In order to investigate the requirements and capabilities prescribed for the ITER system, a prototype radiometer covering this frequency range has been developed by Virginia Diodes, Inc. The first plasma measurements with this instrument have been carried out on the DIII-D tokamak, with lab bench tests and measurements of third- through fifth-harmonic ECE from high-Te plasmas. At DIII-D the instrument shares the transmission line of the Michelson interferometer and can simultaneously acquire data. In our comparison of the ECE radiation temperature from the absolutely calibrated Michelson and the prototype receiver, we show that the ITER radiometer provides accurate measurements of the millimeter radiation across the instrument band.

  10. Elliptic polylogarithms and iterated integrals on elliptic curves. II. An application to the sunrise integral

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-06-01

    We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question. These kernels allow us to express all iterated integrals on an elliptic curve in terms of them. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.
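
    The recursive structure underlying such iterated integrals can be written generically (schematic form only; the paper's elliptic integration kernels and their normalizations are not reproduced here):

```latex
% Iterated integral with integration kernels f_{a_i}(t):
\Gamma(a_1, a_2, \ldots, a_n; x) \;=\; \int_0^x \mathrm{d}t \; f_{a_1}(t)\,
\Gamma(a_2, \ldots, a_n; t), \qquad \Gamma(\,;x) \;=\; 1 .
% Choosing f_a(t) = 1/(t-a) recovers ordinary multiple polylogarithms;
% replacing these kernels by functions defined on an elliptic curve
% yields the elliptic generalizations discussed in the abstract.
```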

  11. Status of the ITER Electron Cyclotron Heating and Current Drive System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darbos, Caroline; Albajar, Ferran; Bonicelli, Tullio

    2015-10-07

    The electron cyclotron (EC) heating and current drive (H&CD) system developed for ITER consists of 12 sets of high-voltage power supplies feeding 24 gyrotrons connected through 24 transmission lines (TL) to five launchers, four located in upper ports and one at the equatorial level. Nearly all procurements are in-kind, following the general ITER philosophy, and will come from Europe, India, Japan, Russia and the USA. The full system is designed to couple 20 MW of the 24 MW of generated power to the plasma, at a frequency of 170 GHz, for various physics applications such as plasma start-up, central H&CD and magnetohydrodynamic (MHD) activity control. The design takes present-day technology and extends it toward high-power continuous operation, which represents a large step forward compared to the present state of the art. The ITER EC system will be a stepping stone to future EC systems for DEMO and beyond. The development of the EC system is facing significant challenges, including not only an advanced microwave system but also compliance with stringent requirements associated with nuclear safety, as ITER became the first fusion device licensed as a basic nuclear installation as of 9 November 2012. Finally, since the conceptual design of the EC system was established in 2007, the EC system has progressed to a preliminary design stage in 2012 and is now moving toward a final design.

  12. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the computational load while maintaining performance comparable to the traditional belief propagation (BP) algorithm. At a threshold value of 15, the bit error rate (BER) of the HRBP algorithm remains comparable, while the number of variable nodes involved in the subsequent iteration process can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.
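
    The core mechanism — ordinary message passing plus a reliability threshold that removes converged variable nodes from later iterations — can be sketched on a tiny code. The (7,4) Hamming parity-check matrix, the min-sum update and the threshold value below are illustrative stand-ins, not the paper's SCG-LDPC construction or its exact schedule:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (stand-in for an SCG-LDPC code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hrbp_decode(llr_in, threshold=15.0, max_iter=30):
    """Min-sum BP with a reliability judgment: variable nodes whose posterior
    |LLR| exceeds `threshold` are frozen out of later message updates,
    shrinking the active set as the abstract describes."""
    m, n = H.shape
    msgs = np.zeros((m, n))            # check->variable messages
    active = np.ones(n, dtype=bool)    # variable nodes still being updated
    post = llr_in.copy()
    for _ in range(max_iter):
        hard = (post < 0).astype(int)
        if not np.any(H @ hard % 2):   # all parity checks satisfied
            break
        for i in range(m):
            cols = np.where(H[i])[0]
            v2c = post[cols] - msgs[i, cols]          # variable->check messages
            for j_idx, j in enumerate(cols):
                if not active[j]:
                    continue           # frozen: keep its last message
                others = np.delete(v2c, j_idx)
                msgs[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        post = llr_in + msgs.sum(axis=0)
        active = np.abs(post) < threshold  # freeze reliable nodes
    return (post < 0).astype(int)
```

    On this toy graph a single flipped bit is corrected in one iteration; the HRBP saving comes from the shrinking `active` set in the later iterations of long codes.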

  13. Iterative matrix algorithm for high precision temperature and force decoupling in multi-parameter FBG sensing.

    PubMed

    Hopf, Barbara; Dutz, Franz J; Bosselmann, Thomas; Willsch, Michael; Koch, Alexander W; Roths, Johannes

    2018-04-30

    A new iterative matrix algorithm has been applied to improve the precision of temperature and force decoupling in multi-parameter FBG sensing. For the first time, this evaluation technique allows the integration of nonlinearities in the sensor's temperature characteristic and the temperature dependence of the sensor's force sensitivity. Applied to a sensor cable consisting of two FBGs in fibers with 80 µm and 125 µm cladding diameters, installed in a 7 m long coiled PEEK capillary, this technique significantly reduced the uncertainties in friction-compensated temperature measurements. In the presence of high friction-induced forces of up to 1.6 N, the uncertainties in temperature evaluation were reduced from several degrees Celsius with a standard linear matrix approach to less than 0.5°C with the iterative matrix approach, in an extended temperature range between -35°C and 125°C.
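
    The evaluation idea — invert the 2×2 wavelength-shift system while re-linearizing the temperature-dependent coefficients around the current estimate — can be sketched as a small quasi-Newton loop. All sensitivity numbers below are invented for illustration; the paper's calibration functions are not reproduced:

```python
import numpy as np

# Hypothetical sensor characteristics: wavelength shift of FBG i is
#   dλ_i = s_i(T) + k_i(T) * F
# with a mildly nonlinear temperature response and T-dependent force sensitivity.
def s(T):   # thermal responses of the two gratings [pm]
    return np.array([10.0 * T + 0.01 * T**2,
                     9.5 * T + 0.02 * T**2])

def k(T):   # force sensitivities [pm/N], drifting slightly with temperature
    return np.array([120.0 + 0.05 * T,
                     300.0 + 0.10 * T])

def decouple(d_lambda, n_iter=20):
    """Iterative matrix algorithm (sketch): start from the linear 2x2 inversion,
    then re-linearize the T-dependent coefficients around the current estimate."""
    T, F = 0.0, 0.0
    for _ in range(n_iter):
        # local 2x2 sensitivity matrix at the current temperature estimate
        dsdT = np.array([10.0 + 0.02 * T, 9.5 + 0.04 * T])
        K = np.column_stack([dsdT, k(T)])
        resid = d_lambda - (s(T) + k(T) * F)
        dT, dF = np.linalg.solve(K, resid)
        T, F = T + dT, F + dF
    return T, F
```

    A single pass of the loop is exactly the standard linear matrix approach; the subsequent passes absorb the nonlinearity, which is where the reported reduction from several degrees to sub-degree uncertainty comes from.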

  14. Precise and fast spatial-frequency analysis using the iterative local Fourier transform.

    PubMed

    Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook

    2016-09-19

    The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
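
    The zoom-in principle — re-evaluating a small explicit DFT on an ever finer local frequency grid instead of enlarging a global FFT — can be sketched for single-peak frequency estimation. The grid size and zoom factor are arbitrary choices for illustration, not the ilFT's published parameters:

```python
import numpy as np

def ilft_peak(x, fs, n_iter=8, n_local=32):
    """Iterative local Fourier transform (sketch): locate a spectral peak by
    re-evaluating the DFT on progressively finer frequency grids around the
    current estimate, instead of zero-padding a global FFT."""
    n = len(x)
    t = np.arange(n) / fs
    # stage 0: coarse FFT to seed the search
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    f0 = freqs[np.argmax(np.abs(spec))]
    half = fs / n              # start with +/- one coarse bin
    for _ in range(n_iter):
        grid = np.linspace(f0 - half, f0 + half, n_local)
        # explicit local DFT: one small matrix-vector product per iteration
        dft = np.exp(-2j * np.pi * grid[:, None] * t[None, :]) @ x
        f0 = grid[np.argmax(np.abs(dft))]
        half /= n_local / 4    # zoom in; the factor is a tunable assumption
    return f0
```

    Each iteration costs one n_local × n matrix-vector product, so the total work stays comparable to a single FFT while the frequency grid shrinks geometrically.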

  15. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  16. Developing stochastic model of thrust and flight dynamics for small UAVs

    NASA Astrophysics Data System (ADS)

    Tjhai, Chandra

    This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed which relates the thrust generated by a small motor-driven propeller to the throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to generated thrust. Rather, they are nonlinear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors arising from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of the aircraft aerodynamic coefficients, developed based on wind tunnel experiments, is discussed at the end of the thesis.
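
    Why thrust prediction is an iterative procedure rather than an explicit formula can be illustrated with classical momentum theory: blade-element thrust depends on the inflow, and the inflow depends on thrust, so the two must be solved by fixed-point iteration. The lumped coefficients below are invented for illustration and are not the thesis's model:

```python
import math

# Illustrative small-UAV numbers (assumptions, not measured data)
RHO = 1.225                # air density [kg/m^3]
D = 0.25                   # propeller diameter [m]
A = math.pi * (D / 2)**2   # disk area [m^2]
K1, K2 = 1.1e-5, 6.0e-5    # hypothetical lumped blade-element coefficients

def thrust(n_rps, v_air, tol=1e-9, max_iter=100):
    """Fixed-point iteration on the induced velocity v_i:
    blade-element thrust  T = K1*n^2 - K2*n*(v_air + v_i)
    momentum theory       T = 2*rho*A*v_i*(v_air + v_i)  (solved for v_i)."""
    v_i = 0.0
    for _ in range(max_iter):
        T = max(K1 * n_rps**2 - K2 * n_rps * (v_air + v_i), 0.0)
        v_new = -v_air / 2 + math.sqrt((v_air / 2)**2 + T / (2 * RHO * A))
        if abs(v_new - v_i) < tol:
            return T
        v_i = v_new
    return T
```

    Perturbing n_rps or the coefficients through this loop is how an implicit thrust model propagates input uncertainty to thrust uncertainty, which is the role the stochastic model plays in the thesis.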

  17. Characterization of onset of parametric decay instability of lower hybrid waves

    NASA Astrophysics Data System (ADS)

    Baek, S. G.; Bonoli, P. T.; Parker, R. R.; Shiraiwa, S.; Wallace, G. M.; Porkolab, M.; Takase, Y.; Brunner, D.; Faust, I. C.; Hubbard, A. E.; LaBombard, B. L.; Lau, C.

    2014-02-01

    The goal of the lower hybrid current drive (LHCD) program on Alcator C-Mod is to develop and optimize ITER-relevant steady-state plasmas by controlling the current density profile. Using a 4×16 waveguide array, over 1 MW of LH power at 4.6 GHz has been successfully coupled to the plasmas. However, current drive efficiency drops precipitously as the line-averaged density (n̄e) increases above 10^20 m^-3. Previous numerical work shows that the observed loss of current drive efficiency in high density plasmas stems from the interactions of LH waves with edge/scrape-off layer (SOL) plasmas [Wallace et al., Physics of Plasmas 19, 062505 (2012)]. Recent observations of parametric decay instability (PDI) suggest that non-linear effects should also be taken into account to fully characterize the parasitic loss mechanisms [Baek et al., Plasma Phys. Control. Fusion 55, 052001 (2013)]. In particular, magnetic-configuration-dependent ion cyclotron PDIs are observed using the probes near n̄e ≈ 1.2×10^20 m^-3. In upper single null plasmas, ion cyclotron PDI is excited near the low field side separatrix with no apparent indications of pump depletion. The observed ion cyclotron PDI becomes weaker in inner-wall-limited plasmas, which exhibit enhanced current drive effects. In lower single null plasmas, the dominant ion cyclotron PDI is excited near the high field side (HFS) separatrix. In this case, the onset of PDI is correlated with the decrease in pump power, indicating that pump wave power propagates to the HFS and is absorbed locally near the HFS separatrix. Comparing the observed spectra with the homogeneous growth rate calculation indicates that the observed ion cyclotron instability is excited near the plasma periphery. The incident pump power density is high enough to overcome the collisional homogeneous threshold. 
For C-Mod plasma parameters, the growth rate of ion sound quasi-modes is found to be typically smaller by an order of magnitude than that of ion cyclotron quasi-modes. When considering the convective threshold near the plasma edge, convective growth due to parallel coupling rather than perpendicular coupling is likely to be responsible for the observed strength of the sidebands. To demonstrate improved LHCD efficiency in high density plasmas, an additional launcher has been designed. In conjunction with the existing launcher, this new launcher will allow access to an ITER-like high single-pass absorption regime, replicating the J_LH(r) expected in ITER. Predictions for the time-domain discharge scenarios in which the two launchers are used will also be presented.

  18. Challenges and status of ITER conductor production

    NASA Astrophysics Data System (ADS)

    Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.

    2014-04-01

    Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge, as it relies on an unprecedented collaboration of seven partners, representing more than half of the world population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ. It involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strands, while the poloidal field (PF), correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strands. The required amount of Nb3Sn strands far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale-up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its lifetime, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, far beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor which ensures stable performance versus EM and thermal cycling. Production of the PF, CC and busbar conductors is also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. 
Then, we provide examples of technical challenges that have been encountered and we present the status of ITER conductor production worldwide.

  19. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the needs to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put in to the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. 
For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  20. Power requirements for electron cyclotron current drive and ion cyclotron resonance heating for sawtooth control in ITER

    NASA Astrophysics Data System (ADS)

    Chapman, I. T.; Graves, J. P.; Sauter, O.; Zucca, C.; Asunta, O.; Buttery, R. J.; Coda, S.; Goodman, T.; Igochine, V.; Johnson, T.; Jucker, M.; La Haye, R. J.; Lennholm, M.; Contributors, JET-EFDA

    2013-06-01

    13 MW of electron cyclotron current drive (ECCD) power deposited inside the q = 1 surface is likely to reduce the sawtooth period in the ITER baseline scenario below the level empirically predicted to trigger neoclassical tearing modes (NTMs). However, since the ECCD control scheme is solely predicated upon changing the local magnetic shear, it is prudent to plan to use a complementary scheme which directly decreases the potential energy of the kink mode in order to reduce the sawtooth period. In the event that the natural sawtooth period is longer than expected, due to enhanced α particle stabilization for instance, this ancillary sawtooth control can be provided from >10 MW of ion cyclotron resonance heating (ICRH) power with a resonance just inside the q = 1 surface. Both ECCD and ICRH control schemes would benefit greatly from active feedback of the deposition with respect to the rational surface. If the q = 1 surface can be maintained closer to the magnetic axis, the efficacy of ECCD and ICRH schemes significantly increases, the negative effect on the fusion gain is reduced, and off-axis negative-ion neutral beam injection (NNBI) can also be considered for sawtooth control. Consequently, schemes to reduce the q = 1 radius are highly desirable, such as early heating to delay the current penetration and, of course, active sawtooth destabilization to mediate small frequent sawteeth and retain a small q = 1 radius. Finally, there remains a residual risk that the ECCD + ICRH control actuators cannot keep the sawtooth period below the threshold for triggering NTMs (since this is derived only from empirical scaling and the control modelling has numerous caveats). 
If this is the case, a secondary control scheme of sawtooth stabilization via ECCD + ICRH + NNBI, interspersed with deliberate triggering of a crash through auxiliary power reduction and simultaneous pre-emptive NTM control by off-axis ECCD has been considered, permitting long transient periods with high fusion gain. The power requirements for the necessary degree of sawtooth control using either destabilization or stabilization schemes are expected to be within the specification of anticipated ICRH and ECRH heating in ITER, provided the requisite power can be dedicated to sawtooth control.

  1. The influence of plasma-surface interaction on the performance of tungsten at the ITER divertor vertical targets

    NASA Astrophysics Data System (ADS)

    De Temmerman, G.; Hirai, T.; Pitts, R. A.

    2018-04-01

    The tungsten (W) material in the high heat flux regions of the ITER divertor will be exposed to high fluxes of low-energy particles (e.g. H, D, T, He, Ne and/or N). Combined with long-pulse operations, this implies fluences well in excess of the highest values reached in today’s tokamak experiments. Shaping of the individual monoblock top surface and tilting of the vertical targets for leading-edge protection lead to an increased surface heat flux, and thus increased surface temperature and a reduced margin to remain below the temperature at which recrystallization and grain growth begin. Significant morphology changes are known to occur on W after exposure to high fluences of low-energy particles, be it H or He. An analysis of the formation conditions of these morphology changes is made in relation to the conditions expected at the vertical targets during different phases of operations. It is concluded that both H and He-related effects can occur in ITER. In particular, the case of He-induced nanostructure (also known as ‘fuzz’) is reviewed. Fuzz formation appears possible over a limited region of the outer vertical target, the inner target being generally a net Be deposition area. A simple analysis of the fuzz growth rate including the effect of edge-localized modes (ELMs) and the reduced thermal conductivity of fuzz shows that the fuzz thickness is likely to be limited by the occurrence of annealing during ELM-induced thermal excursions. Not only the morphology, but the material mechanical and thermal properties can be modified by plasma exposure. A review of the existing literature is made, but the existing data are insufficient to conclude quantitatively on the importance and extent of these effects for ITER. As a consequence of the high surface temperatures in ITER, W recrystallization is an important effect to consider, since it leads to a decrease in material strength. 
An approach is proposed here to develop an operational budget for the W material, i.e. the time the divertor material can be operated at a given temperature before a significant fraction of the material is recrystallized. In general, while it is clear that significant surface damage can occur during ITER operations, the tolerable level of damage in terms of plasma operations currently remains unknown.

  2. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved at a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods presented here can be effectively implemented on multiprocessor systems due to their high degree of parallelism.
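The subdomain-wise preconditioning idea described above can be sketched in a few lines: a Richardson iteration whose preconditioner solves each block of unknowns (one "subdomain") exactly against the current residual. This is a generic toy on a 1-D Laplacian, not the spectral-collocation operators of the paper; the matrix, block sizes, and tolerances are invented for illustration.

```python
import numpy as np

def block_jacobi_solve(A, b, blocks, tol=1e-10, max_iter=2000):
    """Richardson iteration with an additive block-Jacobi preconditioner:
    each block of unknowns (one 'subdomain') is solved exactly against the
    current residual -- a toy analogue of domain-decomposition preconditioning."""
    inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        for idx, Ainv in zip(blocks, inv_blocks):
            x[idx] += Ainv @ r[idx]  # exact local solve on each subdomain
    return x, max_iter

# 1-D Laplacian (tridiagonal) split into two "subdomains" of 16 points each.
n = 32
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = [np.arange(0, 16), np.arange(16, 32)]
x, iters = block_jacobi_solve(A, b, blocks)
```

Because the blocks couple only through the interface unknowns, the iteration converges at a rate independent of how finely each block is resolved internally, which is the flavor of the degree-independence result in the abstract.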

  3. Thomson scattering for core plasma on DEMO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukhin, E. E.; Kurskiev, G. S.; Tolstyakov, S. Yu.

    2014-08-21

This paper describes the challenges of implementing Thomson scattering for core plasma on DEMO and evaluates the capability to measure electron temperatures over the extremely wide range of 0.5-40 keV. A number of solutions to be developed for ITER diagnostics are considered with a view to their realization on DEMO. New approaches suggested for DEMO may also be of interest to ITER and currently operating magnetic confinement devices.

  4. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to succeed on problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work.
They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
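The power-iteration versus Rayleigh-quotient-iteration trade-off described above can be illustrated with a small self-contained sketch: power iteration converges linearly at the dominance ratio, while RQI converges cubically but each step solves a nearly singular system. This is a generic dense-matrix toy, not the Denovo implementation; the matrix, starting vectors, and tolerances are invented for illustration.

```python
import numpy as np

# A small symmetric positive-definite stand-in for a transport operator
# (invented for illustration; the real operators are vastly larger).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)

def power_iteration(A, tol=1e-12, max_iter=20000):
    """Converges linearly at the dominance ratio (slow when the ratio is high)."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for k in range(max_iter):
        y = A @ x
        lam_new = x @ y / (x @ x)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            return lam_new, x, k
        lam = lam_new
    return lam, x, max_iter

def rayleigh_quotient_iteration(A, tol=1e-10, max_iter=50):
    """Shift-and-invert with the Rayleigh quotient as shift: far fewer
    iterations, but each step solves a nearly singular (poorly conditioned)
    system -- the difficulty the abstract addresses with preconditioning.
    Note RQI homes in on the eigenvalue nearest the current shift, not
    necessarily the dominant one."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for k in range(max_iter):
        mu = x @ A @ x
        y = np.linalg.solve(A - mu * np.eye(n), x)
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - (x @ A @ x) * x) < tol:
            break
    return x @ A @ x, x, k

lam_pi, _, iters_pi = power_iteration(A)
lam_rqi, v_rqi, iters_rqi = rayleigh_quotient_iteration(A)
```
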

  5. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
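The weighted-binary-matrix-sampling loop at the heart of VISSA can be sketched in simplified form: each variable carries a weight giving its probability of entering a sub-model, and the weights are re-estimated from the best-scoring sub-models so the sampled variable space shrinks toward the informative variables. This is a hedged re-sketch of the idea on synthetic data, not the authors' Matlab code; the data, model-assessment split, and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: only the first 3 of 10 variables are informative.
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + 0.05 * rng.standard_normal(n)

def val_rmse(cols):
    """Fit least squares on a training split, score on a validation split."""
    tr, va = slice(0, 40), slice(40, 60)
    coef, *_ = np.linalg.lstsq(X[tr][:, cols], y[tr], rcond=None)
    resid = y[va] - X[va][:, cols] @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

def wbms_select(n_models=100, keep=10, n_iter=10):
    """Weighted binary matrix sampling: each variable's weight is its
    probability of entering a sub-model; weights are re-estimated from the
    best sub-models, shrinking the variable space each iteration."""
    w = np.full(p, 0.5)
    for _ in range(n_iter):
        B = rng.random((n_models, p)) < w     # weighted binary sampling matrix
        B[B.sum(axis=1) == 0, 0] = True       # guard against empty sub-models
        scores = np.array([val_rmse(row) for row in B])
        w = B[np.argsort(scores)[:keep]].mean(axis=0)
    return w

weights = wbms_select()
```

After a few iterations the weights of the informative variables saturate near 1 while the uninformative ones decay, which is the "new variable space outperforms the previous one" behavior the abstract emphasizes.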

  6. Energy Confinement Recovery in Low Collisionality ITER Shape Plasmas with Applied Resonant Magnetic Perturbations (RMPs)

    NASA Astrophysics Data System (ADS)

    Cui, L.; Grierson, B.; Logan, N.; Nazikian, R.

    2016-10-01

    Application of RMPs to low collisionality (ν*e < 0.4) ITER shape plasmas on DIII-D leads to a rapid reduction in stored energy due to density pumpout that is sometimes followed by a gradual recovery in the plasma stored energy. Understanding this confinement recovery is essential to optimize the confinement of RMP plasmas in present and future devices such as ITER. Transport modeling using TRANSP+TGLF indicates that the core a/LTi is stiff in these plasmas while the ion temperature gradient is much less stiff in the pedestal region. The reduction in the edge density during pumpout leads to an increase in the core ion temperature predicted by TGLF based on experimental data. This is correlated to the increase in the normalized ion heat flux. Transport stiffness in the core combined with an increase in the edge a/LTi results in an increase of the plasma stored energy, consistent with experimental observations. For plasmas where the edge density is controlled using deuterium gas puffs, the effect of the RMP on ion thermal confinement is significantly reduced. Work supported by US DOE Grant DE-FC02-04ER54698 and DE-AC02-09CH11466.

  7. The ITER ICRF Antenna Design with TOPICA

    NASA Astrophysics Data System (ADS)

    Milanesio, Daniele; Maggiora, Riccardo; Meneghini, Orso; Vecchi, Giuseppe

    2007-11-01

TOPICA (Torino Polytechnic Ion Cyclotron Antenna) code is an innovative tool for the 3D/1D simulation of Ion Cyclotron Radio Frequency (ICRF) antennas, i.e. accounting for antennas in a realistic 3D geometry and with an accurate 1D plasma model [1]. The TOPICA code has been deeply parallelized and has already been proved to be a reliable tool for antenna design and performance prediction. A detailed analysis of the 24-strap ITER ICRF antenna geometry has been carried out, underlining the strong dependence and asymmetries of the antenna input parameters due to the ITER plasma response. We optimized the antenna array geometry dimensions to maximize loading, lower mutual couplings and mitigate sheath effects. The calculated antenna input impedance matrices are TOPICA results of paramount importance for the tuning and matching system design. Electric field distributions have also been calculated and are used as the main input for the power flux estimation tool. The optimized antenna design is capable of coupling 20 MW of power to the plasma in the 40-55 MHz frequency range with a maximum voltage of 45 kV in the feeding coaxial cables. [1] V. Lancellotti et al., Nuclear Fusion, 46 (2006) S476-S499

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chrystal, C.; Grierson, B. A.; Solomon, W. M.

We measured the dependence of intrinsic torque and momentum confinement time on normalized gyroradius (ρ*) and collisionality (ν*) in the DIII-D tokamak. The intrinsic torque normalized to temperature is found to scale as ρ*^(-1.5 ± 0.8) and ν*^(-0.26 ± 0.04). This dependence on ρ* is unexpectedly favorable (increasing as ρ* decreases). The choice of normalization is important, and the implications are discussed. The unexpected dependence on ρ* is found to be robust, despite some uncertainty in the choice of normalization. Furthermore, the dependence of momentum confinement on ρ* does not clearly demonstrate Bohm or gyro-Bohm like scaling, and a weaker dependence on ν* is found. The calculations required to use these dependencies to determine the intrinsic torque in future tokamaks such as ITER are presented, and the importance of the normalization is explained. Based on the currently available information, the intrinsic torque predicted for ITER is 33 N m, comparable to the expected torque available from neutral beam injection. The expected average intrinsic rotation associated with this intrinsic torque is small compared to current tokamaks, but it may still aid stability and performance in ITER. Published by AIP Publishing.

  9. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is used in a constructor; subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445-style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator, where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values; it is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
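The nested-permutation behavior of Iterator::Hash can be mirrored in a few lines of Python: every iterable value in a mapping is expanded, and the generator yields one dict per combination. This is a re-sketch of the pattern in Python, not the Perl module's actual API; the keys and values are invented for illustration.

```python
import itertools

def hash_iterator(spec):
    """Yield every combination of the iterable values in spec, as dicts --
    analogous to Iterator::Hash's nested iteration of embedded iterators.
    Scalar values pass through unchanged."""
    keys = list(spec)
    iterables = [v if isinstance(v, (list, tuple, range)) else [v]
                 for v in spec.values()]
    for combo in itertools.product(*iterables):
        yield dict(zip(keys, combo))

# Two hosts x two ports, with a scalar flag carried through every combination.
combos = list(hash_iterator({"host": ["a", "b"],
                             "port": range(8000, 8002),
                             "debug": True}))
```
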

  10. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions.
In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result of both iterative reconstruction's diminished noise reduction at the edge of the nodules and its loss of resolution at high noise levels. The findings do not rule out a potential advantage of iterative reconstruction that might be evident in a study using a larger number of nodules or repeated scans.

  11. Computation of flow in radial- and mixed-flow cascades by an inviscid-viscous interaction method

    NASA Technical Reports Server (NTRS)

    Serovy, G. K.; Hansen, E. C.

    1980-01-01

    The use of inviscid-viscous interaction methods for the case of radial or mixed-flow cascade diffusers is discussed. A literature review of investigations considering cascade flow-field prediction by inviscid-viscous iterative computation is given. Cascade aerodynamics in the third blade row of a multiple-row radial cascade diffuser are specifically investigated.

  12. Development of a Tool to Recreate the Mars Science Laboratory Aerothermal Environment

    NASA Technical Reports Server (NTRS)

    Beerman, A. F.; Lewis, M. J.; Santos, J. A.; White, T. R.

    2010-01-01

The Mars Science Laboratory (MSL) will enter the Martian atmosphere in 2012 with multiple char depth sensors and in-depth thermocouples in its heatshield. The aerothermal environment experienced by MSL may be computationally recreated using the data from the sensors and a material response program, such as the Fully Implicit Ablation and Thermal (FIAT) response program, by matching the char depth and thermocouple predictions of the material response program to the sensor data. A tool, CHanging Inputs from the Environment of FIAT (CHIEF), was developed to iteratively change environmental conditions until FIAT predictions match an external data set within specified criteria. The computational environment is changed by iterating on the enthalpy, pressure, or heat transfer coefficient at certain times in the trajectory. CHIEF was initially compared against arc-jet test data from the development of the MSL heatshield and then against simulated sensor data derived from design trajectories for MSL. CHIEF was able to match char depth and in-depth thermocouple temperatures within the bounds placed upon it for these cases. Further refinement of CHIEF to compare multiple time points and assign convergence criteria may improve accuracy.
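The match-the-sensors idea above (iterate an environment parameter until the model prediction agrees with the measurement) can be sketched with a generic one-parameter secant iteration. This is not CHIEF itself; the toy surface-temperature model, parameter names, and tolerances are invented for illustration.

```python
def calibrate(model, target, x0=1.0, x1=2.0, tol=1e-9, max_iter=50):
    """Secant iteration on one scalar environment parameter until the model
    output matches the measured value -- a generic sketch of iterating a
    boundary-condition input to match sensor data."""
    f0, f1 = model(x0) - target, model(x1) - target
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:   # converged, or no secant slope left
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, model(x1) - target
    return x1

# Toy surface-temperature model vs. heat transfer coefficient (illustrative only).
surface_temp = lambda h: 300.0 + 120.0 * h ** 0.8

# Pretend the "sensor" recorded the temperature produced by h = 1.7.
h_found = calibrate(surface_temp, target=surface_temp(1.7))
```

A real tool like CHIEF iterates several such inputs (enthalpy, pressure, heat transfer coefficient) at multiple trajectory times against a full material-response solve, but the fixed-point structure is the same.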

  13. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-11-26

TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher by computing the antenna scattering parameters, the current distribution, electric field maps and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. Electric current distributions on conductors, electric field distributions at the interface with the plasma, and power spectra have been calculated as well. The analysis demonstrates the capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  14. Simulation of Forward and Inverse X-ray Scattering From Shocked Materials

    NASA Astrophysics Data System (ADS)

    Barber, John; Marksteiner, Quinn; Barnes, Cris

    2012-02-01

    The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially-coherent intensity profiles of self-amplified stimulated emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we will explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
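The iterative phase retrieval mentioned above can be illustrated with the classic error-reduction algorithm in 1-D: alternately impose the measured Fourier magnitudes and the real-space support/positivity constraint. This is a generic toy, not the authors' reconstruction pipeline (real CXDI work typically uses more robust variants such as hybrid input-output); the object, support, and iteration count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "sample": positive density on a known support (stand-in for CXDI data).
n = 64
support = np.zeros(n, dtype=bool)
support[20:30] = True
obj = np.where(support, rng.random(n) + 0.5, 0.0)
measured = np.abs(np.fft.fft(obj))          # the detector records magnitudes only

def error_reduction(measured, support, n_iter=2000):
    """Error-reduction phase retrieval: alternately impose the measured
    Fourier magnitudes and the real-space support/positivity constraint.
    The Fourier-magnitude error is non-increasing, though the plain
    algorithm can stagnate on hard problems."""
    x = rng.random(len(measured)) * support   # random positive start on support
    for _ in range(n_iter):
        F = np.fft.fft(x)
        F = measured * np.exp(1j * np.angle(F))    # keep phases, replace magnitudes
        x = np.fft.ifft(F).real
        x = np.where(support & (x > 0.0), x, 0.0)  # support + positivity projection
    return x

rec = error_reduction(measured, support)
```
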

  15. A spatially resolving x-ray crystal spectrometer for measurement of ion-temperature and rotation-velocity profiles on the Alcator C-Mod tokamak.

    PubMed

    Hill, K W; Bitter, M L; Scott, S D; Ince-Cushman, A; Reinke, M; Rice, J E; Beiersdorfer, P; Gu, M-F; Lee, S G; Broennimann, Ch; Eikenberry, E F

    2008-10-01

    A new spatially resolving x-ray crystal spectrometer capable of measuring continuous spatial profiles of high resolution spectra (lambda/d lambda>6000) of He-like and H-like Ar K alpha lines with good spatial (approximately 1 cm) and temporal (approximately 10 ms) resolutions has been installed on the Alcator C-Mod tokamak. Two spherically bent crystals image the spectra onto four two-dimensional Pilatus II pixel detectors. Tomographic inversion enables inference of local line emissivity, ion temperature (T(i)), and toroidal plasma rotation velocity (upsilon(phi)) from the line Doppler widths and shifts. The data analysis techniques, T(i) and upsilon(phi) profiles, analysis of fusion-neutron background, and predictions of performance on other tokamaks, including ITER, will be presented.

  16. Scattering effect of submarine hull on propeller non-cavitation noise

    NASA Astrophysics Data System (ADS)

    Wei, Yingsan; Shen, Yang; Jin, Shuanbao; Hu, Pengfei; Lan, Rensheng; Zhuang, Shuangjiang; Liu, Dezhi

    2016-05-01

This paper investigates the non-cavitation noise caused by a propeller running in the wake of a submarine, with consideration of the scattering effect of the submarine's hull. Computational fluid dynamics (CFD) and the acoustic analogy method are adopted to predict the fluctuating pressure on the propeller's blades and its underwater noise radiation in the time domain, respectively. An effective iteration method, derived in the time domain from the Helmholtz integral equation, is used to solve multi-frequency wave scattering due to obstacles. Moreover, to minimize numerical errors caused by time interpolation, the pressure and its derivative at the sound emission time are obtained by summation of Fourier series. A time-averaging algorithm is used to achieve a convergent result when the solution oscillates in the iteration process. The developed iteration method is verified and applied to predict propeller noise scattered from the submarine's hull. From the analysis it is concluded that (1) the scattering effect of the hull on the pressure distribution pattern, especially at frequencies higher than the blade passing frequency (BPF), is evident in the contour maps of the sound pressure distribution on the submarine's hull and typical detecting planes. (2) The scattering effect of the hull on the total pressure is observable in the noise frequency spectrum of field points, where the maximum increment is up to 3 dB at BPF, 12.5 dB at 2BPF and 20.2 dB at 3BPF. (3) The pressure scattered from the hull is negligible in the near-field of the propeller, since the scattering effect around the analyzed location on the submarine's stern is significantly different from that of a surface ship. This work shows the importance of the submarine's scattering effect in evaluating propeller non-cavitation noise.
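The time-averaging trick mentioned above (averaging iterates to rescue an oscillating iteration) has a simple scalar analogue: when a fixed-point map has derivative magnitude greater than one at the fixed point, plain iteration falls into an oscillating cycle, but a relaxed update that averages with the previous iterate converges. The map below is invented for illustration and has nothing to do with the acoustics itself.

```python
def g(x):
    """A map whose fixed point x* (approx. 1.3789) has |g'(x*)| > 1, so plain
    fixed-point iteration settles into an oscillating two-cycle instead."""
    return 4.0 / (1.0 + x * x)

def plain_iteration(x0, n):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def averaged_iteration(x0, n, alpha=0.5):
    """Relax each update by averaging with the previous iterate -- the same
    idea as the time-averaging used to tame the oscillating scattering solve."""
    x = x0
    for _ in range(n):
        x = (1.0 - alpha) * x + alpha * g(x)
    return x

x_osc = plain_iteration(1.0, 101)      # stuck oscillating, far from the fixed point
x_avg = averaged_iteration(1.0, 300)   # damped update converges
```

With alpha = 0.5 the effective derivative at the fixed point becomes (1 - alpha) + alpha*g'(x*), which is well inside the unit interval here, restoring linear convergence.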

  17. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  18. Scaling of the MHD perturbation amplitude required to trigger a disruption and predictions for ITER

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Pautasso, G.; Nardon, E.; Cahyna, P.; Gerasimov, S.; Havlicek, J.; Hender, T. C.; Huijsmans, G. T. A.; Lehnen, M.; Maraschek, M.; Markovič, T.; Snipes, J. A.; the COMPASS Team; the ASDEX Upgrade Team; Contributors, JET

    2016-02-01

The amplitude of locked instabilities, likely magnetic islands, seen as precursors to disruptions has been studied using data from the JET, ASDEX Upgrade and COMPASS tokamaks. It was found that the thermal quench, which often initiates the disruption, is triggered when the amplitude has reached a distinct level. This information can be used to determine thresholds for simple disruption prediction schemes. The measured amplitude in part depends on the distance of the perturbation from the measurement coils. Hence the threshold for the measured amplitude depends on the mode location (i.e. the rational q-surface) and thus indirectly on parameters such as the edge safety factor, q95, and the internal inductance, li(3), that determine the shape of the q-profile. These dependencies can be used to set the disruption thresholds more precisely. For the ITER baseline scenario, with typically q95 = 3.2 and li(3) = 0.9, and taking into account the position of the measurement coils on ITER, the maximum allowable measured locked mode amplitude normalized to engineering parameters was estimated to be a·B_ML(r_c)/I_p = 0.92 m·mT/MA, or directly as a fraction of the edge poloidal magnetic field: B_ML(r_c)/B_θ(a) = 5 × 10⁻³. These values decrease for operation at higher q95 or lower li(3). The analysis furthermore found that the above empirical criterion for triggering a thermal quench is consistent with a criterion derived from the concept of a critical island size, i.e. the thermal quench seemed to be triggered at a distinct island width.

  19. Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser

    2012-08-27

ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPPs) is largely driven by electromagnetic loads and the associated responses of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPPs. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on system design integration due to electrical contact among various EPP structural components, are discussed.

  20. DiMES PMI research at DIII-D in support of ITER and beyond

    DOE PAGES

    Rudakov, Dimitry L.; Abrams, Tyler; Ding, Rui; ...

    2017-03-27

An overview of recent Plasma-Material Interactions (PMI) research at the DIII-D tokamak using the Divertor Material Evaluation System (DiMES) is presented. The DiMES manipulator allows for exposure of material samples in the lower divertor of DIII-D under well-diagnosed ITER-relevant plasma conditions. Plasma parameters during the exposures are characterized by an extensive diagnostic suite including a number of spectroscopic diagnostics, Langmuir probes, IR imaging, and Divertor Thomson Scattering. Post-mortem measurements of net erosion/deposition on the samples are done by Ion Beam Analysis, and results are modelled by the ERO and REDEP/WBC codes with the plasma background reproduced by OEDGE/DIVIMP modelling based on experimental inputs. This article highlights experiments studying sputtering erosion, re-deposition and migration of high-Z elements, mostly tungsten and molybdenum, as well as some alternative materials. Results are generally encouraging for the use of high-Z PFCs in ITER and beyond, showing high redeposition and reduced net sputter erosion. Two methods of high-Z PFC surface erosion control, (i) external electrical biasing and (ii) local gas injection, are also discussed. These techniques may find applications in future devices.
